Troubleshooting
Common issues and solutions for LimbicDB.
Status: Stable (v5.0.2) - These are the most common "trust-breaking" issues.
Quick Diagnosis
If something isn't working as expected:
# 1. Run verification
npm run verify
# 2. Run examples
npx tsx examples/coding-agent-memory.ts
# 3. Check mode execution
npm run test -- --run test/semantic.test.ts
1. Semantic/Hybrid Search Falls Back to Keyword
Symptom: You request mode: 'semantic' or mode: 'hybrid' but get keyword results with fallback: true in meta.
Why This Happens
Semantic/hybrid search requires embeddings. Fallback occurs when:
| Scenario | SQLite Backend | Memory Backend |
|---|---|---|
| No embedder configured | ❌ Fallback to keyword | ❌ Fallback to keyword |
| Embedder configured but no embeddings computed yet | ❌ Fallback to keyword | ❌ Fallback to keyword |
| Embeddings exist for some but not all memories | ⚠️ Partial fallback (only memories with embeddings) | ⚠️ Partial fallback |
| Embedder throws error | ❌ Fallback to keyword | ❌ Fallback to keyword |
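The decision rules in the table above can be sketched as a small pure function. This is an illustrative model, not LimbicDB's actual internals; the names `SearchMode`, `EmbeddingState`, and `decideExecutedMode` are assumptions made for this sketch.

```typescript
// Sketch of the fallback decision described in the table above.
type SearchMode = 'keyword' | 'semantic' | 'hybrid'

interface EmbeddingState {
  embedderConfigured: boolean
  embeddingsCount: number // memories that already have a vector
  memoriesCount: number   // total memories stored
}

function decideExecutedMode(requested: SearchMode, state: EmbeddingState): {
  executedMode: SearchMode
  fallback: boolean
  partial: boolean
} {
  if (requested === 'keyword') {
    // Keyword search never needs embeddings.
    return { executedMode: 'keyword', fallback: false, partial: false }
  }
  // No embedder, or embedder present but nothing computed yet: full fallback.
  if (!state.embedderConfigured || state.embeddingsCount === 0) {
    return { executedMode: 'keyword', fallback: true, partial: false }
  }
  // Some memories still lack vectors: semantic runs, but only over a subset.
  const partial = state.embeddingsCount < state.memoriesCount
  return { executedMode: requested, fallback: false, partial }
}
```

The same logic explains the `fallback: true` flag you see in `meta`: the request succeeded, but the executed mode silently degraded to keyword.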
Diagnosis Steps
const memory = open({
path: './agent.limbic',
embedder: { /* your embedder */ }
})
// 1. Check stats for embeddings
console.log(memory.stats.embeddingsCount) // Should be > 0
console.log(memory.stats.embeddingsDimensions) // Should match your embedder
// 2. Check if embeddings are computed
const result = await memory.recall('test', { mode: 'semantic' })
console.log(result.meta)
// Look for: fallback, pendingEmbeddings, executedMode
Solutions
A. Wait for embeddings to compute
// Embeddings compute asynchronously
await memory.remember('Important fact') // Starts embedding computation
await new Promise(resolve => setTimeout(resolve, 1000)) // Wait a bit
// Now semantic search should work
B. Force embedding computation (memory backend only)
// Memory backend: all embeddings compute immediately
const memoryBackend = open({
path: ':memory:',
embedder: { /* your embedder */ }
})
// No delay needed for memory backend
C. Check embedder configuration
// Common mistake: wrong dimensions
const memory = open({
path: './agent.limbic',
embedder: {
async embed(text) {
// Must return number[]
return [0.1, 0.2, 0.3] // Example: 3 dimensions
},
dimensions: 3 // Must match actual vector length!
}
})
2. SQLite vs Memory Backend Differences
Symptom: Code works with :memory: but not with ./agent.limbic, or performance differs significantly.
Key Differences
| Aspect | SQLite Backend (open('./agent.limbic')) | Memory Backend (open(':memory:')) |
|---|---|---|
| Storage | Persistent file | Volatile (RAM only) |
| Semantic search | MVP - requires embeddings in file | Full - embeddings in memory |
| Embedding availability | Async, eventual consistency | Immediate after computation |
| Performance (keyword) | Fast (FTS5) | Fast (in-memory matching) |
| Performance (semantic) | Slower (file I/O + vectors) | Faster (vectors in memory) |
| Snapshot/restore | With embeddings (MVP) | With embeddings (full) |
When to Choose Which
Use SQLite backend when:
- You need persistence across sessions
- Memory count > 1,000 (better disk management)
- You're OK with eventual consistency for embeddings
- You want an inspectable .limbic file
Use Memory backend when:
- Testing/development
- Need immediate semantic search
- Memory count < 1,000
- Don't need persistence
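The rules of thumb above can be encoded in a tiny helper that returns the `path` you would pass to `open()`. `choosePath` and `WorkloadHints` are illustrative names, not part of LimbicDB.

```typescript
// Illustrative helper encoding the backend-selection rules of thumb.
interface WorkloadHints {
  needsPersistence: boolean
  expectedMemoryCount: number
}

function choosePath(hints: WorkloadHints, file = './agent.limbic'): string {
  // Persistence or large datasets favour the SQLite file backend.
  if (hints.needsPersistence || hints.expectedMemoryCount > 1000) return file
  // Otherwise the memory backend gives immediate semantic search,
  // which is usually what you want for tests and development.
  return ':memory:'
}
```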
Migration Between Backends
// From memory to SQLite (with embeddings)
const memoryBackend = open({ path: ':memory:', embedder })
await memoryBackend.remember('Important memory')
// Take snapshot
const snapshotId = await memoryBackend.snapshot()
// Open SQLite backend
const sqliteBackend = open({ path: './agent.limbic', embedder })
// Restore snapshot (embeddings included)
await sqliteBackend.restore(snapshotId)
Note: Direct memory transfer isn't supported. Use snapshot/restore.
3. When Are Embeddings Available?
Symptom: You configured an embedder but semantic search still doesn't work.
Embedding Lifecycle
remember(text) → [Async] → embedding computed → stored → available for search
Timeline:
- Immediate: Memory stored (without embedding)
- Async (varies): Embedding computation starts
- Eventually: Embedding stored, available for semantic search
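The "eventually available" step in the lifecycle above is the usual eventual-consistency polling pattern. A generic poll helper like the one below works; `waitUntil` is a plain utility sketched here, not a LimbicDB API.

```typescript
// Generic poll helper for eventual-consistency checks.
async function waitUntil(
  condition: () => boolean | Promise<boolean>,
  { timeoutMs = 5000, intervalMs = 100 } = {}
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs
  while (Date.now() < deadline) {
    if (await condition()) return true
    // Back off briefly before re-checking.
    await new Promise(resolve => setTimeout(resolve, intervalMs))
  }
  return false // timed out; caller decides whether to fall back
}

// Usage (assuming a LimbicDB instance named `memory`):
// const ready = await waitUntil(() => memory.stats.embeddingsCount > 0)
```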
Checking Embedding Status
const memory = open({ path: './agent.limbic', embedder })
// Add memory
await memory.remember('User prefers dark mode')
// Check immediately (likely 0)
console.log(memory.stats.embeddingsCount) // Might be 0
// Check after delay
await new Promise(resolve => setTimeout(resolve, 2000))
console.log(memory.stats.embeddingsCount) // Should be 1
// Check specific memory
const results = await memory.recall('dark mode', { mode: 'semantic' })
console.log(results.meta.pendingEmbeddings) // Number waiting
Forcing Availability (Workarounds)
Option A: Wait and retry
async function recallWithRetry(query, options, maxRetries = 3) {
for (let i = 0; i < maxRetries; i++) {
const result = await memory.recall(query, options)
if (!result.meta.fallback || result.meta.pendingEmbeddings === 0) {
return result
}
await new Promise(resolve => setTimeout(resolve, 1000 * (i + 1)))
}
return await memory.recall(query, { ...options, mode: 'keyword' })
}
Option B: Pre-compute embeddings
// If you control the embedder, compute before remembering
async function rememberWithEmbedding(memory, text) {
const embedding = await memory.config.embedder.embed(text)
// Store with pre-computed embedding (not yet supported via API)
// Workaround: Use memory backend, then snapshot to SQLite
}
4. CJK Search Current Boundaries
Symptom: Chinese/Japanese/Korean text search returns unexpected results.
Current Implementation
LimbicDB uses hybrid search for CJK:
- FTS5 first: Standard full-text search (works well for English)
- LIKE fallback: If FTS5 returns insufficient results AND query contains CJK characters
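The two-step strategy can be sketched over a plain array. The real backend uses SQLite FTS5 and SQL `LIKE`; the tokenization and matching below are simplified stand-ins written for illustration.

```typescript
// Matches CJK Unified Ideographs, Hiragana/Katakana, and Hangul syllables.
const CJK_RE = /[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]/

function keywordSearch(docs: string[], query: string): string[] {
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean)
  // FTS5-style: match on whole whitespace-delimited tokens.
  return docs.filter(d => {
    const tokens = d.toLowerCase().split(/\s+/)
    return terms.some(t => tokens.includes(t))
  })
}

function hybridCjkSearch(docs: string[], query: string): string[] {
  const ftsResults = keywordSearch(docs, query)
  // LIKE fallback: substring match, only when FTS came up short and the
  // query contains CJK characters (which have no whitespace word boundaries).
  if (ftsResults.length === 0 && CJK_RE.test(query)) {
    return docs.filter(d => d.includes(query))
  }
  return ftsResults
}
```

This is why partial Chinese queries still match: CJK text has no spaces for FTS5 to tokenize on, so the substring fallback does the work.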
What Works
| Query Type | Works? | Notes |
|---|---|---|
| Exact Chinese term | ✅ Yes | "用户" matches "用户喜欢这个功能" |
| Partial Chinese characters | ✅ Yes (via LIKE) | "喜欢" matches "用户喜欢蓝色主题" |
| Mixed Chinese-English | ✅ Yes | "user 喜欢" matches mixed content |
| Japanese text | ✅ Yes | "これはテストです" matches Japanese content |
| Korean text | ✅ Yes | "테스트입니다" matches Korean content |
Known Limitations
| Limitation | Status | Workaround |
|---|---|---|
| Word segmentation | ❌ Not supported | Search for full phrases |
| Synonyms | ❌ Not supported | Include multiple terms |
| Traditional/Simplified conversion | ❌ Not supported | Search both versions |
| Pinyin matching | ❌ Not supported | Use Chinese characters |
| Advanced ranking | ⚠️ Basic only | Use minStrength filter |
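The "include multiple terms" and "search both versions" workarounds amount to running one recall per variant and unioning the results. A sketch, assuming `recall` is injected (e.g. bound from a LimbicDB instance) and hits carry an `id`; the merging logic is the point here:

```typescript
// Union results across query variants, deduplicating by memory id.
interface MemoryHit { id: string; text: string }

async function recallAnyVariant(
  recall: (q: string) => Promise<MemoryHit[]>,
  variants: string[] // e.g. ['蓝色', '藍色'] for simplified/traditional
): Promise<MemoryHit[]> {
  const seen = new Map<string, MemoryHit>()
  for (const v of variants) {
    for (const hit of await recall(v)) {
      if (!seen.has(hit.id)) seen.set(hit.id, hit) // keep first occurrence
    }
  }
  return [...seen.values()]
}
```

Variant lists (synonyms, traditional/simplified pairs) have to be supplied by you, since LimbicDB does no conversion itself.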
Testing CJK Search
// Test if CJK search works for your use case
const memory = open('./test.limbic')
// Add test memories
await memory.remember('用户喜欢蓝色主题')
await memory.remember('这是测试内容')
await memory.remember('This is English content')
// Test various queries
const queries = [
'用户', // Exact match
'喜欢', // Partial match
'蓝色', // Another partial
'test', // English (should work)
'用户 test', // Mixed
]
for (const query of queries) {
const results = await memory.recall(query)
console.log(`"${query}": ${results.memories.length} results`)
}
Improving CJK Results
Option A: Use tags for better filtering
await memory.remember('用户界面应该简洁', { tags: ['ui', 'chinese'] })
const results = await memory.recall('', { tags: ['chinese'] })
Option B: Combine with other filters
// Get all Chinese content (empty query + kind filter)
const chineseContent = await memory.recall('', {
kind: 'fact', // or whatever kind you use
minStrength: 0.3
})
5. Common Error Messages
"SQLite backend: semantic search not available, falling back to keyword"
Cause: No embeddings available yet.
Solution: Wait for embeddings to compute, or use keyword search.
"Embedding dimensions mismatch"
Cause: You changed embedder dimensions between sessions.
Solution: Use consistent embedder dimensions, or clear database and start fresh.
"Database is locked" (SQLite backend)
Cause: Multiple processes accessing same .limbic file.
Solution: Ensure only one LimbicDB instance per file, or use :memory: for testing.
"Expected vector of length X, got Y"
Cause: Embedder returns wrong vector size.
Solution: Check your embedder implementation returns correct dimensions.
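One way to catch this class of bug at the source is to wrap your embedder in a validating decorator. The `{ embed, dimensions }` shape follows the embedder config shown earlier on this page; `withDimensionCheck` itself is an illustrative helper, not a LimbicDB export.

```typescript
// Defensive wrapper: fail fast when the embedder returns the wrong size.
interface Embedder {
  embed(text: string): Promise<number[]>
  dimensions: number
}

function withDimensionCheck(embedder: Embedder): Embedder {
  return {
    dimensions: embedder.dimensions,
    async embed(text: string): Promise<number[]> {
      const vector = await embedder.embed(text)
      if (vector.length !== embedder.dimensions) {
        throw new Error(
          `Expected vector of length ${embedder.dimensions}, got ${vector.length}`
        )
      }
      return vector
    }
  }
}

// Usage: open({ path: './agent.limbic', embedder: withDimensionCheck(myEmbedder) })
```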
6. Performance Issues
Semantic Search is Slow
Expected: Semantic search scales linearly with memory count.
Baseline performance (from npm run benchmark:baseline):
- 100 memories: ~5ms (SQLite), ~1ms (Memory)
- 1,000 memories: ~50ms (SQLite), ~1ms (Memory)
- 5,000 memories: ~250ms (SQLite), ~8ms (Memory)
If slower than baseline:
- Check embedder performance
- Use keyword search for large datasets
- Consider Memory backend for semantic-heavy use
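To compare your workload against the baseline numbers above, a minimal timing harness is enough. `measure` is a generic utility sketched here; nothing in it is LimbicDB-specific.

```typescript
// Average a call over a few runs to smooth out JIT and I/O noise.
async function measure(
  label: string,
  fn: () => Promise<unknown>,
  runs = 5
): Promise<number> {
  const start = performance.now()
  for (let i = 0; i < runs; i++) await fn()
  const avgMs = (performance.now() - start) / runs
  console.log(`${label}: ${avgMs.toFixed(2)}ms avg over ${runs} runs`)
  return avgMs
}

// Usage (assuming a LimbicDB instance named `memory`):
// await measure('semantic recall', () => memory.recall('test', { mode: 'semantic' }))
```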
Memory Usage High
SQLite backend: File grows with memories (~1KB per memory + embeddings).
Memory backend: All data in RAM.
Solution: Use forget() to prune old memories, or increase pruneThreshold.
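The pruning idea can be sketched as a selection rule over a plain list: drop memories whose strength fell below the threshold. The real `forget()`/`pruneThreshold` behaviour lives in LimbicDB; only the selection logic is shown here.

```typescript
// Pick the ids of memories weak enough to prune.
interface StoredMemory { id: string; text: string; strength: number }

function selectPrunable(memories: StoredMemory[], pruneThreshold: number): string[] {
  return memories
    .filter(m => m.strength < pruneThreshold)
    .map(m => m.id)
}

// Usage (assuming a LimbicDB instance named `memory` and a way to list all
// stored memories, e.g. a broad recall):
// for (const id of selectPrunable(all, 0.1)) await memory.forget(id)
```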
7. Getting Help
If you're stuck:
- Check the examples - examples/ directory
- Run verification - npm run verify
- Check existing issues - GitHub Issues
- Create minimal reproduction - Smallest code that shows the problem
When reporting issues, include:
- Backend type (SQLite or Memory)
- Embedder configuration (if any)
- Node.js version
- Exact error message
- Reproduction code
Last updated: 2026-04-10 (v5.0.2)