Memory semantic queue stalls on context_type=memory jobs; pending backlog grows while processed stays at 0 #864
Summary
OpenViking is accepting memory writes successfully, but the background semantic worker appears to stall on `context_type="memory"` jobs. The semantic queue accumulates pending items while `processed` remains at 0.
This looks related to memory semantic reprocessing / queue behavior rather than a specific local LLM backend.
Environment
- OpenViking server: 0.2.9
- ovCLI: 0.2.6
- Queue backend: SQLite
- Host: macOS
- OpenClaw integration
- VLM backends tested:
- Qwen via LM Studio
- Llama 3.3 via LM Studio
- Qwen GGUF via Ollama
What works
- Front-door memory writing works
- Chat / memory capture writes succeed
- OpenViking service starts and `/health` returns OK when launched correctly under the normal launch agent
What fails
Background semantic processing stalls on memory jobs:
- queue fills with pending semantic jobs
- `processed` stays at 0
- one in-progress item appears to sit indefinitely
- VLM token usage keeps increasing while queue progress does not move
Example observer state:
```
| Semantic | 269 | 1 | 0 | 0 | 270 |
```
Queue contents observed
The pending queue was dominated by memory jobs like:
- `viking://session/default/...` with `context_type="memory"`
- memory dirs such as:
  - `viking://user/default/memories/entities`
  - `viking://user/default/memories/preferences`
  - `viking://agent/.../memories/patterns`
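The backlog breakdown above can be reproduced with a query along these lines. This is a minimal sketch against a hypothetical schema: the table and column names (`semantic_queue`, `uri`, `context_type`, `status`) are assumptions for illustration, not the real OpenViking queue DB layout.

```python
import sqlite3

# In-memory stand-in for the queue DB; real usage would connect to the
# actual SQLite file with the actual schema.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE semantic_queue (uri TEXT, context_type TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO semantic_queue VALUES (?, ?, ?)",
    [
        ("viking://session/default/abc", "memory", "pending"),
        ("viking://user/default/memories/entities", "memory", "pending"),
        ("viking://user/default/docs/readme", "doc", "pending"),
    ],
)

# Count pending jobs per context_type to see what dominates the backlog.
rows = conn.execute(
    "SELECT context_type, COUNT(*) FROM semantic_queue "
    "WHERE status = 'pending' GROUP BY context_type "
    "ORDER BY COUNT(*) DESC"
).fetchall()
print(rows)  # memory jobs dominate in this sample
```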
Relevant logs
When running via the normal launch agent, logs show semantic processing starting on a memory dir:
```
Processing semantic generation for: viking://user/default/memories/entities (recursive=True)
Processing semantic generation for: SemanticMsg(... context_type='memory', changes={'added': ['viking://user/default/memories/entities/mem_...md'], 'modified': [], 'deleted': []})
Parsed 0 existing summaries from overview.md for viking://user/default/memories/entities
[MemorySemantic] uri=viking://user/default/memories/entities files=119 existing_summaries=0 changed=1 deleted=0 has_changes=True
```
But queue completion still does not advance.
What was already ruled out
This does not appear to be fixed by swapping the model or runtime:
- Qwen in LM Studio: same stall pattern
- Llama 3.3 in LM Studio: same stall pattern, just slower
- Qwen GGUF via Ollama: same stall pattern
That suggests this is likely not just an LM Studio / MLX problem.
Relation to existing issue
This may be related to the memory semantic reprocessing behavior described in #505.
Local mitigation attempted
To reduce churn, a local patch was tested to:
- skip session-level memory semantic jobs without changes
- skip no-change memory-dir reprocessing when `.overview.md` already exists and no added/modified/deleted files are present
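The skip logic tested locally amounted to roughly the following. This is a sketch, not the actual patch: the `SemanticMsg` fields are modeled on the log output above, and the `should_skip` helper name is an assumption.

```python
from dataclasses import dataclass, field


# Hypothetical message shape mirroring the SemanticMsg seen in the logs;
# the real OpenViking class may differ.
@dataclass
class SemanticMsg:
    uri: str
    context_type: str
    changes: dict = field(
        default_factory=lambda: {"added": [], "modified": [], "deleted": []}
    )


def should_skip(msg: SemanticMsg, overview_exists: bool) -> bool:
    """Return True for memory jobs that would only reprocess unchanged state."""
    if msg.context_type != "memory":
        return False
    has_changes = any(
        msg.changes.get(key) for key in ("added", "modified", "deleted")
    )
    # Skip session-level memory semantic jobs without changes.
    if msg.uri.startswith("viking://session/") and not has_changes:
        return True
    # Skip no-change memory-dir reprocessing when an overview already exists.
    if overview_exists and not has_changes:
        return True
    return False
```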
Also, pending queue rows matching session-level memory jobs were backed up and then removed from the queue DB to stop backlog growth.
This reduced the queue size, but it was cleanup, not proof of a full fix.
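The cleanup step looked roughly like this hypothetical helper. The table name and schema are assumptions (as above), and the copy-before-delete backup is the important part of the procedure.

```python
import shutil
import sqlite3


def prune_session_memory_jobs(db_path: str, backup_path: str) -> int:
    """Back up the queue DB, then delete pending session-level memory jobs.

    Returns the number of rows removed. Table/column names are assumed,
    not the real OpenViking schema.
    """
    shutil.copy(db_path, backup_path)  # back up before any destructive delete
    conn = sqlite3.connect(db_path)
    with conn:  # commit on success, roll back on error
        cur = conn.execute(
            "DELETE FROM semantic_queue WHERE status = 'pending' "
            "AND context_type = 'memory' AND uri LIKE 'viking://session/%'"
        )
    conn.close()
    return cur.rowcount
```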
Main questions
- Is this a known bug in the `context_type="memory"` semantic path beyond #505 (memory extraction triggering O(n²) semantic reprocessing, with token cost growing quadratically with memory count)?
- Are `viking://session/...` memory semantic jobs expected, or should they be suppressed / handled differently?
- Is there an official fix or recommended patch for preventing memory semantic queue stalls / self-reprocessing?
- Is there a supported way to safely requeue or rebuild only the valid semantic jobs after cleanup?
If helpful, exact queue DB samples and local patches can be provided.