Summary
When using memory-openviking in local mode, memory_store / auto-capture can fail with:
- AbortError: This operation was aborted
- extract returned 0 memories
In our case, the OpenViking process is still alive and TCP port 1933 is connectable, but HTTP endpoints such as /health and /api/v1/system/status stop responding. The plugin then aborts requests after timeoutMs.
This looks like a service-hang / internal queue blockage case rather than a simple startup failure.
Environment
- OpenClaw: 2026.3.7
- Host: macOS arm64
- memory plugin: memory-openviking
- OpenViking mode: local
- OpenViking port: 1933
memory-openviking config
{
"mode": "local",
"configPath": "~/.openviking/ov.conf",
"targetUri": "viking://user/memories",
"autoCapture": true,
"autoRecall": true,
"apiKey": "openviking-local",
"timeoutMs": 120000
}
OpenViking config
{
"server": {
"host": "127.0.0.1",
"port": 1933
},
"embedding": {
"max_concurrent": 10
},
"vlm": {
"provider": "openai",
"api_base": "http://127.0.0.1:8317/v1",
"model": "gpt-5",
"max_concurrent": 10,
"max_retries": 2
}
}
Symptoms
Observed logs include:
- memory-openviking: auto-capture failed: AbortError: This operation was aborted
- memory-openviking: memory_store failed: AbortError: This operation was aborted
- memory-openviking: auto-capture completed but extract returned 0 memories
- memory-openviking: memory_store completed but extract returned 0 memories
- upstream timeouts / rate-limit related failures also appeared around the same period
Key observation
The OpenViking process is still running, and TCP connect to 127.0.0.1:1933 succeeds immediately.
However, HTTP requests time out with no bytes returned:
- /health
- /api/v1/system/status
- /api/v1/observer/vlm
- /api/v1/observer/vikingdb
So this seems to be a case where:
- process is alive
- port is open
- but the HTTP service is internally blocked / hung
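The three bullets above are exactly the state a health check would need to detect: a plain TCP connect says nothing about whether the HTTP layer is still serving. Below is a minimal probe sketch in Python (host/port taken from the config above; the probe itself is illustrative, not the plugin's actual implementation) that separates "TCP connect succeeds" from "HTTP responds within the timeout":

```python
import socket

def probe(host: str, port: int, timeout: float = 3.0) -> tuple[bool, bool]:
    """Return (tcp_ok, http_ok).

    tcp_ok:  a plain TCP connect succeeded.
    http_ok: the server sent back at least one byte for GET /health
             within the timeout. A hung service typically accepts the
             connection but never answers, so tcp_ok=True, http_ok=False.
    """
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
    except OSError:
        return (False, False)          # process down or port closed
    try:
        sock.settimeout(timeout)
        sock.sendall(b"GET /health HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n"
                     % host.encode())
        return (True, bool(sock.recv(1)))
    except OSError:                    # includes socket.timeout
        return (True, False)           # port open, HTTP layer wedged
    finally:
        sock.close()
```

In the failure mode reported here, `probe("127.0.0.1", 1933)` would be expected to come back as `(True, False)`.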
Reproduction pattern
This is easier to trigger under heavier memory traffic, for example when several of these happen close together:
- auto-capture
- memory_store
- batch import / repeated extract
- VLM / embedding concurrency set relatively high
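To make this pattern easier to reproduce deliberately, overlapping requests can be generated with a small harness. The `hammer` helper below is a generic sketch (not part of the plugin) that runs any request function with bounded concurrency and collects results or exceptions, so it can wrap e.g. a GET against /health or a store call:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def hammer(fn, n: int = 20, concurrency: int = 8) -> list:
    """Call fn() n times with up to `concurrency` calls in flight,
    collecting either the return value or the raised exception for
    each call, so timeouts/aborts show up alongside successes."""
    results = []
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(fn) for _ in range(n)]
        for fut in as_completed(futures):
            try:
                results.append(fut.result())
            except Exception as exc:
                results.append(exc)
    return results
```

For example, `hammer(lambda: urllib.request.urlopen("http://127.0.0.1:1933/health", timeout=5).status, n=30)` (hypothetical usage) approximates the burst of auto-capture / memory_store traffic described above.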
Expected behavior
If OpenViking becomes internally blocked:
- plugin should surface a clearer cause than a generic AbortError
- local-mode health checks should detect “process alive but HTTP hung”
- ideally there should be recovery / restart / circuit-breaker behavior
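The recovery behavior asked for above could be expressed as a tiny circuit-breaker state machine fed by health-probe results. The sketch below is purely illustrative (class name, actions, and threshold are hypothetical, not existing plugin behavior):

```python
class HangWatchdog:
    """Circuit-breaker sketch: feed it periodic health-probe results;
    it requests a forced restart after `threshold` consecutive HTTP
    failures while the process is still alive (the 'port open but
    HTTP hung' case), and distinguishes that from a plain crash."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def record(self, process_alive: bool, http_ok: bool) -> str:
        if not process_alive:
            self.failures = 0
            return "restart-process"        # plain crash: normal supervision
        if http_ok:
            self.failures = 0
            return "ok"
        self.failures += 1
        if self.failures >= self.threshold:
            self.failures = 0
            return "restart-hung-service"   # alive but wedged: force restart
        return "degraded"                   # fail fast instead of waiting timeoutMs
```

While "degraded", the plugin could fail memory calls immediately with a descriptive error instead of letting each request run the full 120 s timeout.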
Actual behavior
- process stays alive
- plugin requests abort on timeout
- memory ingestion becomes unreliable
- logs make it look like a request failure, while the deeper problem is likely service hang
Possible areas to inspect
- OpenViking internal request queue / worker starvation
- VLM / embedding concurrency interaction
- local-mode process supervision and hung-service detection
- more explicit distinction between:
- startup failure
- upstream provider failure
- HTTP hang while process remains alive
Additional note
This is not caused by invalid content being stored. The same failure affects:
- memory_store
- auto-capture
- health/status observer endpoints