Issue: JSON Parse Fragility in Fact Retrieval Response
Description
The mem0 OSS fact retrieval pipeline crashes when the LLM response contains malformed JSON or extra wrapping text. The current code calls `JSON.parse()` directly on the raw LLM response with no defensive extraction.
Reproduction
- Configure mem0 with an LLM provider that returns non-strict JSON (e.g., extra explanation text, markdown code fences)
- Trigger `addToVectorStore()` with `infer=true`
- LLM response includes text like:

  Here are the facts I extracted:
  ```json
  {"facts": ["fact1", "fact2"]}
  ```
- `JSON.parse()` throws: `Unexpected token 'H', "Here are t"... is not valid JSON`
- Entire memory operation fails
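The steps above can be reproduced in isolation; `JSON.parse()` throws before any schema validation runs (standalone sketch, independent of mem0):

```typescript
// Simulated LLM response: explanation text plus a fenced JSON block.
const llmResponse = [
  "Here are the facts I extracted:",
  "```json",
  '{"facts": ["fact1", "fact2"]}',
  "```",
].join("\n");

let parseError = "";
try {
  JSON.parse(llmResponse); // input does not start with a JSON token, so this throws
} catch (e) {
  parseError = (e as Error).message;
}
console.log(parseError.length > 0); // true: the raw response is not valid JSON
```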
Expected Behavior
Gracefully extract JSON object from response text using regex pattern matching before parsing. Fall back to empty facts array on parse failure (don't crash).
Current Code (problematic)
```ts
const parsed = FactRetrievalSchema.parse(JSON.parse(responseText));
facts = parsed.facts || [];
```
Proposed Fix
```ts
let jsonStr = cleanResponse;
// Try to extract the JSON object if it is wrapped in explanation text
const jsonMatch = cleanResponse.match(/\{[\s\S]*"facts"[\s\S]*\}/);
if (jsonMatch) {
  jsonStr = jsonMatch[0];
}

try {
  const parsed = FactRetrievalSchema.parse(JSON.parse(jsonStr));
  facts = parsed.facts || [];
} catch (e) {
  console.warn("[mem0] Failed to parse facts from LLM response (graceful degradation):", e.message);
  facts = []; // Don't crash - continue with empty facts
}
```
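For illustration, the same extraction logic can be exercised outside mem0. The helper name `extractFacts` is hypothetical, and the Zod schema is replaced here by a plain shape check so the sketch is self-contained:

```typescript
// Hypothetical standalone version of the proposed fix (no Zod dependency).
function extractFacts(responseText: string): string[] {
  let jsonStr = responseText;
  // Grab the {...} span that mentions "facts", in case the model wrapped
  // the JSON in prose or markdown code fences.
  const jsonMatch = responseText.match(/\{[\s\S]*"facts"[\s\S]*\}/);
  if (jsonMatch) {
    jsonStr = jsonMatch[0];
  }
  try {
    const parsed = JSON.parse(jsonStr);
    return Array.isArray(parsed.facts) ? parsed.facts : [];
  } catch {
    return []; // graceful degradation: never crash the memory pipeline
  }
}

// The input variations listed under "Impact":
console.log(extractFacts('{"facts": ["a"]}'));                        // clean JSON
console.log(extractFacts('Sure! {"facts": ["b"]} Hope that helps.')); // wrapped text
console.log(extractFacts('```json\n{"facts": ["c"]}\n```'));          // code fences
console.log(extractFacts("not json at all"));                         // invalid -> []
```

Note the regex is greedy, so a response containing multiple top-level objects would need a stricter pattern; for typical single-object LLM output this is sufficient.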
Impact
- Current: Any LLM response variation breaks memory ingestion
- After fix: Robust to markdown fences, explanation text, minor JSON variations
- Tested: Local patch handles 5/5 test cases (clean JSON, wrapped text, code fences, inline JSON, invalid JSON → graceful degradation)
Environment
- mem0 version: OSS (local patch applied)
- LLM provider: ollama-local/qwen3.5:cloud
- OpenClaw integration: openclaw-mem0 plugin
Proposed Fix Location
mem0ai/mem0/mem0/oss/src/index.ts (or .js), addToVectorStore() method, lines ~5100-5130