fix(ts): extract JSON from chatty LLM responses in fact retrieval #4533
Merged
whysosaket merged 3 commits into mem0ai:main (Mar 30, 2026)
Conversation
When local LLMs (Ollama, LM Studio) return JSON wrapped in explanation
text without code fences, JSON.parse() fails and all facts are silently
dropped. Add extractJson() that strips code fences then locates JSON by
first `{`/last `}` boundaries, keeping the existing try/catch fallback.
utkarsh240799 approved these changes Mar 27, 2026
kartik-mem0 approved these changes Mar 30, 2026
whysosaket approved these changes Mar 30, 2026
Linked Issue
Closes #4526
Description
Local LLMs (Ollama, LM Studio, Qwen, etc.) often wrap JSON output in conversational text without code fences:
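For illustration, a response of the following shape (a made-up example, not taken verbatim from the linked issue) breaks the current parsing path:

```typescript
// Made-up example of a chatty local-LLM response (not from the PR itself).
const response = `Sure! Here are the extracted facts:
{"facts": ["Lives in San Francisco", "Works as an engineer"]}
Let me know if you need anything else.`;

// With no code fences to strip, JSON.parse() sees the whole chatty string
// and throws, so the surrounding try/catch silently falls back to [].
let facts: string[] = [];
try {
  facts = JSON.parse(response).facts;
} catch {
  facts = [];
}
```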
The existing `removeCodeBlocks()` only handles code-fence-wrapped responses. When there are no code fences, `JSON.parse()` fails on the raw text and all extracted facts are silently dropped (the try/catch sets `facts = []`). This means memory operations quietly produce no memories.

Root cause: `removeCodeBlocks()` passes text through unchanged when there are no code fences, so `JSON.parse()` receives the full chatty response and fails.

Fix: Add `extractJson()` to `prompts/index.ts` that:
- first applies `removeCodeBlocks()` (preserving existing behavior)
- then falls back to first `{`/last `}` (or `[`/`]`) boundaries to extract the outermost JSON object/array

Replace `removeCodeBlocks()` calls with `extractJson()` in both the fact retrieval and memory action parsing paths in `memory/index.ts`. This is the TypeScript SDK counterpart of the Python-side fix in PR #4525.
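A minimal sketch of this approach (the `extractJson` name matches the PR, but the body below is an assumption, including `removeCodeBlocksSketch` as a stand-in for the existing helper):

```typescript
// Stand-in for the existing removeCodeBlocks() helper (assumed behavior):
// strips a ```json ... ``` or bare ``` ... ``` fence if one is present.
function removeCodeBlocksSketch(text: string): string {
  const match = text.match(/```(?:json)?\s*([\s\S]*?)\s*```/);
  return match ? match[1] : text;
}

// Assumed implementation of the extraction described above, not the PR's code.
function extractJson(text: string): string {
  const cleaned = removeCodeBlocksSketch(text).trim();
  // If the fence-stripped text already parses, return it unchanged.
  try {
    JSON.parse(cleaned);
    return cleaned;
  } catch {
    // Fall through to the boundary search for chatty responses.
  }
  // Locate the outermost object or array by first/last delimiter boundaries.
  const start = Math.min(
    ...["{", "["].map((c) => cleaned.indexOf(c)).filter((i) => i !== -1),
  );
  if (!Number.isFinite(start)) return cleaned; // no JSON boundaries found
  const closer = cleaned[start] === "{" ? "}" : "]";
  const end = cleaned.lastIndexOf(closer);
  if (end <= start) return cleaned;
  return cleaned.slice(start, end + 1);
}
```

Because clean and code-fenced JSON return early from the `JSON.parse` check, the boundary search only runs for responses that would previously have failed outright.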
Files changed
- mem0-ts/src/oss/src/prompts/index.ts — add the extractJson() function
- mem0-ts/src/oss/src/memory/index.ts — use extractJson() instead of removeCodeBlocks() for JSON parsing
- mem0-ts/src/oss/tests/extract-json.test.ts — tests for extractJson()

Type of Change
Breaking Changes
N/A — extractJson() is a strict superset of removeCodeBlocks() for JSON extraction: clean JSON, code-fenced JSON, and JSON arrays all produce identical results, and the existing try/catch graceful degradation is preserved.

Test Coverage
Tests added (mem0-ts/src/oss/tests/extract-json.test.ts — 17 tests):
- returns clean JSON unchanged
- extracts JSON from json code fence
- extracts JSON from bare code fence
- extracts JSON wrapped in explanation text
- extracts JSON from chatty LLM response with leading text
- extracts JSON from chatty LLM response with trailing text
- extracts JSON from text with both leading and trailing
- extracts JSON from code-fenced response with surrounding text
- handles nested JSON objects
- handles multi-line JSON in chatty text
- returns original text when no JSON boundaries found
- handles JSON array responses
- returns empty string for empty input
- handles truncated code block missing closing fence
- handles whitespace-padded JSON
- handles LM Studio-style verbose response
- handles Ollama-style response with thinking prefix

All 299 existing tests + 17 new tests pass.
Checklist