fix: store full AI response in convo_miner exchange chunking #695
Merged
bensig merged 1 commit into MemPalace:develop on Apr 12, 2026
Conversation
bensig approved these changes on Apr 12, 2026
bensig (Collaborator) left a comment:
Code review + security audit clean.
jphein added a commit to jphein/mempalace that referenced this pull request on Apr 12, 2026
Upstream merged MemPalace#682-684 (our splits), MemPalace#687 (dry-run None room), MemPalace#695/MemPalace#708 (convo_miner full response), MemPalace#732 (0-chunk re-processing), plus the VitePress docs site.

Conflicts:
- config.py: take upstream's [^\W_] regex (our MemPalace#683 merged version)
- miner.py: integrate upstream's early return for tiny files, dedupe the dry-run read path
- test_miner.py: keep our detect_room tests + upstream's dry-run test
- CONTRIBUTING.md: take upstream's org URL update

Co-Authored-By: Claude Opus 4.6 <[email protected]>
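The `[^\W_]` character class mentioned in the config.py conflict is a common idiom for "word characters excluding the underscore". A minimal sketch of how it behaves (the surrounding pattern and the `WORD_CHAR` name are illustrative, not taken from MemPalace's config.py):

```python
import re

# [^\W_] negates \W (non-word characters), yielding word characters,
# and then additionally excludes the underscore from that set.
WORD_CHAR = re.compile(r"[^\W_]+")

# "foo_bar" splits at the underscore; "baz-42" splits at the hyphen.
print(WORD_CHAR.findall("foo_bar baz-42"))  # ['foo', 'bar', 'baz', '42']
```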
ichoosetoaccept pushed a commit to detailobsessed/mempalace that referenced this pull request on Apr 13, 2026

Remove the ai_lines[:8] truncation in _chunk_by_exchange() so full AI responses are stored. Add a _register_empty_file() sentinel drawer for conversation files that produce 0 chunks, preventing infinite re-processing on subsequent runs. Also fix a pre-existing test_miner assertion that expected an int from process_file (it returns a tuple since PR #45).

Ports upstream MemPalace#654, MemPalace#692, MemPalace#695.
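The truncation fix can be sketched in isolation. This is illustrative only: the real _chunk_by_exchange() in convo_miner.py has more structure, and the signature and separator below are assumptions:

```python
# Hypothetical stand-in for convo_miner._chunk_by_exchange().
def _chunk_by_exchange(user_lines, ai_lines):
    """Pair one user prompt with the full AI response as a single chunk."""
    # Old behaviour: "\n".join(ai_lines[:8]) kept only the first 8 lines.
    # Fixed behaviour: join all of ai_lines, storing the response verbatim.
    return "\n".join(user_lines) + "\n---\n" + "\n".join(ai_lines)

response = [f"line {i}" for i in range(1, 14)]  # a 13-line AI response
chunk = _chunk_by_exchange(["user question"], response)
print("line 13" in chunk)  # True: nothing past line 8 is dropped
```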
gnusam pushed a commit to gnusam/mempalace-pgsql that referenced this pull request on Apr 25, 2026
… 0-chunk files

Three upstream fixes ported together because they are conceptually one "convo_miner polish" pass on the same exchange-chunking path.

1. Remove the ai_lines[:8] truncation (upstream d52d6c9, PR MemPalace#695). The _chunk_by_exchange path was silently dropping every line past line 8 of the AI response, violating the verbatim-storage principle.

2. Split oversize exchanges across drawers (upstream 9b60c6e, PR MemPalace#708). Now that the full response is preserved, an exchange that exceeds CHUNK_SIZE (800 chars, aligned with miner.py) is split into consecutive drawers instead of a single oversized one. Adds a CHUNK_SIZE module constant.

3. Register a no-embedding sentinel for files that produce zero chunks (upstream 87e8baf, PR MemPalace#732). mine_convos has three early-exit paths (OSError, content too short, zero chunks) that previously wrote nothing: file_already_mined() then returned False on the next run and the file was re-read every time.

Adapted fix 3 for the PG backend: the upstream sentinel uses collection.upsert() (the ChromaDB API). This fork instead adds a PalaceDB.register_empty_file() method that inserts a row directly with embedding=NULL and metadata.ingest_mode='registry', so the sentinel is free of embedding cost and invisible to vector search. file_already_mined() already keys on source_file + source_mtime, so the existing path picks up the sentinel without further changes.

Three behavioural tests added: full AI response preserved, oversize exchange split across drawers, and the sentinel + file_already_mined round trip.

Upstream: MemPalace@d52d6c9, MemPalace@9b60c6e, MemPalace@87e8baf

Co-authored-by: shafdev <[email protected]>
Co-authored-by: Sanjay Ramadugu <[email protected]>
Co-authored-by: Mikhail Valentsev <[email protected]>
Co-Authored-By: Claude Opus 4.7 (1M context) <[email protected]>
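Fix 2 above (splitting an oversize exchange into consecutive drawers) can be sketched as follows. This is a minimal sketch under stated assumptions: the fork's real splitter may break on line boundaries rather than raw character offsets, and the function name split_exchange is illustrative:

```python
CHUNK_SIZE = 800  # module constant added by fix 2, aligned with miner.py

def split_exchange(text: str, chunk_size: int = CHUNK_SIZE) -> list[str]:
    """Split an oversize exchange into consecutive drawer-sized pieces
    instead of storing one oversized drawer (hypothetical sketch)."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# A 2000-char exchange becomes three consecutive drawers.
pieces = split_exchange("x" * 2000)
print([len(p) for p in pieces])  # [800, 800, 400]
```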
Fixes #692
What does this PR do?
Removes an undocumented 8-line cap on AI responses in _chunk_by_exchange() inside convo_miner.py.
Before:
After:
The [:8] slice silently discarded everything beyond the 8th line of any AI response, violating the project's core "verbatim first" principle (CONTRIBUTING.md). The fallback _chunk_by_paragraph() path has no equivalent cap, making this inconsistency a likely unintentional oversight from the initial commit.
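The effect of the removed slice can be seen in isolation (the variable names here are illustrative, not from convo_miner.py):

```python
ai_lines = [f"response line {i}" for i in range(1, 14)]  # a 13-line response

kept_old = ai_lines[:8]  # old behaviour: silently cap at 8 lines
kept_new = ai_lines      # new behaviour: keep the response verbatim

print(len(kept_old), len(kept_new))  # 8 13
```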
How to test
A new regression test test_long_ai_response_not_truncated in tests/test_convo_miner_unit.py verifies that a 13-line AI response is stored in full.
Checklist
- Tests pass (python -m pytest tests/ -v)
- Lint clean (ruff check .)