What happened?
_chunk_by_exchange() in mempalace/convo_miner.py (line 73) caps AI responses
at 8 lines before storing them in the palace:
ai_response = " ".join(ai_lines[:8])
Any content after line 8 is silently discarded and never stored anywhere. A
50-line code example, a step-by-step guide, or any detailed AI response gets
stored as 8 lines with no warning.
What did you expect?
The full AI response to be stored verbatim, consistent with the project's stated
core principle: "Verbatim first: Never summarize user content. Store exact words."
I also want to confirm whether the 8-line limit was intentional before submitting
a fix — I can imagine it was meant to keep the Q+A chunk within CHUNK_SIZE, but
an arbitrary line count silently drops content rather than splitting it cleanly.
Happy to hear if there's a reason for it I've missed.
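If the cap really is about keeping chunks within CHUNK_SIZE, one option would be to split long responses into multiple verbatim chunks instead of truncating. A minimal sketch of what I have in mind (the helper name is hypothetical, and I'm assuming a character-style budget; the real CHUNK_SIZE semantics may differ):

```python
def split_ai_response(ai_lines, chunk_size):
    """Split a long AI response into multiple verbatim chunks
    instead of truncating it after a fixed line count.

    Hypothetical helper for illustration only; the actual
    _chunk_by_exchange() internals may track size differently.
    """
    chunks, current, length = [], [], 0
    for line in ai_lines:
        # +1 accounts for the joining space between lines.
        if current and length + len(line) + 1 > chunk_size:
            chunks.append(" ".join(current))
            current, length = [], 0
        current.append(line)
        length += len(line) + 1
    if current:
        chunks.append(" ".join(current))
    return chunks
```

Every input line ends up in exactly one chunk, so nothing is dropped; a 50-line answer just becomes several palace entries instead of one.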
How to reproduce:
- Install mempalace and run the following script:
from mempalace.convo_miner import _chunk_by_exchange

conversation = (
    "> How do I implement JWT authentication in Flutter?\n"
    "Step 1: Add the jwt_decoder package\n"
    "Step 2: Create an AuthService class\n"
    "Step 3: Implement login with token storage\n"
    "Step 4: Add token to HTTP headers\n"
    "Step 5: Handle token expiry\n"
    "Step 6: Implement logout\n"
    "Step 7: Add StreamController for auth state\n"
    "Step 8: Wire up AuthBloc\n"
    "Step 9: SHOULD be stored\n"
    "Step 10: SHOULD be stored\n"
).split("\n")

chunks = _chunk_by_exchange(conversation)
print(chunks[0]["content"])
- Observe that Steps 9 and 10 are missing from the output.
- Mine any real conversation export where the AI gave responses longer than
8 lines and search for content from those responses — it will not be found.
Environment:
- OS: macOS 15
- Python version: 3.11
- MemPal version: 3.1.0 (git SHA: 068dbd9)