research(memory): Acon — failure-driven compression guideline optimization for long-horizon agents (arXiv:2510.00615) #2201
Closed
Labels
P3 (Research — medium-high complexity), enhancement (New feature or request), memory (zeph-memory crate, SQLite), research (Research-driven improvement)
Description
Source
arXiv:2510.00615 — "Acon: Optimizing Context Compression for Long-horizon LLM Agents" (October 2025, KAIST + Microsoft)
Key Finding
Failure-driven natural language guideline optimization: the compressor compares successful (uncompressed) vs. failed (compressed) trajectories and uses an LLM to refine its compression instructions. Achieves 26–54% reduction in peak token usage with no parameter updates. Gradient-free, works with closed-source APIs.
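The refinement step could be sketched as below. This is a minimal illustration, not the paper's implementation: the `Trajectory` struct and `refine_guideline` function are hypothetical names, and the diff-based stand-in replaces the real LLM call that analyzes what the compressor lost.

```rust
// Sketch of Acon-style failure-driven guideline refinement.
// All names are hypothetical; the set-difference below is a stand-in
// for an LLM that compares the two trajectories and explains the loss.

#[derive(Debug, Clone)]
struct Trajectory {
    steps: Vec<String>, // serialized agent steps (messages, tool outputs)
    success: bool,
}

/// Compare a failed compressed run against a successful uncompressed run
/// of the same task, and append a corrective clause to the guideline.
fn refine_guideline(
    guideline: &str,
    failed_compressed: &Trajectory,
    succeeded_full: &Trajectory,
) -> String {
    // Steps present in the full trajectory but missing after compression:
    // information the compressor should have preserved.
    let lost: Vec<&String> = succeeded_full
        .steps
        .iter()
        .filter(|s| !failed_compressed.steps.contains(*s))
        .collect();

    if lost.is_empty() {
        return guideline.to_string();
    }
    format!(
        "{}\n- Preserve content similar to: {}",
        guideline,
        lost.iter()
            .map(|s| s.as_str())
            .collect::<Vec<_>>()
            .join("; ")
    )
}

fn main() {
    let full = Trajectory {
        steps: vec![
            "read config".into(),
            "note: port is 8080".into(),
            "deploy".into(),
        ],
        success: true,
    };
    let compressed = Trajectory {
        steps: vec!["read config".into(), "deploy".into()],
        success: false,
    };
    let refined = refine_guideline("Summarize tool outputs.", &compressed, &full);
    println!("{refined}");
}
```

Because the refinement output is plain natural language, the loop needs no gradients and no model access beyond a chat completion endpoint, which is what makes it compatible with a closed-source backend.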
Applicability to Zeph
- Directly usable with the OpenAI backend (gradient-free, closed-source compatible)
- Maps well to Zeph's compaction subsystem in zeph-memory (memory.compression)
- Failure-driven guideline refinement could be implemented as an offline optimization loop over Zeph's debug dump trajectories (.local/debug/)
- Dual-mode compression (interaction history + environment observations) aligns with Zeph's message history + tool output model
The key innovation is self-improving compression: after a compressed session fails (task not completed), an LLM analyzes what was lost and refines the compression instructions. This is more principled than Zeph's current threshold-based compaction.
Priority
P3 — enhancement to existing compaction system