research(memory): Acon — failure-driven compression guideline optimization for long-horizon agents (arXiv:2510.00615) #2201

@bug-ops

Description

Source

arXiv:2510.00615 — "Acon: Optimizing Context Compression for Long-horizon LLM Agents" (October 2025, KAIST + Microsoft)

Key Finding

Failure-driven natural-language guideline optimization: the compressor compares successful (uncompressed) against failed (compressed) trajectories and uses an LLM to refine its own compression instructions. The method achieves a 26–54% reduction in peak token usage with no parameter updates; it is gradient-free and therefore works with closed-source APIs.
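The refinement step can be sketched as follows. This is a minimal sketch, not the paper's implementation: the `Trajectory` type, the `refine_guideline` function, and the stubbed `llm` call are all illustrative names, and a real version would route the prompt to an actual chat-completion backend.

```rust
/// A recorded agent trajectory (illustrative shape, not Acon's schema).
struct Trajectory {
    steps: Vec<String>,
}

/// Stand-in for an LLM call; a real implementation would send `prompt`
/// to a chat-completion API and return the model's text.
fn llm(prompt: &str) -> String {
    format!("REFINED GUIDELINE derived from {} chars of analysis", prompt.len())
}

/// One refinement step: contrast a failed (compressed) run with a
/// successful (uncompressed) one and ask the LLM to rewrite the
/// compression guideline so the dropped information survives next time.
fn refine_guideline(current: &str, failed: &Trajectory, succeeded: &Trajectory) -> String {
    let prompt = format!(
        "Current compression guideline:\n{current}\n\n\
         Failed trajectory (context was compressed):\n{}\n\n\
         Successful trajectory (full context):\n{}\n\n\
         Identify what the compression dropped that the successful run \
         relied on, then rewrite the guideline to preserve it.",
        failed.steps.join("\n"),
        succeeded.steps.join("\n"),
    );
    // No gradients, no parameter updates: the "optimization" output is
    // plain text, which is why this works against closed-source APIs.
    llm(&prompt)
}

fn main() {
    let failed = Trajectory { steps: vec!["read config".into(), "gave up".into()] };
    let ok = Trajectory { steps: vec!["read config".into(), "used full tool output".into()] };
    let new_guideline = refine_guideline("Summarize tool outputs to one line.", &failed, &ok);
    println!("{new_guideline}");
}
```

Note that the only mutable state across iterations is the guideline string itself, which is what makes the loop compatible with a hosted OpenAI backend.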

Applicability to Zeph

  • Directly usable with OpenAI backend (gradient-free, closed-source compatible)
  • Maps well to Zeph's compaction subsystem in zeph-memory (memory.compression)
  • Failure-driven guideline refinement could be implemented as an offline optimization loop over Zeph's debug dump trajectories (.local/debug/)
  • Dual-mode compression (interaction history + environment observations) aligns with Zeph's message history + tool output model

The key innovation is self-improving compression: after a compressed session fails (task not completed), an LLM analyzes what was lost and refines the compression instructions. This is more principled than Zeph's current threshold-based compaction.
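An offline loop over dumped trajectories might look like the sketch below. This is a hypothetical shape: the `Run` fields and the pairing strategy (each compressed failure contrasted against one uncompressed success) are assumptions, not Zeph's actual debug-dump schema, and `llm_refine` is a stub for the analysis call.

```rust
/// One dumped session (hypothetical fields; Zeph's dump format may differ).
#[derive(Clone)]
struct Run {
    transcript: String,
    task_completed: bool,
    was_compressed: bool,
}

/// Stub for the LLM analysis call; deterministic here so the loop is testable.
fn llm_refine(guideline: &str, failed: &Run, reference: &Run) -> String {
    format!(
        "{guideline} [+rule from {} vs {}]",
        failed.transcript.len(),
        reference.transcript.len()
    )
}

/// Pair each compressed failure with an uncompressed success and fold the
/// LLM's suggestions into a single evolving guideline string.
fn optimize_guideline(mut guideline: String, runs: &[Run]) -> String {
    let successes: Vec<&Run> = runs
        .iter()
        .filter(|r| r.task_completed && !r.was_compressed)
        .collect();
    for failure in runs.iter().filter(|r| !r.task_completed && r.was_compressed) {
        if let Some(reference) = successes.first() {
            guideline = llm_refine(&guideline, failure, reference);
        }
    }
    guideline
}

fn main() {
    let runs = vec![
        Run { transcript: "compressed run, task failed".into(), task_completed: false, was_compressed: true },
        Run { transcript: "full-context run, task done".into(), task_completed: true, was_compressed: false },
    ];
    println!("{}", optimize_guideline("Summarize tool outputs.".into(), &runs));
}
```

Because the loop only reads dumps and rewrites a text guideline, it can run entirely offline over .local/debug/ without touching the live compaction path.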

Priority

P3 — enhancement to existing compaction system

Metadata

Labels

  • P3: Research — medium-high complexity
  • enhancement: New feature or request
  • memory: zeph-memory crate (SQLite)
  • research: Research-driven improvement
