
enhancement(core): LearningEngine is a stub — no actual behavioral learning from interaction patterns #1913

@bug-ops

Description


Problem

LearningEngine (crates/zeph-core/src/agent/learning_engine.rs, ~94 lines) exists as a structural placeholder but implements no actual learning logic. The agent does not improve its behavior based on interaction history.

Current state

The file contains:

  • A struct with a single reflection_used flag
  • No pattern analysis
  • No preference inference
  • No behavioral adaptation between turns or sessions
  • No connection to FeedbackDetector outputs or user_corrections
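For context, the placeholder amounts to roughly the following (a hypothetical reconstruction from the description above, not the actual source; only the `reflection_used` flag is confirmed):

```rust
/// Current stub: holds one flag and performs no learning.
pub struct LearningEngine {
    pub reflection_used: bool,
}

impl LearningEngine {
    pub fn new() -> Self {
        Self { reflection_used: false }
    }
}
```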

What a functional LearningEngine should do

  1. Preference inference: detect recurring user preferences from corrections (e.g. prefers concise answers, always wants code in Rust, dislikes emoji)
  2. Tool usage patterns: learn which tools the user relies on and pre-suggest them
  3. Skill affinity: track which skills produce high-quality outcomes for this user (connect to skill_outcomes table)
  4. Cross-session adaptation: persist learned preferences to zeph_key_facts or a dedicated user_preferences collection
  5. Response style adaptation: adjust verbosity, language, format based on feedback history
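As a rough illustration of item 1, preference inference can start as simple frequency counting over corrections: a pattern that recurs becomes a candidate preference fact. The `Correction` shape below is an assumption for the sketch; the real `user_corrections` schema may differ.

```rust
use std::collections::HashMap;

/// One user correction (hypothetical shape, not the real schema).
pub struct Correction {
    pub category: String, // e.g. "verbosity", "language", "format"
    pub value: String,    // e.g. "concise", "rust", "no-emoji"
}

/// Infer preferences that recur at least `min_count` times.
pub fn infer_preferences(corrections: &[Correction], min_count: usize) -> Vec<String> {
    let mut counts: HashMap<(&str, &str), usize> = HashMap::new();
    for c in corrections {
        *counts
            .entry((c.category.as_str(), c.value.as_str()))
            .or_insert(0) += 1;
    }
    let mut prefs: Vec<String> = counts
        .into_iter()
        .filter(|(_, n)| *n >= min_count)
        .map(|((cat, val), _)| format!("user prefers {val} ({cat})"))
        .collect();
    prefs.sort(); // deterministic output order
    prefs
}
```

A threshold of 2 already filters one-off corrections; items 2-5 would layer similar counting over tool invocations and `skill_outcomes`.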

Affected tables (currently empty/unused)

Impact

  • Self-learning feature is advertised but non-functional
  • No behavioral personalization across sessions
  • FeedbackDetector signals are detected but never acted upon beyond skill re-ranking

Suggested approach

Start with the simplest useful behavior: after each session, scan user_corrections for patterns and write 1–3 preference facts to zeph_key_facts via memory_save. Build from there incrementally.
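The end-of-session pass described above could be sketched as follows: count correction patterns, keep only those seen more than once, and cap the output at 3 facts to hand to `memory_save`. The input representation and thresholds are assumptions for illustration.

```rust
use std::collections::HashMap;

/// End-of-session sketch: select at most 3 repeated correction
/// patterns as preference facts to persist via `memory_save`.
pub fn session_facts(correction_patterns: &[&str]) -> Vec<String> {
    let mut counts: HashMap<&str, usize> = HashMap::new();
    for p in correction_patterns {
        *counts.entry(p).or_insert(0) += 1;
    }
    // Only persist patterns seen more than once, most frequent first;
    // ties break alphabetically for deterministic output.
    let mut repeated: Vec<(&str, usize)> =
        counts.into_iter().filter(|&(_, n)| n > 1).collect();
    repeated.sort_by(|a, b| b.1.cmp(&a.1).then(a.0.cmp(b.0)));
    repeated.into_iter().take(3).map(|(p, _)| p.to_string()).collect()
}
```

Writing these strings to `zeph_key_facts` keeps the first iteration small while exercising the full detect-infer-persist loop.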
