feat(core): add DetectorMode::Model for classifier-backed feedback detection #2210
Status: Closed
Labels: P3, Research — medium-high complexity, enhancement (New feature or request)
Description
Background
Issue #2190 introduced Candle-backed injection classifier infrastructure. The FeedbackDetector in zeph-core currently supports two modes:
- `Regex` (default)
- `Judge` (LLM-backed)
A third Model variant using the ClassifierBackend trait would enable fast, offline, CPU-based feedback detection without an LLM round-trip.
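As a sketch of the proposed change, the config enum could gain the new variant alongside the existing two. Only `DetectorMode::Model { repo_id: String }`, `Regex`, and `Judge` come from the issue; everything else here (derives, the `Default` impl, the example repo id) is an assumption, not the actual `zeph-config` code:

```rust
/// Hypothetical sketch of the zeph-config enum after the change;
/// the real definition in zeph-config may differ.
#[derive(Debug, Clone, PartialEq)]
pub enum DetectorMode {
    /// Default: regex-based heuristics, no model involved.
    Regex,
    /// LLM-backed judging via a chat round-trip.
    Judge,
    /// Proposed: local classifier identified by a model repo id.
    Model { repo_id: String },
}

impl Default for DetectorMode {
    fn default() -> Self {
        DetectorMode::Regex
    }
}

fn main() {
    // Illustrative repo id, not a real model.
    let mode = DetectorMode::Model {
        repo_id: "example-org/feedback-classifier".to_string(),
    };
    // Exhaustive match forces every call site to handle the new variant.
    match &mode {
        DetectorMode::Regex => println!("regex"),
        DetectorMode::Judge => println!("judge"),
        DetectorMode::Model { repo_id } => println!("model: {repo_id}"),
    }
}
```

An exhaustive `match` is useful here: adding the variant becomes a compile error at every site that does not yet handle it.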
Required work
- Add a `DetectorMode::Model { repo_id: String }` variant to the config enum in `zeph-config`
- Add `ClassifierBackend` integration to `FeedbackDetector` in `zeph-core`
- Add a `zeph-core` dependency on the `zeph-llm` classifier module (or extract the `ClassifierBackend` trait to a shared crate)
- Wire `detector_mode = "model"` into config parsing
- Add tests for the new variant
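The config-parsing step could look roughly like the following. This is a minimal sketch assuming a plain string-to-enum mapping; the actual `zeph-config` parsing (likely serde over TOML) and the `parse_detector_mode` name and error type are illustrative, not taken from the codebase:

```rust
#[derive(Debug, Clone, PartialEq)]
pub enum DetectorMode {
    Regex,
    Judge,
    Model { repo_id: String },
}

/// Hypothetical shim mapping a `detector_mode` string plus an optional
/// repo id onto the enum.
fn parse_detector_mode(mode: &str, repo_id: Option<&str>) -> Result<DetectorMode, String> {
    match mode {
        "regex" => Ok(DetectorMode::Regex),
        "judge" => Ok(DetectorMode::Judge),
        // "model" is only valid with a repo_id; surface a clear error otherwise.
        "model" => repo_id
            .map(|r| DetectorMode::Model { repo_id: r.to_string() })
            .ok_or_else(|| "detector_mode = \"model\" requires a repo_id".to_string()),
        other => Err(format!("unknown detector_mode: {other}")),
    }
}

fn main() {
    let mode = parse_detector_mode("model", Some("example-org/feedback-classifier")).unwrap();
    println!("{mode:?}");
}
```

Rejecting `"model"` without a repo id at parse time keeps the invalid state unrepresentable downstream in `zeph-core`.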
Notes
- `FeedbackDetector` lives in `crates/zeph-core/src/agent/feedback_detector.rs`
- The `ClassifierBackend` trait is in `crates/zeph-llm/src/classifier/mod.rs`
- This is a cross-crate dependency change; it requires careful consideration of the dependency graph
- Deferred from PR #2190 (test(candle): add integration tests for Candle-backed classifier models), the test-infrastructure PR, because it requires implementing a new feature, not just tests
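The "extract the trait to a shared crate" option mentioned above could take roughly this shape. The trait signature, the `threshold` field, and the toy backend are all assumptions for illustration; the real `ClassifierBackend` in `zeph-llm` may look different:

```rust
/// Assumed shape of a ClassifierBackend-like trait after extraction to a
/// shared crate, so zeph-core need not depend on zeph-llm directly.
pub trait ClassifierBackend {
    /// Returns a score in [0.0, 1.0]; higher means "feedback detected".
    fn classify(&self, text: &str) -> f32;
}

/// Sketch of FeedbackDetector generic over any classifier backend.
pub struct FeedbackDetector<B: ClassifierBackend> {
    backend: B,
    threshold: f32,
}

impl<B: ClassifierBackend> FeedbackDetector<B> {
    pub fn new(backend: B, threshold: f32) -> Self {
        Self { backend, threshold }
    }

    pub fn is_feedback(&self, text: &str) -> bool {
        self.backend.classify(text) >= self.threshold
    }
}

/// Toy backend for testing: flags text containing "please fix".
struct KeywordBackend;

impl ClassifierBackend for KeywordBackend {
    fn classify(&self, text: &str) -> f32 {
        if text.contains("please fix") { 1.0 } else { 0.0 }
    }
}

fn main() {
    let det = FeedbackDetector::new(KeywordBackend, 0.5);
    println!("{}", det.is_feedback("please fix the tests")); // prints "true"
}
```

With the trait in a shared crate, `zeph-core` depends only on the trait while the Candle-backed implementation stays in `zeph-llm`, which sidesteps the dependency-graph concern above.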