[Feature] Native listen-only mode: receive all messages without LLM invocation #15268
Description
Summary
OpenClaw needs a native "listen-only" or "silent ingest" mode for group chats where the agent should see all messages (for context/indexing) but only invoke the LLM when explicitly mentioned or asked a direct question.
Currently, the only way to approximate this is `requireMention: false` plus a `systemPrompt` instructing the agent to reply `NO_REPLY` to non-relevant messages. This wastes tokens on every single message just to produce a "stay silent" response.
Problem
With requireMention: false, every message in the group triggers a full LLM call. For a busy group with 50+ messages/day, this means:
- Significant token waste — each "NO_REPLY" decision costs input tokens (system prompt + context) + output tokens
- Unnecessary API calls — hitting rate limits faster
- Higher latency for actual requests — queue congestion from silent processing
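To make the waste concrete, here is a back-of-envelope estimate; all numbers (silent share, tokens per call) are illustrative assumptions, not measurements from OpenClaw.

```typescript
// Illustrative cost of NO_REPLY calls in a busy group (all numbers are assumptions).
const messagesPerDay = 50;
const silentShare = 0.9; // assume ~90% of messages need no reply
const inputTokensPerCall = 2000; // system prompt + accumulated context
const outputTokensPerCall = 5; // the literal "NO_REPLY" response

const wastedInput = messagesPerDay * silentShare * inputTokensPerCall;
const wastedOutput = messagesPerDay * silentShare * outputTokensPerCall;

console.log(`~${wastedInput + wastedOutput} tokens/day spent only to stay silent`);
// → ~90225 tokens/day spent only to stay silent
```

Under these assumptions, roughly 90k tokens/day buy nothing but silence; a native skip would eliminate those calls entirely.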
Proposed Solution
Add native config-level support for selective response:
```json
{
  "channels": {
    "telegram": {
      "groups": {
        "-123456789": {
          "listenMode": "all",
          "respondMode": "mention",
          "silentIngest": {
            "enabled": true,
            "hooks": ["session-memory"]
          }
        }
      }
    }
  }
}
```

Key concepts:
- `listenMode: "all"` — Agent receives all messages into session context (current `requireMention: false` behavior)
- `respondMode: "mention" | "all" | "allowFrom"` — Controls when the LLM is actually invoked:
  - `"mention"` — only when @mentioned or replied to
  - `"all"` — every message (current default when `requireMention: false`)
  - `"allowFrom"` — only for messages from specific sender IDs
- `silentIngest` — Messages that don't trigger LLM invocation are still:
  - Added to session context (so the agent has full history when invoked)
  - Processed by configured hooks (e.g., `session-memory` for indexing)
  - NOT sent to the LLM API
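The decision logic described above can be sketched as a small resolver; the type and field names below come from this issue's proposal and are not a confirmed OpenClaw API.

```typescript
// Sketch of the proposed per-group response policy resolution.
// Names (GroupPolicy, respondMode, allowFrom) follow this issue's proposal.
type RespondMode = "mention" | "all" | "allowFrom";

interface GroupPolicy {
  listenMode: "all";
  respondMode: RespondMode;
  allowFrom?: string[]; // sender IDs, only consulted for respondMode "allowFrom"
}

interface IncomingMessage {
  senderId: string;
  mentionsBot: boolean;
  isReplyToBot: boolean;
}

type Decision = "invoke" | "silent-ingest";

function resolve(policy: GroupPolicy, msg: IncomingMessage): Decision {
  switch (policy.respondMode) {
    case "all":
      return "invoke"; // current requireMention: false behavior
    case "mention":
      return msg.mentionsBot || msg.isReplyToBot ? "invoke" : "silent-ingest";
    case "allowFrom":
      return policy.allowFrom?.includes(msg.senderId) ? "invoke" : "silent-ingest";
  }
}
```

Every message is ingested into session context and hooks either way; only a `"invoke"` decision reaches the LLM API.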
Additional enhancement: Per-sender response permissions
Currently allowFrom is binary — a sender either triggers the agent or is ignored entirely. A more granular model would allow:
```json
{
  "allowFrom": {
    "293894843": { "canCommand": true, "canTriggerResponse": true },
    "355322273": { "canCommand": true, "canTriggerResponse": true },
    "999999999": { "canTriggerResponse": false }
  }
}
```

This enables scenarios like "team members can command the bot, but client messages are only observed."
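A minimal sketch of the permission check this map implies; the field names are the issue's proposal, not shipped OpenClaw config, and the default-deny behavior for unknown senders is an assumption consistent with the "only observed" scenario.

```typescript
// Granular per-sender permissions, per the allowFrom map proposed above.
interface SenderPerms {
  canCommand?: boolean;
  canTriggerResponse?: boolean;
}

type AllowFromMap = Record<string, SenderPerms>;

// Assumed default-deny: a sender triggers a response only if explicitly
// allowed; unknown or denied senders are observed but never answered.
function canTriggerResponse(map: AllowFromMap, senderId: string): boolean {
  return map[senderId]?.canTriggerResponse === true;
}
```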
Use Cases
- Client observation — Bot in client chat, listens to everything, only responds when team @mentions it
- Internal work chat — Bot indexes all discussions into knowledge graph, responds only when asked
- Meeting notes — Bot captures all messages but doesn't interrupt conversation flow
- Cost optimization — Reduce API calls by 80-90% in busy groups
Current Workaround
```json
{
  "requireMention": false,
  "systemPrompt": "You receive ALL messages. Only respond when @mentioned or asked directly. For ALL other messages: reply with exactly NO_REPLY."
}
```

This works, but it wastes tokens on every message and relies on prompt compliance (soft enforcement).
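One way to harden the workaround is to strip the sentinel on the way out; this shim is a sketch (not OpenClaw code) and note that it still pays for the LLM call — it only stops the literal `NO_REPLY` from leaking into the group if the model forgets the exact format.

```typescript
// Hard-enforcement shim for the NO_REPLY workaround: suppress the sentinel
// before sending. Returns null to signal "send nothing".
function filterOutbound(reply: string): string | null {
  return reply.trim() === "NO_REPLY" ? null : reply;
}
```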
Environment
- OpenClaw 2026.2.9 (stable)
- Telegram groups with mixed listen/respond needs
Status update (2026-02-17)
Implemented in #15841 (open PR).
The delivered design is more generalized than the initial request:
- shared silent-ingest pipeline
- shared group ingest policy resolver
- explicit `message_ingest` hook contract
- Telegram + Signal config/schema support
This aligns with current OpenClaw architecture (centralized policy resolution + thin channel adapters + typed hook contracts).
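As a rough illustration of what a typed `message_ingest` hook contract could look like, here is a sketch; the interface and field names below are assumptions for illustration, not the shape actually shipped in the PR.

```typescript
// Hypothetical shape of a typed message_ingest hook: runs for every ingested
// message, whether or not the LLM is invoked.
interface IngestedMessage {
  channel: "telegram" | "signal";
  groupId: string;
  senderId: string;
  text: string;
  timestamp: number;
}

interface MessageIngestHook {
  name: string;
  onIngest(msg: IngestedMessage): Promise<void> | void;
}

// Example hook in the spirit of session-memory: record every ingested
// message into a store the agent can consult when it is finally invoked.
const store: IngestedMessage[] = [];

const sessionMemory: MessageIngestHook = {
  name: "session-memory",
  onIngest(msg) {
    store.push(msg);
  },
};
```

A typed contract like this lets channel adapters stay thin: they normalize messages into `IngestedMessage` and hand them to the shared pipeline, which fans out to hooks and applies the policy resolver.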