
Timeplus PulseBot: Open-Source OpenClaw Alternative, Built with Timeplus SQL Skill

  • Writer: Gang Tao
  • Feb 26
  • 6 min read

Updated: Mar 6

In this blog, our CTO, Gang Tao, introduces Timeplus PulseBot and shows how to get started. Stay tuned for more examples and use cases with PulseBot!



Earlier this year, a project called OpenClaw became one of the fastest-growing open-source repositories in GitHub history, accumulating over 214,000 stars in two weeks. For anyone who hadn't been following the AI agent space closely, the numbers were staggering.


Why did it resonate so deeply? Because OpenClaw did something that chatbots never could: it acted. Instead of just answering questions, an OpenClaw agent could negotiate a car purchase over email while you slept, file a rebuttal to an insurance company, or set up API credentials by navigating a browser — all autonomously. It connected to messaging platforms people already used (WhatsApp, Telegram, Slack, iMessage), worked with any LLM (Claude, GPT, local models via Ollama), and stored everything as plain Markdown files you could inspect and version-control. The "heartbeat" scheduler let the agent proactively wake up and check on things without being prompted.


It felt like the first AI assistant that genuinely did things. That's a powerful idea, and the community responded accordingly. OpenAI took notice — OpenClaw's creator Peter Steinberger joined OpenAI to drive the next generation of personal agents, with Sam Altman describing it as "the future of very smart agents interacting with each other to do very useful things for people."



What OpenClaw Gets Wrong


The explosive growth also exposed some fundamental problems.


A security audit of OpenClaw uncovered 512 vulnerabilities, including critical issues that allowed remote code execution with no authentication required. By default, the gateway was open to the internet. Researchers scanning for exposed instances found close to 1,000 publicly accessible installations. The skill marketplace had thousands of community-contributed packages, but nearly 12% of audited skills were malicious. API keys, OAuth tokens, and months of conversation history were stored in plaintext JSON files. Multiple government cybersecurity agencies issued emergency advisories.



Beyond security, the framework has architectural limitations that affect production deployments more broadly:


  • No persistent communication layer. Agent messages flow through in-memory queues. When the process stops, those messages are gone. There's no built-in way to replay what happened, recover from a crash, or run agents distributed across machines.

  • Observability as an afterthought. There's no native way to monitor what your agents are doing in real time. You find out something went wrong after the fact, by reading logs.

  • Memory is hard to track by default. Memory lives in plain Markdown files. Without significant additional engineering, there's no durable, queryable store of what agents learned.

  • Scheduling is external. Proactive agent behavior relies on system cron or external schedulers — not something native to the agent infrastructure itself.


These aren't edge cases. For any serious deployment — especially one where agents are taking actions with real consequences — you need to know what's happening as it happens.



Introducing Timeplus PulseBot


Timeplus PulseBot is a stream-native AI agent framework I built at Timeplus that addresses these gaps directly. The core idea is simple: instead of treating observability, memory, communication, and scheduling as separate concerns to be bolted on later, I made Timeplus's streaming engine the backbone for all of them.


Every agent message, every LLM call, every tool execution flows through Timeplus streams. This means everything is persistent, queryable, replayable, and monitorable by construction — not as an add-on.


The framework is also deliberately lightweight: ~8,000 lines of Python. It supports Anthropic Claude, OpenAI, Ollama (local models), OpenRouter, and NVIDIA. Channels include Telegram, webchat via WebSocket, and CLI. It's designed to run in a single Docker container.



How PulseBot Works


PulseBot maintains five core Timeplus streams that together provide the full agent infrastructure:



messages is the central communication hub. When a user sends a message (via Telegram, webchat, or CLI), the API server writes a user_input event to this stream. The agent continuously listens to this stream; it picks up the message, processes it, then writes agent_response and tool_call events back. The API server listens for these and forwards them to the client over WebSocket. The key difference from a typical message queue: these events are persisted. You can query the full history of every conversation across every session at any time.
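Replaying a conversation, for example, is just a historical query. A sketch (the column names session_id, event_type, and content are assumptions about PulseBot's schema; table() and _tp_time are standard Timeplus constructs):

```sql
-- Sketch: replay one session's full history from the persisted messages stream.
-- table() turns the default streaming query into a bounded historical scan;
-- session_id, event_type, and content are assumed column names.
SELECT _tp_time, event_type, content
FROM table(messages)
WHERE session_id = 'abc123'
ORDER BY _tp_time;
```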


llm_logs captures every LLM call: model name, provider, input/output tokens, estimated cost, latency, time-to-first-token, which tools were called, and whether the call succeeded. This is your real-time cost dashboard and latency monitor out of the box.
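That cost dashboard can be a single streaming window query. A sketch, assuming llm_logs has model, cost_usd, and latency_ms columns (the exact schema is an assumption):

```sql
-- Sketch: per-model spend and latency in 1-minute tumbling windows.
-- model, cost_usd, and latency_ms are assumed column names.
SELECT window_start, model,
       sum(cost_usd)   AS spend,
       avg(latency_ms) AS avg_latency
FROM tumble(llm_logs, 1m)
GROUP BY window_start, model;
```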


tool_logs records every tool execution: tool name, arguments, duration, success or failure. You can see exactly which tools your agent calls most, which ones are slow, and which ones are failing.
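Spotting slow or failing tools is the same kind of query. A sketch, assuming tool_name, success, and duration_ms columns:

```sql
-- Sketch: live per-tool call counts, failures, and average duration
-- over 5-minute tumbling windows; column names are assumptions.
SELECT window_start, tool_name,
       count() AS calls,
       count_if(success = false) AS failures,
       avg(duration_ms) AS avg_duration
FROM tumble(tool_logs, 5m)
GROUP BY window_start, tool_name;
```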


memory stores vector embeddings of facts the agent has extracted from conversations. After each turn, the LLM automatically extracts important information — user preferences, project context, facts worth remembering — and writes them here with an importance score. At the start of the next conversation, semantic search retrieves the most relevant memories and includes them in the system prompt. Memory uses an append-only pattern with soft deletes, which works cleanly with Proton, Timeplus's open-source streaming engine.
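With the append-only soft-delete pattern, a "current memory" view is just an aggregation over history. A sketch, assuming a memory schema with id, fact, importance, and deleted columns:

```sql
-- Sketch: latest state per memory id, skipping soft-deleted entries.
-- arg_max picks the value with the newest event time; the schema is an assumption.
SELECT id,
       arg_max(fact, _tp_time)       AS fact,
       arg_max(importance, _tp_time) AS importance
FROM table(memory)
GROUP BY id
HAVING arg_max(deleted, _tp_time) = false;
```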


events handles system-level signals: heartbeats, channel connections, skill loads, alerts.


The agent loop itself is straightforward:


  1. Listen to messages stream for user_input events

  2. Build context: fetch recent conversation history + top-k relevant memories via vector search

  3. Call LLM with tools available

  4. If LLM requests tools, execute them and log to tool_logs

  5. Write responses back to messages, log LLM call to llm_logs

  6. Extract and store memories to memory stream


Because everything is a Timeplus stream, you can run a SQL query against any of these streams at any point to understand exactly what your agent did and why.
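As a concrete illustration, auditing everything the agent did in one session across streams is plain SQL. A sketch (session_id and the per-stream columns are assumptions about the schema):

```sql
-- Sketch: a unified timeline of one session across three streams.
-- session_id, event_type, model, and tool_name are assumed column names.
SELECT * FROM (
  SELECT _tp_time, 'message' AS kind, event_type AS detail
  FROM table(messages) WHERE session_id = 'abc123'
  UNION ALL
  SELECT _tp_time, 'llm' AS kind, model AS detail
  FROM table(llm_logs) WHERE session_id = 'abc123'
  UNION ALL
  SELECT _tp_time, 'tool' AS kind, tool_name AS detail
  FROM table(tool_logs) WHERE session_id = 'abc123'
)
ORDER BY _tp_time;
```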



The Timeplus SQL Agent Skill


PulseBot uses the agentskills.io standard for extending agent capabilities with domain knowledge. A skill is a directory with a SKILL.md file — YAML frontmatter for metadata, Markdown body for instructions. Skills are discovered at startup, but only their name and description (~24 tokens) are loaded into the system prompt. Full instructions are loaded on demand when the agent decides the skill is relevant.


The timeplus-sql-guide skill gives any PulseBot agent deep knowledge of Timeplus streaming SQL. When a user asks for real-time analytics, the agent loads the full skill and can immediately:


  • Create streams, external streams, and materialized views

  • Write correct streaming window queries using tumble(), hop(), and session()

  • Ingest data from Kafka, Redpanda, Pulsar, or HTTP

  • Send results to downstream sinks

  • Write Python or JavaScript UDFs for custom ML inference in the pipeline

  • Create RANDOM STREAM sources for instant test data generation

  • Set up CREATE TASK for scheduled SQL-driven agent triggers

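As a taste of what the skill teaches, test data is one statement away. A sketch of a random stream — the column definitions here are illustrative assumptions, not part of PulseBot:

```sql
-- Sketch: a random stream emitting ~100 synthetic log events per second.
CREATE RANDOM STREAM app_logs(
  status     string  DEFAULT ['ok', 'error'][(rand() % 2) + 1],
  latency_ms float32 DEFAULT rand() % 500
) SETTINGS eps = 100;
```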

Here's an example interaction: a user asks "create a real-time alert when the error rate exceeds 5% in the last minute." The agent loads the skill, then generates and executes:



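A sketch of the kind of materialized view the agent generates for this request — the source stream app_logs and its status column are assumptions about the user's data:

```sql
-- Sketch of an agent-generated alert view; app_logs and status are
-- assumed names for the user's log stream and its status column.
CREATE MATERIALIZED VIEW error_rate_alert INTO events AS
SELECT window_start,
       count_if(status = 'error') / count() AS error_rate
FROM tumble(app_logs, 1m)
GROUP BY window_start
HAVING error_rate > 0.05;
```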
That view runs continuously. The results flow into the events stream, which can trigger further agent actions or external alerts.


The skill also includes reference files for ingestion patterns, transformation examples, sink configurations, UDF templates, and random stream recipes. The agent reads these on demand rather than loading them all upfront — keeping the context window lean while giving the agent access to deep domain knowledge when it matters.



What's Next?


The AI agent space is moving fast. OpenClaw proved there's enormous demand for agents that take real action in the world. But autonomous action without observability is dangerous — as the security fallout demonstrated.


PulseBot's approach is to treat the communication layer itself as the observability layer. By routing everything through Timeplus streams, you get:


  • Persistent communications — agents survive restarts, distribute across machines

  • Real-time observability — streaming SQL queries over agent behavior as it happens

  • Durable, searchable memory — vector search over everything an agent has learned

  • SQL-native scheduling — CREATE TASK replaces external cron jobs


And with the Timeplus SQL skill, agents don't just communicate through Timeplus — they understand how to use it to build real-time data pipelines and analytics on behalf of users.


PulseBot is open source under MIT license. You can run it locally with Proton (our open-source engine) or connect it to Timeplus Cloud. To get started:

git clone https://github.com/timeplus-io/PulseBot.git
cd PulseBot
pip install -e .
pulsebot init   # generates config.yaml
pulsebot run    # starts the agent

On the roadmap: MCP server integration, Slack and WhatsApp channels, and a real-time analytics dashboard over the llm_logs and tool_logs streams. Plus, stay tuned for more examples and use cases with Timeplus PulseBot.


If you're building AI agents and want observability that doesn't feel like an afterthought, give PulseBot a try.





 
 