🦞 OpenClaw AI/ML
About
OpenClaw is a platform for building AI agents and assistants. It runs on your own devices and connects to popular messaging platforms (WhatsApp, Telegram, Slack, Discord, and others) while preserving full data privacy: all agent data is stored locally in a SQLite database.
Developers use OpenClaw to build multi-channel AI assistants with streaming responses, browser automation, vision, and voice features. It includes a local Gateway service, a CLI for management, and support for 12+ messaging platforms.
Data privacy: OpenClaw stores data locally by default. Nothing is sent externally unless you configure it.
What you get
Multi-channel assistants and routing across 12+ messaging platforms
Streaming responses for faster, more interactive chats
Vision inputs for image understanding and UI analysis
Browser automation via an OpenClaw-managed Chrome instance
Voice integrations (platform dependent)
Session memory and conversation history
Tooling via skills, function calling, and external integrations
Retries and error handling for more robust agents
A local Gateway (binds to localhost:18789 by default) and a CLI
A local SQLite database containing all agent data (default path: ~/.openclaw/openclaw.db)
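Because everything lives in that one SQLite file, you can inspect it with stock tooling. A minimal sketch (the tables inside are not documented here, so this just lists whatever it finds at the default path):

```python
import sqlite3
from pathlib import Path

def list_tables(db_path: str) -> list[str]:
    """Return the names of all tables in an OpenClaw SQLite database."""
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]

if __name__ == "__main__":
    db = Path.home() / ".openclaw" / "openclaw.db"  # default path from the docs
    if db.exists():
        print(list_tables(str(db)))
    else:
        print(f"No database found at {db}")
```

Useful for confirming that agent data really is local, or for ad-hoc backups with `sqlite3 .dump`.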
Prerequisites
An AIMLAPI key obtained from your account dashboard
Node.js and npm
pnpm if you build from source
Installation
Option 1: Install via npm (recommended)
openclaw-aimlapi@latest includes two AI/ML API skills:
aimlapi-media-gen for images and video
aimlapi-llm-reasoning for chat and reasoning
The onboarding wizard installs the Gateway as a system service. It uses launchd on macOS and systemd on Linux.
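The OS check behind that choice can be sketched as follows (the return values just name the service manager; the actual unit or plist names the wizard writes are not documented here):

```python
import platform

def service_manager() -> str:
    """Pick the service manager the onboarding wizard would use on this OS."""
    system = platform.system()
    if system == "Darwin":
        return "launchd"   # macOS: the Gateway runs under launchd
    if system == "Linux":
        return "systemd"   # Linux: the Gateway runs under systemd
    return "unsupported"   # other platforms: no service install
```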
Option 2: Build from source
UI walkthrough (screenshots)
aimlapi/ prefix
Suggested: aimlapi/google/gemini-3-flash-preview
Option 3: Install skills from the official repo (ClawHub)
Use this if you want to install or update skills separately from the OpenClaw package.
Install the CLI
Pick one:
For more details, see: ClawHub tool docs.
Install the skills
How it fits into OpenClaw
By default, clawhub installs skills into ./skills under your current directory.
If an OpenClaw workspace is configured, clawhub falls back to that workspace.
Override the install location with --workdir or CLAWHUB_WORKDIR.
OpenClaw loads workspace skills from <workspace>/skills.
New skills are picked up on the next session (restart the Gateway).
If you already use ~/.openclaw/skills or bundled skills, workspace skills take precedence.
What these skills do
aiml-image-video — Our media generation models
Generate images and videos via two Python scripts (gen_image.py, gen_video.py).
aiml-llm-reasoning — Our LLMs + Reasoning
Run chat completions via run_chat.py. Use --extra-json for advanced params.
Paths above assume you run clawhub install ... from your OpenClaw workspace root (so skills land in ./skills). If you install somewhere else, adjust the paths to match your --workdir.
If you installed OpenClaw via openclaw-aimlapi@latest, you may already have AIML-related skills installed. Use ClawHub when you specifically want the skills from the official skills repository.
Configure AI/ML API in OpenClaw
Use the Web UI from onboarding. The default URL is usually http://127.0.0.1:59062/.
Add your API key
Use API Key auth. Paste the key from aimlapi.com/app/keys.
Use OpenClaw
Use via a chat connector (Telegram example)
1. Message your bot. You will receive a pairing code.

2. Approve the pairing:
Expected output looks like this:
3. Message your bot again. You should get a response.

Use via CLI
Use Cases
Example: Route Slack + Discord to the same agent
User messages the bot on Slack or Discord.
Gateway receives the message with platform context.
OpenClaw routes the message to the agent.
The agent calls AI/ML API using your chosen model.
The response goes back to the same channel.
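The routing steps above can be sketched as follows; all names here are illustrative stand-ins, not OpenClaw's actual API:

```python
from dataclasses import dataclass

@dataclass
class Message:
    platform: str   # e.g. "slack" or "discord"
    channel: str
    text: str

def agent_reply(text: str) -> str:
    """Stand-in for the agent's call to AI/ML API."""
    return f"echo: {text}"

def route(msg: Message) -> tuple[str, str, str]:
    """Route any platform's message to the same agent; reply on the same channel."""
    reply = agent_reply(msg.text)
    return (msg.platform, msg.channel, reply)
```

The key point is that platform context travels with the message, so one agent can serve both channels and the reply lands where the question came from.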
Example: Analyze a web page with vision
User requests a web page analysis.
OpenClaw opens a Chrome instance (CDP-controlled).
OpenClaw captures a screenshot of the page.
The agent sends the screenshot to a vision model.
The model returns a description and key details.
OpenClaw sends the result back to the user.
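The vision flow above reduces to a short pipeline. Both steps below are stand-in stubs (OpenClaw's real capture goes through CDP-controlled Chrome, and the real description comes from a vision model):

```python
def capture_screenshot(url: str) -> bytes:
    """Stub for the CDP-controlled Chrome screenshot step."""
    return b"fake-png-bytes-for:" + url.encode()

def describe_image(png: bytes) -> str:
    """Stub for the vision-model call."""
    return f"description of {len(png)} screenshot bytes"

def analyze_page(url: str) -> str:
    """Screenshot the page, send it to a vision model, return the description."""
    return describe_image(capture_screenshot(url))
```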
Supported models
OpenAI models (gpt-4o, gpt-4o-mini, gpt-4-turbo, o3-mini, o1, and others)
More