Production-ready Rust implementation of Google's Agent Development Kit (ADK). Build high-performance, memory-safe AI agent systems with streaming responses, workflow orchestration, and extensible tool integration.
ADK-Rust provides a comprehensive framework for building AI agents in Rust, featuring:
- Type-safe agent abstractions with async execution and event streaming
- Multiple agent types: LLM agents, workflow agents (sequential, parallel, loop), and custom agents
- Realtime voice agents: Bidirectional audio streaming with OpenAI Realtime API and Gemini Live API
- Tool ecosystem: Function tools, Google Search, MCP (Model Context Protocol) integration
- Production features: Session management, artifact storage, memory systems, REST/A2A APIs
- Developer experience: Interactive CLI, 50+ working examples, comprehensive documentation
Status: Production-ready, actively maintained
Requires Rust 1.75 or later. Add to your Cargo.toml:

```toml
[dependencies]
adk-rust = "0.1"

# Or individual crates
adk-core = "0.1"
adk-agent = "0.1"
adk-model = "0.1" # Add features for providers: features = ["openai", "anthropic"]
adk-tool = "0.1"
adk-runner = "0.1"
```

Set your API key:
```bash
# For Gemini (default)
export GOOGLE_API_KEY="your-api-key"

# For OpenAI
export OPENAI_API_KEY="your-api-key"

# For Anthropic
export ANTHROPIC_API_KEY="your-api-key"

# For DeepSeek
export DEEPSEEK_API_KEY="your-api-key"
```

Gemini (default):

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::gemini::GeminiModel;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("GOOGLE_API_KEY")?;
    let model = GeminiModel::new(&api_key, "gemini-2.5-flash")?;

    let agent = LlmAgentBuilder::new("assistant")
        .description("Helpful AI assistant")
        .instruction("You are a helpful assistant. Be concise and accurate.")
        .model(Arc::new(model))
        .build()?;

    // Run agent (see examples for full usage)
    Ok(())
}
```

OpenAI:

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::openai::OpenAIModel;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads OPENAI_API_KEY from the environment.
    let model = OpenAIModel::from_env("gpt-4.1")?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Ok(())
}
```

Anthropic:

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::anthropic::AnthropicModel;
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Reads ANTHROPIC_API_KEY from the environment.
    let model = AnthropicModel::from_env("claude-sonnet-4")?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Ok(())
}
```

DeepSeek:

```rust
use adk_agent::LlmAgentBuilder;
use adk_model::deepseek::{DeepSeekClient, DeepSeekConfig};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("DEEPSEEK_API_KEY")?;

    // Standard chat model
    let model = DeepSeekClient::new(DeepSeekConfig::chat(api_key))?;
    // Or use the reasoner for chain-of-thought reasoning:
    // let model = DeepSeekClient::new(DeepSeekConfig::reasoner(api_key))?;

    let agent = LlmAgentBuilder::new("assistant")
        .instruction("You are a helpful assistant.")
        .model(Arc::new(model))
        .build()?;

    Ok(())
}
```

```bash
# Interactive console (Gemini)
cargo run --example quickstart

# OpenAI examples (requires --features openai)
cargo run --example openai_basic --features openai
cargo run --example openai_tools --features openai

# DeepSeek examples (requires --features deepseek)
cargo run --example deepseek_basic --features deepseek
cargo run --example deepseek_reasoner --features deepseek
cargo run --example deepseek_tools --features deepseek

# REST API server
cargo run --example server

# Workflow agents
cargo run --example sequential_agent
cargo run --example parallel_agent

# See all examples
ls examples/
```

ADK-Rust follows a clean layered architecture, from the application interface down to foundational services.
| Crate | Purpose | Key Features |
|---|---|---|
| `adk-core` | Foundational traits and types | `Agent` trait, `Content`, `Part`, error types, streaming primitives |
| `adk-agent` | Agent implementations | `LlmAgent`, `SequentialAgent`, `ParallelAgent`, `LoopAgent`, builder patterns |
| `adk-model` | LLM integrations | Gemini, OpenAI, Anthropic clients, streaming, function calling |
| `adk-tool` | Tool system and extensibility | `FunctionTool`, Google Search, MCP protocol, schema validation |
| `adk-session` | Session and state management | SQLite/in-memory backends, conversation history, state persistence |
| `adk-artifact` | Artifact storage system | File-based storage, MIME type handling, image/PDF/video support |
| `adk-memory` | Long-term memory | Vector embeddings, semantic search, Qdrant integration |
| `adk-runner` | Agent execution runtime | Context management, event streaming, session lifecycle, callbacks |
| `adk-server` | Production API servers | REST API, A2A protocol, middleware, health checks |
| `adk-cli` | Command-line interface | Interactive REPL, session management, MCP server integration |
| `adk-realtime` | Real-time voice agents | OpenAI Realtime API, Gemini Live API, bidirectional audio, VAD |
| `adk-graph` | Graph-based workflows | LangGraph-style orchestration, state management, checkpointing, human-in-the-loop |
| `adk-browser` | Browser automation | 46 WebDriver tools, navigation, forms, screenshots, PDF generation |
| `adk-eval` | Agent evaluation | Test definitions, trajectory validation, LLM-judged scoring, rubrics |
LLM Agents: Powered by large language models with tool use, function calling, and streaming responses.
Workflow Agents: Deterministic orchestration patterns (a composition sketch follows this list).

- SequentialAgent: Execute agents in sequence
- ParallelAgent: Execute agents concurrently
- LoopAgent: Iterative execution with exit conditions
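A minimal composition sketch. `SequentialAgentBuilder` and its `sub_agent` method are assumed names patterned after `LlmAgentBuilder`; see examples/sequential for the actual builder:

```rust
// Hypothetical sketch: chain two LLM agents so the second sees the first's
// output. SequentialAgentBuilder and sub_agent are assumptions; check
// examples/sequential for the real API.
use adk_agent::{LlmAgentBuilder, SequentialAgentBuilder};
use adk_model::gemini::GeminiModel;
use std::sync::Arc;

let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.5-flash")?);

let outliner = Arc::new(LlmAgentBuilder::new("outliner")
    .instruction("Produce a bullet-point outline of the topic.")
    .model(model.clone())
    .build()?);

let writer = Arc::new(LlmAgentBuilder::new("writer")
    .instruction("Expand the outline into prose.")
    .model(model.clone())
    .build()?);

// Runs outliner first, then writer, sharing the same conversation state.
let pipeline = SequentialAgentBuilder::new("outline_then_write")
    .sub_agent(outliner)
    .sub_agent(writer)
    .build()?;
```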
Custom Agents: Implement the Agent trait for specialized behavior (see the sketch after this list).
Realtime Voice Agents: Build voice-enabled AI assistants with bidirectional audio streaming.
Graph Agents: LangGraph-style workflow orchestration with state management and checkpointing.
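For custom agents, the pattern looks roughly like the following. The trait below is a simplified stand-in (using the async-trait and anyhow crates) so the sketch compiles on its own; adk-core's real `Agent` trait takes an invocation context and streams events rather than returning a `String`:

```rust
use async_trait::async_trait;

// Simplified stand-in for adk_core::Agent; the real trait streams events
// and receives a context, but the implementation pattern is the same.
#[async_trait]
pub trait SimpleAgent: Send + Sync {
    fn name(&self) -> &str;
    async fn run(&self, input: &str) -> anyhow::Result<String>;
}

// A custom agent that needs no LLM at all: it answers from a fixed table.
pub struct FaqAgent;

#[async_trait]
impl SimpleAgent for FaqAgent {
    fn name(&self) -> &str {
        "faq"
    }

    async fn run(&self, input: &str) -> anyhow::Result<String> {
        Ok(match input.trim() {
            "license" => "Apache 2.0".to_string(),
            other => format!("No canned answer for {other:?}"),
        })
    }
}
```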
Build voice-enabled AI assistants using the adk-realtime crate:
```rust
use adk_realtime::{openai::OpenAIRealtimeModel, RealtimeAgent, RealtimeModel};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api_key = std::env::var("OPENAI_API_KEY")?;
    let model: Arc<dyn RealtimeModel> = Arc::new(
        OpenAIRealtimeModel::new(&api_key, "gpt-4o-realtime-preview-2024-12-17")
    );

    let agent = RealtimeAgent::builder("voice_assistant")
        .model(model)
        .instruction("You are a helpful voice assistant.")
        .voice("alloy")
        .server_vad() // Enable voice activity detection
        .build()?;

    Ok(())
}
```

Supported Realtime Models:
| Provider | Model | Description |
|---|---|---|
| OpenAI | `gpt-4o-realtime-preview-2024-12-17` | Stable realtime model |
| OpenAI | `gpt-realtime` | Latest model with improved speech quality and function calling |
| Google | `gemini-2.0-flash-live-preview-04-09` | Gemini Live API |
Features:
- OpenAI Realtime API and Gemini Live API support
- Bidirectional audio streaming (PCM16, G711)
- Server-side Voice Activity Detection (VAD)
- Real-time tool calling during voice conversations
- Multi-agent handoffs for complex workflows
Run realtime examples:
```bash
cargo run --example realtime_basic --features realtime-openai
cargo run --example realtime_tools --features realtime-openai
cargo run --example realtime_handoff --features realtime-openai
```

Build complex, stateful workflows using the adk-graph crate (LangGraph-style):
```rust
use std::collections::HashMap;
use std::sync::Arc;

use adk_agent::LlmAgentBuilder;
use adk_graph::{node::AgentNode, prelude::*};
use adk_model::GeminiModel;
use serde_json::json;

// Create LLM agents for different tasks
let api_key = std::env::var("GOOGLE_API_KEY")?;
let model = Arc::new(GeminiModel::new(&api_key, "gemini-2.0-flash")?);

let translator = Arc::new(LlmAgentBuilder::new("translator")
    .model(model.clone())
    .instruction("Translate the input text to French.")
    .build()?);

let summarizer = Arc::new(LlmAgentBuilder::new("summarizer")
    .model(model.clone())
    .instruction("Summarize the input text in one sentence.")
    .build()?);

// Create AgentNodes with custom input/output mappers
let translator_node = AgentNode::new(translator)
    .with_input_mapper(|state| {
        let text = state.get("input").and_then(|v| v.as_str()).unwrap_or("");
        adk_core::Content::new("user").with_text(text)
    })
    .with_output_mapper(|events| {
        let mut updates = HashMap::new();
        for event in events {
            if let Some(content) = event.content() {
                let text: String = content.parts.iter()
                    .filter_map(|p| p.text())
                    .collect::<Vec<_>>()
                    .join("");
                updates.insert("translation".to_string(), json!(text));
            }
        }
        updates
    });

// summarizer_node and combine_node are built the same way, writing to the
// "summary" channel and merging both channels, respectively.

// Build graph with parallel execution
let agent = GraphAgent::builder("text_processor")
    .description("Translates and summarizes text in parallel")
    .channels(&["input", "translation", "summary"])
    .node(translator_node)
    .node(summarizer_node)
    .node(combine_node)
    .edge(START, "translator")
    .edge(START, "summarizer") // Parallel execution
    .edge("translator", "combine")
    .edge("summarizer", "combine")
    .edge("combine", END)
    .build()?;

// Execute
let mut input = State::new();
input.insert("input".to_string(), json!("AI is transforming how we work."));
let result = agent.invoke(input, ExecutionConfig::new("thread-1")).await?;
```

Features:
- AgentNode: Wrap LLM agents as graph nodes with custom input/output mappers
- Parallel & Sequential: Execute agents concurrently or in sequence
- Cyclic Graphs: ReAct pattern with tool loops and iteration limiting
- Conditional Routing: Dynamic routing via `Router::by_field` or custom functions (see the sketch after this list)
- Checkpointing: Memory and SQLite backends for fault tolerance
- Human-in-the-Loop: Dynamic interrupts based on state, resume from checkpoint
- Streaming: Multiple modes (values, updates, messages, debug)
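A routing sketch: `Router::by_field` comes from the feature list above, while `conditional_edge` and the node variables are assumptions; see the graph_conditional example for the real wiring:

```rust
// Hypothetical sketch: after "classify" writes a value into the "category"
// channel, Router::by_field routes to the node named by that value.
// conditional_edge and the node variables are assumed; see graph_conditional.
let agent = GraphAgent::builder("support_router")
    .channels(&["input", "category", "reply"])
    .node(classify_node)
    .node(billing_node)
    .node(technical_node)
    .edge(START, "classify")
    .conditional_edge("classify", Router::by_field("category"))
    .edge("billing", END)
    .edge("technical", END)
    .build()?;
```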
Run graph examples:
```bash
cargo run --example graph_agent        # Parallel LLM agents with callbacks
cargo run --example graph_workflow     # Sequential multi-agent pipeline
cargo run --example graph_conditional  # LLM-based routing
cargo run --example graph_react        # ReAct pattern with tools
cargo run --example graph_supervisor   # Multi-agent supervisor
cargo run --example graph_hitl         # Human-in-the-loop approval
cargo run --example graph_checkpoint   # State persistence
```

Give agents web browsing capabilities using the adk-browser crate:
```rust
use adk_browser::{BrowserConfig, BrowserSession, BrowserToolset};

// Create a browser session against a running WebDriver endpoint
let config = BrowserConfig::new("http://localhost:4444");
let session = BrowserSession::new(config).await?;

// Get all 46 browser tools
let toolset = BrowserToolset::new(session);
let tools = toolset.all_tools();

// Add to agent
let agent = LlmAgentBuilder::new("web_agent")
    .model(model)
    .instruction("Browse the web and extract information.")
    .tools(tools)
    .build()?;
```

46 Browser Tools:
- Navigation: `browser_navigate`, `browser_back`, `browser_forward`, `browser_refresh`
- Extraction: `browser_extract_text`, `browser_extract_links`, `browser_extract_html`
- Interaction: `browser_click`, `browser_type`, `browser_select`, `browser_submit`
- Forms: `browser_fill_form`, `browser_get_form_fields`, `browser_clear_field`
- Screenshots: `browser_screenshot`, `browser_screenshot_element`
- JavaScript: `browser_evaluate`, `browser_evaluate_async`
- Cookies, frames, windows, and more
Requirements: WebDriver (Selenium, ChromeDriver, etc.)
```bash
docker run -d -p 4444:4444 selenium/standalone-chrome
cargo run --example browser_agent
```

Test and validate agent behavior using the adk-eval crate:
```rust
use adk_eval::{EvaluationConfig, EvaluationCriteria, Evaluator};

let config = EvaluationConfig::with_criteria(
    EvaluationCriteria::exact_tools()
        .with_response_similarity(0.8)
);

let evaluator = Evaluator::new(config);
let report = evaluator
    .evaluate_file(agent, "tests/my_agent.test.json")
    .await?;

assert!(report.all_passed());
```

Evaluation Capabilities:
- Trajectory validation (tool call sequences)
- Response similarity (Jaccard, Levenshtein, ROUGE; Jaccard is sketched after this list)
- LLM-judged semantic matching
- Rubric-based scoring with custom criteria
- Safety and hallucination detection
- Detailed reporting with failure analysis
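For intuition about the similarity scores, here is token-level Jaccard similarity, the simplest of the three metrics. This illustrates the metric itself, not adk-eval's exact tokenization:

```rust
use std::collections::HashSet;

// Token-level Jaccard similarity: |A ∩ B| / |A ∪ B| over word sets.
// Illustrates the metric, not adk-eval's internal tokenizer.
fn jaccard(a: &str, b: &str) -> f64 {
    let sa: HashSet<&str> = a.split_whitespace().collect();
    let sb: HashSet<&str> = b.split_whitespace().collect();
    let union = sa.union(&sb).count();
    if union == 0 {
        return 1.0; // two empty responses are identical
    }
    sa.intersection(&sb).count() as f64 / union as f64
}

// jaccard("the cat sat", "the cat stood") == 2.0 / 4.0 == 0.5,
// below the 0.8 threshold used in the example above.
```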
Built-in tools:
- Function tools (custom Rust functions; see the sketch after this list)
- Google Search
- Artifact loading
- Loop termination
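A sketch of defining a function tool. The `FunctionTool::new` signature below (name, description, JSON-schema parameters, async handler) is an assumption; see examples/function_tool for the actual constructor:

```rust
use serde_json::{json, Value};

// Hypothetical sketch: the FunctionTool::new signature shown here is an
// assumption; consult examples/function_tool for the real API.
let weather_tool = adk_tool::FunctionTool::new(
    "get_weather",
    "Return the current weather for a city",
    json!({
        "type": "object",
        "properties": { "city": { "type": "string" } },
        "required": ["city"]
    }),
    |args: Value| async move {
        let city = args["city"].as_str().unwrap_or("unknown");
        // A real tool would call a weather API here.
        Ok(json!({ "city": city, "temp_c": 21 }))
    },
)?;
```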
MCP Integration: Connect to Model Context Protocol servers for extended capabilities.
XML Tool Call Markup: For models without native function calling, ADK supports XML-based tool-call parsing:

```xml
<tool_call>
function_name
<arg_key>param1</arg_key>
<arg_value>value1</arg_value>
</tool_call>
```
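For illustration, a minimal parser for this markup. ADK's actual parser is more robust; this only shows the extraction logic:

```rust
// Illustrative only: extracts the function name and a single key/value
// pair from the markup above. ADK's real parser handles multiple args,
// escaping, and malformed input.
fn extract<'a>(s: &'a str, open: &str, close: &str) -> Option<&'a str> {
    let start = s.find(open)? + open.len();
    let end = s[start..].find(close)? + start;
    Some(s[start..end].trim())
}

fn parse_tool_call(s: &str) -> Option<(String, String, String)> {
    let body = extract(s, "<tool_call>", "</tool_call>")?;
    // The first line of the body is the function name.
    let name = body.lines().next()?.trim().to_string();
    let key = extract(body, "<arg_key>", "</arg_key>")?.to_string();
    let value = extract(body, "<arg_value>", "</arg_value>")?.to_string();
    Some((name, key, value))
}
```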
ADK supports multiple LLM providers with a unified API:
| Provider | Model Examples | Feature Flag |
|---|---|---|
| Gemini | `gemini-2.5-flash`, `gemini-2.5-pro`, `gemini-2.0-flash` | (default) |
| OpenAI | `gpt-4.1`, `gpt-4.1-mini`, `o3-mini`, `gpt-4o` | `openai` |
| Anthropic | `claude-sonnet-4`, `claude-opus-4`, `claude-haiku-4` | `anthropic` |
| DeepSeek | `deepseek-chat`, `deepseek-reasoner` | `deepseek` |
All providers support streaming, function calling, and multimodal inputs (where available).
DeepSeek-specific features: Thinking mode (chain-of-thought reasoning), context caching (10x cost reduction for repeated prefixes).
Session Management (a wiring sketch follows this list):
- In-memory and database-backed sessions
- Conversation history and state persistence
- SQLite support for production deployments
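The sketch below uses hypothetical names throughout (`InMemorySessionService`, `Runner::builder`, `session_service` are all assumptions); see the session and runner examples for the real API:

```rust
// Hypothetical names throughout; see adk-session and adk-runner for the
// actual types. The point: construct a session store once and hand it to
// the runner so history persists across turns for a given session id.
use adk_session::InMemorySessionService;
use std::sync::Arc;

let sessions = Arc::new(InMemorySessionService::new());

let runner = adk_runner::Runner::builder("my_app")
    .agent(agent)
    .session_service(sessions)
    .build()?;
```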
Memory System:
- Long-term memory with semantic search
- Vector embeddings for context retrieval
- Scalable knowledge storage
Servers:
- REST API with streaming support (SSE)
- A2A protocol for agent-to-agent communication
- Health checks and monitoring endpoints
- Website: adk-rust.com - Official documentation and guides
- API Reference: docs.rs/adk-rust - Full API documentation
- Examples: examples/README.md - 50+ working examples with detailed explanations
The adk-ui crate enables agents to render rich user interfaces:
```rust
use adk_ui::{UiToolset, UI_AGENT_PROMPT};

let agent = LlmAgentBuilder::new("ui_assistant")
    .instruction(UI_AGENT_PROMPT)  // Tested prompt for reliable UI generation
    .tools(UiToolset::all_tools()) // 10 render tools
    .build()?;
```

React Client: `npm install @zavora-ai/adk-ui-react`

Features: 28 components, 10 templates, dark mode, streaming updates, server-side validation
```bash
# Run all tests
cargo test

# Test specific crate
cargo test --package adk-core

# With output
cargo test -- --nocapture
```

```bash
# Linting
cargo clippy

# Formatting
cargo fmt

# Security audit
cargo audit
```

```bash
# Development build
cargo build

# Optimized release build
cargo build --release
```

Add to your Cargo.toml:
```toml
[dependencies]
# All-in-one crate
adk-rust = "0.1"

# Or individual crates for finer control
adk-core = "0.1"
adk-agent = "0.1"
adk-model = { version = "0.1", features = ["openai", "anthropic"] } # Enable providers
adk-tool = "0.1"
adk-runner = "0.1"

# Optional dependencies
adk-session = { version = "0.1", optional = true }
adk-artifact = { version = "0.1", optional = true }
adk-memory = { version = "0.1", optional = true }
adk-server = { version = "0.1", optional = true }
adk-cli = { version = "0.1", optional = true }
adk-realtime = { version = "0.1", features = ["openai"], optional = true }
adk-graph = { version = "0.1", features = ["sqlite"], optional = true }
adk-browser = { version = "0.1", optional = true }
adk-eval = { version = "0.1", optional = true }
```

See the examples/ directory for complete, runnable examples:
Getting Started

- `quickstart/` - Basic agent setup and chat loop
- `function_tool/` - Custom tool implementation
- `multiple_tools/` - Agent with multiple tools
- `agent_tool/` - Use agents as callable tools

OpenAI Integration (requires --features openai)

- `openai_basic/` - Simple OpenAI GPT agent
- `openai_tools/` - OpenAI with function calling
- `openai_multimodal/` - Vision and image support
- `openai_workflow/` - Multi-agent workflows with OpenAI

Workflow Agents

- `sequential/` - Sequential workflow execution
- `parallel/` - Concurrent agent execution
- `loop_workflow/` - Iterative refinement patterns
- `sequential_code/` - Code generation pipeline

Realtime Voice Agents (requires --features realtime-openai)

- `realtime_basic/` - Basic text-only realtime session
- `realtime_vad/` - Voice assistant with VAD
- `realtime_tools/` - Tool calling in realtime sessions
- `realtime_handoff/` - Multi-agent handoffs

Graph Workflows

- `graph_agent/` - GraphAgent with parallel LLM agents and callbacks
- `graph_workflow/` - Sequential multi-agent pipeline
- `graph_conditional/` - LLM-based classification and routing
- `graph_react/` - ReAct pattern with tools and cycles
- `graph_supervisor/` - Multi-agent supervisor routing
- `graph_hitl/` - Human-in-the-loop with risk-based interrupts
- `graph_checkpoint/` - State persistence and time-travel debugging

Browser Automation

- `browser_basic/` - Basic browser session and tools
- `browser_agent/` - AI agent with browser tools
- `browser_interactive/` - Full 46-tool interactive example

Agent Evaluation

- `eval_basic/` - Basic evaluation setup
- `eval_trajectory/` - Tool call trajectory validation
- `eval_semantic/` - LLM-judged semantic matching
- `eval_rubric/` - Rubric-based scoring

Production Features

- `load_artifacts/` - Working with images and PDFs
- `mcp/` - Model Context Protocol integration
- `server/` - REST API deployment
- `a2a/` - Agent-to-Agent communication
- `web/` - Web UI with streaming
- `research_paper/` - Complex multi-agent workflow
Optimized for production use:
- Zero-cost abstractions with Rust's ownership model
- Efficient async I/O via Tokio runtime
- Minimal allocations and copying
- Streaming responses for lower latency
- Connection pooling and caching support
Apache 2.0 (same as Google's ADK)
- ADK - Google's Agent Development Kit
- MCP Protocol - Model Context Protocol for tool integration
- Gemini API - Google's multimodal AI model
Contributions welcome! Please open an issue or pull request on GitHub.
Implemented (v0.1.6):
- Core framework and agent types
- Multi-provider LLM support (Gemini, OpenAI, Anthropic, DeepSeek)
- Tool system with MCP support
- Agent Tool - Use agents as callable tools
- Session and artifact management
- Memory system with vector embeddings
- REST and A2A servers
- CLI with interactive mode
- Realtime voice agents (OpenAI Realtime API, Gemini Live API)
- Graph-based workflows (LangGraph-style) with checkpointing and human-in-the-loop
- Browser automation (46 WebDriver tools)
- Agent evaluation framework with trajectory validation and LLM-judged scoring
- Dynamic UI generation (adk-ui) with 28 components, 10 templates, React client
Planned (see docs/roadmap/):
| Priority | Feature | Target | Status |
|---|---|---|---|
| 🔴 P0 | Guardrails | Q1 2026 | Planned |
| 🔴 P0 | ADK Studio | Q1-Q2 2026 | Planned |
| 🟡 P1 | Cloud Integrations | Q2-Q3 2026 | Planned |
| 🟡 P1 | Graph Visualization | Q2 2026 | Planned |
| 🟢 P2 | Enterprise Features | Q4 2026 | Planned |
| 🟢 P2 | VertexAI Sessions | Q2 2026 | Planned |
