If you want to build a multi-agent system with Google ADK in Python, the framework makes the first 80% feel effortless. Google Developer Advocate Ivan Nardini wasn’t wrong when he said ADK lets you spin up “complex, multi-agent apps in <100 lines of code.” That’s true for demos. For anything resembling production, you’ll hit four specific gotchas that no quick-start tutorial prepares you for.
Google’s Agent Development Kit hit v1.28.1 on April 2, 2026, roughly 50 releases in its first year since launching at Google Cloud Next ’25 in April 2025. With stable 1.0.0 SDKs for Java and Go shipping in late March 2026, ADK is no longer Python-only. This tutorial walks through the three core workflow agent types (SequentialAgent, ParallelAgent, and LoopAgent), shows how agents communicate through shared session state, and flags the gotchas that will bite you once you move past a demo.
Why Google ADK Has Real Momentum
ADK’s GitHub repository has accumulated 18,700+ stars and 3,200+ forks in twelve months. For context, LangGraph has accumulated 27,700+ stars over a longer timeline. ADK isn’t leading the pack yet, but the release velocity (roughly one release per week for an entire year) signals infrastructure-level investment, not an experiment Google will quietly sunset.
Developer @Shudh noted on X that they chose ADK specifically “for long term support and maintenance”, the kind of production-minded reasoning that matters more than star counts. ADK is model-agnostic in principle but Gemini-optimized in practice. If your team is already on Google Cloud, it’s a natural fit. If you’re not, weigh that dependency honestly. For a full comparison against LangGraph, CrewAI, and OpenAI’s SDK, see our AI Agent Framework Comparison 2026.
Setup: Install ADK and Configure Your Environment
ADK requires Python 3.10 or higher. Install it in one line:
pip install google-adk
Create a .env file with your API key configuration:
GOOGLE_GENAI_USE_VERTEXAI=FALSE
GOOGLE_API_KEY=your-api-key-here
Here’s a minimal working agent in a handful of lines, enough to confirm your environment is set up correctly:
from google.adk.agents import Agent
from google.adk.tools import google_search

root_agent = Agent(
    name="search_assistant",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant. Answer questions using search.",
    tools=[google_search],
)
Test it locally with adk web, ADK’s built-in browser UI for interacting with your agents. Most framework tutorials skip this, but it saves real debugging time. One thing to note: google_search is a restricted tool with exclusivity constraints (more on this in the gotchas section). That restriction is the first hint that ADK’s design philosophy pushes you toward composing specialized agents rather than building monolithic ones. Full setup details are in the official ADK documentation.
The Three Workflow Agent Types: How Google ADK Structures Multi-Agent Systems
ADK orchestrates multi-agent systems through three workflow agent types for deterministic execution, plus LLM-driven delegation (transfer_to_agent) for dynamic routing. The workflow agents handle predictable pipelines. The LLM delegation handles “I don’t know which sub-agent should handle this until I see the input.” Both rely on the same underlying mechanism: shared session state.
As Shubham Saboo, Senior AI Product Manager at Google, put it: “State management is vital: In ADK, session.state is your whiteboard.” Every agent reads from and writes to this shared state, and the output_key parameter is the mechanism that makes it work automatically.
SequentialAgent: Linear Pipelines with State Passing
SequentialAgent executes sub-agents in order, passing data downstream through session state. The output_key parameter automatically stores an agent’s output so the next agent can reference it via placeholder syntax:
from google.adk.agents import LlmAgent, SequentialAgent

agent_A = LlmAgent(
    name="CityLookup",
    model="gemini-2.5-flash",
    instruction="What is the capital of France?",
    output_key="capital_city",
)

agent_B = LlmAgent(
    name="CityExpert",
    model="gemini-2.5-flash",
    instruction="Tell me three interesting facts about {capital_city}.",
)

pipeline = SequentialAgent(
    name="CityInfoPipeline",
    sub_agents=[agent_A, agent_B],
)
When agent_A runs, its output is automatically stored in session.state["capital_city"]. When agent_B runs, {capital_city} in its instruction resolves to whatever agent_A produced. No manual wiring. No callback functions. The output_key is doing all the heavy lifting, and understanding this one mechanism is what separates “I copied a demo” from “I can build real pipelines.”
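The mechanism is simple enough to model in plain Python. The sketch below mimics what output_key and placeholder resolution do with an ordinary dict; it is a conceptual model, not ADK’s actual implementation:

```python
# Conceptual sketch of the output_key / placeholder mechanism.
# This models what ADK does for you; it is NOT ADK's implementation.
session_state = {}

# Agent A runs; the framework stores its final text under its output_key.
session_state["capital_city"] = "Paris"  # hypothetical model output

# Before agent B's model call, the {capital_city} placeholder in its
# instruction is resolved against session state.
template = "Tell me three interesting facts about {capital_city}."
resolved = template.format(**session_state)

print(resolved)  # Tell me three interesting facts about Paris.
```

The takeaway: the “wiring” between agents is just key names in a shared dict, which is why naming those keys deliberately matters so much later on.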
ParallelAgent and LoopAgent: Concurrency and Iteration
ParallelAgent runs sub-agents concurrently and aggregates their results. Use it when tasks are independent and latency matters:
from google.adk.agents import LlmAgent, ParallelAgent

fetch_weather = LlmAgent(
    name="WeatherFetcher",
    model="gemini-2.5-flash",
    instruction="Get the current weather for San Francisco.",
    output_key="weather",
)

fetch_news = LlmAgent(
    name="NewsFetcher",
    model="gemini-2.5-flash",
    instruction="Get the latest tech news headlines.",
    output_key="news",
)

gatherer = ParallelAgent(
    name="InfoGatherer",
    sub_agents=[fetch_weather, fetch_news],
)
Notice each agent writes to a unique output_key: "weather" and "news". This is not optional. Concurrent agents share session state, and if two agents write to the same key, you get a silent race condition. No error, no warning: whichever agent finishes last wins. Name your keys explicitly.
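To make the failure mode concrete, here is a plain-Python simulation of last-writer-wins. The two functions stand in for parallel sub-agents and their outputs are hypothetical, not real ADK calls:

```python
# Simulating the ParallelAgent race: two "agents" writing to shared
# state. Hypothetical outputs, not real ADK execution.
shared_state = {}

def weather_agent(state):
    state["result"] = "Sunny, 18C in San Francisco"  # same key!

def news_agent(state):
    state["result"] = "Top headlines: ..."           # same key!

# Under real concurrency the finish order is nondeterministic; here we
# run them in sequence to show the overwrite.
weather_agent(shared_state)
news_agent(shared_state)
print(shared_state["result"])  # the weather data is gone, no error raised

# The fix: one distinct key per agent, so results never collide.
safe_state = {}
safe_state["weather"] = "Sunny, 18C in San Francisco"
safe_state["news"] = "Top headlines: ..."
```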
LoopAgent handles iterative execution, running sub-agents repeatedly until a condition is met or a maximum iteration count is reached:
from google.adk.agents import LoopAgent

# process_step and check_condition are LlmAgents defined elsewhere
poller = LoopAgent(
    name="StatusPoller",
    max_iterations=10,
    sub_agents=[process_step, check_condition],
)
A sub-agent exits the loop by signaling escalate=True in its EventActions, which is useful for quality gates where a critic agent decides whether the output passes muster. A loop with neither max_iterations nor an escalate signal runs forever, so set max_iterations as a safety net even when you expect escalation to end the loop.
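The loop’s exit semantics can be modeled in a few lines of plain Python. This is a conceptual sketch of LoopAgent’s behavior with hypothetical generator and critic callables, not ADK internals:

```python
# Conceptual model of LoopAgent: iterate until a sub-agent "escalates"
# or max_iterations is hit. Not ADK's actual implementation.
def run_loop(sub_agents, max_iterations):
    for iteration in range(1, max_iterations + 1):
        for agent in sub_agents:
            if agent():  # a real sub-agent signals this via EventActions
                return iteration  # critic approved: exit early
    return max_iterations  # safety net: bounded even without escalation

# Hypothetical generator/critic pair: the critic approves on pass three.
attempts = {"count": 0}

def generator():
    attempts["count"] += 1
    return False  # the generator never escalates

def critic():
    return attempts["count"] >= 3  # escalate once quality passes

iterations_used = run_loop([generator, critic], max_iterations=10)
print(iterations_used)  # 3
```

Note that max_iterations still bounds the loop if the critic never approves, which is exactly the safety net the prose above argues for.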

Eight Design Patterns for Building Multi-Agent Systems with Google ADK
Google’s multi-agent patterns guide documents eight production architectures. You don’t need all eight on day one, but knowing the map prevents you from reinventing patterns that already have names. The four most practical:
Sequential Pipeline: the linear assembly line shown above. Agent A feeds Agent B feeds Agent C. Start here. As Saboo advises: “Begin with sequential chains before adding complexity.”
Coordinator/Dispatcher: a central LLM agent analyzes input and routes to specialist sub-agents via transfer_to_agent(). Use this when the input type determines which sub-agent handles it, such as a customer support bot routing to billing, technical, or account specialists.
Parallel Fan-Out/Gather: concurrent execution with result aggregation, as shown in the ParallelAgent example. Use it when tasks are independent and you want to minimize latency.
Generator and Critic: separate creation from evaluation using a LoopAgent. A generator agent produces content, a critic agent evaluates it, and the loop continues until escalate=True signals that quality meets the threshold. This is the pattern behind self-improving agents.
The remaining four (Hierarchical Decomposition for recursive task breakdown, Iterative Refinement for multiple polish rounds, Human-in-the-Loop for human approval gates, and Composite Patterns for combining several patterns in one system) are documented in Google’s guide and become relevant as your systems grow more complex. If you’ve built multi-agent workflows with OpenAI’s SDK, our OpenAI Agents SDK tutorial covers how their handoff model compares.
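As a concrete illustration of the Coordinator/Dispatcher idea, the sketch below replaces the coordinator LLM with a hard-coded keyword classifier; in real ADK the routing decision would come from an LlmAgent calling transfer_to_agent(), and the specialists would be sub-agents rather than lambdas:

```python
# Coordinator/Dispatcher sketch: classify the input, then route it to
# a specialist. The keyword classifier stands in for the coordinator
# LLM's routing decision; everything here is illustrative.
SPECIALISTS = {
    "billing": lambda q: f"Billing team handling: {q}",
    "technical": lambda q: f"Tech support handling: {q}",
    "account": lambda q: f"Account team handling: {q}",
}

def classify(query: str) -> str:
    """Stand-in for the coordinator LLM's routing decision."""
    q = query.lower()
    if "invoice" in q or "charge" in q:
        return "billing"
    if "password" in q or "login" in q:
        return "account"
    return "technical"

def coordinator(query: str) -> str:
    route = classify(query)
    return SPECIALISTS[route](query)

print(coordinator("I was double charged on my invoice"))
# Billing team handling: I was double charged on my invoice
```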
The Gotchas That Will Break Your Multi-Agent System
This is where the “<100 lines of code” promise meets reality. Each gotcha below has bitten real developers, and none of them show up in the happy-path quickstart.
Gotcha 1: Tool exclusivity. Code Execution cannot share an agent with any other tool. Google Search was equally exclusive until v1.16.0, which added bypass_multi_tools_limit=True on GoogleSearchTool to combine it with function tools. But that flag only covers Search and VertexAiSearch; it’s not a universal escape hatch. You still can’t build one agent that searches Google and runs code.
The fix: create specialized single-tool agents, then wrap them with AgentTool(agent=target_agent) so a parent agent can call them as tools. Unix philosophy: do one thing well. The constraint enforces composability.
Gotcha 2: Sub-agent context loss. When a root agent delegates to a sub-agent, all subsequent user input goes directly to the sub-agent. The root agent loses the broader conversation context. Design for this: sub-agents should be self-contained for their specific task, and you should store any needed context in session state before delegation.
Gotcha 3: ParallelAgent race conditions. Concurrent agents share session state. Without unique output_key values, agents silently overwrite each other’s results. Always assign explicit, distinct keys to every parallel sub-agent.
Gotcha 4: Function tool schema quality. ADK inspects your function’s full signature (name, docstring, parameters, type hints, and default values) to generate the schema the LLM uses. The docstring becomes the tool description and is the most human-readable part, but sparse type hints or ambiguous parameter names will produce an equally weak schema. Write docstrings like you’re writing a contract, and pair them with explicit type hints. Both matter.
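A side-by-side contrast makes the point. Both functions below are illustrative examples, not from ADK’s docs: the first gives schema generation almost nothing to work with, the second gives the LLM a real contract. The inspect calls at the end show the raw material any signature-based introspection sees:

```python
import inspect
from typing import get_type_hints

# Weak tool: no type hints, a one-word description. The generated
# schema will be equally vague.
def lookup(x):
    """Lookup."""
    ...

# Strong tool: name, docstring, hints, and defaults all feed the schema.
def get_exchange_rate(base_currency: str, target_currency: str = "USD") -> float:
    """Return the current exchange rate from base_currency to target_currency.

    Args:
        base_currency: ISO 4217 code of the currency to convert from, e.g. "EUR".
        target_currency: ISO 4217 code to convert to; defaults to "USD".
    """
    ...

# What introspection has to work with in each case:
print(get_type_hints(lookup))             # nothing
print(get_type_hints(get_exchange_rate))  # full parameter and return types
print(inspect.signature(get_exchange_rate).parameters["target_currency"].default)
```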
Where ADK Goes from Here
ADK’s Agent-to-Agent (A2A) protocol enables ADK agents to communicate with agents built in LangGraph, CrewAI, and other frameworks. That raises an uncomfortable question for anyone agonizing over framework selection: if cross-framework interoperability becomes standard, is “which framework should I pick?” even the right question? The smarter bet might be using the best tool for each layer rather than going all-in on a single framework.
The developers who get the most out of ADK won’t be the ones who memorize all eight design patterns. They’ll be the ones who understand that session.state is the real foundation, and who design their agent contracts around clean state boundaries from the start. Every gotcha in this tutorial traces back to state management: race conditions, context loss, tool isolation. Master the whiteboard, and the patterns follow. If you prefer TypeScript, our ADK TypeScript tutorial covers the same concepts in a type-safe environment.