Google Agent Development Kit Java Example
Building intelligent, tool-enabled AI systems in Java requires more than sending prompts to a language model. A well-structured implementation must manage user sessions, maintain context, integrate external tools, and coordinate execution clearly and reliably.
This article explains how to build a Java-based AI agent using the Google Agent Development Kit (ADK) and the Gemini API. It demonstrates how to define an agent, register function tools, create and manage user sessions, and execute conversations through an in-memory runner. The example also shows how the agent can call external functions to retrieve dynamic data, making the solution practical and extensible.
1. What Is the Agent Development Kit (ADK)?
The Google Agent Development Kit (ADK) is an open-source, code-first framework for building, testing, and deploying AI agents and multi-agent systems. It provides a structured foundation for creating agents that can reason with large language models, interact with external tools, manage session state, and coordinate complex workflows.
ADK includes:
- LlmAgent (Agent): This is the reasoning engine of the system. It uses large language models such as Gemini to interpret input, generate responses, plan actions, and decide when to call tools.
- Workflow agents: These are deterministic controllers that manage execution flow without using a language model. Examples include SequentialAgent, ParallelAgent, and LoopAgent.
- Tools: Tools are standardized capabilities that allow agents to interact with external systems. A tool can be a Java method, a Python function, or an API call. They enable agents to retrieve live data, perform computations, or trigger external processes.
- Runner: The Runner acts as the orchestration engine. It coordinates communication between agents, user sessions, and tools, ensuring that each request is executed within the correct context.
- Session management: ADK maintains conversational history and working memory during a session. This allows agents to preserve context across multiple interactions and respond consistently.
- Streaming Support: Built-in support for bidirectional streaming enables real-time and multimodal interactions, including audio and video.
- Development UI: A browser-based interface allows developers to inspect events, monitor state changes, and debug agent behavior in real time.
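As a quick illustration of the workflow-agent concept, the sketch below chains two hypothetical LLM agents into a deterministic pipeline with SequentialAgent. The agent names and instructions here are invented for the example; only the builder pattern itself mirrors the ADK API:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.agents.SequentialAgent;

public class WorkoutPipeline {

    public static SequentialAgent build() {
        // First stage: draft a workout plan (hypothetical agent).
        LlmAgent planner = LlmAgent.builder()
                .name("plan-drafter")
                .model("gemini-2.5-flash")
                .instruction("Draft a short workout plan for the user's goal.")
                .build();

        // Second stage: review and tighten the draft (hypothetical agent).
        LlmAgent reviewer = LlmAgent.builder()
                .name("plan-reviewer")
                .model("gemini-2.5-flash")
                .instruction("Review the drafted plan and flag unsafe advice.")
                .build();

        // SequentialAgent runs its sub-agents in order, with no LLM routing.
        return SequentialAgent.builder()
                .name("workout-pipeline")
                .subAgents(planner, reviewer)
                .build();
    }
}
```

Because the controller is deterministic, the planner always runs before the reviewer, regardless of the content of the conversation.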
2. Project Setup
To begin, create a standard Maven-based Java project. Then add the following dependencies to your pom.xml file:
<dependencies>
    <dependency>
        <groupId>com.google.adk</groupId>
        <artifactId>google-adk</artifactId>
        <version>0.5.0</version>
    </dependency>
    <dependency>
        <groupId>commons-logging</groupId>
        <artifactId>commons-logging</artifactId>
        <version>1.3.5</version>
    </dependency>
</dependencies>
The google-adk dependency provides the core classes required to build and run agents. This includes components such as LlmAgent, workflow agents, tools, session management, and the runner that orchestrates execution. It contains the runtime functionality necessary to create and operate AI agents in a Java application.
The commons-logging dependency provides a lightweight logging abstraction used internally by the framework. It ensures that logging output is handled correctly and can integrate with different logging implementations if needed.
Configuring Access to the LLM
Before running the application, you must configure authentication so the agent can connect to a Gemini model. The LlmAgent communicates with Google’s Generative AI service, and this requires a valid API key.
Set the following environment variables in your system:
export GEMINI_API_KEY=<your-api-key>
export GOOGLE_GENAI_USE_VERTEXAI=FALSE
You can generate an API key from the Google Cloud Console. After creating the key, make sure the Gemini (Generative Language) API is enabled for your project. Once these environment variables are set and your project is properly configured, the agent will be able to connect to the LLM and process user requests successfully.
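A missing API key typically surfaces only later as an opaque connection error, so it can help to fail fast at startup. The small helper below is plain Java (not part of ADK) and simply verifies that the variable is present before the agent is initialized:

```java
public class EnvCheck {

    // Returns the value of a required environment variable,
    // or throws with a clear message if it is missing or blank.
    static String requireEnv(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException(
                    "Missing required environment variable: " + name);
        }
        return value;
    }

    public static void main(String[] args) {
        requireEnv("GEMINI_API_KEY");
        System.out.println("Gemini credentials found.");
    }
}
```

Calling `EnvCheck.requireEnv("GEMINI_API_KEY")` at the top of your `main` method turns a silent misconfiguration into an immediate, readable failure.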
3. Defining the LLM Agent
In this section, we build a Java agent that combines:
- A root LLM agent
- A custom tool that fetches a workout plan based on a given fitness focus area
This demonstrates how to let an agent make decisions and call external services at runtime.
Agent Configuration
We start by creating a Java class that configures the AI agent and defines its core behavior. This includes setting the agent’s purpose, selecting the language model it will use, and registering any tools it can access. In this example, the agent is designed to provide fitness guidance and is connected to an external tool that can fetch structured workout plans when needed.
import com.google.adk.agents.BaseAgent;
import com.google.adk.agents.LlmAgent;
import com.google.adk.tools.BaseTool;

import java.util.ArrayList;

public class FitnessAgent {

    public static BaseAgent ROOT_AGENT = initializeAgent();

    public static BaseAgent initializeAgent() {
        ArrayList<BaseTool> tools = configureTools();
        return LlmAgent.builder()
                .name("fitness-user")
                .description("An AI agent that provides fitness guidance and workout recommendations.")
                .model("gemini-2.5-flash")
                .instruction("""
                        You assist users with fitness and health goals.
                        You suggest workouts, explain exercises clearly,
                        and use tools when you need up-to-date workout data.
                        """)
                .tools(tools)
                .build();
    }

    private static ArrayList<BaseTool> configureTools() {
        ArrayList<BaseTool> list = new ArrayList<>();
        list.add(WorkoutApiTool.create());
        return list;
    }
}
This configuration class defines the core agent and its behavior. The initializeAgent method sets up a Gemini-based LLM agent with instructions describing how it should respond to user queries. The configureTools method adds the custom WorkoutApiTool, allowing the agent to fetch workout recommendations dynamically.
The agent itself is created using the LlmAgent.builder() pattern. This builder configuration defines several important properties. The name identifies the agent internally. The description provides metadata explaining the agent’s purpose. The model specifies which large language model the agent will use to generate responses—in this case, a Gemini model. The instruction field defines the system-level guidance that controls how the agent behaves. These instructions shape the agent’s tone, responsibilities, and decision-making process, including when it should use tools.
The .tools(tools) method attaches the configured tools to the agent, enabling it to call external functions when required. Finally, the .build() method constructs and returns the fully configured agent instance.
Interactive Session Runner
The next step in building the agent is to provide a way for it to interact. In this section, we use the InMemoryRunner provided by ADK to create and manage sessions while handling user input through a command-line interface. The runner acts as the execution engine of the application. It initializes the agent, creates a session for a specific user, and continuously processes user input by sending it to the agent and returning the generated responses.
import com.google.adk.events.Event;
import com.google.adk.runner.InMemoryRunner;
import com.google.genai.types.Content;
import com.google.genai.types.Part;
import io.reactivex.rxjava3.core.Flowable;

import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class FitnessAdvisorAgentRunner {

    private static final String USER_IDENTIFIER = "fitness-session";
    // Passed as the app name; it matches the agent name configured in FitnessAgent.
    private static final String SESSION_NAME = "fitness-user";

    public static void main(String[] args) {
        InMemoryRunner runner = new InMemoryRunner(FitnessAgent.ROOT_AGENT);

        // createSession(appName, userId) is asynchronous; block until it resolves.
        var session = runner.sessionService()
                .createSession(SESSION_NAME, USER_IDENTIFIER)
                .blockingGet();
        String sessionId = session.id();

        try (Scanner scanner = new Scanner(System.in, StandardCharsets.UTF_8)) {
            while (true) {
                System.out.print("\nYou > ");
                String userInput = scanner.nextLine();
                if ("exit".equalsIgnoreCase(userInput)) {
                    break;
                }
                var userContent = Content.fromParts(Part.fromText(userInput));
                Flowable<Event> events = runner.runAsync(USER_IDENTIFIER, sessionId, userContent);
                System.out.print("\nAgent > ");
                events.blockingForEach(event -> System.out.println(event.stringifyContent()));
            }
        }
    }
}
The above class provides a command-line interface for interacting with the AI agent. It initializes an InMemoryRunner with the configured agent, creates a session for a specific user, and retrieves its session ID for maintaining context. Inside a loop, it reads user input, wraps it in a Content object, and sends it asynchronously to the agent using runAsync. The resulting events are processed and printed to the console, allowing real-time responses while preserving conversation state until the user exits.
4. Custom External Tool
Tools allow the agent to access external data. The following tool simulates fetching workout plans from an external fitness service.
import com.google.adk.tools.Annotations.Schema;
import com.google.adk.tools.FunctionTool;

import java.util.Map;

public class WorkoutApiTool {

    @Schema(
            name = "fetchWorkoutPlan",
            description = "Fetches a workout plan based on a given fitness focus area"
    )
    public static Map<String, Object> fetchWorkoutPlan(String focusArea) {
        // Simulated external API response
        return Map.of(
                "focusArea", focusArea,
                "workout", "3 sets of squats, push-ups, and planks",
                "duration", "30 minutes"
        );
    }

    public static FunctionTool create() {
        return FunctionTool.create(WorkoutApiTool.class, "fetchWorkoutPlan");
    }
}
This tool exposes a function (fetchWorkoutPlan) that the agent can call. The @Schema annotation describes the function so the agent understands when and how to use it. In a real system, this method could call an external REST API; here, it returns mock data for clarity.
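To make the "real system" variant concrete, the sketch below shows how the tool body might delegate to an HTTP API using the JDK's built-in HttpClient. The endpoint URL is a made-up placeholder, and the response is returned as a raw string rather than parsed JSON to keep the example dependency-free:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.time.Duration;

public class RemoteWorkoutApi {

    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    // Builds the request URI, URL-encoding the user-supplied focus area.
    static URI buildRequestUri(String focusArea) {
        String encoded = URLEncoder.encode(focusArea, StandardCharsets.UTF_8);
        // Placeholder endpoint; replace with the real fitness service URL.
        return URI.create("https://fitness.example.com/workouts?focus=" + encoded);
    }

    // Fetches the workout plan as a raw JSON string.
    static String fetchWorkoutPlanRaw(String focusArea) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(buildRequestUri(focusArea))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException(
                    "Workout API returned status " + response.statusCode());
        }
        return response.body();
    }
}
```

Note that the focus area is URL-encoded before being placed in the query string, since it originates from free-form user input relayed by the model.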
5. Running the Application
Once all classes are in place, you can run the application locally.
Run the agent
mvn compile exec:java -Dexec.mainClass="com.jcg.example.FitnessAdvisorAgentRunner"
The application starts an interactive command-line session.
Sample Output
Below is an example of what interacting with the agent looks like:
You > Can you suggest a beginner workout plan for someone new to fitness?
Agent > Function Call: FunctionCall{id=Optional[adk-d84ca712-7c8e-4bfa-8a88-00d7f9dd6e43], args=Optional[{arg0=beginner}], name=Optional[fetchWorkoutPlan], partialArgs=Optional.empty, willContinue=Optional.empty}
Function Response: FunctionResponse{willContinue=Optional.empty, scheduling=Optional.empty, parts=Optional.empty, id=Optional[adk-d84ca712-7c8e-4bfa-8a88-00d7f9dd6e43], name=Optional[fetchWorkoutPlan], response=Optional[{workout=3 sets of squats, push-ups, and planks, duration=30 minutes, focusArea=beginner}]}
Here's a beginner-friendly workout plan for you:
**Workout Plan:**
* **Duration:** 30 minutes
* **Exercises:**
* Squats: 3 sets
* Push-ups: 3 sets
* Planks: 3 sets
Remember to focus on proper form for each exercise. If you're unsure about the form, there are many excellent resources online (videos, articles) that can guide you. Start with a weight or resistance level that allows you to complete each set with good form, and gradually increase as you get stronger.
When the user asks for a workout, the agent determines that external data is useful and calls the fetchWorkoutPlan tool. The response combines the tool output with natural language guidance, creating a helpful and conversational result.
6. Conclusion
In this article, we explored how to build a Java-based AI agent using the Google Agent Development Kit (ADK). We covered setting up the project, defining the agent’s behavior, connecting external tools for dynamic data retrieval, and handling user interactions through sessions using the InMemoryRunner. By following this approach, developers can create intelligent, context-aware agents capable of responding to user queries, executing tasks, and integrating with real-world data sources. This framework provides a structured, extensible foundation for building practical AI-driven applications in Java.
7. Download the Source Code
This article explored the Google Agent Development Kit (ADK) for Java.
You can download the full source code of this example here: Java Google agent development kit