Building AI Agents with Google ADK: A Practical Guide
Where the wand chooses the wizard, and the AI chooses the right tool for the job.
If you’ve been following the AI agent space, you’ll know that we’re in the midst of a paradigm shift. We’re moving away from simple chatbots that respond to prompts, towards sophisticated agents that can reason, use tools, and collaborate with other agents to accomplish complex tasks.
Google’s Agent Development Kit (ADK) is their answer to this emerging landscape. In this article, I’ll walk you through building a whimsical yet technically robust project: Ollivanders Wand Shop, a multi-agent customer service system that handles product enquiries and technical support for magical wands. Best of all, the project uses the brand-new TypeScript version of the Agent Development Kit (ADK)!
Who is Ollivander? Ollivander is a wizard who owns a wand shop in Diagon Alley, London. He is known for crafting and selling magical wands tailored to the individual needs of each customer.
Along the way, we’ll explore vector search, embeddings, tool creation, and the various patterns for orchestrating multiple agents. Whether you’re new to AI agents or looking to deepen your understanding, there’s something here for you.
This article is an enhanced version of a previous piece that I wrote on Routing with LLMs.
What We’re Building
Our Ollivanders Wand Shop helpdesk consists of:
- A root agent that acts as the front desk coordinator
- A product expert agent that handles enquiries about wand woods, cores, and prices
- A tech support agent that diagnoses wand malfunctions using semantic search
- A LibSQL database with vector search capabilities for finding relevant troubleshooting guides
- Google’s Gemini embeddings for semantic understanding of customer complaints
The result? A customer can walk in (metaphorically speaking) and ask “my wand is making a funny sound” and receive an accurate diagnosis based on semantic similarity to known issues in our technical manuals. Or they can enquire about wand prices or wood types.
Setting the Stage: The Database Layer
Before we can build intelligent agents, we need data for them to work with. Our database uses LibSQL - a SQLite-compatible database with native vector search support. This is particularly powerful for AI applications where semantic similarity matters more than exact keyword matching.
The Database Connection
Here’s something interesting that I encountered: when using ADK’s dev tools, native Node.js modules can struggle to resolve correctly due to how the bundler works. The solution? Force module resolution from the project’s node_modules:
import { createRequire } from 'module';
import path from 'path';
const require = createRequire(path.join(process.cwd(), 'node_modules'));
const { createClient } = require('@libsql/client');
This pattern uses createRequire to build a require function that resolves modules relative to your project root, bypassing bundler issues with native bindings.
Storing Wands and Technical Manuals
Our database stores two types of data:
- Wands - straightforward product catalogue with name, wood, core, length, price, and description
- Technical Manuals - troubleshooting guides with symptoms, diagnoses, solutions, and crucially, vector embeddings
The vector embeddings are where the magic happens. When a customer describes a problem in their own words, we can find semantically similar issues even if they don’t use the exact terminology from our manual.
export async function insertTechManual(
  manual: Omit<TechManual, 'id'>,
  embedding: Float32Array
): Promise<void> {
  const d = getDb();
  const embeddingVector = `[${Array.from(embedding).join(',')}]`;
  await d.execute({
    sql: `INSERT INTO tech_manuals (symptom, diagnosis, solution, embedding)
          VALUES (?, ?, ?, vector32(?))`,
    args: [manual.symptom, manual.diagnosis, manual.solution, embeddingVector],
  });
}
The vector32() function is LibSQL’s way of storing 32-bit floating-point vectors. Our embeddings are 3072-dimensional - that’s 3072 numbers representing the semantic meaning of each symptom description.
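As a side note, the text form that vector32() accepts is simply a JSON-style array of numbers. Here is a minimal sketch of the round-trip between a Float32Array and that text format; the parse helper is my own illustration, not part of the project:

```typescript
// Serialize a Float32Array into the '[1,2,3]' text form that vector32() accepts.
function toVectorText(v: Float32Array): string {
  return `[${Array.from(v).join(',')}]`;
}

// Parse it back into a Float32Array (hypothetical helper, for illustration only).
function fromVectorText(s: string): Float32Array {
  return new Float32Array(JSON.parse(s) as number[]);
}

const original = new Float32Array([0.25, -1.5, 3]);
const text = toVectorText(original);
console.log(text, fromVectorText(text)[2]); // [0.25,-1.5,3] 3
```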
Vector Search in Action
When a customer reports a problem, we convert their description to an embedding and find the most similar entries:
export async function vectorSearch(
  queryEmbedding: Float32Array,
  limit: number = 3
): Promise<TechManual[]> {
  const d = getDb();
  const embeddingVector = `[${Array.from(queryEmbedding).join(',')}]`;
  const result = await d.execute({
    sql: `
      SELECT tech_manuals.id, tech_manuals.symptom,
             tech_manuals.diagnosis, tech_manuals.solution
      FROM vector_top_k('tech_manuals_embedding_idx', vector32(?), ?) AS v
      JOIN tech_manuals ON tech_manuals.rowid = v.id
    `,
    args: [embeddingVector, limit],
  });
  return result.rows.map(row => ({
    id: row.id as number,
    symptom: row.symptom as string,
    diagnosis: row.diagnosis as string,
    solution: row.solution as string,
  }));
}
The vector_top_k function performs the similarity search, returning the k most similar entries based on vector distance. This is dramatically more powerful than keyword matching - a customer saying “my wand is making a funny sound” will match our manual entry about “wand making humming buzzing vibrating sounds” despite having almost no words in common.
Generating Embeddings with Gemini
Of course, for these queries to work, we need to embed the “manuals” in the first place. Google’s gemini-embedding-001 model converts text into those 3072-dimensional vectors I mentioned. The implementation is refreshingly straightforward:
import { GoogleGenAI } from '@google/genai';
const ai = new GoogleGenAI({});
const EMBEDDING_MODEL = 'gemini-embedding-001';
export async function generateEmbedding(text: string): Promise<Float32Array> {
  const response = await ai.models.embedContent({
    model: EMBEDDING_MODEL,
    contents: text,
  });
  const values = response.embeddings?.[0]?.values;
  if (!values) {
    throw new Error('No embedding values returned');
  }
  return new Float32Array(values);
}
What’s happening under the hood? The embedding model has been trained to place semantically similar text close together in vector space. “Wand shooting sparks” and “wand emitting random sparks unexpectedly” end up near each other, whilst “wand feels cold” ends up in a completely different region of that 3072-dimensional space.
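To build intuition for “close together in vector space”, here is a toy cosine-similarity check in plain TypeScript. The two-dimensional vectors are made-up stand-ins for real 3072-dimensional embeddings:

```typescript
// Cosine similarity: 1 means pointing the same way, 0 means unrelated directions.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pretend these are embeddings of three symptom descriptions.
const funnySound = [0.9, 0.1];   // "wand making a funny sound"
const humming    = [0.85, 0.2];  // "wand humming buzzing vibrating"
const coldWand   = [0.05, 0.95]; // "wand feels cold"

// The "funny sound" query sits far closer to "humming" than to "cold".
console.log(cosine(funnySound, humming) > cosine(funnySound, coldWand)); // true
```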
Seeding the Database: Our Magical Inventory
Before our agents can help customers, we need to populate the database. Our seed script creates a delightful catalogue of wands and a comprehensive troubleshooting manual.
The Wand Catalogue
const wandCatalogue = [
  {
    name: 'Elder Wand',
    core: 'Thestral tail hair',
    wood: 'Elder',
    length: '15 inches',
    price: 500,
    description: 'The most powerful wand in existence. Extremely selective.',
  },
  {
    name: 'Holly Phoenix',
    core: 'Phoenix feather',
    wood: 'Holly',
    length: '11 inches',
    price: 35,
    description: 'Excellent for defensive magic. Chose Harry Potter.',
  },
  // ... more wands
];
The Technical Manuals
This is where things get interesting. Each manual entry describes a symptom in natural language:
const techManuals = [
  {
    symptom: 'wand shooting random sparks unexpectedly',
    diagnosis: 'Core instability due to moisture exposure or magical overload.',
    solution: 'Keep wand in dry environment. Perform a core stabilization charm or visit Ollivanders for realignment.',
  },
  {
    symptom: 'wand backfiring spells reversing direction',
    diagnosis: 'Weakened wand-wizard bond, often from lack of use or emotional disconnect.',
    solution: 'Spend quality time with your wand. Practice simple spells daily to rebuild trust and connection.',
  },
  {
    symptom: 'wand making humming buzzing vibrating sounds',
    diagnosis: 'Core resonance issue, often occurs when wand senses nearby magical interference.',
    solution: 'Move away from other magical objects. If persistent, the core may need recalibration.',
  },
  // ... more manuals
];
The seeding process generates embeddings for each symptom and stores everything:
async function seed() {
  console.log('Initializing database...');
  await initDatabase();

  const existingWands = await getWandCount();
  if (existingWands > 0) {
    console.log(`Database already seeded with ${existingWands} wands. Skipping.`);
    return;
  }

  console.log('Seeding wands...');
  for (const wand of wandCatalogue) {
    await insertWand(wand);
    console.log(`  Added: ${wand.name}`);
  }

  console.log('Generating embeddings and seeding tech manuals...');
  for (const manual of techManuals) {
    const embedding = await generateEmbedding(manual.symptom);
    await insertTechManual(manual, embedding);
  }

  console.log('Database seeding complete!');
}
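One practical note: the seed loop calls the embedding API once per manual, and on free tiers it is plausible you'll hit rate limits. A small retry-with-backoff wrapper can absorb transient failures. This helper is my own addition, not part of the project's seed script:

```typescript
// Retry an async operation with exponential backoff (hypothetical helper,
// not part of the project). Waits baseDelayMs, 2x, 4x, ... between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts: surface the error
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw new Error('unreachable');
}

// Inside the seeding loop it would read:
// const embedding = await withRetry(() => generateEmbedding(manual.symptom));
```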
Building the Agents: Where ADK Shines
Now we arrive at the heart of the project: the agent architecture. Google’s ADK provides several primitives for building agents, and understanding when to use each is crucial. Let’s dive in!
Creating Tools with FunctionTool
Tools are how agents interact with the outside world. In ADK, you create them using FunctionTool with Zod schemas for parameter validation:
import { FunctionTool } from '@google/adk';
import { z } from 'zod';
const getWandInfo = new FunctionTool({
  name: 'get_wand_info',
  description: 'Searches the wand database by wood type, core material, or wand name.',
  parameters: z.object({
    query: z
      .string()
      .describe(
        'Search term: wand name, wood type, or core material (e.g., "phoenix feather", "elder", "holly")',
      ),
  }),
  execute: async ({ query }) => {
    console.log(`--- Tool: get_wand_info called with query: ${query} ---`);
    const matches = await searchWands(query);
    if (matches.length > 0) {
      return {
        found: true,
        count: matches.length,
        wands: matches.map((w) => ({
          name: w.name,
          wood: w.wood,
          core: w.core,
          length: w.length,
          price: `${w.price} Galleons`,
          description: w.description,
        })),
      };
    }
    const allWands = await getAllWands();
    const allWoods = [...new Set(allWands.map((w) => w.wood))].join(', ');
    const allCores = [...new Set(allWands.map((w) => w.core))].join(', ');
    return {
      found: false,
      message: `No wands found matching "${query}".`,
      availableWoods: allWoods,
      availableCores: allCores,
    };
  },
});
A few things to note:
- The description matters enormously - the LLM uses this to decide when to call the tool (I wrote a separate piece on this: The Importance of Precise Function Declarations for LLMs)
- Zod provides type safety and generates the schema the LLM needs to understand the parameters
- Return structured data - the LLM will interpret and present this to the user
The Diagnostic Tool with Vector Search
Our tech support tool is more sophisticated, using embeddings and vector search:
const diagnoseIssue = new FunctionTool({
  name: 'diagnose_issue',
  description: 'Diagnoses wand malfunctions using semantic search on technical manuals.',
  parameters: z.object({
    symptom: z
      .string()
      .describe(
        'Description of the wand problem (e.g., "my wand shoots sparks randomly", "spells are backfiring")',
      ),
  }),
  execute: async ({ symptom }) => {
    console.log(`--- Tool: diagnose_issue called with symptom: ${symptom} ---`);
    try {
      const queryEmbedding = await generateEmbedding(symptom);
      const results = await vectorSearch(queryEmbedding, 1);
      const best = results[0];
      if (best) {
        return {
          matchedSymptom: best.symptom,
          diagnosis: best.diagnosis,
          solution: best.solution,
        };
      }
      return {
        diagnosis: 'No matching issue found in our technical manuals.',
        solution: 'Please visit Ollivanders for an in-person consultation with our wand specialists.',
      };
    } catch (error) {
      console.error('Vector search error:', error);
      return {
        diagnosis: 'Unable to search technical manuals.',
        solution: 'Please visit Ollivanders for assistance.',
      };
    }
  },
});
This is RAG (Retrieval-Augmented Generation) in action. Instead of relying solely on the LLM’s training data, we’re augmenting it with our specific knowledge base.
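Stripped of its details, the tool follows the classic retrieve-then-generate shape: embed the query, fetch grounding facts, and hand them to the LLM. The stub functions below are illustrative placeholders standing in for the project's real embedding and vector-search calls:

```typescript
type Manual = { symptom: string; diagnosis: string; solution: string };

// Stub: the real code calls gemini-embedding-001. This just hashes characters
// into a tiny vector so the sketch stays self-contained.
async function embed(text: string): Promise<Float32Array> {
  const v = new Float32Array(4);
  for (let i = 0; i < text.length; i++) v[i % 4] += text.charCodeAt(i);
  return v;
}

// Stub: the real code queries LibSQL's vector index. Canned result here.
async function retrieve(_query: Float32Array): Promise<Manual[]> {
  return [{ symptom: 'wand humming', diagnosis: 'core resonance', solution: 'recalibrate' }];
}

// The RAG skeleton: embed → retrieve → return grounding facts for the LLM.
async function diagnose(symptom: string): Promise<Manual | null> {
  const results = await retrieve(await embed(symptom));
  return results[0] ?? null;
}

diagnose('my wand is humming').then((m) => console.log(m?.diagnosis)); // core resonance
```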
Multi-Agent Architecture: Understanding the Options
Here’s where ADK offers several patterns, and choosing the right one matters. Let me break down the key options:
Option 1: subAgents (Developer-Controlled Orchestration)
The subAgents property is designed for workflow orchestration where you control the flow:
- SequentialAgent - runs agents in order (A → B → C)
- ParallelAgent - runs agents simultaneously
- LoopAgent - repeats until a condition is met
These are powerful for deterministic pipelines, but they’re not ideal when you want the LLM to decide which specialist to consult.
Option 2: AgentTool (LLM-Controlled Delegation)
When you want the LLM to decide which agent to use based on the conversation, wrap agents as tools:
import { AgentTool } from '@google/adk';
const productExpertTool = new AgentTool({ agent: productExpert });
const techSupportTool = new AgentTool({ agent: techSupport });
The agent is now callable like any other tool. The root agent’s LLM decides when to invoke it, and control automatically returns to the parent when done.
Our Architecture
I ended up using AgentTool because I wanted the root agent to intelligently route customers to the right specialist:
// --- Specialist Agents ---
const productExpert = new LlmAgent({
  name: 'product_expert',
  model: 'gemini-2.5-flash',
  description: 'Specialist for wand product information - woods, cores, features, and recommendations.',
  instruction: `You are the Product Expert at Ollivanders Wand Shop.
Use the 'get_wand_info' tool to answer questions about wands including woods, cores, magical properties, and prices.
Be knowledgeable and slightly mysterious, as befits a wand shop assistant.`,
  tools: [getWandInfo],
});

const techSupport = new LlmAgent({
  name: 'tech_support',
  model: 'gemini-2.5-flash',
  description: 'Specialist for wand malfunctions, troubleshooting, and repair advice.',
  instruction: `You are the Technical Support wizard at Ollivanders Wand Shop.
Use the 'diagnose_issue' tool to help customers with wand problems and malfunctions.
Be reassuring and helpful - wand troubles can be stressful for wizards.`,
  tools: [diagnoseIssue],
});
// --- Wrap as AgentTools ---
const productExpertTool = new AgentTool({ agent: productExpert });
const techSupportTool = new AgentTool({ agent: techSupport });
It has to be noted that when I tried the subAgents pattern, the Agent SDK had issues routing back from a sub-agent to the root agent, and the application crashed. I am not sure why, but I have not run into this issue while using AgentTool.
The Root Agent: Tying It All Together
export const rootAgent = new LlmAgent({
  name: 'ollivanders_helpdesk',
  model: 'gemini-2.5-flash',
  description: 'Ollivanders Wand Shop customer service coordinator.',
  instruction: `You are the front desk assistant at Ollivanders Wand Shop.
Your job is to help customers by using the right specialist:
- Questions about wand types, woods, cores, or features → use 'product_expert' tool
- Wand malfunctions, problems, or troubleshooting → use 'tech_support' tool
- Unrelated questions → politely explain you only assist with wand-related inquiries
If state.hasGreeted is not true, include a brief warm welcome in your first response and set state.hasGreeted to true. Otherwise skip the greeting and help directly. Remember: "The wand chooses the wizard."`,
  tools: [productExpertTool, techSupportTool],
});
Notice the greeting logic - we use session state to ensure customers get a warm welcome on their first message, but subsequent messages skip straight to helping them.
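ADK manages session state for you, but the mechanic is easy to picture: a per-session key-value store consulted on every turn. A rough, framework-free illustration of the hasGreeted flag's behaviour (this is not ADK's actual state API):

```typescript
// A per-session key-value store, roughly how hasGreeted-style flags behave.
const sessionState = new Map<string, Record<string, unknown>>();

function respond(sessionId: string, message: string): string {
  const state = sessionState.get(sessionId) ?? {};
  const greeting = state.hasGreeted ? '' : 'Welcome to Ollivanders! ';
  state.hasGreeted = true; // greet once, then skip on later turns
  sessionState.set(sessionId, state);
  return `${greeting}Handling: ${message}`;
}

console.log(respond('s1', 'wand woods?')); // Welcome to Ollivanders! Handling: wand woods?
console.log(respond('s1', 'prices?'));     // Handling: prices?
```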
Workflow Orchestrators: When to Use Each
Whilst I used AgentTool for the project, let’s explore when you’d use the other patterns:
SequentialAgent - The Pipeline
This setup is best used when each step must complete before the next begins, and output flows forward.
Real-world examples:
- Content pipeline: Research → Draft → Edit → Format → Publish
- Code review: Parse → Lint → Security scan → Generate report
- Order processing: Validate → Check stock → Process payment → Ship
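Conceptually, a sequential pipeline is just an ordered fold over async steps, with each step consuming the previous one's output. A framework-free sketch of the idea (this mimics the pattern, not ADK's actual SequentialAgent API):

```typescript
type Step = (input: string) => Promise<string>;

// Run steps in order, feeding each one's output into the next.
async function runSequential(steps: Step[], input: string): Promise<string> {
  let current = input;
  for (const step of steps) {
    current = await step(current);
  }
  return current;
}

// A toy Research → Draft → Edit pipeline.
const research: Step = async (topic) => `notes on ${topic}`;
const draft: Step = async (notes) => `draft from ${notes}`;
const edit: Step = async (text) => text.toUpperCase();

runSequential([research, draft, edit], 'wands').then(console.log);
// DRAFT FROM NOTES ON WANDS
```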
ParallelAgent - The Fan-Out
This setup is especially beneficial for independent tasks which can run simultaneously to save time.
Real-world examples:
- Multi-source research: Query news + academic papers + social media → merge findings
- Competitive analysis: Scrape competitor A + B + C → aggregate comparison
- System health check: Check API + Check DB + Check cache → combined status report
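In plain TypeScript, the fan-out-and-merge idea is simply Promise.all followed by an aggregation step. Again, this is a concept sketch rather than ADK's ParallelAgent API:

```typescript
// Query several independent sources at once, then merge the results.
async function healthCheck(): Promise<string> {
  const checkApi = async () => 'api: ok';
  const checkDb = async () => 'db: ok';
  const checkCache = async () => 'cache: ok';

  // All three run concurrently; total time ≈ the slowest check, not the sum.
  const results = await Promise.all([checkApi(), checkDb(), checkCache()]);
  return results.join(' | ');
}

healthCheck().then(console.log); // api: ok | db: ok | cache: ok
```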
LoopAgent - The Iterator
This last setup is great for when you need to refine results until a quality threshold is met.
Real-world examples:
- Code generation with tests: Generate code → Run tests → If fail, analyse errors and retry
- Content improvement: Write draft → Critique → Revise → repeat until quality score passes
- Negotiation: Make offer → Evaluate response → Adjust → repeat until agreement
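And the iterate-until-good-enough loop, again as a framework-free sketch rather than ADK's LoopAgent API:

```typescript
// Refine a draft until a quality check passes or we hit the iteration cap.
async function refineUntil(
  draft: string,
  revise: (d: string) => Promise<string>,
  isGoodEnough: (d: string) => boolean,
  maxIterations = 5,
): Promise<string> {
  let current = draft;
  for (let i = 0; i < maxIterations && !isGoodEnough(current); i++) {
    current = await revise(current);
  }
  return current;
}

// Toy example: "quality" is simply length; each revision adds a character.
refineUntil('ok', async (d) => d + '!', (d) => d.length >= 5).then(console.log);
// ok!!!
```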
Key Learning Points
Let me distil the core lessons from this project:
1. Embeddings Enable Semantic Understanding
Traditional keyword search fails when users describe problems in their own words. Vector embeddings capture meaning, allowing “my wand makes a weird noise” to match “wand making humming buzzing vibrating sounds” despite minimal word overlap.
2. Tool Descriptions Are Critical
The LLM decides which tool to use based on descriptions. Be precise. If your product expert’s description doesn’t mention “prices”, it might refuse to answer pricing questions even when the data is available. (This was in fact the case in the first version of this project, and a slight adjustment was required to ensure the product info agent also discusses pricing.)
3. AgentTool vs subAgents: Know the Difference
- AgentTool: LLM chooses when to delegate. Control returns automatically. Use for intelligent routing.
- subAgents with orchestrators: Developer controls the flow. Use for deterministic pipelines.
4. Session State Enables Continuity
Use state to track conversation context - whether you’ve greeted the user, their preferences, or accumulated information across turns.
5. Native Modules and Bundlers Don’t Always Play Nice
When using tools like ADK’s devtools that bundle your code, native Node.js modules may fail to resolve. The createRequire pattern can force correct resolution from your project’s node_modules.
6. Callbacks Provide Hooks for Cross-Cutting Concerns
ADK’s callback system (beforeAgentCallback, beforeToolCallback, etc.) lets you inject logic for rate limiting, validation, state initialisation, and more - without cluttering your main agent logic.
Running the Project
You can access this project on GitHub.
To try it out yourself, follow the steps in the README.
You’ll be greeted by the Ollivanders helpdesk, ready to help you find the perfect wand or diagnose why your current one is misbehaving.
Wrapping Up
Google’s Agent Development Kit provides a thoughtful set of primitives for building AI agents. The combination of:
- FunctionTool for grounding agents in real capabilities
- LlmAgent for creating specialised personas
- AgentTool for LLM-controlled delegation
- Workflow orchestrators for developer-controlled pipelines
These all give you the flexibility to build everything from simple chatbots to sophisticated multi-agent systems.
Our Ollivanders project demonstrates these concepts in a memorable context. The principles, however, transfer directly to real-world applications: customer service systems, research assistants, content pipelines, and beyond.
The wand chooses the wizard. And with ADK, you can build agents that choose the right tool for every job.