The Coordinator Agent Pattern
Sequential pipelines run A then B. Parallel pipelines fan out and gather. Loop pipelines iterate until the output holds up. All three assume you already know the shape of the work.
Real users don’t hand you a neatly shaped request. One asks you to build a team. The next asks for stats on a single Pokémon. The third wants matchup advice against water types. Hardcoding one pipeline for all three forces every request through the same pipe, whether it fits or not.
The coordinator pattern hands that decision to an LLM. A router agent reads the request, inspects the description of each sub-agent, and delegates to the right one (or several). It’s the pattern that stitches all the previous patterns together.
What is the coordinator agent pattern?
A coordinator is an LlmAgent that treats other agents as callable tools. The sub-agents can be leaf agents, sequential pipelines, parallel fan-outs, loops, anything. The coordinator itself doesn’t execute the task. It reads the user’s request, looks at each tool’s description, and calls whichever one (or several) match.
Three moving parts:
- The coordinator. An `LlmAgent` with an instruction that tells it when to call which tool. Its job is routing.
- The sub-agents, wrapped as `AgentTool`s. Each one has a focused `description` the coordinator uses to decide. They can be simple leaf agents or composite pipelines.
- Session state. Sub-agents stash their results under `outputKey` names. The coordinator reads whichever keys landed and synthesises a final answer.
Coordinator vs loop
The loop agent also uses an LLM to make runtime decisions, so the two patterns look similar on paper. They’re solving different problems.
A loop runs the same set of sub-agents on repeat. The LLM’s job is deciding when to stop. PASS exits, FAIL goes around again. The pipeline shape is fixed. Only the iteration count varies. Loops are for polishing one output until it clears a bar.
A coordinator runs different sub-agents for different requests. The LLM’s job is deciding who runs at all. Team builder, research, matchup, or some combination. The pipeline shape varies per request. There’s no iteration. Coordinators are for dispatching across capabilities.
Put another way: loops are vertical (same pipeline, many passes). Coordinators are horizontal (many pipelines, one pass). You can combine them. A coordinator can delegate to a loop for requests that need refinement, and a loop can sit inside a sequential pipeline the coordinator routes to.
When does this beat a fixed pipeline?
Whenever the right pipeline depends on the request. A single-pattern system handles a narrow slice of requests well and fails awkwardly on the rest. A coordinator handles breadth by delegating, so each sub-pipeline stays focused.
Good fit when:
- Requests vary in shape. Build a team, look up one Pokémon, analyse a matchup. Different tasks, different tools, different pipelines.
- You want to compose patterns. A coordinator can delegate to a `SequentialAgent` for one request, a `ParallelAgent` for another, and a single leaf agent for a third.
- The routing logic is fuzzy. “Build me a ghost team and tell me about each one” needs two pipelines. A rules-based router would need regex acrobatics. An LLM reads the intent in one pass.
The project
We’re building a Pokémon Coordinator. Type a request, and the coordinator decides what to do:
- “Build me a spooky team” → sequential pipeline: pick 6 Pokémon, then analyse the team’s balance.
- “Tell me about Charizard” → parallel pipeline: fetch stats and evolution chain at the same time.
- “How would my fire team do against a water gym?” → single matchup agent.
- “Build me a dragon team and see how it fares against ice” → both the team builder AND the matchup analyser.
All grounded in PokéAPI.
Click Run to replay a real session for the prompt “Build a ghost team AND separately look up Eevee’s evolutions”. The coordinator emits two tool calls in the same turn: team_builder("a ghost team") and pokemon_research("Eevee's evolutions"). team_builder runs PokemonPickerAgent then TeamAnalyzerAgent sequentially. pokemon_research fires StatsAgent and EvolutionAgent in parallel. Both finish, the coordinator reads session state, and synthesises one unified response with both answers. About 16 seconds of model time end-to-end.
Getting this to work took a detour through ADK’s subAgents vs AgentTool mechanics. More on that below.
Setup
The package.json:
{
"name": "coordinator-agent",
"version": "1.0.0",
"type": "module",
"scripts": {
"start": "npx adk run --log_level ERROR agent.ts",
"web": "npx adk web agent.ts"
},
"dependencies": {
"@google/adk": "^0.5.0",
"zod": "^4.3.6"
}
}
Same two dependencies as the rest of the series. PokéAPI is free and needs no key; only the Gemini-powered agents need GOOGLE_API_KEY.
The code
agent.ts is longer than the previous posts, so we’ll walk through it piece by piece instead of pasting it wholesale.
Tools
Four thin wrappers around PokéAPI. Each one targets a single concern.
const searchPokemon = new FunctionTool({
name: "search_pokemon",
description:
"Search for Pokémon by name or type. For name-based search, returns details about that Pokémon. For type-based search, returns a list of Pokémon of that type.",
parameters: z.object({
query: z.string().describe("A Pokémon name (e.g. 'pikachu') or a type (e.g. 'fire', 'ghost')"),
}),
execute: async ({ query }) => {
// Try the type endpoint first; fall back to name lookup.
},
});
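The elided `execute` body boils down to deciding which endpoint to hit. The post's actual code tries the type endpoint and falls back to a name lookup at runtime; here's a minimal static sketch of that decision (the helper name and the hardcoded type list are mine, not the post's):

```typescript
// Hypothetical helper: pick which PokéAPI endpoint a query maps to.
// The 18 canonical type names are hardcoded; anything else is treated
// as a Pokémon name. The real execute still needs the fetch + fallback.
const POKEMON_TYPES = new Set([
  "normal", "fire", "water", "electric", "grass", "ice",
  "fighting", "poison", "ground", "flying", "psychic", "bug",
  "rock", "ghost", "dragon", "dark", "steel", "fairy",
]);

function pokeApiUrl(query: string): string {
  const q = query.trim().toLowerCase();
  return POKEMON_TYPES.has(q)
    ? `https://pokeapi.co/api/v2/type/${q}`
    : `https://pokeapi.co/api/v2/pokemon/${q}`;
}
```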
const getPokemonDetails = new FunctionTool({
name: "get_pokemon_details",
description: "Get full detailed information about a specific Pokémon: stats, abilities, moves, types, height, and weight.",
parameters: z.object({ name: z.string() }),
execute: async ({ name }) => { /* ... */ },
});
const getEvolutionChain = new FunctionTool({
name: "get_evolution_chain",
description: "Get the full evolution chain for a Pokémon, including evolution methods (level-up, stone, trade, etc.).",
parameters: z.object({ name: z.string() }),
execute: async ({ name }) => { /* ... */ },
});
const getTypeEffectiveness = new FunctionTool({
name: "get_type_effectiveness",
description: "Get type effectiveness data: what a type is strong against, weak against, and immune to.",
parameters: z.object({ type: z.string() }),
execute: async ({ type }) => { /* ... */ },
});
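For a sense of what one of these elided bodies does with the raw API response, here's a hedged sketch of the shaping step for `get_type_effectiveness`. The input field names match PokéAPI's `damage_relations` schema; the function name and output shape are mine:

```typescript
// Hypothetical shaping of PokéAPI's /type/{name} response into the
// compact summary the tool description promises. "Immune to" reads
// no_damage_from: types this type takes no damage from.
type NamedResource = { name: string; url: string };
type DamageRelations = {
  double_damage_to: NamedResource[];
  double_damage_from: NamedResource[];
  no_damage_from: NamedResource[];
};

function summarizeEffectiveness(type: string, rel: DamageRelations) {
  return {
    type,
    strongAgainst: rel.double_damage_to.map((t) => t.name),
    weakAgainst: rel.double_damage_from.map((t) => t.name),
    immuneTo: rel.no_damage_from.map((t) => t.name),
  };
}
```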
Four tools, five leaf agents that use them. No agent gets the full set; each leaf agent only gets the tools it needs, and only get_type_effectiveness serves two (the team analyser and the matchup analyser). That’s the same discipline as the sequential post: narrow tool access keeps the model from picking the wrong one.
Leaf agents
Five focused agents, one per capability.
const pokemonPickerAgent = new LlmAgent({
name: "PokemonPickerAgent",
model: "gemini-2.5-flash",
description: "Picks 6 Pokémon matching the user's described vibe or preference.",
instruction: `You are a Pokémon team selection expert. Your job is to pick exactly 6 Pokémon that match the user's described vibe, theme, or preferences...`,
tools: [searchPokemon],
outputKey: "team_picks",
});
const teamAnalyzerAgent = new LlmAgent({
name: "TeamAnalyzerAgent",
model: "gemini-2.5-flash",
description: "Analyzes a picked Pokémon team's type coverage, weaknesses, and overall balance.",
instruction: `You are a Pokémon team balance analyst. You receive a team of 6 Pokémon from the previous step in {team_picks}...`,
tools: [getTypeEffectiveness],
outputKey: "team_analysis",
});
const statsAgent = new LlmAgent({ /* get_pokemon_details → pokemon_stats */ });
const evolutionAgent = new LlmAgent({ /* get_evolution_chain → evolution_info */ });
const matchupAnalyzerAgent = new LlmAgent({ /* get_type_effectiveness → matchup_result */ });
Two things to flag.
The description field matters here in a way it didn’t in earlier posts. In sequential and parallel pipelines, the description was documentation. In a coordinator, the description is the routing signal. The coordinator LLM reads these descriptions to decide which agent fits the request. Write them for a reader who has no other context.
outputKey values are the hand-off contract. team_picks, team_analysis, pokemon_stats, evolution_info, and matchup_result. The coordinator doesn’t know in advance which of these will be populated. It reads whichever landed and composes the response from those.
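That contract is simple enough to sketch as a pure function. The key names come from the post; the flat state object is my assumption (ADK manages the real session state):

```typescript
// Sketch: gather whichever sub-agent outputs actually landed in
// session state. Keys for sub-agents that didn't run are simply absent.
const OUTPUT_KEYS = [
  "team_picks",
  "team_analysis",
  "pokemon_stats",
  "evolution_info",
  "matchup_result",
] as const;

function collectResults(state: Record<string, unknown>) {
  return Object.fromEntries(
    OUTPUT_KEYS.filter((k) => state[k] !== undefined).map((k) => [k, state[k]]),
  );
}
```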
Composite agents
This is where the previous patterns come back in. The coordinator’s sub-agents aren’t all leaf agents. Two of them are pipelines.
const teamBuilder = new SequentialAgent({
name: "team_builder",
description:
"Builds a Pokémon team sequentially — first picks 6 Pokémon matching the user's vibe, then analyzes the team's type coverage and balance.",
subAgents: [pokemonPickerAgent, teamAnalyzerAgent],
});
const pokemonResearch = new ParallelAgent({
name: "pokemon_research",
description:
"Researches Pokémon in parallel — fetches detailed stats/abilities and evolution chain info simultaneously.",
subAgents: [statsAgent, evolutionAgent],
});
Why the choice of pipeline for each?
- Team building is sequential. You can’t analyse a team you haven’t picked yet. Step B depends on Step A’s output (the picker writes `team_picks`, the analyser reads it).
- Research is parallel. Stats and evolution are independent PokéAPI calls. Running them in sequence doubles the latency for no benefit. `ParallelAgent` fires both at once.
Matchup analysis stays as a single leaf agent because it doesn’t fan out or chain.
From the coordinator’s perspective, these three things (a sequential pipeline, a parallel pipeline, and a leaf agent) all look the same. They each have a name, a description, and an outputKey (or a set of them). The coordinator delegates without caring what’s inside.
The coordinator
export const rootAgent = new LlmAgent({
name: "pokemon_coordinator",
model: "gemini-2.5-flash",
description:
"A smart Pokémon coordinator that analyzes user requests and delegates to specialized sub-agent teams.",
instruction: `You are a Pokémon coordinator. Your job is to call the right sub-agent tool(s) and then synthesise their results.
HARD RULES:
- Your FIRST action must be to call one or more of the sub-agent tools below.
- Do NOT write any text, preamble, or announcement before calling tools.
- If a request has multiple distinct asks (e.g. joined by "and" / "plus" / "also"), call EVERY matching tool in parallel. Do not pick just one.
Your available sub-agent tools:
1. **team_builder** — Call when the user wants to BUILD a Pokémon team.
2. **pokemon_research** — Call when the user wants to LEARN ABOUT specific Pokémon.
3. **MatchupAnalyzerAgent** — Call for BATTLE/MATCHUP analysis.
Multi-call examples:
- "Build me a ghost team and tell me about each one" → call team_builder AND pokemon_research (both, same turn).
- "Make a fire team and see how it fares against water types" → call team_builder AND MatchupAnalyzerAgent.
AFTER the tools return, synthesise their outputs into a single response. Sub-agents also write to session state under: team_picks, team_analysis, pokemon_stats, evolution_info, matchup_result.`,
tools: [
new AgentTool({ agent: teamBuilder }),
new AgentTool({ agent: pokemonResearch }),
new AgentTool({ agent: matchupAnalyzerAgent }),
],
});
Three things worth calling out.
The root is an LlmAgent. Previous posts put deterministic wrappers (SequentialAgent, ParallelAgent) at the root because the flow was fixed. Here, the flow depends on the request, and that decision is the LLM’s to make.
Sub-agents are exposed as AgentTools, not subAgents. This is the important bit (and the one I got wrong on the first pass, more on that below). Wrapping each pipeline as new AgentTool({ agent: ... }) and passing them in tools means the coordinator sees them as function calls. Gemini can then emit parallel tool calls in a single turn, exactly what multi-delegation needs. The subAgents array uses ADK’s transfer_to_agent flow instead, which hands control over one agent at a time.
The instruction is pure routing logic. Compare to earlier posts where the root instruction explained the task itself. Here, the root only explains the cast of sub-agents and when each one fits. The sub-agents carry the task-specific instructions. The HARD RULES block exists because Gemini will happily reply with “I’ll delegate to…” text if you let it. Forbid it explicitly.
How routing actually works
With AgentTool, routing reduces to function calling. Each wrapped sub-agent gets a function declaration with its name and description. The coordinator’s model sees them the same way it sees get_type_effectiveness: a callable with documented arguments. When Gemini decides a request matches, it emits a function call naming the sub-agent. ADK runs the full sub-agent internally (including any sub-sub-agents, loops, parallel fan-outs, whatever), waits for it to finish, and returns its output as the tool result.
The critical bit: modern Gemini models can emit multiple function calls in a single response. Ask for a team and a matchup, and the coordinator returns two tool calls in one turn. ADK executes them, collects both results, and the coordinator synthesises from there.
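Concretely, such a turn comes back as one model response carrying several functionCall parts. This is the shape, not captured output; it's illustrative of Gemini's function-calling response format, and the `request` arg name is an assumption about how AgentTool declares its parameter:

```typescript
// Illustrative: one model turn carrying two function calls. ADK sees
// both parts, runs both sub-agents, and returns both tool results.
const modelTurn = {
  role: "model",
  parts: [
    { functionCall: { name: "team_builder", args: { request: "a ghost team" } } },
    { functionCall: { name: "pokemon_research", args: { request: "Eevee's evolutions" } } },
  ],
};
const calls = modelTurn.parts.map((p) => p.functionCall.name);
```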
Two consequences:
- Description quality drives routing quality. Vague descriptions lead to mis-routing. Write them the way you’d write a tool description: what the agent does, when it fits.
- The coordinator doesn’t see the sub-agent’s internals. It doesn’t know that `team_builder` is a `SequentialAgent` or that `pokemon_research` fans out in parallel. It only knows that if the user wants a team built, one name fits; if they want to learn about a Pokémon, another fits.
How state flows for a multi-delegation request
Take “Build me a dragon team and see how it does against ice”:
- Coordinator emits two tool calls in one turn → `team_builder(...)` and `MatchupAnalyzerAgent(...)`.
- `team_builder` runs sequentially → `PokemonPickerAgent` writes `team_picks`; `TeamAnalyzerAgent` reads it and writes `team_analysis`.
- `MatchupAnalyzerAgent` runs → writes `matchup_result`.
- Coordinator composes → reads whichever of `team_picks`, `team_analysis`, `pokemon_stats`, `evolution_info`, and `matchup_result` exist and synthesises a single response.
The coordinator never touches PokéAPI. It never calls get_type_effectiveness. It does the two things the sub-agents can’t: deciding who runs, and folding the results into one coherent answer.
The routing gotcha I hit
The demo above is the working version. The first two attempts were not.
Attempt 1: the coordinator wouldn’t delegate at all. It produced a chatty reply (“I’ll delegate to the Team Builder specialists and Pokémon Research team…”) and then stopped. No sub-agent ran. The instruction had ended with “Be enthusiastic about Pokémon! Use a fun, knowledgeable tone. Now analyse the user’s request and orchestrate.” That gave the model permission to produce a preamble and treat the preamble as its final answer. Fix: add a HARD RULES block forbidding any text before tool calls. “Your FIRST action must be to call a sub-agent. Do NOT write any text, preamble, or announcement before calling tools.”
Attempt 2: the coordinator delegated, but only to one pipeline. After the HARD RULES fix, the coordinator correctly emitted a tool call, but only ever one per turn. “Build a ghost team AND separately look up Eevee’s evolutions” routed to pokemon_research only. team_builder never fired. The team half of the request got dropped. The reason was an architecture choice, not a prompt choice: I had the coordinator wired with subAgents, not AgentTool.
subAgents uses ADK’s transfer flow. Under the hood, the model emits a transfer_to_agent request, ADK hands control over to that sub-agent, the sub-agent runs, and then the flow returns. It’s a handoff. The transfer flow was designed for agent-to-agent hierarchies where one agent passes the baton to another. It’s single-hop by design: one transfer per turn.
AgentTool uses function calling. The sub-agent is a callable tool. Gemini can emit several tool calls in a single response, ADK executes them (in parallel where possible), and the coordinator keeps control throughout.
Switching the root from subAgents: [...] to tools: [new AgentTool({ agent: ... }), ...] fixed the multi-delegation. Same three pipelines, same instruction, but Gemini could now emit two function calls in one response. The demo above is the run right after that change landed.
For a coordinator that needs to fan out to multiple pipelines for compound requests, AgentTool is the right primitive. subAgents is for hierarchical delegation where a single pivot is the point.
If routing still misses with AgentTool (the model picks one tool when two are clearly warranted), a few levers help:
- Sharpen each sub-agent’s `description`. Routing reads descriptions the way a function signature reads to the model. Overlap between tools makes the model pick the closer one and stop. Tighten the wording so each description covers a distinct slice.
- Explicitly authorise multi-calls in the instruction. A line like “If the request joins two distinct asks with ‘and’ or ‘plus’, call EVERY matching tool in parallel” gives the model permission. Without it, flash-tier models default to picking one.
- Forbid preamble text before tool calls. Without this, Gemini will sometimes reply with “I’ll delegate to…” and never actually emit the tool call. The HARD RULES block in the instruction exists for this reason.
- Upgrade the model. Routing is a tool-selection problem, and bigger models are measurably better at it. `gemini-2.5-flash` is a reasonable default; `gemini-2.5-pro` earns its keep for coordinators with five or more tools.
The deeper point: LLM routing is probabilistic. A coordinator is the right call when fuzzy matching is what you want. When you need every compound request to hit every relevant pipeline deterministically, a switch statement or an intent classifier upstream of the agents is a better fit.
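For contrast, here's what that deterministic alternative looks like: a keyword router upstream of the agents. Every rule is inspectable and every compound request that matches two rules hits both pipelines, but it's exactly as brittle as its regexes. The intents and patterns below are illustrative, not from the post:

```typescript
// A deterministic router over the same three sub-agent names.
// Auditable and exact where the patterns match; phrasings the
// regexes miss fall through entirely.
function route(request: string): string[] {
  const r = request.toLowerCase();
  const hits: string[] = [];
  if (/\b(build|make|create)\b.*\bteam\b/.test(r)) hits.push("team_builder");
  if (/\b(tell me about|stats|evolutions?|look up)\b/.test(r)) hits.push("pokemon_research");
  if (/\b(against|matchup|gym)\b/.test(r)) hits.push("MatchupAnalyzerAgent");
  return hits;
}
```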
Why not just use a bigger prompt?
Fair question. You could give a single agent all four tools and a long prompt explaining every scenario. It would kind of work. Here’s what you’d lose:
- Tool ambiguity comes back. With four tools on one agent, the model sometimes picks the wrong one or skips one it should have called.
- Prompt dilution. The same failure mode from the sequential post: the more scenarios crammed into one prompt, the less reliably the model handles each.
- No pattern composition. A monolithic agent can’t fan out `statsAgent` and `evolutionAgent` in parallel. It can only sequence tool calls. You give up the latency win.
- Debuggability. When the coordinator mis-routes, you see it in the delegation event. With a monolithic agent, the model silently picks the wrong tool and you’re back to guessing.
You trade a little extra orchestration for a system that’s easier to extend and easier to debug.
When to reach for this pattern
- Requests vary in shape. Different user intents map to different pipelines.
- You already have patterns worth composing. A coordinator is the glue that binds sequential, parallel, and loop pipelines into one system without rewriting any of them.
- Routing is fuzzy. “Build me a team and tell me about it” is hard to match with keywords. An LLM handles it in one shot.
Skip it when the workflow is fixed (pick the matching deterministic pattern), when you only have one capability (a single agent is simpler), or when routing decisions need to be auditable and deterministic (a switch statement beats an LLM).
Wrapping up
The coordinator pattern is the one that ties the series together. LlmAgent as the root, sub-agents wrapped in AgentTool and passed as tools, description fields steering the routing, and outputKey plumbing the results back for synthesis. Sub-agents can be leaf agents, sequential pipelines, parallel fan-outs, or loops. The coordinator treats them all the same.
The one detail that bit me on the first pass: prefer AgentTool over subAgents when you want multi-delegation. The transfer flow is single-hop by design. Function calling can fan out.
Between single, sequential, parallel, loop, and coordinator agents, you now have five patterns that compose into almost any agentic system. Next up: putting a critic agent to work on subjective output, where there’s no external source of truth to check against.