The Importance of Precise Function Declarations for LLMs

In the ever-evolving landscape of AI-powered applications, the ability for Large Language Models (LLMs) like Google's Gemini to interact with external tools and APIs is a game-changer. This capability, often referred to as "function calling" or "tool use," transforms a conversational agent into a powerful orchestrator, capable of executing complex tasks.

However, the magic of function calling isn't driven purely by the model's intelligence. A significant part of its success rests squarely on our shoulders as developers: the precision of our FunctionDeclaration. Just as a compiler needs exact syntax, LLMs need precise semantic and structural information to confidently and correctly invoke our functions.

This article discusses why precision in function declarations is not just a best practice, but a fundamental requirement for robust and reliable AI interactions. We'll explore the critical components of a well-defined function and illustrate the pitfalls of imprecision with practical examples.

The Contract: Why Precision is Non-Negotiable

Think of a FunctionDeclaration as a formal contract between your application's capabilities and the LLM's understanding. This contract must be clear, unambiguous, and accurate. If the terms of the contract are vague, misleading, or incomplete, the model - despite its advanced reasoning - will struggle to uphold its end of the bargain.

An LLM doesn't guess what your function does. It parses your FunctionDeclaration to build an internal representation of the tool's purpose, the data it expects, and the context in which it should be used. Any ambiguity in this definition can lead to:

  1. Missed Calls: The model fails to recognise that a user's intent aligns with your function, resulting in a generic text response.
  2. Incorrect Arguments: The model calls the function but populates its arguments with incorrect or ill-formatted values.
  3. Suboptimal Performance: Even if it eventually calls the function, the model might require more turns or struggle to map complex queries.

Ultimately, precision minimises the "cognitive load" on the model, allowing it to efficiently and accurately translate natural language into executable code.

Anatomy of a Precise Function Declaration

Let's break down the key elements of a FunctionDeclaration and emphasise what makes them precise.

1. name: The Identity Tag

name: 'summonSpell',
  • Precision Principle: The name must be unique and descriptive of the primary action the function performs.
  • Why it Matters: This is the direct identifier the model uses. If the name is generic (e.g., myFunction) or confusing (e.g., dataProcessorV2), it adds an unnecessary layer of abstraction for the model to bridge. Choose verbs that clearly state the function's purpose (e.g., createOrder, fetchWeatherData, sendEmail).

2. description: The Purpose Statement

description: 'Summon a magical spell by specifying its type and intended effect. Useful in fantasy games or creative writing scenarios.',
  • Precision Principle: The description should clearly and concisely explain what the function does and when it should be used. Think of it as the elevator pitch for your function.
  • Why it Matters: This is arguably the most critical piece of information for the model's initial decision-making. A vague description (e.g., "A custom function.") gives the model no real hint about its utility. A precise description helps the model match user queries like "I need to cast a fire spell" to your summonSpell function. Include context, examples, and common use cases if helpful.

3. parameters: The Input Blueprint

This is where the rubber meets the road. The parameters object defines the arguments your function expects. Its precision directly dictates the model's ability to extract necessary information from the user's prompt.

parameters: {
  type: Type.OBJECT, // Always an object for function parameters
  properties: {
    spellType: {
      type: Type.STRING,
      description: 'The type of spell to summon (e.g., "fire", "healing", "teleportation").',
    },
    effect: {
      type: Type.STRING,
      description: 'The desired effect or outcome of the spell (e.g., "burn enemies", "restore health").',
    },
  },
  required: ['spellType', 'effect'],
},

Let's dissect this:

properties: The Argument Definitions

Each key-value pair within properties defines an expected argument for your function.

  • Precision Principle (for propertyName): Use clear, descriptive parameter names (e.g., spellType instead of param1).
  • Why it Matters: The model uses these names to understand the role of each piece of information it extracts from the prompt. spellType immediately tells it that "fire" or "healing" belongs here.
  • Precision Principle (for type): Specify the exact data type (Type.STRING, Type.NUMBER, Type.BOOLEAN, Type.ARRAY, Type.OBJECT).
  • Why it Matters: This enforces data integrity and helps the model parse correctly. If you expect a NUMBER but define it as STRING, the model might try to send "one hundred" instead of 100. This is where common pitfalls lie. If a user says "two thousand" and your parameter is Type.NUMBER, the model is smart enough to convert it. If it's Type.STRING, it might just send "two thousand" which could break your backend.
  • Precision Principle (for description of properties): Provide a specific description for each parameter, including examples where appropriate.
  • Why it Matters: These descriptions guide the model on what specific pieces of information from the prompt map to which argument. The examples like (e.g., "fire", "healing") are incredibly valuable hints for the model.
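The STRING-versus-NUMBER pitfall above is worth making concrete. Below is a sketch of a declaration with a correctly typed numeric parameter. The function name castRepeatedly is hypothetical, invented for illustration, and the plain strings 'OBJECT', 'STRING' and 'NUMBER' are, to the best of my knowledge, the values the Type enum from @google/genai resolves to - in real code, prefer the enum.

```javascript
// Sketch: a declaration with a correctly typed numeric parameter.
// 'castRepeatedly' is a hypothetical function for illustration only.
const castRepeatedlyDeclaration = {
  name: 'castRepeatedly',
  description: 'Cast a named spell a given number of times in a row.',
  parameters: {
    type: 'OBJECT', // equivalent to Type.OBJECT from '@google/genai'
    properties: {
      spellName: {
        type: 'STRING',
        description: 'The name of the spell to cast (e.g., "fireball").',
      },
      repetitions: {
        // Declared as NUMBER so "cast it two thousand times" arrives as 2000,
        // not as the string "two thousand".
        type: 'NUMBER',
        description: 'How many times to cast the spell (e.g., 3).',
      },
    },
    required: ['spellName', 'repetitions'],
  },
};

console.log(castRepeatedlyDeclaration.parameters.properties.repetitions.type); // → NUMBER
```

With this typing in place, the model is nudged to normalise spelled-out quantities into actual numbers before handing them to your backend.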

required: The Non-Negotiables

required: ['spellType', 'effect'],
  • Precision Principle: Explicitly list all parameters that must be provided for the function to execute successfully.
  • Why it Matters: This tells the model, "Don't even think about calling this function unless you can extract values for all these parameters from the user's input." If a required parameter can't be found in the prompt, the model will typically ask for clarification or provide a textual response rather than making an incomplete function call. Omitting a required parameter here means the model might call your function without vital information, leading to errors in your application.
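The flip side of required is also worth showing: parameters left out of the list are treated as optional, and your backend must cope when they are absent. Below is a sketch of the summonSpell declaration extended with a hypothetical optional duration parameter (again using the plain string values that the Type enum stands for).

```javascript
// Sketch: summonSpell with a hypothetical optional 'duration' parameter.
// Because 'duration' is absent from 'required', the model may call the
// function without it, so our backend must supply a sensible default.
const summonSpellDeclaration = {
  name: 'summonSpell',
  description: 'Summon a magical spell by specifying its type and intended effect.',
  parameters: {
    type: 'OBJECT', // equivalent to Type.OBJECT from '@google/genai'
    properties: {
      spellType: { type: 'STRING', description: 'The type of spell (e.g., "fire").' },
      effect: { type: 'STRING', description: 'The desired effect (e.g., "burn enemies").' },
      duration: {
        type: 'NUMBER',
        description: 'Optional: how long the spell should last, in seconds (e.g., 30).',
      },
    },
    required: ['spellType', 'effect'], // 'duration' is deliberately omitted
  },
};

console.log(summonSpellDeclaration.parameters.required.includes('duration')); // → false
```

Marking a parameter optional is itself a precision decision: it tells the model "call me even if the user never mentioned this".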

Common Pitfalls of Imprecision (and How to Fix Them)

Let's illustrate how even subtle imprecision can derail function calls, using our summonSpell example. What's interesting is that the model selection also plays an important part in determining the success of the function call.

For the examples going forward we'll assume the following skeleton code:

import { GoogleGenAI, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const functionDeclaration = {/* ... */};

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'I need fire to burn goblins.',
  config: {
    tools: [
      {
        functionDeclarations: [functionDeclaration],
      },
    ],
  },
});

if (response.functionCalls && response.functionCalls.length > 0) {
  const call = response.functionCalls[0];
  console.log(`🪄 Function to call: ${call.name}`);
  console.log(`✨ Arguments: ${JSON.stringify(call.args, null, 2)}`);
} else {
  console.log('🧵 No function call — model responded with:');
  console.log(response.text);
}
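For completeness: the skeleton stops at logging the call, but in a real application you would execute your own implementation and send its result back to the model in a follow-up generateContent call. Here is a minimal sketch of packaging that result, assuming the Gemini API's functionResponse part shape ({ functionResponse: { name, response } }); the call object and spell result below are made up for illustration.

```javascript
// Sketch: packaging a function's result so it can be sent back to the model.
// Assumption: the follow-up turn carries a part shaped like
// { functionResponse: { name, response } }, as described in the Gemini API docs.
function buildFunctionResponsePart(call, result) {
  return {
    functionResponse: {
      name: call.name,  // must echo the function name the model called
      response: result, // a JSON-serialisable object with your function's output
    },
  };
}

// Pretend the model asked for summonSpell and our backend has run it:
const call = { name: 'summonSpell', args: { spellType: 'fire', effect: 'burn goblins' } };
const part = buildFunctionResponsePart(call, { status: 'success', spellId: 42 });

console.log(part.functionResponse.name); // → summonSpell
```

This part would typically be appended to contents, after the model's functionCall turn, in a second ai.models.generateContent request so the model can turn the raw result into a natural-language answer.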

Example 1 - too vague

const functionDeclaration = {
  name: 'myFunction',
  parameters: {
    type: Type.OBJECT,
    properties: {
      input1: {
        type: Type.STRING,
        description: 'First input.',
      },
      input2: {
        type: Type.STRING,
        description: 'Second input.',
      },
    },
    required: ['input1', 'input2'],
  },
};

This functionDeclaration is far too vague: the function name is generic, the inputs are named meaninglessly, and the top-level description property that should explain the tool's overall purpose is missing entirely.

Let's run our sample project and see the result we get:

🧵 No function call — model responded with:
I cannot fulfill that request.

As expected, the model does not make any function calls.

Example 2 - slightly less vague

const functionDeclaration = {
  name: 'summonSpell',
  parameters: {
    type: Type.OBJECT,
    properties: {
      input1: {
        type: Type.STRING,
        description: 'First input.',
      },
      input2: {
        type: Type.STRING,
        description: 'Second input.',
      },
    },
    required: ['input1', 'input2'],
  },
};

When running the code with this functionDeclaration we now get a slightly different response - the model recognises that we are asking about summoning a spell, but it doesn't understand the parameters, even though we have provided them.

🧵 No function call — model responded with:
I can summon spells. What kind of spell would you like to cast to create fire? I need to know the specific inputs for the spell.

Example 3 - short explanations

const functionDeclaration = {
  name: 'summonSpell',
  description: 'Magic function.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      spellType: {
        type: Type.STRING,
        description: 'type',
      },
      effect: {
        type: Type.STRING,
        description: 'effect',
      },
    },
    required: ['spellType', 'effect'],
  },
};

// note we also update the prompt for this one
contents: 'Cast a teleportation spell to travel to the mountains.',

This is where things get interesting. Using gemini-2.5-flash we get the expected output, even though we are still not doing a great job of describing what this function does or what its parameters mean.

🪄 Function to call: summonSpell
✨ Arguments: {
  "spellType": "teleportation",
  "effect": "travel to the mountains"
}

But notice what happens when we change to gemini-2.0-flash:

🧵 No function call — model responded with:
I can only cast spells with a specified effect and type. What effect and type of spell would you like to cast for teleportation?

It's clear that newer, more capable models are better at understanding and executing complex tasks, including function calling. However, even the most advanced models may still struggle with certain tasks or edge cases, so clear and precise function declarations remain crucial for optimal performance. And it goes without saying that such descriptions are indispensable for older models.

Final Example - how it should be

const functionDeclaration = {
  name: 'summonSpell',
  description:
    'Summon a magical spell by specifying its type and intended effect. Useful in fantasy games or creative writing scenarios.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      spellType: {
        type: Type.STRING,
        description:
          'The type of spell to summon, for example "fire" or "teleportation", but other spell types could be used too.',
      },
      effect: {
        type: Type.STRING,
        description:
          'The desired effect or outcome of the spell, for example "burn enemies" or "restore health", but other effects can be used as well.',
      },
    },
    required: ['spellType', 'effect'],
  },
};

With the above example we have all bases covered. The descriptions are precise, they give examples, and they provide great overall guidance for the LLM. I have tested this against gemini-2.5-flash, gemini-2.0-flash and even gemini-1.5-flash, and in all cases the function call executed correctly.

And for the ultimate test, let's change the prompt again to the following: Cast a shadow curse that drains the will of its target. Notice that I am deliberately not using any of the example types and effects that were specified in the descriptions.

// gemini-2.5-flash
🪄 Function to call: summonSpell
✨ Arguments: {
  "effect": "drain will",
  "spellType": "shadow curse"
}

// gemini-2.0-flash
🪄 Function to call: summonSpell
✨ Arguments: {
  "effect": "drain will",
  "spellType": "shadow"
}

// gemini-1.5-flash
🪄 Function to call: summonSpell
✨ Arguments: {
  "effect": "drain the will of its target",
  "spellType": "shadow curse"
}

As you can see, all models now understood to call the summonSpell tool - the only issue is that gemini-2.0-flash decided to assign spellType the value shadow. I updated the description for spellType to 'The type of spell to summon, for example "fire" or "teleportation", but other spell types could be used too, including multi-worded ones.', and that seemed to do the job. But please remember, your mileage may vary!

A Sidenote

Using toolConfig: { functionCallingConfig: { mode: 'any' } } provides a powerful, albeit assertive, way to control the LLM's behaviour. It's one of three key modes within the Gemini API's functionCallingConfig, dictating how the model utilises your declared tools. When set to 'any', you are explicitly instructing the model to forgo generating a conversational response and, instead, to always attempt to call one of the functions you have declared in your tool configuration. This stands in contrast to the default 'auto' mode, where the model intelligently decides whether to generate a natural language response or suggest a function call based on the prompt and context, offering the most flexibility for general use. Conversely, the 'none' mode explicitly prohibits the model from making any function calls whatsoever, effectively treating the request as if no tool declarations were present - useful for temporarily disabling tool use without removing your definitions.

The 'any' mode, by forcing a function prediction and guaranteeing schema adherence, can be incredibly useful for scenarios where a function call is absolutely essential for the user's workflow, such as form filling or strict command interpretation. However, it's crucial to employ this mode judiciously. As the model is compelled to pick a function, if the user's prompt doesn't clearly align with any of your defined tools, Gemini might make a "best guess" or even an arbitrary function call, potentially leading to unexpected or nonsensical results. It effectively prioritises action over dialogue, making it a sharp tool for specific, high-confidence interaction patterns, but one that requires careful consideration of potential misinterpretations.
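To make the sidenote concrete, here is a sketch of what that configuration object might look like, following the shape described above. The allowedFunctionNames field is, to the best of my knowledge, an optional companion to mode 'any' that narrows which of your declared functions the model may pick; treat its inclusion here as an assumption to verify against the current API reference.

```javascript
// Sketch: forcing a function call with mode 'any'.
const toolConfig = {
  functionCallingConfig: {
    mode: 'any', // always attempt a function call, never a plain text reply
    // Optional (assumed field): restrict which declared functions qualify.
    allowedFunctionNames: ['summonSpell'],
  },
};

// This object would sit alongside your tools in the request config, e.g.:
// config: { tools: [{ functionDeclarations: [functionDeclaration] }], toolConfig }

console.log(toolConfig.functionCallingConfig.mode); // → any
```

Swapping 'any' for 'auto' or 'none' switches between the three behaviours described above without touching your function declarations themselves.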

Conclusion

The elegance of LLM's function calling lies in its ability to bridge the gap between human language and programmatic execution. However, this bridge is only as strong as the blueprints we provide. By meticulously crafting precise function declarations - with clear names, descriptive purposes, accurately typed parameters, and clearly marked requirements - we empower the model to consistently and reliably act as the intelligent interface to our application's capabilities.