
The Importance of Precise Function Declarations for LLMs

The ability for LLMs like Google’s Gemini to interact with external tools and APIs is a powerful capability. Called “function calling” or “tool use,” it transforms a conversational agent into an orchestrator that can execute complex tasks.

But the magic isn’t purely the model’s intelligence. A significant part of its success lands squarely on our shoulders as developers: the precision of our FunctionDeclaration. Just like a compiler needs exact syntax, LLMs need precise semantic and structural information to correctly invoke our functions.

Precision in function declarations isn't just a best practice; it's a fundamental requirement for reliable AI interactions. We'll explore the critical components of a well-defined function declaration and the pitfalls of imprecision, with practical examples.

The Contract: Why Precision Is Non-Negotiable

Think of a FunctionDeclaration as a formal contract between your application’s capabilities and the LLM’s understanding. This contract must be clear, unambiguous, and accurate. If the terms are vague, misleading, or incomplete, the model will struggle to hold up its end.

An LLM doesn’t guess what your function does. It parses your FunctionDeclaration to build an internal representation of the tool’s purpose, the data it expects, and the context in which it should be used. Any ambiguity leads to:

  1. Missed Calls: The model fails to recognise that a user’s intent matches your function, falling back to a generic text response.
  2. Incorrect Arguments: The model calls the function but populates arguments with wrong or badly formatted values.
  3. Suboptimal Performance: The model needs more turns to map complex queries, even if it eventually calls the function.

Precision minimises the “cognitive load” on the model, letting it efficiently translate natural language into executable code.

Anatomy of a Precise Function Declaration

Let’s break down the key elements.

1. name: The Identity Tag

name: 'summonSpell',
  • Precision Principle: the name must be unique and descriptive of the primary action.
  • Why it matters: this is the direct identifier the model uses. Generic names like myFunction or confusing ones like dataProcessorV2 force the model to bridge an unnecessary abstraction gap. Choose verbs that state the function’s purpose clearly (e.g., createOrder, fetchWeatherData, sendEmail).

2. description: The Purpose Statement

description: 'Summon a magical spell by specifying its type and intended effect. Useful in fantasy games or creative writing scenarios.',
  • Precision Principle: clearly and concisely explain what the function does and when it should be used.
  • Why it matters: this is arguably the most critical piece for the model’s initial decision-making. A vague description (“A custom function.”) gives the model nothing to work with. A precise one helps it match queries like “I need to cast a fire spell” to your summonSpell function. Include context, examples, and common use cases where helpful.

3. parameters: The Input Blueprint

This is where it gets concrete. The parameters object defines the arguments your function expects. Its precision directly dictates the model’s ability to extract information from the user’s prompt.

parameters: {
  type: Type.OBJECT, // Always an object for function parameters
  properties: {
    spellType: {
      type: Type.STRING,
      description: 'The type of spell to summon (e.g., "fire", "healing", "teleportation").',
    },
    effect: {
      type: Type.STRING,
      description: 'The desired effect or outcome of the spell (e.g., "burn enemies", "restore health").',
    },
  },
  required: ['spellType', 'effect'],
},

Let’s dissect this:

properties: The Argument Definitions

Each key-value pair within properties defines an expected argument.

  • Precision Principle (for propertyName): use clear, descriptive parameter names (e.g., spellType instead of param1).
  • Why it matters: the model uses these names to understand what role each piece of information plays. spellType immediately signals that “fire” or “healing” belongs here.
  • Precision Principle (for type): specify the exact data type (Type.STRING, Type.NUMBER, Type.BOOLEAN, Type.ARRAY, Type.OBJECT).
  • Why it matters: this enforces data integrity. If you expect a NUMBER but define it as STRING, the model might send “one hundred” instead of 100. If a user says “two thousand” and your parameter is Type.NUMBER, the model is smart enough to convert it. If it’s Type.STRING, it might just pass the words through, breaking your backend.
  • Precision Principle (for description of properties): provide a specific description for each parameter, including examples.
  • Why it matters: these descriptions guide the model on mapping prompt information to arguments. Examples like (e.g., “fire”, “healing”) are incredibly valuable hints.
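To make the type principle concrete, here's a sketch of one additional property (manaCost is a hypothetical parameter, not part of the summonSpell example). Declaring it as Type.NUMBER nudges the model to normalise spelled-out amounts like "fifty" into actual numbers:

```typescript
// Hypothetical numeric parameter — Type.NUMBER tells the model to send 50,
// not the string "fifty".
manaCost: {
  type: Type.NUMBER,
  description: 'Mana required to cast the spell, as a number (e.g., 50).',
},
```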

required: The Non-Negotiables

required: ['spellType', 'effect'],
  • Precision Principle: explicitly list all parameters that must be provided.
  • Why it matters: this tells the model, “Don’t call this function unless you can extract values for all these parameters.” If a required parameter can’t be found, the model will typically ask for clarification rather than making an incomplete call. Omitting a required parameter means the model might call your function without vital information, crashing your application.
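Even with required set, it's worth a defensive re-check on our side before executing the call. A minimal sketch (missingRequired is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: re-validate the model's arguments against the
// declaration's required list before dispatching to the real function.
const REQUIRED = ['spellType', 'effect'] as const;

function missingRequired(args: Record<string, unknown>): string[] {
  // A parameter counts as missing if it's absent or null.
  return REQUIRED.filter((key) => args[key] === undefined || args[key] === null);
}
```

If missingRequired(call.args) comes back non-empty, you can send the model a clarifying follow-up instead of crashing your backend.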

Common Pitfalls of Imprecision (and How to Fix Them)

Let’s see how subtle imprecision can derail function calls, using our summonSpell example. Model selection also plays a significant part in determining whether a function call succeeds.

For the examples going forward we’ll assume the following skeleton code:

import { GoogleGenAI, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const functionDeclaration = {/* ... */};

const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'I need fire to burn goblins.',
  config: {
    tools: [
      {
        functionDeclarations: [functionDeclaration],
      },
    ],
  },
});

if (response.functionCalls && response.functionCalls.length > 0) {
  const call = response.functionCalls[0];
  console.log(`🪄 Function to call: ${call.name}`);
  console.log(`✨ Arguments: ${JSON.stringify(call.args, null, 2)}`);
} else {
  console.log('🧵 No function call — model responded with:');
  console.log(response.text);
}
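The skeleton above only logs the call; in a real application you'd route it to an implementation. A minimal dispatcher sketch (the summonSpell implementation and dispatch helper are hypothetical, for illustration):

```typescript
// Hypothetical local implementation of the declared function.
type SpellArgs = { spellType: string; effect: string };

function summonSpell({ spellType, effect }: SpellArgs): string {
  return `Summoned a ${spellType} spell to ${effect}.`;
}

// Map declared function names to their implementations.
const handlers: Record<string, (args: any) => string> = {
  summonSpell: (args) => summonSpell(args as SpellArgs),
};

// Route the model's function call to the matching handler.
function dispatch(call: { name: string; args: unknown }): string {
  const handler = handlers[call.name];
  if (!handler) throw new Error(`Unknown function: ${call.name}`);
  return handler(call.args);
}
```

In a full loop you'd then feed dispatch's result back to the model as a function response so it can compose the final answer.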

Example 1: too vague

const functionDeclaration = {
  name: 'myFunction',
  parameters: {
    type: Type.OBJECT,
    properties: {
      input1: {
        type: Type.STRING,
        description: 'First input.',
      },
      input2: {
        type: Type.STRING,
        description: 'Second input.',
      },
    },
    required: ['input1', 'input2'],
  },
};

This declaration is far too vague. No description for the function’s purpose. Inputs named generically. The model has nothing to work with.

As expected, the model doesn’t make any function calls.

Example 2: slightly less vague

const functionDeclaration = {
  name: 'summonSpell',
  parameters: {
    type: Type.OBJECT,
    properties: {
      input1: {
        type: Type.STRING,
        description: 'First input.',
      },
      input2: {
        type: Type.STRING,
        description: 'Second input.',
      },
    },
    required: ['input1', 'input2'],
  },
};

The model now recognises we're asking about summoning a spell (thanks to the function name), but it still can't map the generically named parameters to anything in the prompt, so no function call is made.

Example 3: short explanations

const functionDeclaration = {
  name: 'summonSpell',
  description: 'Magic function.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      spellType: {
        type: Type.STRING,
        description: 'type',
      },
      effect: {
        type: Type.STRING,
        description: 'effect',
      },
    },
    required: ['spellType', 'effect'],
  },
};

// note we also update the prompt for this one
contents: 'Cast a teleportation spell to travel to the mountains.',

This is where things get interesting. Using gemini-2.5-flash, we get the expected function call even with these minimal descriptions. Switch to gemini-2.0-flash, however, and the call no longer comes through.

More advanced models are better at understanding vague declarations. But precise descriptions are still crucial, especially for older models. Don’t rely on the model to fill in the gaps.

Final Example: how it should be

const functionDeclaration = {
  name: 'summonSpell',
  description:
    'Summon a magical spell by specifying its type and intended effect. Useful in fantasy games or creative writing scenarios.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      spellType: {
        type: Type.STRING,
        description:
      'The type of spell to summon, for example "fire" or "teleportation", but other spell types could be used too.',
      },
      effect: {
        type: Type.STRING,
        description:
      'The desired effect or outcome of the spell, for example "burn enemies" or "restore health", but other effects can be used as well.',
      },
    },
    required: ['spellType', 'effect'],
  },
};

All bases covered. Descriptions are precise, they include examples, and they provide solid guidance for the LLM. I tested this against gemini-2.5-flash, gemini-2.0-flash, and even gemini-1.5-flash; the function call executed correctly every time.

For the ultimate test, let's change the prompt to 'Cast a shadow curse that drains the will of its target.' (deliberately avoiding any of the example types and effects from the descriptions).

All three models now call summonSpell. The only hiccup: gemini-2.0-flash shortened spellType to just 'shadow'. Updating the description to mention that multi-word spell types are valid fixed that. But your mileage may vary.

A Sidenote

Using toolConfig: { functionCallingConfig: { mode: 'any' } } forces the model to always attempt a function call rather than generating a conversational response. It’s one of three modes in the Gemini API’s functionCallingConfig:

  • auto (default): the model decides whether to generate text or call a function, based on context. Most flexible for general use.
  • any: the model must call one of your declared functions. Useful when a function call is absolutely essential (form filling, strict command interpretation). But if the prompt doesn’t align with any declared tool, the model will make a best guess, potentially producing nonsensical results.
  • none: function calling is completely disabled, even if tools are declared. Handy for temporarily switching off tool use without removing definitions.

The any mode prioritises action over dialogue. It’s a sharp tool for high-confidence interaction patterns, but requires careful consideration of potential misinterpretations.
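Applied to the earlier skeleton, forcing 'any' mode is just an extra toolConfig entry in the request config. A sketch (allowedFunctionNames is optional and narrows which declared functions the model may choose):

```typescript
const response = await ai.models.generateContent({
  model: 'gemini-2.5-flash',
  contents: 'I need fire to burn goblins.',
  config: {
    tools: [{ functionDeclarations: [functionDeclaration] }],
    toolConfig: {
      functionCallingConfig: {
        mode: 'any', // always attempt a function call
        allowedFunctionNames: ['summonSpell'], // optional restriction
      },
    },
  },
});
```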

Conclusion

The elegance of LLM function calling lies in bridging human language and programmatic execution. But that bridge is only as strong as the blueprints we provide. By crafting precise function declarations, with clear names, descriptive purposes, accurately typed parameters, and clearly marked requirements, we empower the model to consistently and reliably act as the intelligent interface to our application’s capabilities.