Prompt Chaining: Building Step-by-Step AI Reasoning


Introduction

Getting accurate, coherent output from an LLM often requires guiding it through structured reasoning. Prompt chaining is one of the most effective techniques for this: breaking complex tasks into manageable steps, each handled by the AI in sequence.

What is Prompt Chaining?

Prompt chaining decomposes a complex task into a series of interconnected prompts. Each prompt tackles a specific subtask, and the output from one feeds directly into the next. Of all the agentic AI workflows (which we’ll explore later), this one is the simplest.

Why Use Prompt Chaining?

  • Enhanced Accuracy: focusing on one subtask at a time produces more precise outputs.
  • Improved Interpretability: each step’s output can be examined, making the AI’s reasoning transparent.
  • Modularity: subtasks can be modified or replaced independently, giving you flexibility in workflow design.

Implementing Prompt Chaining

Building a chain involves four stages; we’ll then walk through an example implementation using Node.js and Gemini.

  • Task Decomposition: identify the main task and divide it into smaller, logical subtasks.
  • Prompt Design: craft prompts tailored to each subtask, ensuring clarity and specificity.
  • Sequential Execution: run each prompt in order, feeding the output of one as input to the next.
  • Validation (optional): at each stage, assess the output for correctness before proceeding.
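The steps above can be sketched as a generic loop. This is a minimal sketch, not part of the example that follows: runChain and the Generate type are hypothetical names, and the LLM call is injected as a callback so the chain logic itself needs no API key.

```typescript
// Generic prompt chain: each stage builds a prompt from the previous output,
// and the model's reply becomes the input to the next stage.
type Generate = (prompt: string) => Promise<string>;

async function runChain(
  input: string,
  stages: Array<(prev: string) => string>, // one prompt builder per subtask
  generate: Generate                       // any LLM call, e.g. Gemini
): Promise<string> {
  let current = input;
  for (const buildPrompt of stages) {
    current = await generate(buildPrompt(current)); // sequential execution
  }
  return current;
}
```

With a real generate function wrapping your model call, the summarise-then-translate example below amounts to runChain(pressRelease, [summarisePrompt, translatePrompt], generate).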

Practical Example: Generating a social media post based on a press release

Create a new project and install dependencies:

npm init -y && npm pkg set type="module"
npm i @google/genai cheerio

You’ll need access to a Large Language Model. I’m using Google’s Gemini here. Make sure you have an API key and some credit for token usage.

Create a .env file and add your API key there.

Add the following start script to the scripts section of package.json (the --experimental-strip-types flag requires Node.js 22.6 or later):

"start": "node --env-file=.env --experimental-strip-types --watch --no-warnings=ExperimentalWarning app.ts"

Then create app.ts.

First, we’ll add a function to extract information from the press release. This is the sample press release we’ll use.

import * as cheerio from 'cheerio';
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

async function fetchAndExtractTextFromUrl(url: string): Promise<string> {
  const res = await fetch(url);
  const html = await res.text();
  const $ = cheerio.load(html);

  const title = $('h1.entry-title').text().trim();
  const subtitle = $('h2.subtitle, .entry-subtitle').first().text().trim();
  const body = $('section.entry-content[itemprop="articleBody"]').text().trim();

  return `${title}\n\n${subtitle}\n\n${body}`
    .replace(/\s+/g, ' ')
    .slice(0, 4000);
}
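One detail worth noting: the final replace collapses all whitespace, including the \n\n separators between title, subtitle, and body, into single spaces. Factoring the tail of the function into a pure helper (normaliseForPrompt is a hypothetical name, not part of the example) makes this easy to see and test without a network call:

```typescript
// The normalisation tail of fetchAndExtractTextFromUrl as a pure function.
// The \s+ collapse folds the "\n\n" separators into single spaces, keeping
// the prompt compact at the cost of the title/subtitle/body structure.
function normaliseForPrompt(title: string, subtitle: string, body: string): string {
  return `${title}\n\n${subtitle}\n\n${body}`
    .replace(/\s+/g, ' ')
    .slice(0, 4000); // cap the amount of text fed to the model
}
```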

Now the main run function:

async function run() {
  if (!process.env.GEMINI_API_KEY) {
    console.error(
      'GEMINI_API_KEY environment variable not set. Please set it before running the script.'
    );
    return;
  }

  const url = '...';

  const originalText = await fetchAndExtractTextFromUrl(url);

  const prompt1 = `Summarise the following press release in two sentences: ${originalText}`;
  try {
    console.log('Sending prompt 1 (Summarise):', prompt1, '\n\n');
    const result1 = await ai.models.generateContent({
      model: 'gemini-2.0-flash',
      contents: prompt1,
    });

    const summary = result1.text;
    console.log('Summary:', summary, '\n\n');

    const prompt2 = `Translate the following summary into Spanish, only return the translation, no other text: ${summary}`;

    console.log('Sending prompt 2 (Translate):', prompt2, '\n\n');
    const result2 = await ai.models.generateContent({
      model: 'gemini-2.0-flash',
      contents: prompt2,
    });
    const translation = result2.text;
    console.log('Translation:', translation);
  } catch (error) {
    console.error('An error occurred:', error);
  }
}

run();

We ask the LLM to do the first task, then feed that result into the second prompt. That’s prompt chaining in action.

Running npm start prints both prompts, followed by the English summary and its Spanish translation.

You could extend the chain further by bolting on another step that asks the LLM to generate 2-3 relevant hashtags.
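That extra step follows the same pattern as the first two: a prompt builder that takes the previous stage's output. Here buildHashtagPrompt is a hypothetical helper, and the wording of the prompt is an assumption, not part of the original example.

```typescript
// A third chain step: turn the previous stage's output (the translation)
// into the next prompt.
function buildHashtagPrompt(post: string): string {
  return `Generate 2-3 relevant hashtags for the following social media post, only return the hashtags, no other text: ${post}`;
}

// Usage inside run(), after the translation step:
//   const result3 = await ai.models.generateContent({
//     model: 'gemini-2.0-flash',
//     contents: buildHashtagPrompt(translation),
//   });
//   console.log('Hashtags:', result3.text);
```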

Challenges and Considerations

  • Error Propagation: mistakes in early prompts cascade through subsequent outputs. Build in validation steps to catch them.
  • Complexity Management: as chains grow longer, maintaining clarity gets harder. Document each step thoroughly.
  • Performance: long chains increase processing time. Keep prompts tight and focused.
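For the error-propagation point above, a validation gate can sit between stages: check each intermediate output before feeding it onward, and fail fast instead of letting a bad summary poison the translation step. This is a minimal sketch; the specific checks are illustrative assumptions, not part of the original example.

```typescript
// Gate between chain stages: throws rather than passing suspect output along.
function validateStageOutput(output: string | undefined, stage: string): string {
  if (!output || output.trim().length === 0) {
    throw new Error(`Empty output from stage "${stage}"`);
  }
  // Crude refusal check, illustrative only.
  if (/^(i cannot|i'm sorry|as an ai)/i.test(output.trim())) {
    throw new Error(`Refusal detected in stage "${stage}"`);
  }
  return output;
}
```

In the example above, you would wrap each result: const summary = validateStageOutput(result1.text, 'summarise');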

Conclusion

Prompt chaining is a foundational technique for building agentic AI workflows. By guiding models through structured, step-by-step processes, you get outputs that are more accurate, more interpretable, and easier to adapt.