The Loop Agent Pattern
When one pass is not enough, let agents iterate. A playlist curator built with Google ADK that generates songs, verifies them against MusicBrainz, and refines until every track checks out.
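The generate → verify → refine loop can be sketched framework-free. This is not the article's ADK code: `generateTracks` and `verifyTrack` are hypothetical stand-ins for the Gemini generator and the MusicBrainz lookup, and the candidate pool is invented for illustration.

```typescript
type Track = { title: string; artist: string };

// Stand-in generator: proposes candidates, avoiding previously rejected tracks.
// In the real pipeline this would be a Gemini call.
function generateTracks(rejected: Track[]): Track[] {
  const pool: Track[] = [
    { title: "So What", artist: "Miles Davis" },
    { title: "Fake Song", artist: "Nobody" },
    { title: "Blue in Green", artist: "Miles Davis" },
  ];
  return pool.filter(t => !rejected.some(r => r.title === t.title));
}

// Stand-in verifier: the real pipeline would query the MusicBrainz API here.
function verifyTrack(t: Track): boolean {
  return t.artist !== "Nobody";
}

// The loop: refine until every track checks out, or give up after maxIterations.
function curatePlaylist(maxIterations = 5): Track[] {
  const rejected: Track[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const candidates = generateTracks(rejected);
    const bad = candidates.filter(t => !verifyTrack(t));
    if (bad.length === 0) return candidates; // every track verified
    rejected.push(...bad); // feed failures back into the next pass
  }
  throw new Error("could not verify playlist within iteration budget");
}
```

The iteration cap matters: without it, a verifier that keeps rejecting would loop forever.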
Chaining multiple specialised agents into a pipeline where each one builds on the last. Illustrated with a TypeScript CLI that fetches a quote, researches its author, and writes an inspiration card.
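The sequential shape reduces to each stage awaiting the previous one. A minimal sketch, not the article's code: `fetchQuote`, `researchAuthor`, and `writeCard` are hypothetical stand-ins for the three agents, with hardcoded data in place of real API and model calls.

```typescript
type Quote = { text: string; author: string };

// Stage 1: would call a quotes API; hardcoded here for illustration.
async function fetchQuote(): Promise<Quote> {
  return { text: "Stay hungry, stay foolish.", author: "Stewart Brand" };
}

// Stage 2: would be an LLM research agent; canned output here.
async function researchAuthor(q: Quote): Promise<string> {
  return `${q.author} edited the Whole Earth Catalog.`;
}

// Stage 3: composes the inspiration card from both upstream results.
async function writeCard(q: Quote, bio: string): Promise<string> {
  return `"${q.text}" — ${q.author}\n${bio}`;
}

// The pipeline is strictly ordered: each agent depends on the last one's output.
async function runPipeline(): Promise<string> {
  const quote = await fetchQuote();
  const bio = await researchAuthor(quote);
  return writeCard(quote, bio);
}
```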
When agents do not depend on each other, run them at the same time. Illustrated with a TypeScript translation pipeline built with Google ADK that translates a phrase into three languages simultaneously, then aggregates the results with Gemini.
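The fan-out/fan-in shape maps naturally onto `Promise.all`. A sketch under assumptions: `translate` stands in for a per-language translation agent (a canned lookup here, a Gemini call in the article), and the final join stands in for the aggregation step.

```typescript
// Stand-in for a per-language translation agent.
async function translate(phrase: string, lang: string): Promise<string> {
  const canned: Record<string, string> = { es: "hola", fr: "bonjour", de: "hallo" };
  return `${lang}: ${canned[lang] ?? phrase}`;
}

async function translateAll(phrase: string): Promise<string> {
  const langs = ["es", "fr", "de"];
  // All three translations start concurrently; none waits on another.
  const results = await Promise.all(langs.map(l => translate(phrase, l)));
  // Fan-in: aggregate the independent results (the article hands this to Gemini).
  return results.join("\n");
}
```

The key property is that total latency is the slowest single translation, not the sum of all three.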
A look at the simplest agentic AI pattern: one model, one tool, zero orchestration. Illustrated with a tiny TypeScript agent built with the Google Agent Development Kit (ADK) that uses Gemini and Google Maps to review any place by name.
What if you could take a dusty old black-and-white photograph and watch it come to life? In this post, I walk through a Node.js pipeline that colorises historic photos with Gemini and then animates them into video using Veo 3.1.
Have you ever wondered what your favourite landmark looked like a hundred years ago? In this post, I walk you through a Node.js application that generates historically accurate photographs of any real-world location at any point in time, and even checks its own work for anachronisms.
Before you reach for a more powerful embedding model or a larger context window, look at what you're actually feeding into a RAG pipeline. Sometimes the highest-leverage improvement isn't a better model; it's a better split.
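To make "a better split" concrete: chunking on paragraph boundaries instead of a fixed character count keeps each chunk semantically coherent. An illustrative sketch only; the size cap and separator are assumptions, not values from the article.

```typescript
// Packs whole paragraphs into chunks up to maxChars, never cutting mid-paragraph.
// A paragraph longer than maxChars still becomes its own (oversized) chunk.
function splitByParagraph(text: string, maxChars = 500): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const para of text.split(/\n{2,}/)) {
    if (current && current.length + para.length + 2 > maxChars) {
      chunks.push(current);      // current chunk is full; start a new one
      current = para;
    } else {
      current = current ? current + "\n\n" + para : para;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Compared with a naive `slice(0, 500)` split, no retrieved chunk starts or ends mid-sentence, which tends to matter more than embedding quality for retrieval relevance.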
Instead of stuffing documents into prompts, give your AI agent a filesystem and let it retrieve its own context. Here's how, using a murder mystery detective as the demo.
This article concludes the series by showing how a deliberately designed CLI becomes a powerful interaction layer, giving users precise control over an AI system's conversational context, short-term memory, and long-term semantic knowledge.
This piece introduces background processors as autonomous AI agents that summarise conversations and extract critical facts to continuously enrich Long-Term Semantic Memory. By running asynchronously and optimising token usage, these processors enable a self-improving, increasingly personalised AI system that learns from every interaction.