Blog

Creating an MCP Client: Connecting LLMs to the Real World

In this article, we walk through the process of building a Model Context Protocol (MCP) client. Learn how to connect to servers, discover tools, and invoke them from your own app or LLM integration.

MCP Servers - The Bridge Between LLMs and Real-World Tools

Model Context Protocol (MCP) servers expose tools, resources, and prompts to LLMs in a unified, structured way. This post explores how they work, how to build one, and why they are a critical part of the future AI stack.

Consuming Streamed LLM Responses on the Frontend: A Deep Dive into SSE and Fetch

Learn how to build a responsive, real-time user experience by consuming streamed Large Language Model responses on your frontend. This article provides a comprehensive guide to using both Server-Sent Events (SSE) and the Fetch API with Readable Streams, complete with code examples and a detailed comparison.

Filling in the Blanks: Teaching AI to Inpaint

A hands-on guide for exploring how to train a simple AI model using TensorFlow.js to inpaint missing parts of images - without needing large datasets or prior machine learning experience.

Agentic AI: Multi-Agent Systems and Task Handoff

The final article in the Agentic AI series explores multi-agent systems: how specialised agents collaborate through structured handoffs to complete complex user goals.

Understanding the Orchestrator-Worker Pattern

The orchestrator-worker pattern brings scalable structure to agentic AI workflows by cleanly separating high-level planning from specialised task execution. Through a practical trip planning example, this article demonstrates how LLMs can dynamically coordinate expert agents, grounded in schema-driven logic and real-world data.

Building with Reflection: A Practical Agentic AI Workflow

This article explores how to implement a reflection loop, an agentic AI pattern in which a model generates, critiques, and iteratively improves its output, using image captioning as a practical example.

Parallelisation as an Agentic Workflow

Unlock faster, more diverse reasoning by running multiple LLM prompts in parallel and aggregating their responses into a single, cohesive output.

How Transformers and LLMs Actually Work - A Developer's Guide with Code

A hands-on walkthrough for web developers to demystify large language models by actually building a mini Transformer from scratch.

Routing: Building Step-by-Step AI Reasoning

Explore how to intelligently route AI queries using schema-guided function calling and contextual categorisation.
