Creating an MCP Client: Connecting LLMs to the Real World

Large language models (LLMs) are impressive on their own, but they become far more useful when they can interact with external tools, APIs, and resources. That's exactly what the Model Context Protocol (MCP) enables: while the server side is responsible for exposing tools, the client is what makes them usable to a model in the first place.

In this article, we'll walk through how to create an MCP client. You'll learn how to connect to an MCP server, discover available tools, and invoke them with structured input - either as part of an AI agent workflow or as a standalone application.

What is an MCP Client?

An MCP client is a piece of code that acts on behalf of the LLM or application. It:

  • Connects to one or more MCP servers (over stdio, HTTP, or other transports)
  • Lists the available tools, resources, and prompts
  • Sends structured JSON-RPC calls to invoke tools
  • Parses responses to use in downstream logic (e.g. as input to the model)

This makes the client the bridge between the model and the toolchain - responsible for orchestration, routing, and formatting.
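Under the hood, a tool invocation travels as a JSON-RPC 2.0 message with the method tools/call. As a rough sketch (the tool name and arguments here are hypothetical, borrowed from the SWAPI server we'll use later), such a request looks like this:

```typescript
// Sketch of the JSON-RPC 2.0 message an MCP client sends to invoke a tool.
// The tool name and arguments are illustrative, not a fixed part of the protocol.
const toolCallRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'getStarship',
    arguments: { name: 'Millennium Falcon' },
  },
};

console.log(JSON.stringify(toolCallRequest));
```

In practice you never build these messages by hand - the SDK's client object does it for you - but it helps to know what's on the wire.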

Installing the SDK

The easiest way to build an MCP client is with the official Node.js SDK:

npm install @modelcontextprotocol/sdk

For the purposes of this sample project we'll also be using the Anthropic SDK, which you can install via:

npm install @anthropic-ai/sdk

Importing Required Modules

import readline from 'readline';
import { Client as McpClient } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import Anthropic from '@anthropic-ai/sdk';

We import the necessary packages. The only one we haven't talked about yet is readline, which provides an interactive command-line interface.

Handling the Server Script Argument

const serverScript = process.argv[2];
if (!serverScript) {
  console.error('Usage: node build/client.js <path/to/your-server.js>');
  process.exit(1);
}

The client expects a path to the MCP server script as a command-line argument. If the path isn't provided, it prints a usage message and exits.

Initialising the MCP Client and Connecting

const mcp = new McpClient(
  { name: 'swapi-stdio-client', version: '1.0.0' },
  { capabilities: { tools: {} } },
);

await mcp.connect(
  new StdioClientTransport({
    command: 'node',
    args: [serverScript],
  }),
);

We create an instance of McpClient, giving it a name and declaring that we support tool capabilities. We then connect to the provided MCP server by spawning it as a subprocess over stdio.

Listing Available Tools

const { tools } = await mcp.listTools();
console.error('🛠 Available tools:', tools.map((t: any) => t.name).join(', '));

After connecting, we request the list of tools from the MCP server via the listTools method.
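Each entry in the returned tools array carries a name, a description, and a JSON Schema describing its input. As an illustration (the exact schema exposed by the SWAPI server's getStarship tool is an assumption here), one entry looks roughly like this:

```typescript
// Illustrative shape of a single entry in the `tools` array returned by
// listTools(). The description and schema details are assumptions.
const exampleTool = {
  name: 'getStarship',
  description: 'Look up a Star Wars starship by name',
  inputSchema: {
    type: 'object',
    properties: { name: { type: 'string' } },
    required: ['name'],
  },
};
```

We'll pass these definitions to Claude shortly, which is how the model learns what it can call and with which arguments.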

Creating the Claude API Client

const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY!,
});

We create an instance of the Anthropic SDK, authenticated using the ANTHROPIC_API_KEY from environment variables. (Note that you need to obtain an API key separately; see https://docs.anthropic.com/en/home.)

Setting Up a Read–Eval–Print Loop (REPL)

const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
const conversation: Array<{ role: 'user' | 'assistant'; content: any }> = [];

A readline interface is created to accept user input from the command line. We also initialise a conversation array to maintain the ongoing chat history between user and Claude.

The Main Input–Processing Loop

while (true) {
  const user = await new Promise<string>((res) => rl.question('You: ', res));
  if (user.trim().toLowerCase() === 'exit') break;

  conversation.push({ role: 'user', content: user });

The outer loop waits for user input. If the input is exit, the loop breaks. Otherwise, the user's message is added to the conversation log.

Inner Tool-Orchestration Loop

let currentConversation = [...conversation];
let responseComplete = false;

while (!responseComplete) {
  const resp: any = await anthropic.messages.create({
    model: 'claude-3-5-sonnet-latest',
    max_tokens: 5000,
    messages: currentConversation,
    tools: tools.map((t: any) => ({
      name: t.name,
      description: t.description,
      input_schema: t.inputSchema,
    })),
  });

We duplicate the conversation state into currentConversation, which is mutable during tool use. We then call Claude's API, providing:

  • The full message history,
  • The tool definitions retrieved earlier from the MCP server.

Claude will return a response that may contain normal text, a tool call, or a mix of both.
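As a rough sketch of that shape (the block values below are made up, and the id format is only representative), resp.content is an array of typed blocks:

```typescript
// Illustrative resp.content from the Messages API: a text block followed by a
// tool_use block. The text, id, and input values here are invented.
const exampleContent: any[] = [
  { type: 'text', text: 'Let me look that up.' },
  {
    type: 'tool_use',
    id: 'toolu_example_123', // hypothetical id; real ids are assigned by the API
    name: 'getStarship',
    input: { name: 'Millennium Falcon' },
  },
];

// The client's job is to scan for tool_use blocks, exactly as we do next:
const toolUse: any = exampleContent.find((b: any) => b.type === 'tool_use');
```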

Handling Tool Calls

const toolUse = resp.content.find((b: any) => b.type === 'tool_use');

if (toolUse) {
  console.log('Claude wants to use a tool:', toolUse.name, 'with args:', toolUse.input);

  currentConversation.push({
    role: 'assistant',
    content: [
      {
        type: 'tool_use',
        name: toolUse.name,
        input: toolUse.input,
        id: toolUse.id,
      },
    ],
  });

If the response includes a tool_use block, that means Claude wants to use one of the registered tools. We log the tool name and its arguments, then push the tool invocation into the currentConversation array.

Executing the Tool via MCP

const toolResult = await mcp.callTool({ name: toolUse.name, arguments: toolUse.input });
const toolResultContent = toolResult.content as Array<{ type: string; text?: string }>;

currentConversation.push({
  role: 'user',
  content: [
    {
      type: 'tool_result',
      tool_use_id: toolUse.id,
      content:
        toolResultContent.find((c) => c.type === 'text')?.text ||
        JSON.stringify(toolResultContent),
    },
  ],
});

The client executes the requested tool using callTool(), passing the name and structured arguments. Once the tool returns, we wrap the result into a tool_result block and feed it back into the conversation. This allows Claude to reflect on the result and respond appropriately.

Handling Regular Text Responses

} else {
  for (const block of resp.content) {
    if (block.type === 'text') {
      console.log('Claude:', block.text);
      conversation.push({ role: 'assistant', content: block.text });
    } else {
      console.log('Claude:', JSON.stringify(block));
      conversation.push({ role: 'assistant', content: JSON.stringify(block) });
    }
  }
  responseComplete = true;
}

If no tool is requested, we assume Claude responded directly. We print the response to the terminal and add it to the main conversation history.

At this point, all that's left to do is clean up, handle errors, and of course run the main() function.
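A minimal sketch of that wrapper might look like the following, where connectAndChat and cleanup are placeholder names standing in for the connection/REPL logic above and the rl.close() / mcp.close() calls:

```typescript
// Sketch of a top-level wrapper: run the chat loop, report fatal errors, and
// always release resources in finally. connectAndChat and cleanup are
// placeholders for this article's actual connection, REPL, and shutdown code.
async function main(connectAndChat: () => Promise<void>, cleanup: () => void) {
  try {
    await connectAndChat();
  } catch (err) {
    console.error('Fatal error:', err);
    process.exitCode = 1;
  } finally {
    cleanup(); // e.g. close the readline interface and the MCP connection
  }
}
```

The finally block matters: without it, a thrown error would leave the spawned server subprocess running and stdin held open.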

You can start the client with the following command, where we also need to point it at an MCP server (see this article for more details):

node --env-file=.env build/client.js build/server.js

Sample Output

✅ SWAPI MCP server (full toolkit) running over STDIO
🛠 Available tools: getCharacter, getStarship, getPlanet, getFilm, getSpecies, getVehicle, searchResource
You: What do you know about the Millenium Falcon?
Claude wants to use a tool: getStarship with args: { name: 'Millennium Falcon' }
Claude: The Millennium Falcon is one of the most famous starships in the Star Wars universe. Here are its key specifications:

1. Class and Model: It's a YT-1300 light freighter manufactured by the Corellian Engineering Corporation.

2. Physical Specifications:
- Length: 34.37 meters
- Cargo capacity: 100,000 units
- Can carry 4 crew members and 6 passengers
- Has consumables to last 2 months

3. Performance:
- Maximum atmospheric speed: 1,050
- Hyperdrive rating: 0.5 (which is very fast for its class)
- MGLT: 75

4. Cost: 100,000 credits

The ship is notable for its significant combat history and has been involved in several major battles in the Star Wars saga, appearing in multiple films including A New Hope, The Empire Strikes Back, Return of the Jedi, and The Force Awakens.

The Millennium Falcon has had several notable pilots, but is most famously associated with Han Solo and Chewbacca. Despite its somewhat shabby appearance, it's renowned for being one of the fastest ships in the galaxy, thanks in part to extensive modifications from its standard design.

Would you like me to look up more specific information about any of its pilots or its appearances in particular films?

Wrapping Up

MCP clients are the unsung heroes of the tool-calling ecosystem. They make it possible for large language models to interact with real systems through structured, discoverable APIs. Using the official SDK, you can get started with just a few lines of code - and once you're up and running, the possibilities grow quickly.

Want to go deeper? Try adding memory, orchestrating multiple servers, or building a conversational wrapper around your tools. If you're curious about the server side of MCP, check out my article on building a full-featured server.