Creating an MCP Client: Connecting LLMs to the Real World
Large language models are impressive on their own. They become wildly more useful when you bolt on access to external tools, APIs and resources. That’s what the Model Context Protocol (MCP) does, and while the server side exposes the tools, the client is the piece that makes them reachable by the model.
Here’s how to build one. You’ll wire up a connection to an MCP server, discover available tools and invoke them with structured input.
What is an MCP Client?
An MCP client is a piece of code that acts on behalf of the LLM or application. It:
- Connects to one or more MCP servers (over stdio, HTTP, or other transports)
- Lists the available tools, resources, and prompts
- Sends structured JSON-RPC calls to invoke tools
- Parses responses to use in downstream logic (e.g. as input to the model)
The client sits between the model and the toolchain. It handles orchestration, routing and formatting.
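Under the hood, a tool invocation travels as a JSON-RPC 2.0 request. Here's a rough sketch of what a tools/call message looks like on the wire — the tool name and arguments (`get_starship`, a Star Wars lookup) are purely illustrative:

```typescript
// A minimal sketch of the JSON-RPC 2.0 message an MCP client sends to
// invoke a tool. Tool name and arguments here are made up for illustration.
interface ToolCallRequest {
  jsonrpc: '2.0';
  id: number;
  method: 'tools/call';
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCallRequest(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return { jsonrpc: '2.0', id, method: 'tools/call', params: { name, arguments: args } };
}

const req = buildToolCallRequest(1, 'get_starship', { name: 'Millennium Falcon' });
console.log(JSON.stringify(req));
```

In practice the SDK builds and frames these messages for you; you never construct them by hand.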
Installing the SDK
The easiest way to build an MCP client is with the official Node.js SDK, which you can install via npm install @modelcontextprotocol/sdk
For this sample project we'll also be using the Anthropic SDK, which you can install via npm install @anthropic-ai/sdk.
Importing Required Modules
import readline from 'readline';
import { Client as McpClient } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import Anthropic from '@anthropic-ai/sdk';
We pull in the necessary packages. The one we haven’t mentioned yet is readline, which gives us an interactive command-line interface.
Handling the Server Script Argument
const serverScript = process.argv[2];
if (!serverScript) {
console.error('Usage: node build/client.js <path/to/your-server.js>');
process.exit(1);
}
The client expects a path to the MCP server script as a command-line argument. No path, no party: it prints a usage message and exits.
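If your server script takes arguments of its own, you could split argv into the script path and everything after it. This helper isn't part of the client above — just a sketch of one way to extend it:

```typescript
// Splits process.argv-style input into the server script path and any
// extra arguments to forward to the server. Purely illustrative.
function parseServerArgs(argv: string[]): { script: string; extra: string[] } | null {
  const [script, ...extra] = argv.slice(2); // skip `node` and the client path
  return script ? { script, extra } : null;
}

const parsed = parseServerArgs(['node', 'build/client.js', 'build/server.js', '--verbose']);
console.log(parsed);
```

The `extra` array could then be appended to the `args` passed to the stdio transport.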
Initialising the MCP Client and Connecting
const mcp = new McpClient(
{ name: 'swapi-stdio-client', version: '1.0.0' },
{ capabilities: { tools: {} } },
);
await mcp.connect(
new StdioClientTransport({
command: 'node',
args: [serverScript],
}),
);
We spin up an McpClient instance, giving it a name and declaring tool capabilities. Then we connect to the MCP server by spawning it as a subprocess over stdio.
Listing Available Tools
const { tools } = await mcp.listTools();
console.error('🛠 Available tools:', tools.map((t: any) => t.name).join(', '));
Once connected, we grab the list of tools from the MCP server via the listTools method.
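One wrinkle worth noting: MCP servers report each tool's schema as inputSchema (camelCase), while the Anthropic API expects input_schema (snake_case). The mapping we'll do inline later can be factored into a helper like this — the sample tool is hypothetical:

```typescript
// Maps an MCP tool definition (camelCase `inputSchema`) to the shape the
// Anthropic messages API expects (snake_case `input_schema`).
interface McpTool {
  name: string;
  description?: string;
  inputSchema: object;
}

function toAnthropicTool(t: McpTool) {
  return { name: t.name, description: t.description, input_schema: t.inputSchema };
}

const mapped = toAnthropicTool({
  name: 'get_starship', // hypothetical tool for illustration
  description: 'Look up a Star Wars starship',
  inputSchema: { type: 'object', properties: { name: { type: 'string' } } },
});
console.log(mapped);
```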
Creating the Claude API Client
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY!,
});
We create an instance of the Anthropic SDK, authenticated using the ANTHROPIC_API_KEY from environment variables. (You’ll need to grab this separately from https://docs.anthropic.com/en/home).
Setting Up a Read-Eval-Print Loop (REPL)
const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
const conversation: Array<{ role: 'user' | 'assistant'; content: any }> = [];
A readline interface picks up user input from the command line. We also initialise a conversation array to hold the ongoing chat history between user and Claude.
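The callback-style rl.question will be wrapped in a promise inside the loop so we can await it. That wrapping can be sketched as a standalone helper — here with an injected questioner (a stand-in for readline) so the example is self-contained:

```typescript
// Wraps a callback-style question() in a promise so the REPL loop can await it.
// `questioner` is anything with a readline-like question() method.
function askWith(
  questioner: { question(prompt: string, cb: (answer: string) => void): void },
  prompt: string,
): Promise<string> {
  return new Promise((resolve) => questioner.question(prompt, resolve));
}

// A fake questioner stands in for readline so this sketch needs no stdin.
const answer = await askWith({ question: (_p, cb) => cb('hello there') }, 'You: ');
console.log(answer);
```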
The Main Input-Processing Loop
while (true) {
const user = await new Promise<string>((res) => rl.question('You: ', res));
if (user.trim().toLowerCase() === 'exit') break;
conversation.push({ role: 'user', content: user });
The outer loop waits for user input. Type “exit” and it breaks. Otherwise, the message gets pushed onto the conversation log.
Inner Tool-Orchestration Loop
let currentConversation = [...conversation];
let responseComplete = false;
while (!responseComplete) {
const resp: any = await anthropic.messages.create({
model: 'claude-3-5-sonnet-latest',
max_tokens: 5000,
messages: currentConversation,
tools: tools.map((t: any) => ({
name: t.name,
description: t.description,
input_schema: t.inputSchema,
})),
});
We clone the conversation state into currentConversation (mutable during tool use), then call Claude’s API with:
- The full message history
- The tool definitions pulled earlier from the MCP server
Claude returns a response that may contain plain text, a tool call, or both.
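A simplified model of the two block types this client cares about (real responses may contain other types as well) makes the branching below easier to follow:

```typescript
// Simplified shapes of the two response blocks this client handles.
type TextBlock = { type: 'text'; text: string };
type ToolUseBlock = { type: 'tool_use'; id: string; name: string; input: Record<string, unknown> };
type ContentBlock = TextBlock | ToolUseBlock;

// Returns the first tool_use block, or undefined if Claude answered directly.
function firstToolUse(content: ContentBlock[]): ToolUseBlock | undefined {
  return content.find((b): b is ToolUseBlock => b.type === 'tool_use');
}

const sample: ContentBlock[] = [
  { type: 'text', text: 'Let me look that up.' },
  { type: 'tool_use', id: 'tu_1', name: 'get_starship', input: { name: 'Millennium Falcon' } },
];
console.log(firstToolUse(sample)?.name);
```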
Handling Tool Calls
const toolUse = resp.content.find((b: any) => b.type === 'tool_use');
if (toolUse) {
console.log('Claude wants to use a tool:', toolUse.name, 'with args:', toolUse.input);
currentConversation.push({
role: 'assistant',
content: [
{
type: 'tool_use',
name: toolUse.name,
input: toolUse.input,
id: toolUse.id,
},
],
});
When the response includes a tool_use block, Claude wants to call one of the registered tools. We log the tool name and arguments, then push the invocation into currentConversation.
Executing the Tool via MCP
const toolResult = await mcp.callTool({ name: toolUse.name, arguments: toolUse.input });
const toolResultContent = toolResult.content as Array<{ type: string; text?: string }>;
currentConversation.push({
role: 'user',
content: [
{
type: 'tool_result',
tool_use_id: toolUse.id,
content:
toolResultContent.find((c) => c.type === 'text')?.text ||
JSON.stringify(toolResultContent),
},
],
});
The client fires the requested tool using callTool(), passing the name and structured arguments. Once the tool returns, we wrap the result into a tool_result block and feed it back into the conversation. Claude can then reflect on the result and respond.
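The result-unwrapping logic above — prefer the first text block, fall back to serializing everything — is compact enough to isolate as a helper. A sketch:

```typescript
// Extracts a string payload from MCP tool-result content: the first text
// block if present, otherwise the whole array serialized as JSON.
function toolResultToText(content: Array<{ type: string; text?: string }>): string {
  return content.find((c) => c.type === 'text')?.text ?? JSON.stringify(content);
}

console.log(toolResultToText([{ type: 'text', text: 'YT-1300 light freighter' }]));
console.log(toolResultToText([{ type: 'image' }]));
```

The JSON fallback means non-text results (images, resources) still reach Claude in some form rather than being dropped.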
Handling Regular Text Responses
} else {
for (const block of resp.content) {
if (block.type === 'text') {
console.log('Claude:', block.text);
conversation.push({ role: 'assistant', content: block.text });
} else {
console.log('Claude:', JSON.stringify(block));
conversation.push({ role: 'assistant', content: JSON.stringify(block) });
}
}
responseComplete = true;
}
If no tool was requested, Claude has answered directly. We print the response and add it to the main conversation history.
At this point all that’s left is cleanup, error handling and calling the main() function.
Starting the client looks like this (pointing it at an MCP server; see this article for details): node --env-file=.env build/client.js build/server.js
Sample Output
Here’s what a typical interaction looks like when asking about the Millennium Falcon:
Wrapping Up
MCP clients are the glue of the tool-calling ecosystem. They make it possible for large language models to interact with real systems through structured, discoverable APIs. With the official SDK, you can get wired up in a handful of lines.
Want to push further? Try bolting on memory, orchestrating multiple servers, or wrapping a conversational layer around your tools. For the server side of MCP, check out my article on building a full-featured server.