
Building a Model Context Protocol (MCP) server lets you connect powerful AI clients to your private data, internal APIs, and custom automations. This guide shows you exactly how to build an MCP server from scratch. We will use TypeScript to define tools, write their logic, and run a local server that a client like Claude Desktop can connect to.
By following these steps, you will create a secure bridge between AI models and your unique capabilities, going from an empty project folder to a tested, working server.
An MCP server is a specialized backend you build to expose tools and resources to MCP-compatible AI clients. It acts as a secure, structured gateway. Your business has valuable assets like customer data, internal APIs, or proprietary automations, but an AI model cannot access them on its own. The MCP server is the go-between, translating an AI's request into an action your systems can safely execute.
Here is how the key players in the MCP ecosystem relate to each other:
MCP Server: The backend system you will build. It hosts the logic for your custom AI capabilities.
MCP Client: The AI application, like an agent or chatbot, that uses the capabilities your server provides.
Tools: Functions your server exposes to perform actions. For example, a lookupCustomer tool takes a customer ID and returns their details.
Resources: Structured data or context an AI can use for reasoning, such as a company organization chart or a product catalog schema.
Host Application: The application, such as Claude Desktop, that embeds the MCP client and manages connections to one or more MCP servers.
This architecture lets you update your internal tools or data sources without needing to change the AI client, and vice versa. It is a modular and scalable way to extend AI.
Before writing any code, it is important to have your foundation ready. You need a clear goal, the right tools, and a way to test your server.
Here is a checklist of prerequisites:
A programming language and SDK choice: We will use TypeScript and the official MCP SDK (@modelcontextprotocol/sdk), which is a great starting point for beginners.
Local development environment: You need a recent version of Node.js (v18 or later) installed to run the server. A code editor like VS Code is also recommended.
A client for testing: You need an MCP-compatible client that supports local server connections. An application like Claude Desktop is a good option.
A simple use case: Define a single, well-defined action for your server. This keeps the process manageable. Examples include a weather lookup tool, a simple CRM contact fetcher, or a task summarizer.
Your choice of language and SDK will shape your development experience. The MCP project provides official SDKs and documentation for several languages. For beginners, starting with an officially supported language like TypeScript is the most practical path.
The official TypeScript SDK offers strong type safety, great documentation, and an active community. This means you get features like autocompletion in your editor and can catch errors early, which speeds up development. While you can build an MCP server in any language that can handle HTTP requests, using an official SDK removes a lot of boilerplate and ensures you are following the protocol correctly. For this guide, we will use the Node.js and TypeScript stack.
You can see how these components fit into a modern workflow by exploring our guide on the modern AI build stack.
Before you build, decide exactly what your server should do. A server with a vague purpose is not useful. Start with a specific, actionable goal.
Here are some practical examples of what an MCP server can do:
Expose custom tools: Create functions for an AI to use, like a calculateMortgage tool or a translateText tool.
Connect to an API: Build a wrapper around a third party or internal API, like looking up a user in your Stripe account.
Expose internal business data: Safely provide access to information from your internal database or CRM. For example, a tool to fetchOrderDetails by order ID.
Wrap automations: Trigger existing scripts or automation workflows, such as running a daily report generation script.
Provide resources: Expose structured context like product schemas, company policies, or file directories that an AI can use for better reasoning.
Power agent workflows: Combine several tools to enable a complex agent to complete multi-step tasks like onboarding a new customer.
For this tutorial, our goal will be to build a server that exposes a simple getWeather tool.
With a clear goal, it is time to set up the project. We will create the project folder, install the necessary dependencies, and create the main server file.
First, open your terminal and create a new directory for your server.
mkdir my-mcp-server
cd my-mcp-server
Next, initialize a Node.js project. This command creates a package.json file to manage your project's dependencies.
npm init -y
Now, install the core dependencies: the official MCP TypeScript SDK and Zod for schema validation, plus the TypeScript tooling as development dependencies.
npm install @modelcontextprotocol/sdk zod
npm install -D typescript ts-node @types/node
Your project foundation is now in place. You should have a package.json file and a node_modules folder.
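TypeScript also needs a compiler configuration. You can generate one with npx tsc --init, or start from a minimal tsconfig.json sketch like the following. These options are a reasonable default for a Node.js server, not the only valid setup, so adapt them to your project as needed.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "outDir": "dist"
  },
  "include": ["*.ts"]
}
```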
The core of any MCP server is the set of capabilities it exposes: tools and resources. Understanding the difference is key to building an effective server.
Tools are for action. They are functions that perform a task, like getWeather or createTask. Use a tool when you want the AI to execute a command.
Resources are for knowledge. They provide structured data that gives an AI context, like a companyOrgChart or productCatalogSchema. Use a resource when you want the AI to be aware of information.
We will start by building a getWeather tool. A tool definition acts as a contract, telling the AI client its name, what it does, the inputs it needs, and what the output will look like.
Create a file named server.ts in your project directory and add the following code.
// server.ts
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

// 1. Create the server instance with a name and version.
const server = new McpServer({
  name: 'my-mcp-server',
  version: '1.0.0',
});

// 2. Register the tool with a name, description, and input schema.
server.registerTool(
  'getWeather',
  {
    description: 'Gets the current weather for a specific city.',
    inputSchema: {
      city: z.string().describe('The city to get the weather for, e.g., "San Francisco"'),
    },
  },
  // 3. The handler contains the logic. We will implement this next.
  async ({ city }) => {
    // For now, ignore the city and return a hardcoded value for testing.
    return {
      content: [{ type: 'text', text: JSON.stringify({ temperature: 22, unit: 'Celsius' }) }],
    };
  }
);

// 4. Connect the server to a stdio transport so a client can launch and talk to it.
const transport = new StdioServerTransport();
server.connect(transport).catch((err) => console.error('Server error:', err));
The inputSchema is critical. It tells any connected AI client that the getWeather tool requires a city argument of type string. This structured definition, expressed here with Zod, makes the interaction reliable and predictable.

Our server skeleton is defined. Now we need to implement the actual logic for our getWeather tool. We will replace the placeholder handler with code that processes the city input, calls an external data source, and returns a structured result.
To keep this example simple, we will simulate an external API call. In a real application, you would use a library like axios or fetch to call a real weather service.
First, let's create a mock function to simulate fetching weather data.
// A mock function to simulate fetching weather data
const fetchWeatherFromAPI = async (
  city: string
): Promise<{ temperature: number; unit: string; condition: string }> => {
  // Log to stderr so diagnostic output never mixes with the protocol stream.
  console.error(`Fetching weather for ${city}...`);
  // Simulate a network delay
  await new Promise((resolve) => setTimeout(resolve, 500));
  const cityLower = city.toLowerCase();
  if (cityLower === 'san francisco') {
    return { temperature: 18, unit: 'Celsius', condition: 'Foggy' };
  } else if (cityLower === 'london') {
    return { temperature: 15, unit: 'Celsius', condition: 'Cloudy' };
  }
  // A simple default for other cities
  return { temperature: 25, unit: 'Celsius', condition: 'Sunny' };
};
Now, let's update the getWeather registration in server.ts to use this function and add basic error handling.
// server.ts (updated tool registration)
server.registerTool(
  'getWeather',
  {
    description: 'Gets the current weather for a specific city.',
    inputSchema: {
      // 1. Validate input declaratively: a non-empty city name is required.
      city: z.string().min(1).describe('The city to get the weather for'),
    },
  },
  async ({ city }) => {
    try {
      // 2. Call the data source
      const weatherData = await fetchWeatherFromAPI(city);
      // 3. Return structured results
      return {
        content: [{ type: 'text', text: JSON.stringify(weatherData) }],
      };
    } catch (error) {
      // 4. Handle errors and flag the failure to the client
      console.error('Error fetching weather:', error);
      return {
        content: [{ type: 'text', text: 'Failed to retrieve weather data.' }],
        isError: true,
      };
    }
  }
);
This handler logic is a solid pattern: it validates input, calls an external service, returns a clean result, and handles failures. This makes the tool's output clean and predictable.
With the logic in place, it is time for local testing. This lets you debug and verify the server before connecting it to an AI client.
First, make sure ts-node is available in your project (npm install -D ts-node if it is not already installed). Note that running ts-node server.ts on its own will appear to do nothing: a stdio server waits silently on standard input for a client to connect. The easiest way to exercise it is the MCP Inspector, the official debugging tool for MCP servers:
npx @modelcontextprotocol/inspector npx ts-node server.ts
The Inspector launches your server as a child process and prints the address of its web UI, typically something like:
MCP Inspector is up and running at http://localhost:6274
Open that URL in your browser, connect, and list the server's tools. You should see getWeather along with its input schema. Call it with a test city and confirm the structured weather data comes back. This confirms your server is live. The local development loop of code, run, test is the fastest way to build and debug. By validating your tool in the Inspector, you can fix server-side issues before involving an AI client.
For more complex setups, containerizing your server can make deployment much smoother. If you are interested, we have a guide on how to dockerize an MCP server to create consistent and portable environments.
Now it is time to connect your local server to an MCP compatible client. This is when you see your work come to life.
Most MCP clients, like Claude Desktop, are configured to launch local servers from a settings file rather than a URL. For Claude Desktop, you add an entry to its claude_desktop_config.json file describing the command that starts your server; the client then runs the server over stdio and automatically discovers the tools it exposes.
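As an illustration, a Claude Desktop entry for the server in this guide might look like the sketch below. The absolute project path is a placeholder you must replace, and the config file's location varies by platform (on macOS it is typically ~/Library/Application Support/Claude/claude_desktop_config.json).

```json
{
  "mcpServers": {
    "my-mcp-server": {
      "command": "npx",
      "args": ["ts-node", "/absolute/path/to/my-mcp-server/server.ts"]
    }
  }
}
```

After saving the file, restart the client so it relaunches and rediscovers your server.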
Once connected, start a new conversation with the AI and ask a question that requires your tool. For example: "What is the weather like in London?"
The AI will recognize it has a new capability, construct a request for your getWeather tool with the 'London' parameter, and send it to your server. Your server's console will log the request, execute the handler, and return the structured weather data. The AI then uses that data to provide a natural language answer. This simple interaction confirms your entire setup is working.
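Under the hood, that tool invocation travels as a JSON-RPC 2.0 message. A representative tools/call payload for this exchange (the id value is arbitrary) looks like:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "getWeather",
    "arguments": { "city": "London" }
  }
}
```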

Now that the basic connection works, it is time to make the server more robust. Here are some practical improvements.
Input Validation: Good servers never trust their inputs. Use a schema validation library like Zod to enforce strict rules on incoming data. This prevents a whole class of bugs.
Authentication and Authorization: If your server handles sensitive data, security is critical. Implement a check for a secret API key or token in the request headers. Your server should reject any request without a valid key.
Structured Logging: As your server grows, console.log is not enough. Use a library like Pino or Winston to create structured, searchable logs. This is essential for debugging in a production environment.
Better Error Messages: Provide clean and informative error messages. This helps both you and the AI client understand what went wrong.
Modular Organization: As you add more tools, your server.ts file can get crowded. Organize your code into a tools/ directory, with each tool in its own file. This keeps the project clean and scalable.
These improvements take your server from a simple prototype to a more reliable and secure service.
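The authentication point above can be sketched as a simple bearer-token guard. This is a minimal, framework-agnostic illustration; the header name and token handling are assumptions, not part of the MCP protocol.

```typescript
// A minimal bearer-token guard (illustrative sketch). Assumes the
// client sends an "authorization" header of the form "Bearer <token>".
function isAuthorized(
  headers: Record<string, string | undefined>,
  validToken: string
): boolean {
  const auth = headers['authorization'] ?? '';
  const [scheme, token] = auth.split(' ');
  // Reject any request without a valid key.
  return scheme === 'Bearer' && token === validToken;
}

console.log(isAuthorized({ authorization: 'Bearer s3cret' }, 's3cret')); // true
console.log(isAuthorized({}, 's3cret')); // false
```

In production, prefer a constant-time comparison (for example Node's crypto.timingSafeEqual) so an attacker cannot learn token prefixes from response timing.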
Publishing your server is an optional step. The MCP ecosystem includes an official MCP Registry, which is a public directory for discovering servers. However, you are not required to use it. If you built an internal tool or a server for a personal project, keeping it private is often the best choice.
Should you decide to publish, you will need to:
Package your server: Containerizing your application with Docker is the industry standard.
Host it online: Deploy your container to a cloud provider like AWS, Google Cloud, or a platform like Heroku.
Expose a public URL: Your server needs a stable, public address so clients can find it.
Remember to validate your server thoroughly in a local environment before considering publication. The MCP Registry is a discovery tool, but a solid, well tested server should always be your primary focus. The server market was valued at USD 342.09 billion in 2025 and is on track to hit USD 1,027.83 billion by 2033. This incredible growth, detailed in server market research from Grand View Research, means the compute power needed to host and scale your server is becoming more accessible and robust every day.

When building an MCP server, it is easy to fall into common traps. Avoiding them will save you a lot of time and frustration.
Building without a clear use case: A server with vague tools is useless. Start with a specific problem to solve.
Returning poorly structured outputs: AI clients expect predictable, structured data. Messy or inconsistent JSON will cause failures.
Skipping local testing: Always test your server locally before connecting it to a client. This helps you isolate issues quickly.
Overbuilding too early: Start with one simple tool. Get the end-to-end flow working perfectly before adding more complexity.
Ignoring security or authorization: Never expose tools that access sensitive data without robust authentication. Learn how to run an MCP security audit to keep your server safe.
Confusing resources with tools: Remember that tools perform actions, while resources provide knowledge. Use the right one for the job.
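To make the structured-output point concrete, here is an illustrative result envelope. The ok/data/error field names are an assumption of this sketch, not part of the MCP protocol; the idea is that a discriminated union guarantees the client always knows which fields to expect.

```typescript
// Illustrative result envelope: success and failure share one
// predictable discriminated-union shape.
type ToolResult<T> =
  | { ok: true; data: T }
  | { ok: false; error: string };

// Hypothetical wrapper around a weather lookup.
function weatherResult(found: boolean): ToolResult<{ temperature: number; unit: string }> {
  return found
    ? { ok: true, data: { temperature: 18, unit: 'Celsius' } }
    : { ok: false, error: 'City not found.' };
}

console.log(JSON.stringify(weatherResult(true)));
console.log(JSON.stringify(weatherResult(false)));
```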
Here is a concise checklist to ensure your MCP server is ready.
Clear server purpose: The server solves a specific, well-defined problem.
At least one useful tool or resource: The server exposes at least one working capability.
Local testing completed: The server runs and passes tests on your local machine.
Client connection working: The server successfully connects to an MCP client.
Errors handled properly: The server handles failures gracefully and returns informative errors.
Security reviewed: Authentication and authorization are in place if needed.
Packaging and publication considered: You have a plan for deployment if you intend to share the server.

Here are answers to some frequently asked questions about building MCP servers.
What language should I use to build an MCP server?
The most practical approach is to use a language with an official MCP SDK. TypeScript is an excellent choice for beginners due to its strong typing and great documentation. Python is another solid option. Using an official SDK saves you from reinventing the wheel and ensures you follow protocol standards.
Should I build a tool or a resource first?
Start by building a tool. A tool represents an action, like getWeather, which is more intuitive to grasp than a resource. Once you have mastered the tool-building workflow, you can add resources to provide the AI with more structured context for reasoning.
Can I test my MCP server locally?
Yes, and you absolutely should. Always build and test your server in a local environment first. Most MCP clients support connecting to a local server, which allows you to debug and iterate quickly before you consider publishing it.
Do I have to publish my server to the MCP Registry?
No, the MCP Registry is optional. It is a discovery service for public servers. For internal tools, personal projects, or local development, you can run your server privately and connect to it directly from your client.
How do I secure my MCP server?
Security is essential, especially for servers handling sensitive data. The most common pattern is to require an API key or bearer token in the request header. Your server logic must validate this token before processing any request. For internal servers, you can also use network-level security like a firewall or a VPC.
What is the difference between a local and a hosted MCP server?
A local MCP server runs on your own machine (e.g., localhost) and is typically used for development and testing. A hosted MCP server is deployed to a cloud provider (like AWS or Google Cloud) and is accessible from the public internet via a URL. You develop locally, then host when you are ready to share your server more broadly. The total number of production-ready MCP servers hit 1,412 by February 2026, a staggering 232% increase in just six months. This shows that MCP is no longer a novelty but a critical piece of infrastructure for companies building practical, real-world AI solutions. For a deeper look at the architecture, you can explore more about MCP servers.
Ready to discover, compare, and build with the best AI tools, agents, and MCP servers? Flaex.ai is your central hub for assembling a modern AI stack. Find top-tier MCP servers, AI agents, and more to accelerate your next project. Explore the directory at https://www.flaex.ai and sign up to get free tools and quest incentives.