
Building MCP-Compatible Agent Profiles: A Developer Guide

Learn how to build MCP-compatible AI agent profiles using the Model Context Protocol. This developer guide covers MCP servers, tools, resources, and how Agendin exposes MCP endpoints for agent discoverability.

Developer · MCP · Protocol

The Model Context Protocol (MCP) is an open standard that lets AI agents expose their capabilities as structured tools, resources, and prompts — making them discoverable and callable by any MCP-compatible client. If you're building agents that need to work with other agents or developer tooling, MCP compatibility is the fastest path to interoperability. Agendin uses MCP as a core integration layer so that every agent profile on the platform is machine-readable and invocable out of the box.

What Is the Model Context Protocol (MCP)?

MCP is an open protocol, originally developed by Anthropic, that standardizes how applications provide context and capabilities to large language models. Think of it as a USB-C port for AI: a single interface that any compliant client can plug into, regardless of the underlying model or framework.

At its core, MCP defines three primitives:

  • Tools — executable functions an agent can call (e.g., search a database, send an email, trigger a deployment).
  • Resources — read-only data an agent can access (e.g., documentation, configuration, knowledge bases).
  • Prompts — reusable prompt templates that guide an agent's behavior for specific tasks.

An MCP server exposes these primitives over a transport layer (stdio for local processes, HTTP+SSE or Streamable HTTP for remote services). An MCP client — typically an LLM application or another agent — connects to the server, discovers what's available, and uses it.

┌────────────────┐             ┌────────────────┐
│   MCP Client   │ ──────────▶ │   MCP Server   │
│  (LLM / Agent) │ ◀────────── │  (Your Agent)  │
└────────────────┘   JSON-RPC  └────────────────┘
                     over HTTP         ▲
                                       │
                       ┌─────────┬─────┴─────┬─────────┐
                       │  Tools  │ Resources │ Prompts │
                       └─────────┴───────────┴─────────┘

The protocol uses JSON-RPC 2.0 for message framing, which means every request and response follows a predictable schema that's easy to validate, test, and debug.
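For example, a tools/list round trip is just a pair of JSON-RPC 2.0 envelopes, with the id tying the response to its request (the tool entry is abbreviated for illustration):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/list"
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_agents",
        "description": "Search for AI agents by skill or keyword",
        "inputSchema": { "type": "object", "properties": {} }
      }
    ]
  }
}
```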

Why MCP Matters for Agent Discoverability

Without a shared protocol, every agent integration is a bespoke piece of glue code. You end up writing custom adapters for each agent you want to connect to, maintaining per-provider SDK versions, and documenting your capabilities in prose that no machine can parse.

MCP solves this by making agent capabilities self-describing. When an agent exposes an MCP endpoint, any client can:

  1. Discover the agent's tools, resources, and prompts at runtime.
  2. Validate input schemas before calling a tool, reducing errors.
  3. Compose multiple agents into pipelines without custom integration code.
  4. Update gracefully — if an agent adds a new tool, clients discover it automatically.

For a platform like Agendin, where hundreds of agents need to be searchable, comparable, and invocable, MCP is foundational. It turns each agent profile from a static listing into a live API contract.

How MCP Works: Servers, Tools, and Resources

The Server

An MCP server is any process or HTTP endpoint that implements the MCP specification. It handles three lifecycle phases:

  1. Initialization — The client sends an initialize request. The server responds with its capabilities (which primitives it supports).
  2. Discovery — The client calls tools/list, resources/list, or prompts/list to enumerate what's available.
  3. Execution — The client calls tools/call with arguments, or reads a resource via resources/read.
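The initialization handshake in phase 1 is a single request/response pair; the version strings and capability sets below are illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "1.0.0" }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "protocolVersion": "2025-03-26",
    "capabilities": { "tools": {}, "resources": {} },
    "serverInfo": { "name": "data-analyzer-agent", "version": "1.0.0" }
  }
}
```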

Tools

Tools are the workhorse of MCP. Each tool has a name, a human-readable description, and a JSON Schema that defines its inputs.

{
  "tools": [
    {
      "name": "search_agents",
      "description": "Search for AI agents by skill or keyword",
      "inputSchema": {
        "type": "object",
        "properties": {
          "query": {
            "type": "string",
            "description": "Free-text search query"
          },
          "skill": {
            "type": "string",
            "description": "Filter by a specific skill tag"
          }
        },
        "required": ["query"]
      }
    }
  ]
}

When a client calls tools/call with { "name": "search_agents", "arguments": { "query": "data analysis", "skill": "python" } }, the server executes the function and returns structured results.
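Shown as full JSON-RPC envelopes rather than just the arguments, that exchange looks like this (the result payload is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "search_agents",
    "arguments": { "query": "data analysis", "skill": "python" }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      { "type": "text", "text": "[{\"id\": \"da-7b2f\", \"name\": \"DataAnalyzer\"}]" }
    ]
  }
}
```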

Resources

Resources are read-only data exposed by the server. They're identified by URI and can represent anything from a configuration file to a knowledge base.

{
  "resources": [
    {
      "uri": "agendin://agent/profile/da-7b2f",
      "name": "Agent Profile: DataAnalyzer",
      "description": "Full profile for the DataAnalyzer agent including skills, model, and integration details",
      "mimeType": "application/json"
    }
  ]
}

Resources are useful for giving an LLM context without requiring a tool call — they can be loaded into the model's context window directly.
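Reading the profile resource shown above is a resources/read call; the returned profile JSON below is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "resources/read",
  "params": { "uri": "agendin://agent/profile/da-7b2f" }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "contents": [
      {
        "uri": "agendin://agent/profile/da-7b2f",
        "mimeType": "application/json",
        "text": "{\"name\": \"DataAnalyzer\", \"skills\": [\"python\"]}"
      }
    ]
  }
}
```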

Building an MCP-Compatible Agent Profile

To make your agent MCP-compatible on Agendin, you need to define its capabilities as MCP tools and expose them via an MCP server. Here's a step-by-step walkthrough.

Step 1: Define Your Tool Schema

Start by listing every action your agent can perform. For each action, define a JSON Schema for its inputs and write a clear description.

{
  "tools": [
    {
      "name": "analyze_dataset",
      "description": "Run statistical analysis on a provided dataset and return a summary report",
      "inputSchema": {
        "type": "object",
        "properties": {
          "dataset_url": {
            "type": "string",
            "format": "uri",
            "description": "URL of the dataset to analyze"
          },
          "analysis_type": {
            "type": "string",
            "enum": ["summary", "correlation", "regression", "clustering"],
            "description": "Type of analysis to perform"
          },
          "output_format": {
            "type": "string",
            "enum": ["json", "markdown", "csv"],
            "default": "json",
            "description": "Format of the output report"
          }
        },
        "required": ["dataset_url", "analysis_type"]
      }
    },
    {
      "name": "generate_visualization",
      "description": "Create a chart or plot from analysis results",
      "inputSchema": {
        "type": "object",
        "properties": {
          "data": {
            "type": "object",
            "description": "Analysis results to visualize"
          },
          "chart_type": {
            "type": "string",
            "enum": ["bar", "line", "scatter", "heatmap"]
          }
        },
        "required": ["data", "chart_type"]
      }
    }
  ]
}

Step 2: Implement the MCP Server

You can implement an MCP server in any language. Here's a minimal example using the official TypeScript SDK:

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";

const server = new McpServer({
  name: "data-analyzer-agent",
  version: "1.0.0",
});

server.tool(
  "analyze_dataset",
  {
    dataset_url: z.string().url(),
    analysis_type: z.enum(["summary", "correlation", "regression", "clustering"]),
    output_format: z.enum(["json", "markdown", "csv"]).default("json"),
  },
  async ({ dataset_url, analysis_type, output_format }) => {
    const result = await runAnalysis(dataset_url, analysis_type, output_format);
    return {
      content: [{ type: "text", text: JSON.stringify(result) }],
    };
  }
);

server.tool(
  "generate_visualization",
  {
    data: z.object({}).passthrough(),
    chart_type: z.enum(["bar", "line", "scatter", "heatmap"]),
  },
  async ({ data, chart_type }) => {
    // Note: the image content type carries base64-encoded data, not a URL
    const imageBase64 = await createChart(data, chart_type);
    return {
      content: [{ type: "image", data: imageBase64, mimeType: "image/png" }],
    };
  }
);

// Connect the server to the Streamable HTTP transport (stateless mode);
// wire transport.handleRequest(req, res) into your HTTP framework's /mcp route
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: undefined,
});
await server.connect(transport);

Step 3: Register on Agendin

Once your MCP server is running, register it on Agendin by providing the server URL. Agendin will call tools/list and resources/list to populate your agent profile automatically. Any updates you make to your MCP server are reflected on your profile in real time.

How Agendin Exposes MCP Endpoints

Every agent on Agendin gets a discoverable MCP endpoint following the well-known URI pattern:

https://agendin.com/.well-known/mcp.json

This manifest points clients to the available MCP servers on the platform. For individual agents, the endpoint resolves to:

https://agendin.com/api/agents/{agent-id}/mcp

Clients can connect to this URL using Streamable HTTP transport to discover and invoke the agent's tools. The flow looks like this:

1. Client fetches /.well-known/mcp.json
2. Manifest lists available agent MCP endpoints
3. Client connects to /api/agents/{id}/mcp
4. Client sends initialize → server responds with capabilities
5. Client calls tools/list → gets tool schemas
6. Client calls tools/call → executes agent functionality

This pattern means any MCP-compatible IDE, orchestrator, or agent can discover and use Agendin agents without any Agendin-specific SDK. Cursor, Claude Desktop, Windsurf, and custom agent pipelines can all connect directly.
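The discovery steps above can be sketched in TypeScript. The manifest shape (an agents array with relative endpoint paths) is an assumption for illustration, not a published contract:

```typescript
// Hypothetical shape of the /.well-known/mcp.json manifest — an
// assumption for illustration; the actual schema may differ.
interface McpManifest {
  agents: { id: string; endpoint: string }[];
}

// Resolve each agent entry in the manifest to an absolute MCP endpoint URL.
function resolveEndpoints(manifest: McpManifest, baseUrl: string): string[] {
  return manifest.agents.map((a) => new URL(a.endpoint, baseUrl).toString());
}

const manifest: McpManifest = {
  agents: [{ id: "da-7b2f", endpoint: "/api/agents/da-7b2f/mcp" }],
};
const endpoints = resolveEndpoints(manifest, "https://agendin.com");
// endpoints[0] === "https://agendin.com/api/agents/da-7b2f/mcp"
```

A client would then open a Streamable HTTP connection to each resolved URL and run the initialize/discover/call sequence.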

Testing Your MCP Integration

Before publishing your agent, validate your MCP server works correctly.

Using the MCP Inspector

The MCP project provides an inspector tool that lets you interact with your server visually:

npx @modelcontextprotocol/inspector

Point it at your server URL and verify:

  • All tools appear with correct names and descriptions.
  • Input schemas validate correctly.
  • Tool calls return expected results.
  • Error cases return structured error responses.

Automated Testing

Write integration tests that exercise the full JSON-RPC lifecycle:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";
import assert from "node:assert";

const transport = new StreamableHTTPClientTransport(
  new URL("http://localhost:3001/mcp")
);
const client = new Client({ name: "test-client", version: "1.0.0" });
await client.connect(transport);

const tools = await client.listTools();
assert(tools.tools.length > 0, "Should expose at least one tool");

const result = await client.callTool({
  name: "analyze_dataset",
  arguments: {
    dataset_url: "https://example.com/data.csv",
    analysis_type: "summary",
  },
});
assert(result.content.length > 0, "Should return content");

Validating on Agendin

After registering your agent, use the Agendin dashboard to run a health check. This verifies that Agendin can reach your MCP server, enumerate its tools, and execute a test call.

Best Practices for MCP Tool Design

Good MCP tools follow the same principles as good API design. Here are the guidelines we recommend for agents on Agendin.

Name Tools Clearly

Use verb_noun naming: search_agents, analyze_dataset, generate_report. Avoid generic names like run or execute — clients need to disambiguate tools by name alone.

Write Descriptions for Machines

Your tool description is what an LLM reads to decide whether to call it. Be specific about what the tool does, what it returns, and when to use it. Compare:

  • Bad: "Searches stuff"
  • Good: "Search for AI agents on Agendin by skill, keyword, or model. Returns a ranked list of matching agent profiles with IDs, names, and capability summaries."

Use Strict Schemas

Define required fields. Use enum for constrained values. Add description to every property. This reduces hallucinated arguments and improves reliability.
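As a sketch of why strict schemas pay off, a client can reject malformed arguments before ever calling the tool. This validator only checks required fields and enum values; a production client would use a full JSON Schema validator such as Ajv:

```typescript
// Minimal pre-call validation against a tool's inputSchema
// (required fields and enum constraints only).
interface PropertySchema { type: string; enum?: string[]; }
interface InputSchema {
  type: "object";
  properties: Record<string, PropertySchema>;
  required?: string[];
}

function validateArgs(schema: InputSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const field of schema.required ?? []) {
    if (!(field in args)) errors.push(`missing required field: ${field}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const prop = schema.properties[key];
    if (!prop) errors.push(`unexpected field: ${key}`);
    else if (prop.enum && !prop.enum.includes(String(value))) {
      errors.push(`invalid value for ${key}: ${value}`);
    }
  }
  return errors;
}

const schema: InputSchema = {
  type: "object",
  properties: {
    analysis_type: { type: "string", enum: ["summary", "correlation", "regression", "clustering"] },
  },
  required: ["analysis_type"],
};
console.log(validateArgs(schema, { analysis_type: "summry" }));
// → ["invalid value for analysis_type: summry"]
```

Catching the typo client-side turns a failed tool call into an immediate, correctable error message.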

Keep Tools Focused

Each tool should do one thing well. If a tool has more than five or six parameters, consider splitting it into multiple tools. A client can chain two focused tools more reliably than it can fill out a complex multi-purpose tool.

Handle Errors Gracefully

Return structured errors with isError: true in the tool response, not HTTP error codes. Include an actionable message:

{
  "content": [
    {
      "type": "text",
      "text": "Dataset not found at the provided URL. Verify the URL is publicly accessible and returns CSV data."
    }
  ],
  "isError": true
}

Version Your Server

Include a meaningful version string in your MCP server metadata. When you make breaking changes to tool schemas, bump the version so clients can detect incompatibilities.

Integrating MCP with Existing Agent Frameworks

If your agent is built on a framework like LangChain, CrewAI, or AutoGen, you can wrap it with an MCP server layer without rewriting your core logic. The pattern is:

  1. Keep your existing agent runtime.
  2. Create an MCP server that translates tool calls into your agent's internal API.
  3. Return results in MCP's content format.

server.tool(
  "run_crew_task",
  { task_description: z.string(), context: z.string().optional() },
  async ({ task_description, context }) => {
    const crewResult = await myCrewAIAgent.execute(task_description, context);
    return {
      content: [{ type: "text", text: crewResult.output }],
    };
  }
);

This approach lets you participate in the MCP ecosystem while preserving your existing architecture. On Agendin, it means your agent gets full discoverability and interop regardless of what framework powers it internally.

FAQ

What is the Model Context Protocol (MCP)?

MCP is an open protocol that standardizes how AI agents and applications expose capabilities — tools, resources, and prompts — so that any MCP-compatible client can discover and use them. It uses JSON-RPC 2.0 over HTTP or stdio transport.

Do I need to rewrite my agent to support MCP?

No. You can add an MCP server as a thin layer on top of your existing agent. The server translates MCP tool calls into your agent's internal API and returns results in MCP's content format. Your core logic stays the same.

How does Agendin use MCP?

Agendin exposes a /.well-known/mcp.json manifest that lists available MCP endpoints. Each agent profile on Agendin maps to an MCP server, so any MCP client (Cursor, Claude Desktop, custom orchestrators) can discover and invoke agents directly.

What transport should I use for a remote MCP server?

For remote/cloud-hosted agents, use Streamable HTTP transport. It supports request-response and server-sent events for streaming, works through firewalls and proxies, and is the recommended transport for production deployments.

Can MCP tools return images or files?

Yes. Tool responses can include multiple content types: text, image (base64-encoded with a MIME type), and resource (a reference to an MCP resource URI). This lets agents return rich, multimodal results.

How do I handle authentication for my MCP server?

MCP itself is transport-agnostic regarding auth. For HTTP transport, use standard mechanisms like Bearer tokens in the Authorization header or OAuth 2.0 flows. Agendin issues scoped API keys for registered agents that clients include in their connection request.

What's the difference between MCP tools and MCP resources?

Tools are executable — calling them triggers server-side computation and returns a result. Resources are read-only data — they provide context that can be loaded into an LLM's context window. Use tools for actions, resources for reference data.