What is MCP and why it matters

Model Context Protocol (MCP) is Anthropic’s open standard that defines how AI agents communicate with external tools and services. Instead of each AI platform inventing its own proprietary format for tool integration, MCP provides a unified specification. MCP standardizes:
  • Tool discovery — how agents learn what tools are available
  • Parameter schemas — how tool inputs and outputs are defined
  • Execution — how agents call tools and receive responses
  • Resource access — how agents read data exposed by integrations

Why this matters for you

Interoperability

Tools built for MagOneAI work with any MCP client. Tools built for other MCP clients work with MagOneAI.

Ecosystem compatibility

Leverage the growing ecosystem of MCP tools instead of building everything yourself.

Future-proofing

As MCP becomes widely adopted, your tool investments remain valuable across platforms.

No vendor lock-in

You’re not dependent on proprietary tool formats that only work in one system.

MCP architecture in MagOneAI

The Model Context Protocol operates through a client-server architecture. MagOneAI acts as an MCP client that connects to multiple MCP servers.

MCP servers

Each integration is an MCP server — a separate process that exposes tools to agents. For example:
  • The Google integration runs as an MCP server exposing Gmail and Calendar tools
  • The database integration runs as an MCP server exposing SQL query tools
  • Your custom CRM integration would run as its own MCP server
MCP servers are lightweight processes that can run alongside the MagOneAI platform or on separate infrastructure.
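The shape of an MCP server can be sketched in plain Python. This toy class is not the real MCP SDK or MagOneAI internals; it only illustrates the two jobs every server has: advertise tool definitions (discovery) and dispatch tool calls to handlers (execution). The `custom-crm` server and `lookup_contact` tool are invented examples.

```python
class ToyMCPServer:
    """Toy illustration of an MCP server: a process that advertises
    tool definitions and dispatches incoming tool calls to handlers."""

    def __init__(self, name):
        self.name = name
        self._tools = {}          # tool name -> (definition, handler)

    def add_tool(self, definition, handler):
        self._tools[definition["name"]] = (definition, handler)

    def list_tools(self):
        """Tool discovery: what an MCP client sees when it connects."""
        return [defn for defn, _ in self._tools.values()]

    def call_tool(self, name, params):
        """Execution: dispatch a client's tool call to its handler."""
        _, handler = self._tools[name]
        return handler(**params)

# A hypothetical custom CRM integration running as its own server.
crm = ToyMCPServer("custom-crm")
crm.add_tool(
    {"name": "lookup_contact", "description": "Find a CRM contact by name"},
    lambda name: {"name": name, "id": "c-001"},   # placeholder handler
)
print([t["name"] for t in crm.list_tools()])   # ['lookup_contact']
```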

Tool definitions

Each MCP server declares its tools using a standardized format:
```json
{
  "name": "send_email",
  "description": "Send an email with subject, body, and recipients",
  "parameters": {
    "type": "object",
    "properties": {
      "to": {
        "type": "array",
        "items": { "type": "string" },
        "description": "Email recipients"
      },
      "subject": {
        "type": "string",
        "description": "Email subject line"
      },
      "body": {
        "type": "string",
        "description": "Email body content"
      }
    },
    "required": ["to", "subject", "body"]
  }
}
```
The agent sees this definition and understands:
  • What the tool does (from the description)
  • What parameters it needs (from the schema)
  • Which parameters are required vs optional
  • What types each parameter expects
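Everything in that bulleted list is mechanically recoverable from the definition itself. The sketch below parses the `send_email` definition from above with nothing but the standard library; `summarize_tool` is an invented helper name, not a MagOneAI or MCP SDK function.

```python
import json

# The send_email definition from the section above, as an agent receives it.
TOOL_DEF = json.loads("""
{
  "name": "send_email",
  "description": "Send an email with subject, body, and recipients",
  "parameters": {
    "type": "object",
    "properties": {
      "to": {"type": "array", "items": {"type": "string"}, "description": "Email recipients"},
      "subject": {"type": "string", "description": "Email subject line"},
      "body": {"type": "string", "description": "Email body content"}
    },
    "required": ["to", "subject", "body"]
  }
}
""")

def summarize_tool(defn: dict) -> dict:
    """Extract what the agent needs: purpose, parameter types, required flags."""
    props = defn["parameters"]["properties"]
    required = set(defn["parameters"].get("required", []))
    return {
        "name": defn["name"],
        "purpose": defn["description"],
        "params": {
            name: {"type": spec["type"], "required": name in required}
            for name, spec in props.items()
        },
    }

summary = summarize_tool(TOOL_DEF)
print(summary["params"]["to"])   # {'type': 'array', 'required': True}
```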

Tool execution flow

When an agent decides to use a tool, here’s what happens:
  1. Agent requests tool execution. The agent sends a tool call with the tool name and parameters: send_email(to=["[email protected]"], subject="Hello", body="...")
  2. Platform validates parameters. MagOneAI checks that the parameters match the tool’s schema. Invalid calls are rejected before execution.
  3. Credentials are injected. The platform retrieves the necessary credentials from Vault and injects them into the tool execution context.
  4. MCP server executes the tool. The MCP server receives the request, performs the action (e.g., sends the email), and returns a structured response.
  5. Response returned to agent. The agent receives the response and continues reasoning. The response might confirm success or include retrieved data.
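The five steps above can be sketched as one function. Everything here is a stand-in: `FakeVault`, `schema_check`, and `fake_send_email` are invented for illustration and say nothing about MagOneAI's actual internals; the validation is deliberately minimal (required fields only).

```python
def schema_check(params: dict, schema: dict) -> bool:
    """Step 2 (minimal form): every required field must be present."""
    return all(key in params for key in schema.get("required", []))

class FakeVault:
    """Step 3 stand-in: resolves a vault path to a credential at call time."""
    def __init__(self, secrets):
        self._secrets = secrets
    def read(self, path):
        return self._secrets[path]

def execute_tool(call, schema, vault, mcp_server):
    # 1. Agent requests execution: `call` carries tool name + parameters.
    if not schema_check(call["params"], schema):          # 2. validate
        return {"error": "invalid parameters"}
    token = vault.read(call["vault_path"])                # 3. inject credentials
    result = mcp_server(call["params"], token)            # 4. server executes
    del token                                             # credential discarded
    return result                                         # 5. response to agent

vault = FakeVault({"user/alice/google_oauth": "tok-123"})
schema = {"required": ["to", "subject", "body"]}

def fake_send_email(params, token):
    return {"status": "sent", "to": params["to"]}

call = {"name": "send_email", "vault_path": "user/alice/google_oauth",
        "params": {"to": ["[email protected]"], "subject": "Hello", "body": "..."}}
print(execute_tool(call, schema, vault, fake_send_email))
# {'status': 'sent', 'to': ['[email protected]']}
```

Note that an invalid call (say, a missing `subject`) is rejected at step 2 and never reaches the MCP server.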

Resource access

MCP servers can also expose resources — data that agents can read but not modify. For example:
  • A knowledge base MCP server might expose documents as resources
  • A monitoring MCP server might expose metrics and logs as resources
  • A file system MCP server might expose directory contents as resources
Agents can query available resources and read their contents during reasoning.
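The read-only contract can be made concrete with a small sketch. The `kb://` URIs and document contents are invented; the point is simply that the server exposes list and read operations but no write path.

```python
class ResourceServer:
    """Toy resource-exposing MCP server: agents can list and read, not modify."""

    def __init__(self):
        self._resources = {}      # uri -> content

    def expose(self, uri, content):
        """Called by the integration itself, not by agents."""
        self._resources[uri] = content

    def list_resources(self):
        return sorted(self._resources)

    def read(self, uri):
        """Agents read resources during reasoning; there is no write method."""
        return self._resources[uri]

# A hypothetical knowledge base server exposing documents as resources.
kb = ResourceServer()
kb.expose("kb://handbook/onboarding", "Day-one checklist ...")
kb.expose("kb://handbook/security", "Credential policy ...")
print(kb.list_resources())
# ['kb://handbook/onboarding', 'kb://handbook/security']
```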

How MagOneAI manages MCP connections

MagOneAI handles the entire lifecycle of MCP connections so you don’t have to manage servers manually.

Central tool registry

Each project has a tool registry that tracks:
  • Which MCP servers are enabled
  • What tools each server exposes
  • Which agents have access to which tools
  • Connection status and health
You manage this through the project settings UI. Enable a tool, and it becomes available to agents in that project.
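The registry's four responsibilities map naturally onto a small data structure. Field and method names below are illustrative, not MagOneAI's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    server: str                        # which MCP server is enabled
    tools: list                        # tool names the server exposes
    allowed_agents: set = field(default_factory=set)   # per-agent access
    status: str = "disconnected"       # connection status and health

class ToolRegistry:
    """Per-project registry tracking servers, tools, access, and health."""

    def __init__(self):
        self._entries = {}

    def enable(self, server, tools):
        entry = RegistryEntry(server=server, tools=list(tools))
        entry.status = "connected"
        self._entries[server] = entry

    def grant(self, server, agent):
        self._entries[server].allowed_agents.add(agent)

    def tools_for(self, agent):
        """Which tools can this agent call right now?"""
        return [t for e in self._entries.values()
                if agent in e.allowed_agents and e.status == "connected"
                for t in e.tools]

reg = ToolRegistry()
reg.enable("google", ["send_email", "list_events"])
reg.grant("google", "support-agent")
print(reg.tools_for("support-agent"))   # ['send_email', 'list_events']
```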

Connection lifecycle management

MagOneAI automatically:
  • Starts MCP servers when a project is activated
  • Maintains connections throughout the project lifecycle
  • Monitors health by periodically pinging MCP servers
  • Reconnects automatically if a server becomes temporarily unavailable
  • Stops servers when projects are deactivated to conserve resources
You don’t manage server processes. The platform handles it.
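Automatic reconnection typically means retrying with exponential backoff. The retry count and delays below are invented for illustration (the platform's real policy is not documented here), and the sketch records delays instead of actually sleeping.

```python
def reconnect(connect, max_attempts=4, base_delay=1.0):
    """Try `connect()` until it succeeds, doubling the delay between attempts.

    Delays are recorded rather than slept, to keep the sketch instant."""
    delays = []
    for attempt in range(max_attempts):
        if connect():
            return {"connected": True, "delays": delays}
        delays.append(base_delay * 2 ** attempt)   # 1s, 2s, 4s, ...
    return {"connected": False, "delays": delays}

# Simulate an MCP server that comes back on the third attempt.
attempts = iter([False, False, True])
result = reconnect(lambda: next(attempts))
print(result)   # {'connected': True, 'delays': [1.0, 2.0]}
```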

Health monitoring

The platform monitors each MCP connection:
  • Response times — are tools responding quickly enough?
  • Error rates — are tool calls frequently failing?
  • Availability — is the MCP server reachable?
If a tool becomes unhealthy, the platform alerts you and may temporarily disable it to prevent workflow failures.
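The three signals above combine into a single health verdict. The thresholds and state names in this sketch are invented; only the decision structure (availability, then error rate, then latency) reflects the text.

```python
def assess_health(samples, max_latency_ms=2000, max_error_rate=0.25):
    """Classify a tool from recent call samples: (latency_ms, ok) tuples."""
    if not samples:
        return "unreachable"                 # availability: no responses at all
    avg_latency = sum(lat for lat, _ in samples) / len(samples)
    error_rate = sum(1 for _, ok in samples if not ok) / len(samples)
    if error_rate > max_error_rate:
        return "unhealthy"                   # error rate: frequent failures
    if avg_latency > max_latency_ms:
        return "degraded"                    # response time: too slow
    return "healthy"

# One failure out of three calls (33% > 25% threshold) trips the error check.
print(assess_health([(120, True), (95, True), (3000, False)]))   # 'unhealthy'
```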

Credential handling: 3-tier system

MagOneAI uses a three-tier credential management system that separates concerns and maximizes security.

Tier 1: Platform credentials

  • What: API keys and secrets for platform-level integrations (OpenAI, Anthropic, third-party APIs)
  • Where stored: HashiCorp Vault with encryption at rest
  • Who manages: SuperAdmin and org admins configure these in the Admin Portal
  • When used: Injected when agents call tools that need platform-level authentication
  • Example: OpenAI API key for model inference, SendGrid API key for email notifications

Tier 2: User tokens

  • What: OAuth access tokens for user-authenticated services (Google, Microsoft, Salesforce)
  • Where stored: HashiCorp Vault with user-scoped encryption
  • Who manages: End users authorize once; platform handles refresh automatically
  • When used: Injected when agents call tools on behalf of a specific user
  • Example: A user’s Google OAuth token to read their calendar and send emails

Tier 3: MCP connections

  • What: Active runtime connections with credentials injected from Vault
  • Where stored: In memory during execution only; never persisted
  • Who manages: Platform handles automatically
  • When used: During tool execution; discarded immediately after
  • Example: When an agent calls send_email, the platform retrieves the user’s OAuth token from Vault, injects it into the MCP connection, executes the tool, then discards the token from memory

Why this architecture matters

  • No secret exposure: Workflow definitions, agent prompts, and logs reference credentials by Vault path (e.g., vault:openai/api_key). The actual credential value is never exposed.
  • Ephemeral credentials: Credentials only exist in memory during tool execution. They’re fetched from Vault, used, then immediately discarded. No long-lived credentials in memory.
  • Per-user isolation: Each user’s OAuth tokens are stored separately. Agents can only access tools with credentials for the user who triggered the workflow.
  • Auditability: Every credential retrieval from Vault is logged. You can see when and why credentials were accessed.
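The reference-by-path idea can be shown in a few lines. The vault path, the in-memory dict standing in for Vault, and the `resolve` helper are all illustrative; the point is that the workflow definition holds only a `vault:` reference, the live value appears only at execution time, and every retrieval is logged.

```python
AUDIT_LOG = []

def resolve(ref, vault):
    """Turn a 'vault:...' reference into a live credential, logging the access."""
    assert ref.startswith("vault:")
    path = ref[len("vault:"):]
    AUDIT_LOG.append(("read", path))       # every retrieval is auditable
    return vault[path]

# The workflow definition never contains the secret, only a reference.
workflow_step = {"tool": "send_email",
                 "credential": "vault:user/alice/google_oauth"}
vault = {"user/alice/google_oauth": "ya29.example-token"}   # stand-in for Vault

token = resolve(workflow_step["credential"], vault)
# ... tool executes with `token` ...
del token                                  # discarded after execution
print(AUDIT_LOG)   # [('read', 'user/alice/google_oauth')]
```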

MCP vs proprietary tool formats

Many AI platforms use proprietary formats for tool integration. Here’s how MCP compares:
| Aspect | MCP (Open Standard) | Proprietary Format |
| --- | --- | --- |
| Portability | Tools work across MCP clients | Tools only work in one platform |
| Ecosystem | Shared tool library across platforms | Each platform has separate tools |
| Development | Use standard SDKs and examples | Learn platform-specific format |
| Future-proofing | Standard evolves with community input | Changes at vendor’s discretion |
| Lock-in | None; switch platforms freely | Tools must be rebuilt for new platform |
MCP is an open protocol. Tools built for MagOneAI work with any MCP-compatible client, and tools built for other MCP clients work with MagOneAI. This means you can leverage the entire MCP ecosystem, not just MagOneAI-specific tools.

Building MCP-compatible tools

Want to add custom tools to MagOneAI? You’ll build an MCP server:
  1. Choose an SDK: Python or TypeScript/Node.js MCP SDKs are available
  2. Define your tools: Specify tool names, descriptions, and parameter schemas
  3. Implement tool logic: Write functions that execute when tools are called
  4. Handle authentication: Integrate with your APIs using credentials from MagOneAI
  5. Test locally: Use the MCP inspector to verify your server works correctly
  6. Deploy: Run your MCP server alongside MagOneAI or on separate infrastructure
  7. Register: Add the server to your MagOneAI project settings
Learn more in the Custom MCP tools guide.

Next steps