What is MCP and why it matters
Model Context Protocol (MCP) is Anthropic’s open standard that defines how AI agents communicate with external tools and services. Instead of each AI platform inventing its own proprietary format for tool integration, MCP provides a unified specification. MCP standardizes:
- Tool discovery — how agents learn what tools are available
- Parameter schemas — how tool inputs and outputs are defined
- Execution — how agents call tools and receive responses
- Resource access — how agents read data exposed by integrations
Why this matters for you
Interoperability
Tools built for MagOneAI work with any MCP client. Tools built for other MCP clients work with MagOneAI.
Ecosystem compatibility
Leverage the growing ecosystem of MCP tools instead of building everything yourself.
Future-proofing
As MCP becomes widely adopted, your tool investments remain valuable across platforms.
No vendor lock-in
You’re not dependent on proprietary tool formats that only work in one system.
MCP architecture in MagOneAI
The Model Context Protocol operates through a client-server architecture. MagOneAI acts as an MCP client that connects to multiple MCP servers.
MCP servers
Each integration is an MCP server — a separate process that exposes tools to agents. For example:
- The Google integration runs as an MCP server exposing Gmail and Calendar tools
- The database integration runs as an MCP server exposing SQL query tools
- Your custom CRM integration would run as its own MCP server
Tool definitions
Each MCP server declares its tools using a standardized format:
- What the tool does (from the description)
- What parameters it needs (from the schema)
- Which parameters are required vs optional
- What types each parameter expects
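The four pieces of a tool declaration can be sketched as a plain Python dict. This is an illustrative shape only — the field names follow MCP's JSON-Schema-based convention, but the `send_email` tool and its parameters are invented for the example:

```python
# Hypothetical MCP-style tool definition. MCP describes tool inputs
# with JSON Schema; this particular tool is invented for illustration.
send_email_tool = {
    "name": "send_email",
    "description": "Send an email on behalf of the authenticated user.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "to": {"type": "array", "items": {"type": "string"}},
            "subject": {"type": "string"},
            "body": {"type": "string"},
            "cc": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["to", "subject", "body"],  # cc stays optional
    },
}

def missing_required(tool: dict, params: dict) -> list:
    """Return the required parameters absent from a proposed tool call."""
    required = tool["inputSchema"].get("required", [])
    return [name for name in required if name not in params]

print(missing_required(send_email_tool, {"to": ["[email protected]"], "subject": "Hi"}))
```

Because the schema is machine-readable, a client can reject an incomplete call (here, one missing `body`) before the server is ever contacted.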
Tool execution flow
When an agent decides to use a tool, here’s what happens:
Agent requests tool execution
The agent sends a tool call with the tool name and parameters:
send_email(to=["[email protected]"], subject="Hello", body="...")
Platform validates parameters
MagOneAI checks that the parameters match the tool’s schema. Invalid calls are rejected before execution.
Credentials are injected
The platform retrieves necessary credentials from Vault and injects them into the tool execution context.
MCP server executes the tool
The MCP server receives the request, performs the action (e.g., sends the email), and returns a structured response.
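The four steps above can be sketched end to end. Everything here is a stand-in — `vault_lookup`, the schema, and the credential path are invented, not MagOneAI APIs — but the ordering (validate, inject, execute, discard) matches the flow described:

```python
# Illustrative sketch of the tool execution flow; names are invented.
SCHEMA = {"required": ["to", "subject", "body"]}

def vault_lookup(path: str) -> str:
    """Stand-in for a HashiCorp Vault read; returns a fake token."""
    return f"token-for-{path}"

def execute_tool(name: str, params: dict, credential_path: str) -> dict:
    # 1. Validate parameters against the tool's schema before execution.
    missing = [p for p in SCHEMA["required"] if p not in params]
    if missing:
        return {"error": f"missing required parameters: {missing}"}
    # 2. Inject credentials just-in-time from Vault.
    token = vault_lookup(credential_path)
    try:
        # 3. The MCP server performs the action and returns a structured response.
        return {"status": "sent", "tool": name, "authenticated": bool(token)}
    finally:
        # 4. Discard the credential immediately after execution.
        del token

result = execute_tool(
    "send_email",
    {"to": ["[email protected]"], "subject": "Hello", "body": "..."},
    "google/user-123/oauth_token",
)
print(result)
```

Note that invalid calls never reach step 3, and the credential's lifetime is bounded by the single execution.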
Resource access
MCP servers can also expose resources — data that agents can read but not modify. For example:
- A knowledge base MCP server might expose documents as resources
- A monitoring MCP server might expose metrics and logs as resources
- A file system MCP server might expose directory contents as resources
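The read-only contract is the key property. A minimal sketch, with invented URIs and contents, shows the shape: resources are addressable data with list and read operations but no write:

```python
# Illustrative MCP-resource sketch: URIs and contents are invented.
RESOURCES = {
    "kb://handbook/onboarding": "Welcome to the team...",
    "metrics://api/latency_p99": "212ms",
}

def list_resources() -> list:
    """Agents can discover which resources a server exposes."""
    return sorted(RESOURCES)

def read_resource(uri: str) -> str:
    """Agents can read a resource; there is deliberately no write operation."""
    return RESOURCES[uri]

print(list_resources())
print(read_resource("metrics://api/latency_p99"))
```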
How MagOneAI manages MCP connections
MagOneAI handles the entire lifecycle of MCP connections so you don’t have to manage servers manually.
Central tool registry
Each project has a tool registry that tracks:
- Which MCP servers are enabled
- What tools each server exposes
- Which agents have access to which tools
- Connection status and health
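A registry tracking those four items might look roughly like this. The field names and the access-check logic are hypothetical, not MagOneAI's actual schema:

```python
# Hypothetical per-project tool registry; field names are illustrative.
registry = {
    "servers": {
        "google": {"status": "connected", "tools": ["send_email", "list_events"]},
        "database": {"status": "connected", "tools": ["run_query"]},
    },
    "agent_access": {
        "support-agent": ["send_email"],
        "analyst-agent": ["run_query"],
    },
}

def agent_can_use(agent: str, tool: str) -> bool:
    """An agent may call a tool only if it has been granted access
    and some enabled, healthy server actually exposes that tool."""
    if tool not in registry["agent_access"].get(agent, []):
        return False
    return any(
        tool in s["tools"] and s["status"] == "connected"
        for s in registry["servers"].values()
    )

print(agent_can_use("support-agent", "send_email"))  # True
print(agent_can_use("support-agent", "run_query"))   # False: no grant
```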
Connection lifecycle management
MagOneAI automatically:
- Starts MCP servers when a project is activated
- Maintains connections throughout the project lifecycle
- Monitors health by periodically pinging MCP servers
- Reconnects automatically if a server becomes temporarily unavailable
- Stops servers when projects are deactivated to conserve resources
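The reconnect behavior in that list is typically retry-with-backoff. A sketch under stated assumptions — `connect()` is a stand-in that fails twice before succeeding, and a real client would sleep for the backoff delay:

```python
# Sketch of reconnect-with-exponential-backoff; connect() is a stand-in.
import itertools

attempts = itertools.count(1)

def connect() -> bool:
    """Simulated transport: fails twice, then succeeds."""
    return next(attempts) >= 3

def reconnect(max_retries: int = 5) -> int:
    """Retry with exponential backoff; return the attempt that succeeded."""
    delay = 1.0
    for attempt in range(1, max_retries + 1):
        if connect():
            return attempt
        delay *= 2  # a real client would time.sleep(delay) here
    raise ConnectionError("MCP server unreachable")

used = reconnect()
print(used)  # 3
```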
Health monitoring
The platform monitors each MCP connection:
- Response times — are tools responding quickly enough?
- Error rates — are tool calls frequently failing?
- Availability — is the MCP server reachable?
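Those three signals combine into a single health verdict. A minimal sketch, assuming invented thresholds (20% error rate, 2-second average latency) that are for illustration only:

```python
# Sketch of a per-connection health check; thresholds are invented.
def health_status(latencies_ms: list, errors: int, calls: int) -> str:
    if not latencies_ms:
        return "unreachable"          # availability: server not responding
    avg_latency = sum(latencies_ms) / len(latencies_ms)
    error_rate = errors / calls if calls else 0.0
    if error_rate > 0.2 or avg_latency > 2000:
        return "degraded"             # frequent failures or slow responses
    return "healthy"

print(health_status([120.0, 95.0, 210.0], errors=1, calls=50))  # healthy
print(health_status([], errors=0, calls=0))                     # unreachable
```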
Credential handling: 3-tier system
MagOneAI uses a three-tier credential management system that separates concerns and maximizes security.
Tier 1: Platform credentials
What: API keys and secrets for platform-level integrations (OpenAI, Anthropic, third-party APIs)
Where stored: HashiCorp Vault with encryption at rest
Who manages: SuperAdmin and org admins configure these in the Admin Portal
When used: Injected when agents call tools that need platform-level authentication
Example: OpenAI API key for model inference, SendGrid API key for email notifications
Tier 2: User tokens
What: OAuth access tokens for user-authenticated services (Google, Microsoft, Salesforce)
Where stored: HashiCorp Vault with user-scoped encryption
Who manages: End users authorize once; platform handles refresh automatically
When used: Injected when agents call tools on behalf of a specific user
Example: A user’s Google OAuth token to read their calendar and send emails
Tier 3: MCP connections
What: Active runtime connections with credentials injected from Vault
Where stored: In memory during execution only; never persisted
Who manages: Platform handles automatically
When used: During tool execution; discarded immediately after
Example: When an agent calls send_email, the platform retrieves the user’s OAuth token from Vault, injects it into the MCP connection, executes the tool, then discards the token from memory
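Tier 3's fetch-use-discard lifetime maps naturally onto a context manager. This is a sketch, not MagOneAI code — the Vault path and `SECRETS` lookup are stand-ins:

```python
# Sketch of just-in-time credential injection as a context manager.
# The Vault path and SECRETS store are invented stand-ins.
from contextlib import contextmanager

SECRETS = {"google/user-123/oauth_token": "ya29.fake-token"}

@contextmanager
def injected_credential(vault_path: str):
    token = SECRETS[vault_path]   # fetched at execution time, never persisted
    try:
        yield token               # available only inside the with-block
    finally:
        token = None              # discarded as soon as execution finishes

with injected_credential("google/user-123/oauth_token") as tok:
    used_during_execution = tok is not None

print(used_during_execution)  # True
```

The `with` block bounds the credential's lifetime to a single tool execution, which is exactly the Tier 3 guarantee described above.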
Why this architecture matters
Credentials never appear in configurations
Workflow definitions, agent prompts, and logs reference credentials by Vault path (e.g.,
vault:openai/api_key). The actual credential value is never exposed.
Credentials are injected just-in-time
Credentials only exist in memory during tool execution. They’re fetched from Vault, used, then immediately discarded. No long-lived credentials in memory.
User tokens are scoped per user
Each user’s OAuth tokens are stored separately. Agents can only access tools with credentials for the user who triggered the workflow.
Audit trail for all credential usage
Every credential retrieval from Vault is logged. You can see when and why credentials were accessed.
MCP vs proprietary tool formats
Many AI platforms use proprietary formats for tool integration. Here’s how MCP compares:
| Aspect | MCP (Open Standard) | Proprietary Format |
|---|---|---|
| Portability | Tools work across MCP clients | Tools only work in one platform |
| Ecosystem | Shared tool library across platforms | Each platform has separate tools |
| Development | Use standard SDKs and examples | Learn platform-specific format |
| Future-proofing | Standard evolves with community input | Changes at vendor’s discretion |
| Lock-in | None — switch platforms freely | Tools must be rebuilt for new platform |
Building MCP-compatible tools
Want to add custom tools to MagOneAI? You’ll build an MCP server:
- Choose an SDK: Python or TypeScript/Node.js MCP SDKs are available
- Define your tools: Specify tool names, descriptions, and parameter schemas
- Implement tool logic: Write functions that execute when tools are called
- Handle authentication: Integrate with your APIs using credentials from MagOneAI
- Test locally: Use the MCP inspector to verify your server works correctly
- Deploy: Run your MCP server alongside MagOneAI or on separate infrastructure
- Register: Add the server to your MagOneAI project settings
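Steps 2 and 3 — declaring tools and implementing their logic — follow a register-and-dispatch pattern. The official MCP SDKs provide this plumbing plus the wire protocol; the stdlib-only sketch below, with an invented `lookup_customer` tool, just shows the shape:

```python
# Stdlib-only sketch of tool registration and dispatch. The real MCP
# SDKs provide equivalent plumbing; all names here are illustrative.
TOOLS = {}

def tool(name: str, description: str, required: list):
    """Register a function as an MCP-style tool (step 2)."""
    def decorator(fn):
        TOOLS[name] = {"description": description, "required": required, "fn": fn}
        return fn
    return decorator

@tool("lookup_customer", "Fetch a CRM record by email.", required=["email"])
def lookup_customer(email: str) -> dict:
    # Step 3: tool logic. Step 4 (authentication) would use credentials
    # injected by the platform rather than anything hard-coded here.
    return {"email": email, "plan": "enterprise"}

def call_tool(name: str, params: dict) -> dict:
    """Dispatch a call, validating required parameters first."""
    spec = TOOLS[name]
    missing = [p for p in spec["required"] if p not in params]
    if missing:
        return {"error": f"missing: {missing}"}
    return spec["fn"](**params)

print(call_tool("lookup_customer", {"email": "[email protected]"}))
```

Once a server like this is deployed and registered with a project (steps 6 and 7), its tools appear in the project's tool registry alongside the built-in integrations.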