
Overview

This page documents how data enters, moves through, is stored in, and exits the MagOneAI platform. Understanding these flows helps you assess data exposure, plan compliance, and make informed decisions about model selection and tool integrations.

Platform architecture

MagOneAI is a layered system where all data flows through a secure API layer. No external service connects directly to any internal data store.

Workflow execution flow

This is the primary data path — how user input travels through MagOneAI and becomes AI-generated output.

Data entering the platform

All data enters MagOneAI through authenticated API endpoints. There are no direct connections to internal stores from outside.
| Entry Point | Authentication | What Data |
| --- | --- | --- |
| Studio / Hub UI | JWT session (HttpOnly cookies) | Workflow definitions, agent configs, chat messages, file uploads |
| REST API | JWT bearer token | Execution inputs, CRUD operations |
| Webhooks | API key (hashed, never stored in plaintext) | Freeform JSON payload to trigger workflows |
| Scheduled triggers | Internal (no external entry) | Pre-configured input for recurring workflows |
| OAuth callbacks | State token verification (CSRF protection) | Authorization codes from Google/Microsoft |
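As a sketch of the REST API entry point: an execution is triggered with a JWT bearer token in the `Authorization` header. The base URL, route, and payload shape below are illustrative assumptions, not the platform's documented routes; check your deployment's API reference for the real paths.

```python
import json
import urllib.request

# Hypothetical values -- substitute your deployment's URL and a real JWT.
API_BASE = "https://magoneai.example.com/api/v1"
JWT_TOKEN = "eyJhbGciOi..."  # obtained from your authentication flow

def build_execution_request(workflow_id: str, inputs: dict) -> urllib.request.Request:
    """Build an authenticated POST that would trigger a workflow execution."""
    body = json.dumps({"inputs": inputs}).encode("utf-8")
    return urllib.request.Request(
        url=f"{API_BASE}/workflows/{workflow_id}/executions",  # assumed route
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {JWT_TOKEN}",  # JWT bearer token, per the table above
            "Content-Type": "application/json",
        },
    )

req = build_execution_request("wf-123", {"query": "summarize inbox"})
```

The same bearer-token pattern applies to all CRUD operations; webhooks instead authenticate with a per-endpoint API key.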

Data leaving the platform

Understanding what data exits your environment is critical for compliance. MagOneAI sends data externally only through two paths: LLM calls and MCP tool calls.

What goes to LLM providers

| Data Sent | Description |
| --- | --- |
| System prompt | Agent persona, role, and instructions |
| User input | The input data provided to the workflow or chat message |
| Conversation context | Previous activity outputs flowing through the workflow |
| Tool schemas | Definitions of available tools (function names, parameters) |

When using cloud LLM providers, all of the above data is sent to the provider’s API. To keep everything within your environment, use privately hosted models via any OpenAI-compatible endpoint (vLLM, Ollama, LM Studio, TGI, etc.). MagOneAI treats private models identically to cloud models — no workflow changes needed.
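To illustrate the "OpenAI-compatible endpoint" point: a privately hosted model is addressed exactly like a cloud one, just with a different base URL. The host, port, and model name below are placeholders (a local vLLM server is assumed); the request is built but not sent.

```python
import json
import urllib.request

# Example only: any vLLM / Ollama / LM Studio / TGI endpoint that speaks the
# OpenAI chat-completions API can stand in here. Nothing leaves your network.
PRIVATE_BASE = "http://localhost:8000/v1"  # assumed local vLLM server

def chat_request(model: str, messages: list) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions call against a private endpoint."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        f"{PRIVATE_BASE}/chat/completions",
        data=body,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = chat_request("llama-3.1-8b-instruct",
                   [{"role": "user", "content": "Hello"}])
```

Because the wire format is identical, swapping a cloud provider for a private endpoint is a configuration change, not a workflow change.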

What goes to tool APIs

| Tool | Data Sent Externally |
| --- | --- |
| Google Gmail | Email content, recipients, OAuth token |
| Google Calendar | Event details, attendees, OAuth token |
| Microsoft Outlook | Email content, recipients, OAuth token |
| Microsoft Calendar | Event details, attendees, OAuth token |
| Web Search | Search query text |

Tools that stay local

These MCP tools process data entirely within your environment:
| Tool | What It Does | External Calls |
| --- | --- | --- |
| File Tools | Extract text from PDFs, Excel, CSV | None |
| Database / Vanna | Query your own databases with SQL or natural language | None (connects to your DB) |
| Filesystem | Read/write local files | None |

Secrets and credential management

MagOneAI separates sensitive credentials from application data. Credentials are never stored in the application database.

How credentials are resolved

When a workflow needs credentials (e.g., to call Google Calendar), MagOneAI resolves them through a scoped fallback chain: the most specific scope with a matching credential wins. This allows flexible credential management:
  • User-level: Individual team members connect their own Google/Microsoft accounts
  • Project-level: Shared credentials for a team (e.g., a shared service account)
  • Organization-level: Default credentials for the entire org
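The fallback chain above can be sketched as a simple scope walk. The `vault` mapping and function names here are hypothetical stand-ins for the platform's encrypted Vault lookup, shown only to make the resolution order concrete.

```python
def resolve_credentials(vault: dict, provider: str,
                        user_id: str, project_id: str, org_id: str):
    """Return the most specific credential available for a provider.

    `vault` maps (scope, scope_id, provider) -> credential. This shape is
    illustrative; real credentials live in an encrypted Vault, not a dict.
    """
    for scope, scope_id in (("user", user_id),        # 1. individual account
                            ("project", project_id),  # 2. team-shared credential
                            ("org", org_id)):         # 3. org-wide default
        cred = vault.get((scope, scope_id, provider))
        if cred is not None:
            return cred, scope
    raise LookupError(f"no credentials configured for {provider}")

# Alice has no personal Google credential, so resolution falls through
# to the project-level shared service account.
vault = {("project", "proj-1", "google"): {"token": "shared-sa"}}
cred, scope = resolve_credentials(vault, "google", "alice", "proj-1", "org-1")
```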

OAuth integration flow

When connecting to Google or Microsoft services, MagOneAI uses standard OAuth 2.0 with PKCE for security.
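The PKCE part of that flow is fully specified by RFC 7636: the client generates a random `code_verifier`, and the `code_challenge` sent in the authorization URL is the base64url-encoded SHA-256 of it (the "S256" method), with padding stripped. A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # 32 random bytes -> 43-char base64url verifier (within the 43-128 limit)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# challenge goes in the authorization request; the verifier is kept secret
# and sent only with the later token exchange, so an intercepted
# authorization code is useless on its own.
```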

File processing flow

Files uploaded to MagOneAI are stored in private object storage and processed for use in workflows.
  • Files are streamed in chunks to prevent memory issues
  • Original files and extracted text are stored separately
  • Access is controlled via time-limited signed URLs (no public access)
  • Files are scoped to the project — only project members can access them
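The chunked-streaming point above boils down to never calling `read()` on a whole file. The chunk size below is an arbitrary illustration, not the platform's actual setting; the sketch hashes a file chunk-by-chunk to show that large uploads can be processed in constant memory.

```python
import hashlib
import io

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB -- illustrative, not a platform constant

def stream_chunks(fileobj, chunk_size: int = CHUNK_SIZE):
    """Yield a file's contents in fixed-size chunks so a large upload
    never has to fit in memory at once."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

# Usage: hash a "file" chunk-by-chunk without loading it whole.
payload = b"x" * (3 * CHUNK_SIZE + 17)
h = hashlib.sha256()
for chunk in stream_chunks(io.BytesIO(payload)):
    h.update(chunk)
```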

Human-in-the-loop flow

Workflows can pause for human approval or input, then resume automatically.
When a workflow is paused for human input, no compute resources are consumed. The workflow engine (Temporal) durably persists the state and resumes exactly where it left off — even if servers restart in the meantime.
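Conceptually, a paused execution is nothing more than persisted state plus a pointer to the step to resume at. The sketch below is not the Temporal API; it is a deliberately simplified stand-in (with hypothetical `pause`/`resume` names and a dict for durable storage) to show why no compute is consumed while waiting.

```python
import json

def pause(execution_id: str, step: int, state: dict, store: dict) -> None:
    """Durably persist where the workflow stopped; nothing keeps running."""
    store[execution_id] = json.dumps({"step": step, "state": state})

def resume(execution_id: str, approval: bool, store: dict) -> dict:
    """Reload the persisted state and continue from the recorded step."""
    saved = json.loads(store.pop(execution_id))
    saved["state"]["approved"] = approval
    saved["step"] += 1  # advance past the approval step
    return saved

store = {}  # stands in for durable storage; Temporal's persistence survives restarts
pause("exec-42", step=3, state={"draft": "email body"}, store=store)
# ...hours or days later, possibly on a different server...
resumed = resume("exec-42", approval=True, store=store)
```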

Data protection summary

Sensitive data handling

| Data Type | How It’s Protected |
| --- | --- |
| User passwords | Bcrypt hashed — never stored in plaintext |
| Session tokens | HttpOnly + Secure + SameSite cookies with short expiry |
| LLM API keys | Stored exclusively in encrypted Vault — never in application database |
| OAuth tokens | Stored exclusively in encrypted Vault — auto-refreshed on expiry |
| Tool credentials | Split storage: non-sensitive config in database, secrets in Vault |
| Webhook API keys | Hashed before storage — never stored in plaintext |
| Uploaded files | Private object storage — access via time-limited signed URLs only |
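The hash-before-storage pattern for webhook API keys can be sketched as follows. The exact hash scheme the platform uses is not stated in this table, so SHA-256 here is an assumption; the constant-time comparison is the important part of the pattern.

```python
import hashlib
import hmac
import secrets

def hash_api_key(api_key: str) -> str:
    """Hash a webhook API key for storage (SHA-256 assumed for illustration)."""
    return hashlib.sha256(api_key.encode("utf-8")).hexdigest()

def verify_api_key(presented: str, stored_hash: str) -> bool:
    """Constant-time comparison to avoid timing side channels."""
    return hmac.compare_digest(hash_api_key(presented), stored_hash)

key = secrets.token_urlsafe(32)   # issued once, shown to the user once
stored = hash_api_key(key)        # only the hash goes in the database
```

Because only the hash is stored, a database leak does not expose usable webhook keys; a lost key can only be rotated, never recovered.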

Encryption

| Layer | Protection |
| --- | --- |
| In transit | TLS/HTTPS for all external and client-facing communication |
| At rest | AES-256 encryption in Secrets Vault; database and storage encryption configurable per deployment |
| Secrets | Vault seal mechanism with support for cloud KMS auto-unseal |

Access control

| Scope | Who Can Access |
| --- | --- |
| Organization | Members of that organization only |
| Project | Project members with appropriate role (Viewer, Member, Owner, Admin) |
| Execution data | Project members only — isolated per project |
| User credentials | Only the user who created them |
| Org-level secrets | Platform administrators only |

AI sovereignty

MagOneAI is designed for organizations that need complete control over their AI data:
  • Private LLM support: Use any OpenAI-compatible model endpoint — your prompts and data never leave your network
  • Self-hosted deployment: Run the entire platform on your infrastructure (Docker Compose or Kubernetes)
  • Local tools: File processing, database queries, and filesystem access happen entirely within your environment
  • No telemetry: MagOneAI does not phone home or send usage data externally
For maximum data sovereignty, deploy MagOneAI with privately hosted LLMs and use only local tools (File Tools, Database, Filesystem). In this configuration, zero data leaves your network boundary.

Next steps

Secrets management

How Vault integration works and the credential lifecycle

RBAC

Role-based access control and permission scopes

Audit logging

What gets logged and how to review audit trails

Infrastructure

Deployment architecture and network configuration