This page documents how data enters, moves through, is stored in, and exits the MagOneAI platform. Understanding these flows helps you assess data exposure, plan compliance, and make informed decisions about model selection and tool integrations.
Understanding what data exits your environment is critical for compliance. MagOneAI sends data externally only through two paths: LLM calls and MCP tool calls.
- Input data: the prompt or data provided to the workflow or chat message
- Conversation context: previous activity outputs flowing through the workflow
- Tool schemas: definitions of available tools (function names, parameters)
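The items above map directly onto the fields of a chat-completions-style request. A hedged sketch of such a payload follows; the field names track the widely used OpenAI chat-completions format, and the specific values (model name, tool name) are hypothetical, not MagOneAI's actual wire format:

```python
# Illustrative chat-completions payload. Everything in this dict
# leaves your environment when the provider is a cloud API.
payload = {
    "model": "gpt-4o",  # hypothetical model name
    "messages": [
        # input data: the user's prompt or chat message
        {"role": "user", "content": "Summarize the attached report"},
        # conversation context: output from a previous activity
        {"role": "assistant", "content": "Step 1 extracted 12 rows."},
    ],
    # tool schemas: function names and parameter definitions
    "tools": [{
        "type": "function",
        "function": {
            "name": "query_database",  # hypothetical tool name
            "parameters": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
            },
        },
    }],
}

print(sorted(payload))  # ['messages', 'model', 'tools']
```

Note that tool schemas are sent even when the model never calls a tool, so function names and parameter descriptions should be treated as data that reaches the provider.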
When using cloud LLM providers, all of the above data is sent to the provider’s API. To keep everything within your environment, use privately hosted models via any OpenAI-compatible endpoint (vLLM, Ollama, LM Studio, TGI, etc.). MagOneAI treats private models identically to cloud models — no workflow changes needed.
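Because private and cloud models share the same request shape, switching between them reduces to changing a base URL. A minimal stdlib sketch, assuming the standard `/v1/chat/completions` route that vLLM, Ollama, and LM Studio all expose (the hosts and model names here are placeholders):

```python
import json
import urllib.request


def chat_request(base_url: str, payload: dict,
                 api_key: str = "not-needed-locally") -> urllib.request.Request:
    """Build (but do not send) a chat-completions request.

    The same request shape works against any OpenAI-compatible
    endpoint, cloud-hosted or private.
    """
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


# Cloud provider: data leaves your network.
cloud = chat_request("https://api.openai.com/v1",
                     {"model": "gpt-4o", "messages": []})

# Privately hosted model (e.g. vLLM on localhost): same request,
# different host, nothing leaves your environment.
local = chat_request("http://localhost:8000/v1",
                     {"model": "my-private-model", "messages": []})

print(local.full_url)  # http://localhost:8000/v1/chat/completions
```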
Workflows can pause for human approval or input, then resume automatically.
When a workflow is paused for human input, no compute resources are consumed. The workflow engine (Temporal) durably persists the state and resumes exactly where it left off — even if servers restart in the meantime.
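The signal-then-resume control flow can be illustrated with a small in-process toy. This is an analogy, not the MagOneAI or Temporal API: Temporal durably persists the paused state and survives restarts, whereas this sketch only shows the shape of a workflow that blocks until a human-approval signal arrives:

```python
import asyncio


class ApprovalGate:
    """Toy stand-in for a workflow paused on human approval."""

    def __init__(self) -> None:
        self._approved = asyncio.Event()

    def approve(self) -> None:
        # Analogous to a human-approval signal reaching the workflow.
        self._approved.set()

    async def run(self, request: str) -> str:
        # The workflow pauses here. In Temporal this wait consumes no
        # compute and the state is durably persisted until the signal.
        await self._approved.wait()
        return f"approved: {request}"


async def main() -> str:
    gate = ApprovalGate()
    task = asyncio.create_task(gate.run("publish report"))
    await asyncio.sleep(0)  # let the workflow reach its wait point
    gate.approve()          # the human approves; the workflow resumes
    return await task


result = asyncio.run(main())
print(result)  # approved: publish report
```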
MagOneAI is designed for organizations that need complete control over their AI data:
- Private LLM support: use any OpenAI-compatible model endpoint; your prompts and data never leave your network
- Self-hosted deployment: run the entire platform on your infrastructure (Docker Compose or Kubernetes)
- Local tools: file processing, database queries, and filesystem access happen entirely within your environment
- No telemetry: MagOneAI does not phone home or send usage data externally
For maximum data sovereignty, deploy MagOneAI with privately hosted LLMs and use only local tools (File Tools, Database, Filesystem). In this configuration, zero data leaves your network boundary.