
Purpose

The Agent node executes an AI agent as a workflow step. The agent receives input from previous nodes, performs LLM reasoning with its configured persona and instructions, optionally calls tools or retrieves from knowledge bases, and produces structured output. Agent nodes are the intelligence layer of your workflows. They interpret context, make decisions, analyze data, and perform complex reasoning tasks that would be difficult or impossible with traditional code.

How it works

When execution reaches an Agent node:
  1. Input mapping — Data from previous nodes or the trigger is mapped to the agent’s input
  2. Agent execution — The agent processes the input using its configured model, persona, tools, and knowledge bases
  3. Tool calling — If the agent has tools available, it can decide to call them based on reasoning
  4. Knowledge retrieval — The agent can search knowledge bases to ground its responses
  5. Output generation — The agent produces structured output based on its reasoning
  6. Variable storage — The output is stored in the variable store for downstream nodes
Each Agent node is a single agent execution. For multi-agent patterns, chain multiple Agent nodes or use Parallel nodes to run agents simultaneously.
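To make the single-execution model concrete, here is a minimal sketch of two chained Agent nodes, where the second consumes the first’s stored output. The field names (nodes, id, type, agent, input) are illustrative assumptions, not MagOneAI’s exact workflow schema:
{
  "nodes": [
    {
      "id": "summarizer",
      "type": "agent",
      "agent": "summary_agent",
      "input": { "text": "{{trigger.input.text}}" }
    },
    {
      "id": "reviewer",
      "type": "agent",
      "agent": "review_agent",
      "input": { "summary": "{{summarizer.output}}" }
    }
  ]
}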

Configuration

Configure an Agent node to execute the right agent with the right context and behavior.

Select an agent

Choose which agent from your project will execute in this node. The agent’s configuration determines:
  • Model — Which LLM to use (GPT-4, Claude, Gemini, etc.)
  • Persona — The agent’s personality, expertise, and behavior
  • Instructions — Task-specific guidance and constraints
  • Available tools — MCP tools the agent can call during reasoning
  • Knowledge bases — Vector stores the agent can query for context
You can select any agent created in MagOneAI Studio’s agent configuration.

Input mapping

Map data from the variable store to the agent’s input. The agent receives this data as context for its reasoning. Common input mappings:
  • Trigger data — {{trigger.input.document_url}}
  • Previous agent output — {{previous_agent.output.analysis}}
  • Tool results — {{api_call.output.response}}
  • Static values — Hardcoded strings or numbers for consistent context
Example input mapping:
{
  "document_text": "{{document_processor.output.extracted_text}}",
  "document_type": "{{trigger.input.type}}",
  "customer_context": "{{crm_lookup.output.customer_profile}}"
}

Output mapping

Define how the agent’s output is stored in the variable store. You can:
  • Store the entire response under a custom variable name
  • Extract specific fields from structured output
  • Transform the output before storing
Example output mapping:
{
  "analysis_result": "{{agent.output}}",
  "risk_score": "{{agent.output.risk_score}}",
  "requires_review": "{{agent.output.requires_human_review}}"
}

Retry settings

Configure how the system handles agent execution failures:
  • Number of retries — How many times to retry on failure (default: 3)
  • Backoff strategy — Fixed delay or exponential backoff between retries
  • Retry on — Which error types trigger retries (rate limits, timeouts, model errors)
Agent executions can fail due to LLM rate limits, timeouts, or temporary service issues. Proper retry configuration ensures resilience.
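As a sketch, a retry policy for an Agent node might look like the following. The field names (max_retries, backoff, retry_on) are assumptions for illustration, not the exact configuration schema:
{
  "retry_policy": {
    "max_retries": 3,
    "backoff": {
      "strategy": "exponential",
      "initial_delay_seconds": 2,
      "max_delay_seconds": 60
    },
    "retry_on": ["rate_limit", "timeout", "model_error"]
  }
}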

Timeout

Set a maximum execution time for the agent. If the agent doesn’t complete within this time:
  • The execution is terminated
  • The retry policy determines whether to retry
  • The workflow can handle the timeout with error handling logic
Recommended timeouts:
  • Simple analysis tasks: 30-60 seconds
  • Complex reasoning with multiple tool calls: 2-5 minutes
  • Document processing: 1-3 minutes
Set realistic timeouts based on your agent’s expected behavior. Agents with many tool calls or large knowledge bases may need longer timeouts.
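For example, a document-processing agent might pair a generous timeout with a conservative retry policy. The timeout_seconds field name is an illustrative assumption:
{
  "timeout_seconds": 180,
  "retry_policy": {
    "max_retries": 2,
    "backoff": { "strategy": "fixed", "delay_seconds": 10 },
    "retry_on": ["timeout"]
  }
}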

How context flows

Understanding how data flows into and out of Agent nodes is crucial for building effective workflows.

Input flow

  1. Previous activities store their outputs in the variable store
  2. You define input mapping to select which variables the agent receives
  3. The agent receives mapped data as context
  4. The agent processes the context with its persona and instructions

Output flow

  1. The agent completes reasoning and produces output
  2. Output mapping transforms or extracts specific fields
  3. Mapped output is stored in the variable store
  4. Downstream nodes can access the stored data

Context accumulation

As workflows execute, context accumulates in the variable store. Downstream agents can access outputs from all previous nodes:
{
  "trigger": {
    "document_url": "https://...",
    "customer_id": "12345"
  },
  "document_extractor": {
    "text": "...",
    "metadata": {...}
  },
  "compliance_agent": {
    "is_compliant": true,
    "risk_score": 0.3,
    "findings": [...]
  },
  "financial_agent": {
    "amount": 50000,
    "currency": "AED",
    "requires_approval": true
  }
}
Each agent builds on the work of previous agents, creating rich, contextualized reasoning.

Example: Document processing workflow

Let’s build a complete document processing workflow using Agent nodes.

Scenario

Process incoming contracts: extract text, analyze compliance, assess risk, and generate a summary report.

Workflow structure

Step 1: Document extraction agent

Receives the document URL from the trigger. Uses a vision model to extract text and metadata.

Input:
{
  "document_url": "{{trigger.input.document_url}}"
}
Output:
{
  "extracted_text": "...",
  "document_type": "contract",
  "metadata": {
    "pages": 12,
    "language": "en"
  }
}
Step 2: Compliance analysis agent

Analyzes the extracted text for compliance with company policies. Has access to a compliance knowledge base.

Input:
{
  "document_text": "{{document_extractor.output.extracted_text}}",
  "document_type": "{{document_extractor.output.document_type}}"
}
Output:
{
  "is_compliant": true,
  "risk_score": 0.3,
  "findings": [
    "All required clauses present",
    "Standard payment terms",
    "No unusual liability provisions"
  ],
  "requires_legal_review": false
}
Step 3: Financial analysis agent

Extracts and analyzes financial terms, amounts, and obligations.

Input:
{
  "document_text": "{{document_extractor.output.extracted_text}}"
}
Output:
{
  "contract_value": 250000,
  "currency": "AED",
  "payment_terms": "Net 30",
  "duration_months": 24,
  "auto_renewal": true
}
Step 4: Summary report agent

Synthesizes all previous analyses into an executive summary.

Input:
{
  "compliance_analysis": "{{compliance_agent.output}}",
  "financial_analysis": "{{financial_agent.output}}",
  "document_metadata": "{{document_extractor.output.metadata}}"
}
Output:
{
  "summary": "24-month contract worth AED 250,000...",
  "key_findings": [...],
  "recommendation": "Approved for signature",
  "next_steps": [...]
}

Result

Each agent builds on the previous agent’s work, creating a comprehensive document analysis pipeline. The variable store accumulates context, and the final report agent synthesizes everything into actionable insights.

Best practices

Design each agent to do one thing well. Instead of a single “analyze everything” agent, use multiple focused agents: compliance checker, financial analyzer, risk assessor, etc. Focused agents are easier to test, debug, and reuse across workflows.

When mapping outputs, use clear, descriptive names that indicate the source and content. Good: compliance_agent.risk_assessment. Poor: result1. Future you (and your teammates) will thank you.

Use Condition nodes after Agent nodes to validate that outputs meet your criteria, and route to error handling or human review if validation fails. This prevents invalid data from propagating through your workflow, as in the sketch below.

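For example, a Condition node might gate on the compliance agent’s risk score before letting the workflow proceed. The condition, on_true, and on_false field names are illustrative assumptions:
{
  "condition": "{{compliance_agent.output.risk_score}} < 0.5",
  "on_true": "financial_agent",
  "on_false": "human_review"
}
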
The more context you provide to agents, the better their reasoning. Map all relevant data from previous nodes, even if it seems redundant. Agents perform best when they have complete information.

Balance timeouts between allowing enough time for thorough reasoning and preventing stuck executions. Monitor actual execution times and adjust. Most agent executions complete in 30-90 seconds, but complex tasks with many tool calls may need more time.

Use Parallel nodes to run multiple Agent nodes simultaneously when they’re analyzing the same input from different perspectives. This dramatically reduces total workflow execution time.
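As a sketch, the compliance and financial agents from the example above could run as parallel branches over the same extracted text. The type, branches, and input field names are illustrative assumptions:
{
  "type": "parallel",
  "branches": ["compliance_agent", "financial_agent"],
  "input": { "document_text": "{{document_extractor.output.extracted_text}}" }
}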

Next steps