
The persona system

Every agent in MagOneAI has a persona that defines its identity and behavior. The persona is the foundation of how your agent reasons, responds, and operates within workflows. A well-crafted persona produces consistent, reliable outputs across thousands of workflow executions.

Persona components

Name

The agent’s display name — clear and descriptive.
Examples: “Contract Compliance Analyst”, “Meeting Scheduler”, “Customer Onboarding Assistant”

Role

What the agent does and its area of expertise.
Example: “You analyze contracts for regulatory compliance and flag potential legal risks”

Instructions

Detailed behavioral instructions that shape reasoning and responses.
Example: “Review each clause for GDPR compliance, highlight data processing terms, suggest revisions”

Constraints

Explicit boundaries defining what the agent should NOT do.
Example: “Never approve contracts without human review, do not provide legal advice”

Name

The agent’s name appears throughout the MagOneAI interface — in the workflow canvas, execution logs, and activity history. Choose names that clearly communicate the agent’s purpose.
Good names:
  • “Invoice Data Extractor”
  • “Customer Support Router”
  • “Compliance Risk Assessor”
Poor names:
  • “Agent 1”
  • “Helper”
  • “AI Assistant”
Clear naming makes workflows self-documenting and easier to maintain.

Role

The role defines the agent’s core identity. This is typically a single sentence that establishes what the agent does.
Effective role definitions:
  • “You are a financial analyst who evaluates loan applications for credit risk.”
  • “You are a technical support specialist who troubleshoots API integration issues.”
  • “You are an HR assistant who answers employee questions about benefits and policies.”
Ineffective role definitions:
  • “You are a helpful AI assistant.” (too generic)
  • “You help users with various tasks.” (lacks specificity)
  • “You are smart and can do many things.” (vague and unhelpful)
The role should immediately clarify the agent’s domain and purpose.

Instructions

Instructions provide detailed behavioral guidelines. This is where you specify HOW the agent should perform its role.
What to include:
  • Step-by-step process the agent should follow
  • Expected input format and how to interpret it
  • Desired output structure and formatting
  • Tone and communication style
  • How to handle edge cases and errors
  • Examples of desired behavior
Example instruction set:
You review customer support tickets and determine the appropriate department for routing.

Process:
1. Read the ticket description and extract the main issue
2. Identify keywords related to billing, technical issues, or account management
3. Classify the ticket into one category: billing, technical, or account
4. Provide a confidence score (0-1) for your classification
5. If confidence is below 0.7, classify as "general" for human review

Tone: Professional and concise
Output: Return only the structured classification, no explanatory text
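As a rough illustration, the routing logic in steps 2–5 can be mirrored in plain Python. The keyword lists, scoring rule, and function name below are assumptions for demonstration, not part of MagOneAI:

```python
# Hypothetical sketch of the ticket-routing process described above.
# Keyword lists and the scoring rule are illustrative assumptions; the
# 0.7 threshold comes from the instruction set.
KEYWORDS = {
    "billing": ["charge", "invoice", "refund", "payment"],
    "technical": ["error", "api", "bug", "crash"],
    "account": ["password", "login", "profile", "email change"],
}

def classify_ticket(text: str) -> dict:
    words = text.lower()
    # Step 2: count keyword hits per category
    scores = {
        cat: sum(1 for kw in kws if kw in words)
        for cat, kws in KEYWORDS.items()
    }
    # Step 3: pick the best-matching category
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    confidence = scores[best] / total if total else 0.0
    # Step 5: low-confidence tickets go to "general" for human review
    if confidence < 0.7:
        return {"category": "general", "confidence": confidence}
    return {"category": best, "confidence": confidence}
```

A real agent would use the LLM rather than keyword counts, but the shape of the structured output — category plus confidence, with a threshold fallback — is the same.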

Constraints

Constraints define boundaries — what the agent should explicitly NOT do. This is critical for production safety.
Common constraint categories:
  • Decision boundaries — “Never approve transactions over $10,000 without human review”
  • Information security — “Never share customer PII in logs or outputs”
  • Legal/compliance — “Do not provide legal advice or final compliance determinations”
  • Tool usage limits — “Do not delete data, only read and create operations allowed”
  • Scope limits — “Only analyze the provided document, do not search for additional information”
Example constraint set:
Constraints:
- Never make final approval decisions, always route to human reviewer
- Do not access customer data outside the provided input
- Do not send emails or notifications without explicit instruction
- If you encounter ambiguous requirements, flag for human review rather than guessing
- Never override explicit user instructions with your own judgment
Constraints are not foolproof. LLMs may occasionally violate constraints despite clear instructions. Use guardrails and structured I/O validation (covered below) to enforce critical boundaries programmatically.

System prompt best practices

The system prompt encompasses the role, instructions, and constraints. Well-written system prompts are the difference between agents that work reliably in production and agents that produce inconsistent, unpredictable outputs.

Be specific about role and boundaries

Specificity produces consistency. Compare these two prompts:
You are a helpful AI assistant. You help users with their questions
and try to be accurate and friendly. Use your knowledge to provide
good answers.
Problems:
  • No defined role or expertise area
  • Vague instructions like “try to be accurate”
  • No constraints or boundaries
  • No output format specification
  • Will produce inconsistent behavior across executions
A specific, well-structured prompt:
You are a Contract Compliance Analyst specializing in GDPR and data
processing regulations.

Your role is to review contract clauses and identify potential GDPR
compliance risks.

Process:
1. Read the contract clause provided in the input
2. Identify any data processing, storage, or transfer terms
3. Evaluate compliance with GDPR Articles 6, 13, 14, and 28
4. Flag specific risks with article references
5. Suggest compliant alternative language if issues found

Output format:
- compliance_status: "compliant" | "risk_identified" | "non_compliant"
- risk_level: "low" | "medium" | "high"
- issues: array of issue descriptions with GDPR article references
- suggestions: array of suggested revisions

Constraints:
- Only analyze the specific clause provided, do not infer other contract terms
- Never provide final legal determinations, only flag potential risks
- If clause is outside your expertise (non-GDPR), return compliance_status "out_of_scope"
Why this works:
  • Clear domain expertise (GDPR compliance)
  • Step-by-step process
  • Structured output format
  • Explicit constraints and boundaries
  • Handles edge cases (out of scope scenarios)

Define expected input format

Tell the agent what to expect as input and how to interpret it:
Input format:
You will receive a JSON object with the following structure:
{
  "document_text": "The full text of the document to analyze",
  "document_type": "contract" | "policy" | "agreement",
  "analysis_focus": "compliance" | "risk" | "general"
}

If document_type is "contract", apply strict compliance standards.
If analysis_focus is "risk", prioritize identifying potential liabilities.

Specify output format and structure

Ambiguous output formats lead to parsing errors in downstream workflow nodes. Be explicit:
Output format:
Return a JSON object with this exact structure:
{
  "is_approved": boolean,
  "confidence": number between 0 and 1,
  "risk_score": number between 0 and 100,
  "issues_found": array of strings,
  "recommendation": "approve" | "reject" | "review_required"
}

Do not include explanatory text outside this JSON structure.
Do not wrap the JSON in markdown code blocks.
Return only valid JSON.
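A downstream node can defensively parse this output before acting on it. The sketch below assumes the JSON structure specified above; `parse_agent_output` is a hypothetical helper, not a MagOneAI API:

```python
import json

ALLOWED_RECOMMENDATIONS = {"approve", "reject", "review_required"}

def parse_agent_output(raw: str) -> dict:
    """Parse and sanity-check the JSON structure specified above.

    Hypothetical downstream check; json.loads raises a ValueError
    subclass if the agent returned anything other than valid JSON.
    """
    data = json.loads(raw)
    assert isinstance(data["is_approved"], bool)
    assert 0 <= data["confidence"] <= 1
    assert 0 <= data["risk_score"] <= 100
    assert all(isinstance(issue, str) for issue in data["issues_found"])
    assert data["recommendation"] in ALLOWED_RECOMMENDATIONS
    return data
```

The "no markdown code blocks, only valid JSON" instruction exists precisely so that a parser like this one does not have to strip formatting before calling `json.loads`.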

Include examples of desired behavior

Examples are powerful teaching tools for LLMs. Include 1-2 examples in your system prompt:
Example interaction:

Input:
{
  "ticket_text": "I was charged twice for my subscription this month",
  "customer_tier": "premium"
}

Expected output:
{
  "category": "billing",
  "priority": "high",
  "confidence": 0.95,
  "suggested_action": "refund_duplicate_charge"
}

Set explicit constraints

Repeat critical constraints in multiple formats for emphasis:
CRITICAL CONSTRAINTS:
1. NEVER approve transactions over $5,000
2. NEVER share customer email addresses or phone numbers
3. NEVER make changes to production databases
4. ALWAYS flag ambiguous cases for human review

If you are uncertain about any decision, err on the side of caution and
route to human reviewer.

DSPy structured I/O

MagOneAI uses DSPy (Declarative Self-improving Python) to define and enforce structured input and output schemas for agents. This ensures type safety and predictable data flow through your workflows.

What is DSPy?

DSPy is a framework for programming with language models using structured signatures. Instead of relying on prompt engineering alone, you define input/output contracts that the system optimizes and enforces.

Defining input schemas

Input schemas specify what data the agent expects and in what format:
from dspy import InputField, Signature

class ContractAnalysisInput(Signature):
    """Input schema for contract compliance analysis"""

    document_text: str = InputField(
        desc="The full text of the contract to analyze"
    )
    document_type: str = InputField(
        desc="Type of document: contract, agreement, or policy"
    )
    analysis_focus: str = InputField(
        desc="Analysis focus: compliance, risk, or general",
        default="compliance"
    )
When you attach this schema to an agent, MagOneAI validates incoming data before execution and provides clear error messages if the input doesn’t match the expected structure.
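Conceptually, that pre-execution check behaves like the plain-Python sketch below. The field table, default handling, and `validate_input` helper are illustrative assumptions, not the platform's actual implementation:

```python
# Hypothetical mirror of schema validation; field names and types are
# taken from the ContractAnalysisInput schema above.
EXPECTED_FIELDS = {
    "document_text": str,
    "document_type": str,
    "analysis_focus": str,
}
DEFAULTS = {"analysis_focus": "compliance"}

def validate_input(payload: dict) -> dict:
    # Apply defaults for optional fields first
    data = {**DEFAULTS, **payload}
    missing = [f for f in EXPECTED_FIELDS if f not in data]
    if missing:
        raise ValueError(f"Missing required input fields: {missing}")
    for field, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(data[field], expected_type):
            raise TypeError(f"{field} must be {expected_type.__name__}")
    return data
```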

Defining output schemas

Output schemas ensure the agent produces data in the exact format your workflow expects:
from dspy import OutputField, Signature

class ContractAnalysisOutput(Signature):
    """Output schema for contract compliance analysis"""

    is_compliant: bool = OutputField(
        desc="Whether the contract meets compliance standards"
    )
    confidence: float = OutputField(
        desc="Confidence score between 0.0 and 1.0"
    )
    risk_level: str = OutputField(
        desc="Risk level: low, medium, or high"
    )
    issues: list[str] = OutputField(
        desc="List of compliance issues identified"
    )
    recommendations: list[str] = OutputField(
        desc="List of recommended changes"
    )

Integration with workflow variable mapping

Structured I/O schemas integrate directly with MagOneAI’s workflow variable system:
1. Define schemas — Create input and output schemas for your agent using DSPy signatures.
2. Map workflow variables to input schema — In the workflow canvas, map variables from previous nodes or workflow parameters to the agent’s input fields.
3. Agent executes with validated input — MagOneAI validates the input against the schema before agent execution.
4. Output is validated and stored — Agent output is validated against the output schema and stored in the workflow variable store.
5. Downstream nodes access structured output — Subsequent workflow nodes access the agent’s output fields by name with guaranteed type safety.
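The steps above can be sketched as a single node-execution function. Everything here is illustrative — `run_agent_node`, `StubAgent`, the mapping format, and the `agent.` variable prefix are assumptions, not MagOneAI APIs:

```python
# Illustrative sketch of the node-execution steps above; all names and
# formats here are assumptions, not MagOneAI APIs.
class StubAgent:
    """Minimal stand-in for a schema-validated agent."""

    def validate_input(self, data: dict) -> None:
        if "document_text" not in data:          # input validation
            raise ValueError("missing document_text")

    def run(self, data: dict) -> dict:
        return {"risk_level": "low", "confidence": 0.92}

    def validate_output(self, output: dict) -> None:
        if not 0.0 <= output["confidence"] <= 1.0:  # output validation
            raise ValueError("confidence out of range")

def run_agent_node(agent, mapping: dict, variables: dict) -> dict:
    # Map workflow variables onto the agent's input fields
    agent_input = {field: variables[var] for field, var in mapping.items()}
    agent.validate_input(agent_input)            # validate before execution
    output = agent.run(agent_input)              # agent executes
    agent.validate_output(output)                # validate the result
    # Store output fields so downstream nodes can read them by name
    variables.update({f"agent.{key}": val for key, val in output.items()})
    return output
```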

Example: Full schema definition

Here’s a complete example of input and output schemas for a loan underwriting agent:
from dspy import InputField, OutputField, Signature

class LoanApplicationInput(Signature):
    """Input for loan underwriting analysis"""

    applicant_name: str = InputField(desc="Full name of applicant")
    requested_amount: float = InputField(desc="Loan amount requested")
    credit_score: int = InputField(desc="Credit score (300-850)")
    annual_income: float = InputField(desc="Annual income in USD")
    employment_years: int = InputField(desc="Years at current employer")
    loan_purpose: str = InputField(desc="Purpose of the loan")

class LoanApplicationOutput(Signature):
    """Output from loan underwriting analysis"""

    decision: str = OutputField(
        desc="Decision: approved, denied, or review_required"
    )
    approved_amount: float = OutputField(
        desc="Approved loan amount (may differ from requested)"
    )
    interest_rate: float = OutputField(
        desc="Approved interest rate as percentage"
    )
    confidence: float = OutputField(
        desc="Confidence in decision (0.0-1.0)"
    )
    risk_score: int = OutputField(
        desc="Risk score (0-100, higher is riskier)"
    )
    explanation: str = OutputField(
        desc="Brief explanation of the decision"
    )
    conditions: list[str] = OutputField(
        desc="Any conditions attached to approval",
        default=[]
    )
When this agent executes in a workflow, you have guaranteed type safety and structure for both input and output.

Guardrails and output validation

Guardrails are programmatic constraints that enforce reliability and safety beyond prompt instructions. While personas and prompts guide LLM behavior, guardrails ENFORCE it.

Output format enforcement

DSPy schemas provide compile-time structure, but guardrails provide runtime validation:
  • Type checking — Ensure outputs match expected data types
  • Range validation — Verify numeric values fall within acceptable ranges
  • Enum validation — Confirm string values match allowed options
  • Required fields — Block execution if critical fields are missing
Example guardrail configuration:
guardrails:
  output_validation:
    confidence:
      type: float
      min: 0.0
      max: 1.0
      required: true
    decision:
      type: string
      enum: ["approved", "denied", "review_required"]
      required: true
    risk_score:
      type: integer
      min: 0
      max: 100
      required: true
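A minimal sketch of what runtime enforcement of these rules could look like, assuming a rule table that mirrors the YAML above (the enforcement code itself is illustrative, not MagOneAI's implementation):

```python
# Rule table mirroring the guardrail YAML above; enforcement logic is
# an illustrative assumption.
RULES = {
    "confidence": {"type": float, "min": 0.0, "max": 1.0, "required": True},
    "decision": {"type": str, "required": True,
                 "enum": {"approved", "denied", "review_required"}},
    "risk_score": {"type": int, "min": 0, "max": 100, "required": True},
}

def validate_output(output: dict, rules: dict = RULES) -> list[str]:
    """Return a list of violations; an empty list means the output passed."""
    errors = []
    for field, rule in rules.items():
        if field not in output:
            if rule.get("required"):
                errors.append(f"{field}: missing required field")
            continue
        value = output[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
            continue
        if "min" in rule and value < rule["min"]:
            errors.append(f"{field}: below minimum {rule['min']}")
        if "max" in rule and value > rule["max"]:
            errors.append(f"{field}: above maximum {rule['max']}")
        if "enum" in rule and value not in rule["enum"]:
            errors.append(f"{field}: not one of {sorted(rule['enum'])}")
    return errors
```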

Content filtering and safety

Content guardrails prevent agents from producing unsafe, inappropriate, or policy-violating outputs:
  • PII detection — Block outputs containing personally identifiable information
  • Profanity filtering — Prevent inappropriate language
  • Sensitive data screening — Detect and redact credit card numbers, SSNs, etc.
  • Bias detection — Flag potentially biased or discriminatory content
  • Toxicity scoring — Measure and block toxic outputs
Example content filter configuration:
guardrails:
  content_filters:
    - type: pii_detection
      action: block
      fields: [email, phone, ssn, credit_card]
    - type: toxicity_check
      threshold: 0.7
      action: block
    - type: bias_detection
      categories: [gender, race, age]
      action: flag_for_review
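As a simplified illustration of detection and redaction, the sketch below uses naive regular expressions. Production content filters rely on far more robust detection; these patterns and helper names are assumptions:

```python
import re

# Deliberately simplified PII patterns for illustration only; real
# detectors handle many more formats and false-positive cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def detect_pii(text: str) -> list[str]:
    """Return the PII categories found in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

def redact(text: str) -> str:
    """Replace detected PII spans with a placeholder."""
    for name, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text
```

With `action: block`, a hit from the detector would abort the node; with a redaction policy, the sanitized text would flow downstream instead.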

Confidence thresholds

Many production workflows require minimum confidence levels before taking actions:
guardrails:
  confidence_threshold: 0.85
  fallback_action: "route_to_human_review"
If the agent’s confidence score falls below 0.85, the workflow automatically routes to human review instead of proceeding with the agent’s decision.
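Enforcing the threshold amounts to a simple comparison. This tiny sketch uses hypothetical names (`apply_confidence_guardrail`, a `confidence` output field):

```python
# Hypothetical enforcement of the confidence guardrail above.
CONFIDENCE_THRESHOLD = 0.85
FALLBACK_ACTION = "route_to_human_review"

def apply_confidence_guardrail(agent_output: dict) -> str:
    """Return the next workflow action based on the agent's confidence."""
    if agent_output["confidence"] < CONFIDENCE_THRESHOLD:
        return FALLBACK_ACTION
    return "proceed"
```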

Fallback behavior when validation fails

Define what happens when guardrails are triggered. To stop the workflow and raise an error when validation fails:
guardrails:
  on_validation_failure: block
Use this mode when the workflow cannot proceed without valid output.

Combining guardrails for production reliability

In production workflows, use multiple layers of guardrails:
agent:
  name: "Credit Decision Agent"
  guardrails:
    # Schema validation
    output_schema: CreditDecisionOutput

    # Content safety
    content_filters:
      - type: pii_detection
        action: block

    # Confidence requirements
    confidence_threshold: 0.9

    # Range validation
    output_validation:
      approved_amount:
        min: 1000
        max: 100000
      interest_rate:
        min: 3.5
        max: 29.99

    # Fallback behavior
    on_validation_failure: human_review
    review_queue: "credit_decisions"
This multi-layered approach keeps the agent’s behavior reliable even when prompt-level instructions are occasionally violated.
Avoid overly generic prompts like “You are a helpful assistant.” Specific, constrained personas produce more reliable outputs in production workflows. Generic prompts lead to inconsistent behavior, hallucinations, and unpredictable tool usage.

Persona iteration and testing

Building effective personas is an iterative process:
1. Start with a basic persona — Create a simple role and instruction set based on your requirements.
2. Test with representative inputs — Run the agent against real-world examples from your workflow.
3. Analyze outputs and edge cases — Review agent behavior and identify failure modes.
4. Refine instructions and constraints — Add specific instructions to address observed issues.
5. Add guardrails for enforcement — Implement programmatic validation for critical requirements.
6. Test again and measure improvement — Quantify agent performance before and after changes.
MagOneAI’s workflow execution logs capture every agent invocation with full input, output, and reasoning traces. Use these logs to identify patterns in agent behavior and systematically improve your personas.
