Every agent in MagOneAI has a persona that defines its identity and behavior. The persona is the foundation of how your agent reasons, responds, and operates within workflows. A well-crafted persona produces consistent, reliable outputs across thousands of workflow executions.
A persona consists of four components:

Name
The agent’s display name — clear and descriptive. Examples: “Contract Compliance Analyst”, “Meeting Scheduler”, “Customer Onboarding Assistant”
Role
What the agent does and its area of expertise. Example: “You analyze contracts for regulatory compliance and flag potential legal risks”
Instructions
Detailed behavioral instructions that shape reasoning and responses. Example: “Review each clause for GDPR compliance, highlight data processing terms, suggest revisions”
Constraints
Explicit boundaries defining what the agent should NOT do. Example: “Never approve contracts without human review, do not provide legal advice”
The agent’s name appears throughout the MagOneAI interface — in the workflow canvas, execution logs, and activity history. Choose names that clearly communicate the agent’s purpose.

Good names:
“Invoice Data Extractor”
“Customer Support Router”
“Compliance Risk Assessor”
Poor names:
“Agent 1”
“Helper”
“AI Assistant”
Clear naming makes workflows self-documenting and easier to maintain.
Instructions provide detailed behavioral guidelines. This is where you specify HOW the agent should perform its role.

What to include:
Step-by-step process the agent should follow
Expected input format and how to interpret it
Desired output structure and formatting
Tone and communication style
How to handle edge cases and errors
Examples of desired behavior
Example instruction set:
```
You review customer support tickets and determine the appropriate department for routing.

Process:
1. Read the ticket description and extract the main issue
2. Identify keywords related to billing, technical issues, or account management
3. Classify the ticket into one category: billing, technical, or account
4. Provide a confidence score (0-1) for your classification
5. If confidence is below 0.7, classify as "general" for human review

Tone: Professional and concise
Output: Return only the structured classification, no explanatory text
```
Constraints define boundaries — what the agent should explicitly NOT do. This is critical for production safety.

Common constraint categories:
Decision boundaries — “Never approve transactions over $10,000 without human review”
Information security — “Never share customer PII in logs or outputs”
Legal/compliance — “Do not provide legal advice or final compliance determinations”
Tool usage limits — “Do not delete data, only read and create operations allowed”
Scope limits — “Only analyze the provided document, do not search for additional information”
Example constraint set:
```
Constraints:
- Never make final approval decisions, always route to human reviewer
- Do not access customer data outside the provided input
- Do not send emails or notifications without explicit instruction
- If you encounter ambiguous requirements, flag for human review rather than guessing
- Never override explicit user instructions with your own judgment
```
Constraints are not foolproof. LLMs may occasionally violate constraints despite clear instructions. Use guardrails and structured I/O validation (covered below) to enforce critical boundaries programmatically.
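For critical boundaries, pair the prompt with a hard check in code. The sketch below is illustrative only; the function, field names, and threshold are hypothetical, not a MagOneAI API:

```python
# Hypothetical post-execution check: enforce a human-review boundary
# in code, regardless of what the model decided.
APPROVAL_LIMIT = 10_000  # assumed business rule from the persona's constraints

def enforce_review_boundary(agent_output: dict) -> dict:
    """Override the agent's decision when it violates a hard constraint."""
    if (agent_output.get("recommendation") == "approve"
            and agent_output.get("amount", 0) > APPROVAL_LIMIT):
        # The persona says "never approve over $10,000"; enforce it here.
        agent_output["recommendation"] = "review_required"
        agent_output["override_reason"] = "amount exceeds approval limit"
    return agent_output
```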
The system prompt encompasses the role, instructions, and constraints. Well-written system prompts are the difference between agents that work reliably in production and agents that produce inconsistent, unpredictable outputs.
Specificity produces consistency. Compare these two prompts:
Poor prompt example (too generic)
```
You are a helpful AI assistant. You help users with their questions
and try to be accurate and friendly. Use your knowledge to provide
good answers.
```
Problems:
No defined role or expertise area
Vague instructions like “try to be accurate”
No constraints or boundaries
No output format specification
Will produce inconsistent behavior across executions
Good prompt example (specific and bounded)
```
You are a Contract Compliance Analyst specializing in GDPR and data
processing regulations.

Your role is to review contract clauses and identify potential GDPR
compliance risks.

Process:
1. Read the contract clause provided in the input
2. Identify any data processing, storage, or transfer terms
3. Evaluate compliance with GDPR Articles 6, 13, 14, and 28
4. Flag specific risks with article references
5. Suggest compliant alternative language if issues found

Output format:
- compliance_status: "compliant" | "risk_identified" | "non_compliant"
- risk_level: "low" | "medium" | "high"
- issues: array of issue descriptions with GDPR article references
- suggestions: array of suggested revisions

Constraints:
- Only analyze the specific clause provided, do not infer other contract terms
- Never provide final legal determinations, only flag potential risks
- If clause is outside your expertise (non-GDPR), return compliance_status "out_of_scope"
```
Tell the agent what to expect as input and how to interpret it:
```
Input format:
You will receive a JSON object with the following structure:

{
  "document_text": "The full text of the document to analyze",
  "document_type": "contract" | "policy" | "agreement",
  "analysis_focus": "compliance" | "risk" | "general"
}

If document_type is "contract", apply strict compliance standards.
If analysis_focus is "risk", prioritize identifying potential liabilities.
```
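Documenting the contract in the prompt pairs well with checking it in code before the agent runs. A minimal sketch in plain Python; the field names come from the prompt above, the function itself is illustrative:

```python
# Illustrative pre-execution validation for the input contract above.
ALLOWED_TYPES = {"contract", "policy", "agreement"}
ALLOWED_FOCUS = {"compliance", "risk", "general"}

def validate_input(payload: dict) -> None:
    """Reject malformed input before it ever reaches the agent."""
    text = payload.get("document_text")
    if not isinstance(text, str) or not text.strip():
        raise ValueError("document_text must be a non-empty string")
    if payload.get("document_type") not in ALLOWED_TYPES:
        raise ValueError(f"document_type must be one of {sorted(ALLOWED_TYPES)}")
    if payload.get("analysis_focus") not in ALLOWED_FOCUS:
        raise ValueError(f"analysis_focus must be one of {sorted(ALLOWED_FOCUS)}")
```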
Ambiguous output formats lead to parsing errors in downstream workflow nodes. Be explicit:
```
Output format:
Return a JSON object with this exact structure:

{
  "is_approved": boolean,
  "confidence": number between 0 and 1,
  "risk_score": number between 0 and 100,
  "issues_found": array of strings,
  "recommendation": "approve" | "reject" | "review_required"
}

Do not include explanatory text outside this JSON structure.
Do not wrap the JSON in markdown code blocks.
Return only valid JSON.
```
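On the consuming side, a downstream node can still parse defensively, since even a well-prompted model occasionally wraps JSON in markdown fences. A hedged sketch, with field names taken from the prompt above:

```python
import json

def parse_agent_output(raw: str) -> dict:
    """Parse the agent's response, tolerating a stray markdown fence."""
    text = raw.strip()
    if text.startswith("```"):
        # Models sometimes emit ```json ... ``` despite instructions.
        text = text.strip("`")
        if text.startswith("json"):
            text = text[len("json"):]
    result = json.loads(text)
    allowed = {"approve", "reject", "review_required"}
    if result.get("recommendation") not in allowed:
        raise ValueError(f"unexpected recommendation: {result.get('recommendation')!r}")
    return result
```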
Examples are powerful teaching tools for LLMs. Include 1-2 examples in your system prompt:
```
Example interaction:

Input:
{
  "ticket_text": "I was charged twice for my subscription this month",
  "customer_tier": "premium"
}

Expected output:
{
  "category": "billing",
  "priority": "high",
  "confidence": 0.95,
  "suggested_action": "refund_duplicate_charge"
}
```
Repeat critical constraints in multiple formats for emphasis:
```
CRITICAL CONSTRAINTS:
1. NEVER approve transactions over $5,000
2. NEVER share customer email addresses or phone numbers
3. NEVER make changes to production databases
4. ALWAYS flag ambiguous cases for human review

If you are uncertain about any decision, err on the side of caution and
route to human reviewer.
```
MagOneAI uses DSPy (Declarative Self-improving Python) to define and enforce structured input and output schemas for agents. This ensures type safety and predictable data flow through your workflows.
DSPy is a framework for programming with language models using structured signatures. Instead of relying on prompt engineering alone, you define input/output contracts that the system optimizes and enforces.
Input schemas specify what data the agent expects and in what format:
```python
from dspy import InputField, Signature

class ContractAnalysisInput(Signature):
    """Input schema for contract compliance analysis"""

    document_text: str = InputField(
        desc="The full text of the contract to analyze"
    )
    document_type: str = InputField(
        desc="Type of document: contract, agreement, or policy"
    )
    analysis_focus: str = InputField(
        desc="Analysis focus: compliance, risk, or general",
        default="compliance"
    )
```
When you attach this schema to an agent, MagOneAI validates incoming data before execution and provides clear error messages if the input doesn’t match the expected structure.
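Output contracts are declared the same way with OutputField. A rough sketch of what a full signature and module might look like in plain DSPy; MagOneAI’s internal wiring around the signature may differ:

```python
from dspy import InputField, OutputField, Predict, Signature

class ContractAnalysis(Signature):
    """Analyze a contract clause for GDPR compliance risks."""

    document_text: str = InputField(desc="The full text of the contract to analyze")
    compliance_status: str = OutputField(
        desc="One of: compliant, risk_identified, non_compliant"
    )
    risk_level: str = OutputField(desc="One of: low, medium, or high")

# In plain DSPy, a module built from the signature enforces the contract:
analyze = Predict(ContractAnalysis)
# result = analyze(document_text="...")  # result.compliance_status, result.risk_level
```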
Guardrails are programmatic constraints that enforce reliability and safety beyond prompt instructions. While personas and prompts guide LLM behavior, guardrails ENFORCE it.
Layering prompt-level constraints with programmatic guardrails gives defense in depth: the agent behaves reliably even when any single layer occasionally fails.
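Put together, the layers can wrap every invocation. A hypothetical composition, reusing the illustrative helpers sketched in earlier sections:

```python
def run_with_guardrails(agent, payload: dict) -> dict:
    """Compose the layers around a single agent invocation."""
    validate_input(payload)                 # layer 1: input schema validation
    raw = agent(payload)                    # layer 2: persona and prompt guide the LLM
    result = parse_agent_output(raw)        # layer 3: structured output validation
    return enforce_review_boundary(result)  # layer 4: hard business-rule enforcement
```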
Avoid overly generic prompts like “You are a helpful assistant.” Specific, constrained personas produce more reliable outputs in production workflows. Generic prompts lead to inconsistent behavior, hallucinations, and unpredictable tool usage.
Building effective personas is an iterative process:
1. Start with a basic persona: create a simple role and instruction set based on your requirements.
2. Test with representative inputs: run the agent against real-world examples from your workflow.
3. Analyze outputs and edge cases: review agent behavior and identify failure modes.
4. Refine instructions and constraints: add specific instructions to address observed issues.
5. Add guardrails for enforcement: implement programmatic validation for critical requirements.
6. Test again and measure improvement: quantify agent performance before and after changes.
MagOneAI’s workflow execution logs capture every agent invocation with full input, output, and reasoning traces. Use these logs to identify patterns in agent behavior and systematically improve your personas.
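To make the last step concrete, even a small labeled test set turns “measure improvement” into a number. A minimal, illustrative harness; `agent` here stands in for however you invoke the agent and is not a MagOneAI API:

```python
def measure_accuracy(agent, labeled_cases: list[tuple[dict, str]]) -> float:
    """Fraction of cases where the agent's category matches the expected label."""
    correct = sum(
        agent(payload).get("category") == expected
        for payload, expected in labeled_cases
    )
    return correct / len(labeled_cases)

# Run this before and after a persona change and compare the two scores.
```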