What you’ll build
You’ll build an intelligent RFP (Request for Proposal) analysis system that evaluates vendor proposals using five specialized AI agents working in parallel. Each agent focuses on a specific evaluation dimension—technical merit, vendor capability, commercial terms, compliance, and delivery timelines—before a synthesis agent compiles a comprehensive scoring report. This workflow demonstrates how to:
- Orchestrate multiple specialist agents for complex analysis tasks
- Run parallel evaluations to reduce analysis time from hours to minutes
- Generate structured, consistent evaluation reports
- Implement domain-specific agent personas for expert analysis

Prerequisites
Before you begin, ensure you have:
- MagOneAI instance with workflow builder access
- LLM provider configured (GPT-4, Claude 3.5 Sonnet, or equivalent)
- Sample RFP documents for testing (PDF or DOCX format)
- Evaluation criteria (optional) - can be embedded in agent personas or stored in a knowledge base
Architecture
The RFP analysis workflow uses a parallel specialist pattern where domain experts analyze different aspects of the same document simultaneously.

Why parallel specialist agents?
Faster Analysis
Five parallel agents complete analysis in ~30 seconds vs. 2+ minutes for sequential processing
Expert Focus
Each agent specializes in one domain, producing deeper, more accurate evaluations
Consistent Scoring
Structured output schemas ensure every proposal is evaluated on the same criteria
Comprehensive Coverage
No evaluation dimension is overlooked when each has a dedicated specialist
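At a glance, the workflow has the following shape. This is a rough sketch to orient you, not MagOneAI's actual export format; the node and branch names are illustrative:

```json
{
  "trigger": {
    "type": "manual_or_api",
    "inputs": ["rfp_document", "rfp_title", "vendor_name", "evaluation_weights"]
  },
  "parallel": [
    "technical_evaluator",
    "vendor_assessor",
    "commercial_analyst",
    "compliance_risk_assessor",
    "timeline_delivery_evaluator"
  ],
  "synthesis": "rfp_synthesis_coordinator",
  "delivery": ["database", "pdf_report", "email"]
}
```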
Step-by-step build
Create the specialist agents
You’ll create six agents total: five specialists and one synthesis agent.
1. Technical Evaluation Agent
Name: Technical Evaluator
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define an expert persona covering technical merit (architecture, scalability, security, integrations) with a 0-10 scoring rubric and a JSON output format; a condensed sketch follows
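As a condensed illustration, one specialist's configuration might look like the following. The persona text is a sketch you should expand with your own rubric, and the config field names are assumptions rather than MagOneAI's exact schema:

```json
{
  "name": "Technical Evaluator",
  "model": "gpt-4",
  "persona": "You are a senior solutions architect evaluating the technical portion of an RFP response. Score architecture, scalability, security, and integration approach from 0 to 10, cite evidence from the proposal, and respond only with JSON matching the provided schema.",
  "structured_output": true,
  "timeout_seconds": 60
}
```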
2. Vendor Assessment Agent
Name: Vendor Assessor
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define an expert persona covering vendor due diligence (track record, references, team qualifications, financial stability)

3. Commercial Analysis Agent
Name: Commercial Analyst
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define an expert persona covering pricing structure, payment terms, and total cost of ownership

4. Compliance & Risk Agent
Name: Compliance & Risk Assessor
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define an expert persona covering mandatory requirements, regulatory compliance, and contractual risk

5. Timeline & Delivery Agent
Name: Timeline & Delivery Evaluator
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define an expert persona covering project planning, milestone realism, and delivery feasibility

6. Synthesis Agent
Name: RFP Synthesis Coordinator
Model: GPT-4 or Claude 3.5 Sonnet
Persona: define a procurement-lead persona that merges the five specialist reports into one weighted, structured recommendation

Build the workflow structure
Now construct the workflow in the MagOneAI workflow builder.
1. Add Trigger Node
- Type: Manual Trigger or API Trigger
- Configure inputs:
  - rfp_document (file) - the proposal PDF or DOCX
  - rfp_title (text) - name/identifier for the proposal
  - vendor_name (text) - vendor submitting the proposal
  - evaluation_weights (optional JSON) - custom scoring weights
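If you trigger the workflow via API, a request payload might look like this (values are placeholders and the weights object is optional):

```json
{
  "rfp_document": "acme-corp-proposal.pdf",
  "rfp_title": "Cloud Migration Services RFP",
  "vendor_name": "Acme Corp",
  "evaluation_weights": {
    "technical": 0.30,
    "vendor": 0.20,
    "commercial": 0.25,
    "compliance": 0.15,
    "timeline": 0.10
  }
}
```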
2. Add Parallel Node
- Drag a Parallel Node onto the canvas
- Connect it to the trigger
- This will house your five specialist agents
Configure the five parallel evaluation branches
Set up five parallel branches in the Parallel Node, one for each specialist.

Inside the Parallel Node:

Branch 1 - Technical Evaluation
- Add Agent Node: Technical Evaluator
- Input: {{trigger.rfp_document}}
- Enable structured output with the JSON schema from the persona
- Timeout: 60 seconds

Branch 2 - Vendor Assessment
- Add Agent Node: Vendor Assessor
- Input: {{trigger.rfp_document}}
- Enable structured output
- Timeout: 60 seconds

Branch 3 - Commercial Analysis
- Add Agent Node: Commercial Analyst
- Input: {{trigger.rfp_document}}
- Enable structured output
- Timeout: 60 seconds

Branch 4 - Compliance & Risk
- Add Agent Node: Compliance & Risk Assessor
- Input: {{trigger.rfp_document}}
- Enable structured output
- Timeout: 60 seconds

Branch 5 - Timeline & Delivery
- Add Agent Node: Timeline & Delivery Evaluator
- Input: {{trigger.rfp_document}}
- Enable structured output
- Timeout: 60 seconds
All five agents will execute simultaneously, analyzing the same document from different perspectives. Total execution time should be ~30-60 seconds depending on document length.
Add the synthesis agent
After the Parallel Node, add the synthesis agent to compile results.
- Add Agent Node: RFP Synthesis Coordinator
- Configure inputs to receive all five evaluation outputs (a mapping sketch follows this list)
- Enable structured output to ensure consistent JSON format
- Timeout: 45 seconds (synthesis typically takes less time than analysis)
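A minimal sketch of that input mapping, assuming branch outputs are exposed through template variables like the trigger inputs above (the exact variable names depend on how MagOneAI addresses parallel branches):

```json
{
  "technical_evaluation": "{{parallel.branch1.output}}",
  "vendor_assessment": "{{parallel.branch2.output}}",
  "commercial_analysis": "{{parallel.branch3.output}}",
  "compliance_risk": "{{parallel.branch4.output}}",
  "timeline_delivery": "{{parallel.branch5.output}}",
  "evaluation_weights": "{{trigger.evaluation_weights}}"
}
```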
Configure structured output and validation
Ensure all agents produce consistent, machine-readable outputs.

For each specialist agent:
- Enable JSON mode or structured output
- Define output schema matching the persona’s JSON format
- Add validation rules to ensure scores are 0-10
- Configure retry logic if output parsing fails
For the synthesis agent:
- Define a comprehensive output schema (an example follows this list)
- Include all dimension scores, recommendations, and summaries
- Add fields for comparison (when evaluating multiple proposals)
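For example, a specialist's output schema might enforce the 0-10 range and a fixed recommendation vocabulary like this (standard JSON Schema; the field names and recommendation values are illustrative):

```json
{
  "type": "object",
  "properties": {
    "scores": {
      "type": "object",
      "additionalProperties": { "type": "number", "minimum": 0, "maximum": 10 }
    },
    "overall_score": { "type": "number", "minimum": 0, "maximum": 10 },
    "strengths": { "type": "array", "items": { "type": "string" } },
    "weaknesses": { "type": "array", "items": { "type": "string" } },
    "recommendation": { "enum": ["STRONG_YES", "YES", "NEUTRAL", "NO", "STRONG_NO"] },
    "summary": { "type": "string" }
  },
  "required": ["scores", "overall_score", "recommendation", "summary"]
}
```

If a model returns a score outside 0-10 or an unknown recommendation value, the retry logic re-prompts rather than passing bad data downstream.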
Add output formatting and delivery
Format and deliver the final evaluation report.

Option A: Store in database
- Add Tool Node or API Node to save results to your database
- Include all specialist outputs and synthesis report
- Tag with vendor name, RFP title, evaluation date
Option B: Generate a PDF report
- Add Tool Node to generate a formatted PDF report
- Include executive summary, scores, charts, detailed findings
- Save to document management system
Option C: Email the report
- Add Email Tool Node
- Send synthesis report to procurement team
- Include link to full evaluation details
- Attach original proposal for reference
- Use a Parallel Node to execute all three delivery options simultaneously, ensuring results are persisted, documented, and communicated
Test with sample RFP proposals
Validate your workflow with real or sample RFP documents.
1. Prepare test documents:
- Use an actual past RFP proposal (redact if needed)
- Or create a comprehensive sample proposal covering all evaluation areas
- Ensure document is in PDF or DOCX format
2. Run the workflow:
- Upload the test proposal
- Provide RFP title and vendor name
- Submit and monitor execution
3. Verify outputs:
- Check that all five specialist agents completed successfully
- Validate that scores are in 0-10 range
- Ensure recommendations are one of the defined values
- Confirm synthesis agent properly merged all findings
4. Review quality:
- Read through specialist evaluations for accuracy
- Check if identified strengths and weaknesses make sense
- Validate that overall recommendation aligns with scores
5. Test edge cases:
- Very short proposal (incomplete submission)
- Very long proposal (100+ pages)
- Proposal with missing sections
- Non-compliant proposal (missing mandatory requirements)
Optimize and fine-tune
Refine your workflow based on test results.

Persona refinement:
- Adjust evaluation criteria based on your organization’s priorities
- Modify scoring weights to match importance
- Add industry-specific evaluation dimensions
- Fine-tune what constitutes “STRONG_YES” vs “YES”

Performance tuning:
- Monitor agent execution times
- Optimize prompts for faster responses without sacrificing quality
- Consider using smaller models for simpler evaluation dimensions

Output enrichment:
- Add more detailed output fields if needed
- Include confidence scores for each evaluation
- Add comparative analysis fields for multi-proposal evaluation
- Generate actionable next steps automatically
Key concepts demonstrated
Multi-Agent Orchestration
Coordinate five specialist agents to analyze different dimensions of complex documents simultaneously
Parallel Execution
Reduce analysis time by 70-80% through concurrent agent execution instead of sequential processing
Structured Output Schemas
Enforce consistent, machine-readable JSON outputs that enable automated decision-making and comparison
Agent Persona Design
Create expert personas with specific evaluation criteria, scoring rubrics, and output formats
Synthesis and Aggregation
Combine multiple specialist perspectives into unified insights and recommendations
Domain Specialization
Assign focused responsibilities to each agent for deeper, more accurate analysis than generalist approaches
Customization ideas
Extend this workflow to match your specific RFP evaluation needs.

Add RAG with past evaluations
Enhance consistency by learning from historical evaluations:
- Create a knowledge base of past RFP evaluations
- Include successful and unsuccessful proposals
- Attach the knowledge base to each specialist agent
- Agents can reference past scoring patterns and learn from historical decisions
- Include lessons learned and evaluation best practices
Include Human Task for final approval
Add human oversight before final decisions:
- Add a Condition Node after synthesis agent
- Route proposals with borderline scores (e.g., 6.5-7.5) to human review
- Create a Human Task with:
- Complete evaluation summary
- All specialist reports attached
- Approval/Rejection/Request More Info actions
- Auto-approve strong proposals (8.5+) and auto-reject weak ones (below 6.0)
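Assuming the Condition Node supports threshold rules on the synthesis output, the routing might be sketched like this (syntax illustrative; anything between the auto-approve and auto-reject thresholds, including the borderline 6.5-7.5 band, falls through to human review):

```json
{
  "rules": [
    { "when": "synthesis.overall_score >= 8.5", "route": "auto_approve" },
    { "when": "synthesis.overall_score < 6.0", "route": "auto_reject" },
    { "default": "human_review" }
  ]
}
```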
Add scoring weights per dimension
Customize importance of evaluation dimensions by project type:
- Create weight profiles (e.g., “Technical-Heavy”, “Cost-Sensitive”, “Compliance-Critical”)
- Pass weights as the evaluation_weights JSON workflow input
- Modify the synthesis agent to apply custom weights (see the sketch after this list)
- Store weight profiles in a database for reuse
- Allow users to select profile at workflow trigger time
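A weight profile might be stored and passed like this (profile name from the examples above; values illustrative and summing to 1.0):

```json
{
  "profile": "Technical-Heavy",
  "weights": {
    "technical": 0.40,
    "vendor": 0.15,
    "commercial": 0.15,
    "compliance": 0.15,
    "timeline": 0.15
  }
}
```

With weights summing to 1.0, the synthesis agent computes the overall score as the sum of each dimension score multiplied by its weight.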
Build multi-proposal comparison
Evaluate multiple proposals side-by-side:
- Modify trigger to accept multiple RFP documents
- Use a Loop Node or multiple Parallel Nodes to evaluate all proposals
- Create a Comparison Agent that ranks proposals:
- Side-by-side scoring tables
- Relative strengths and weaknesses
- Winner recommendation with justification
- Risk-adjusted scoring
- Generate comparison matrix visualizations
- Export comparison report for stakeholder review
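The Comparison Agent's structured output might take a shape like this (vendor names and values are placeholders):

```json
{
  "ranking": [
    { "rank": 1, "vendor": "Vendor A", "overall_score": 8.1, "rationale": "Strongest technical fit at a moderate price" },
    { "rank": 2, "vendor": "Vendor B", "overall_score": 7.4, "rationale": "Lowest cost but a weaker delivery plan" }
  ],
  "winner_recommendation": "Vendor A",
  "risk_adjusted_note": "Vendor B's timeline risk outweighs its price advantage"
}
```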
Add sentiment and writing quality analysis
Create an additional specialist for proposal quality:
- Writing Quality Agent evaluates:
- Clarity and professionalism
- Completeness of responses
- Attention to detail
- Persuasiveness and communication skills
- Lower scores may indicate vendor capability concerns
- Add as a 6th parallel branch
Integrate with procurement systems
Connect your RFP workflow to enterprise systems:
- CRM integration: Pull vendor history and relationship data
- ERP integration: Compare pricing with budget and past contracts
- Contract management: Auto-create contracts for approved proposals
- Approval workflows: Route high-value RFPs through multi-level approvals
- Vendor portals: Automatically notify vendors of evaluation results
Add reference checking automation
Automate vendor reference verification:
- Extract reference contacts from proposal
- Send automated reference check emails with structured questionnaires
- Create a Reference Verification Agent to analyze responses
- Add reference scores to vendor assessment
- Flag discrepancies between vendor claims and references
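A structured questionnaire sent to references might be encoded like this (questions and field names are illustrative):

```json
{
  "vendor": "{{trigger.vendor_name}}",
  "questions": [
    { "id": "q1", "text": "Did the vendor deliver on time and on budget?", "type": "scale_0_10" },
    { "id": "q2", "text": "Would you engage this vendor again?", "type": "yes_no" },
    { "id": "q3", "text": "Describe any significant issues during delivery.", "type": "free_text" }
  ]
}
```

Structured answers make it straightforward for the Reference Verification Agent to score responses and flag discrepancies.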
Example evaluation output
Here’s what a synthesis report might look like. The sketch below uses fabricated values; your fields will follow the schema you defined:
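```json
{
  "rfp_title": "Cloud Migration Services RFP",
  "vendor_name": "Acme Corp",
  "evaluation_date": "2025-01-15",
  "dimension_scores": {
    "technical": 8.2,
    "vendor": 7.5,
    "commercial": 6.8,
    "compliance": 9.0,
    "timeline": 7.0
  },
  "overall_score": 7.7,
  "recommendation": "YES",
  "key_strengths": [
    "Strong security and compliance posture",
    "Proven migration methodology with relevant references"
  ],
  "key_risks": [
    "Pricing above budget midpoint",
    "Aggressive timeline for the data migration phase"
  ],
  "executive_summary": "Acme Corp submits a technically solid, fully compliant proposal. Commercial terms and the delivery timeline warrant negotiation before award."
}
```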
Next steps
Now that you’ve built an RFP analysis workflow, explore related cookbooks:
- Sales Intelligence Assistant - Another parallel research pattern for meeting preparation
- Document Compliance Review - Apply similar analysis to contracts and legal documents
- KYB Document Verification - See parallel processing for multi-document workflows
Need help customizing this workflow for your specific RFP evaluation process? Contact our solutions team for guidance.