What you’ll build
In this quickstart, you’ll create a web research bot: an AI agent that takes a topic, searches the web, and delivers a structured summary. It takes about 5 minutes. By the end, you’ll understand the three building blocks of every MagOneAI bot:
- Agent — the AI persona that reasons and acts
- Workflow — the sequence of steps the agent follows
- Execution — a single run of the workflow with real input
Prerequisites
Before you start, make sure you have:
- Access to a MagOneAI instance (Studio portal)
- An LLM provider configured (GPT-4o, Claude, or any OpenAI-compatible model)
- The Web Search tool available in your project
Step-by-step
Create a project
If you don’t have a project yet:
- Open MagOneAI Studio
- Click New Project
- Give it a name (e.g., “My First Bots”)
Create a workflow (Use Case)
- Go to Use Cases in the left sidebar
- Click Create Use Case
- Name it “Web Research Bot”
- Open the Canvas (workflow builder)
Add an Agent node and configure it
Your workflow needs just three nodes:
- The Start and End nodes are already on the canvas
- Drag an Agent node from the sidebar onto the canvas
- Connect: Start → Agent → End
- Click the Agent node to open its properties panel
- Configure the agent:
| Field | Value |
|---|---|
| Name | Web Researcher |
| Role | Research analyst |
- Set the persona instructions (the system prompt that defines how the agent behaves)
- Under Tools, add the Web Search tool
- Enable Can execute tools
- Click Save
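The settings above can be pictured as plain data. This is a sketch, not the MagOneAI API — field names like `can_execute_tools` are invented here to mirror the Studio form, and the validation rule is just an illustration of why the tool toggle and tool list go together:

```python
# Illustrative only: a plain-Python stand-in for the Agent node's
# configuration. Field names mirror the Studio form, not a real API.
agent_config = {
    "name": "Web Researcher",
    "role": "Research analyst",
    "tools": ["web_search"],        # tools the agent may call
    "can_execute_tools": True,      # mirrors the "Can execute tools" toggle
}

def validate(config: dict) -> list[str]:
    """Return a list of problems; empty means the node is ready to save."""
    problems = []
    if not config.get("name"):
        problems.append("agent needs a name")
    if config.get("can_execute_tools") and not config.get("tools"):
        problems.append("tool execution enabled but no tools attached")
    return problems

assert validate(agent_config) == []
```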
Run your bot
- Click Execute (or Run) on the canvas
- Enter your input (for example, a topic to research)
Watch the execution progress:
- Start node processes input
- Agent node calls the LLM, which searches the web and synthesizes results
- End node returns the final output
- View the result — a structured research summary with sources
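The three-node flow above can be sketched as ordinary function composition, with the LLM and search step stubbed out. All names here are illustrative, not MagOneAI’s runtime API:

```python
# A toy model of the Start -> Agent -> End execution, with the LLM/tool
# step replaced by a stub. Node and function names are illustrative.
def start_node(user_input: str) -> dict:
    return {"topic": user_input.strip()}

def agent_node(state: dict) -> dict:
    # Stand-in for: the LLM reasons, calls Web Search, synthesizes results.
    summary = f"Summary of findings on '{state['topic']}' (with sources)."
    return {**state, "summary": summary}

def end_node(state: dict) -> str:
    return state["summary"]

def run_workflow(user_input: str) -> str:
    return end_node(agent_node(start_node(user_input)))

print(run_workflow("quantum computing startups"))
```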
What just happened?
Here’s what MagOneAI did under the hood:
1. Temporal created a durable workflow
Your execution runs on Temporal — the same engine used by Uber and Stripe. If the server crashed mid-execution, it would resume exactly where it left off.
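The resume-after-crash behavior can be illustrated with a toy event log: completed steps are recorded, so a restarted run replays history instead of redoing work. Temporal’s real replay mechanism is far richer than this sketch, but the core idea is the same:

```python
# Toy illustration of durability: completed steps are recorded in an
# event log, so a restarted run skips work that already finished.
def run_steps(steps, log):
    for name, fn in steps:
        if name in log:      # already completed before the crash
            continue         # replay: skip, result is in the log
        log[name] = fn()     # first execution: record the result
    return log

steps = [("start", lambda: "input"),
         ("agent", lambda: "summary"),
         ("end", lambda: "output")]

log = {"start": "input"}     # pretend the server crashed after "start"
run_steps(steps, log)        # resumes at "agent", not from scratch
assert list(log) == ["start", "agent", "end"]
```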
2. The agent reasoned about your query
The LLM received your persona instructions, the user’s input, and the list of available tools. It decided to call the web search tool with relevant search queries.
3. Tools executed automatically
The agent called the Web Search MCP server, which returned search results. The agent then synthesized these results into a structured summary — all within a single Agent node.
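In miniature, that tool-calling loop looks like the sketch below. Here `web_search` is a stub standing in for the Web Search MCP server, not a real client, and the synthesis step is a simple string join rather than an LLM call:

```python
# Sketch of the tool loop inside the Agent node: call the search tool,
# then synthesize the results into a summary. Names are illustrative.
def web_search(query: str) -> list[str]:
    # Stub for the Web Search MCP server.
    return [f"result about {query} (source: example.com)"]

def agent_step(topic: str) -> str:
    results = web_search(topic)                  # agent decides to call the tool
    bullets = "\n".join(f"- {r}" for r in results)
    return f"Findings on {topic}:\n{bullets}"    # synthesis into a summary

print(agent_step("solar panels"))
```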
4. Structured output was produced
MagOneAI uses DSPy to ensure the agent produces structured, validated output — not just raw text. This makes outputs reliable and machine-readable.
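The benefit of a typed output schema can be shown without DSPy itself. This plain-dataclass sketch (not the DSPy API) fails loudly when a field is missing, instead of silently passing malformed text downstream:

```python
from dataclasses import dataclass, fields

# Not the DSPy API — a plain-Python illustration of validated,
# machine-readable output versus raw text.
@dataclass
class ResearchSummary:
    topic: str
    key_findings: list
    sources: list

def parse_output(raw: dict) -> ResearchSummary:
    missing = [f.name for f in fields(ResearchSummary) if f.name not in raw]
    if missing:
        raise ValueError(f"missing fields: {missing}")  # fail loudly
    return ResearchSummary(**raw)

summary = parse_output({"topic": "EV batteries",
                        "key_findings": ["solid-state is advancing"],
                        "sources": ["example.com"]})
assert summary.topic == "EV batteries"
```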
Make it more powerful
Now that you have a working bot, here are quick ways to level it up:
Add conditional logic
Route to different agents based on the query type — questions go one way, research requests go another
Add human approval
Pause the workflow for human review before sending results to a customer
Run agents in parallel
Research multiple topics simultaneously — 3 agents running at the same time
Process files and data
Upload CSVs or PDFs and have agents analyze, summarize, or review them
Next: Build more bots
Ready to build something more advanced? Check out these quick bot recipes — each one takes under 10 minutes:
5 bots you can build today
Step-by-step recipes for support router, company research, competitor analysis, CSV analyzer, and document reviewer — no OAuth setup needed
Sales intelligence assistant
A full multi-agent workflow with parallel research — the most impressive demo