Purpose
The ForEach node loops over a collection and executes a set of activities for each item. Process arrays, lists, or any iterable data structure within your workflows. Essential for batch processing, document collections, and any scenario where you need to apply the same logic to multiple items. ForEach nodes bring the power of iteration to visual workflows, enabling you to process dynamic collections without hard-coding the number of items.
How it works
The ForEach node executes a child use case (sub-workflow) for each item in a collection. Items are processed in configurable batches for parallel throughput.
Resolve source collection
The ForEach node resolves the source path to an array from the variable store. This could be a list of documents, customers, transactions, or any array of data produced by a previous node.
Validate configuration
The node validates that the source is an array, the child use case exists in the same project, and item count is within limits (default max: 1,000 items, hard max: 10,000).
Map inputs for each item
For each item, inputs are mapped to the child use case using one of two modes:
- Passthrough — The entire item is passed under an item_variable key
- Field mapping — Specific fields from each item are mapped to child input fields
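The two modes can be sketched as configuration shapes. This is a hypothetical rendering as Python dicts: the field names follow this page, but the exact schema is an assumption.

```python
# Hypothetical ForEach configurations for the two input modes.
# Field names follow this page; the exact schema is an assumption.

passthrough_config = {
    "source": "extract.documents",         # illustrative collection path
    "use_case_id": "process-single-document",
    "item_variable": "document",           # child receives {"document": <item>}
}

field_mapping_config = {
    "source": "candidates",
    "use_case_id": "evaluate-candidate",
    "input_mapping": {                     # child input field <- item field
        "name": "full_name",
        "resume_url": "resume",
    },
}

# The modes are mutually exclusive: a config uses either item_variable
# (passthrough) or input_mapping (field mapping), not both.
print(sorted(passthrough_config))
```
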
Execute in batches
Items are processed in concurrent batches (default batch size: 10). Each batch runs child workflows in parallel via Temporal.
ForEach nodes execute a child use case for each item — not inline activities. This means you need to create a separate use case (workflow) that processes a single item, then reference it from the ForEach node.
Configuration
Configure a ForEach node with the source collection, child use case, and input mapping.
Source (collection path)
The source parameter specifies the dot-notation path to an array in the variable store.
Example source paths:
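A few illustrative dot-notation paths and how they resolve against a variable store. The store contents and path names here are hypothetical:

```python
# Hypothetical variable store; keys and paths are illustrative only.
variable_store = {
    "extract_rows": {"rows": [{"id": 1}, {"id": 2}]},
    "customers": [{"name": "Ada"}],
}

def resolve(store: dict, path: str):
    """Walk a dot-notation path (e.g. 'extract_rows.rows') through nested dicts."""
    value = store
    for key in path.split("."):
        value = value[key]
    return value

print(resolve(variable_store, "extract_rows.rows"))  # the two-row list
print(resolve(variable_store, "customers"))          # the customer list
```
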
Child use case
The use_case_id parameter specifies which use case (workflow) to execute for each item. The child use case must be in the same project.
Create a separate workflow that processes a single item, then reference it from the ForEach node. This promotes reusability — the same child use case can be used by multiple ForEach nodes or called directly.
Input modes
ForEach supports two mutually exclusive input modes for passing data to child workflows:
- Passthrough mode (default)
- Field mapping mode
The entire item is passed to the child use case under a variable key. The child use case receives {"row": <entire_item>} as input.
If item_variable is not specified, it is automatically derived from the source path:
- rows → row
- customers → customer
- entries → entry
- items → item
Static inputs
Pass constant values to every child execution alongside the item data.
Batch size and concurrency
Items are processed in concurrent batches (default batch size: 10), so the approximate total duration is:
ceil(items / batch_size) × time_per_slowest_child_in_batch
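A quick worked version of that estimate, with illustrative numbers:

```python
import math

items = 100
batch_size = 10               # default per this page
slowest_child_seconds = 30    # illustrative

batches = math.ceil(items / batch_size)
approx_total = batches * slowest_child_seconds
print(batches, approx_total)  # 10 batches, roughly 300 seconds
```
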
Error handling
Configure how to handle errors during iteration:
- Continue on error (default)
- Fail fast
Continue processing remaining items even if some fail. Use when item failures are independent: process what you can and collect errors at the end.
Missing field handling
When using field mapping mode, configure how to handle missing fields in items:

| Strategy | Behavior |
|---|---|
| "null" (default) | Missing fields resolve to null |
| "error" | Missing fields cause the item to fail |
| "skip" | Items with missing fields are skipped |
ForEach output
After all items are processed, the ForEach node stores its results in the variable store. The output includes:
- Results array — Output from each child execution, in order
- Success/failure counts — How many items succeeded or failed
- Error details — For failed items, error messages and which items failed
Examples
Example 1: Process batch of documents
Scenario: Process multiple uploaded documents, extracting data from each. First, create a child use case called “Process Single Document” that handles one document. Then configure the ForEach to iterate over the uploaded documents. Each child execution receives {"document": {"url": "...", "type": "..."}} as input and runs its own workflow: extract → validate → store.
Result: All documents processed in parallel batches of 5, with failures collected at the end.
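A hypothetical configuration for this example, shown as a Python dict. Field names follow this page, but the exact schema and path are assumptions:

```python
# Hypothetical ForEach config for Example 1; schema and the source
# path are assumptions, field names follow this page.
foreach_config = {
    "source": "upload.documents",           # illustrative path
    "use_case_id": "process-single-document",
    "item_variable": "document",            # child gets {"document": <item>}
    "batch_size": 5,                        # matches "batches of 5" above
    "on_item_error": "continue",            # collect failures at the end
}
print(foreach_config["use_case_id"])
```
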
Example 2: Evaluate candidates with field mapping
Scenario: Evaluate job candidates using a reusable evaluation workflow. Specific fields from each candidate are mapped to the child's inputs, and static evaluation_criteria are passed to every child execution.
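A hypothetical configuration for this example. The static-inputs key name and the mapped field names are assumptions; only input_mapping and evaluation_criteria come from this page:

```python
# Hypothetical ForEach config for Example 2; the "static_inputs" key
# and the field names are assumptions for illustration.
foreach_config = {
    "source": "candidates",
    "use_case_id": "evaluate-candidate",
    "input_mapping": {                      # child field <- item field
        "name": "full_name",
        "resume_url": "resume",
    },
    "static_inputs": {                      # same value for every child
        "evaluation_criteria": ["experience", "skills", "communication"],
    },
}
print(foreach_config["static_inputs"]["evaluation_criteria"])
```
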
Example 3: Dynamic field mapping from an agent
Scenario: An agent analyzes a data source and determines the field mappings dynamically. The mapping-agent produces a field mapping dict like {"customer_name": "col_a", "email": "col_c"}, which is then used to map each row to the child use case’s inputs. Rows missing required fields are skipped.
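The flow can be sketched as follows. The mapping dict is taken from this page; the rows and the skip helper are illustrative:

```python
# Agent-produced mapping, as quoted above; rows are illustrative.
agent_mapping = {"customer_name": "col_a", "email": "col_c"}

rows = [
    {"col_a": "Ada Lovelace", "col_c": "ada@example.com"},
    {"col_a": "Grace Hopper"},              # missing col_c -> skipped
]

def map_row(row: dict, mapping: dict):
    """Apply the agent's mapping; return None when a required field is missing
    (the "skip" strategy). Hypothetical helper, not the engine's code."""
    if any(field not in row for field in mapping.values()):
        return None
    return {child: row[src] for child, src in mapping.items()}

mapped = [m for m in (map_row(r, agent_mapping) for r in rows) if m is not None]
print(mapped)  # only the complete row survives
```
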
Advanced patterns
ForEach + Parallel for multi-step item processing
Create a child use case that uses Parallel nodes internally.
Chained ForEach with filtering
Use a Condition node after a ForEach to filter results, then feed into another ForEach.
ForEach with Sub Use Case children
The child use case itself can contain Sub Use Case nodes, enabling deep composition.
Best practices
Design focused child use cases
The child use case should do one thing well — process a single document, evaluate one candidate, handle one transaction. Keep it focused and testable independently.
Use semantic item variable names
Instead of relying on the default derived name, explicitly set item_variable to a descriptive name: document, customer, transaction, candidate.
Set appropriate batch sizes
The default batch size of 10 works well for most cases. Consider:
- Lower (3-5) for expensive child workflows or rate-limited APIs
- Higher (15-20) for lightweight child workflows
- API rate limits and resource availability
Use continue mode for independent items
When items are independent (processing one doesn’t affect others), use on_item_error: "continue" to maximize throughput. Review failures after completion.
Set max_items as a safety limit
Always set max_items to prevent unexpectedly large collections from consuming excessive resources. The default limit is 1,000 items.
Test child use case independently
Before connecting a child use case to a ForEach node, test it independently with sample items. This catches issues early.
Use field mapping for type safety
When child use cases expect specific input fields, use input_mapping instead of passthrough. This makes the contract between ForEach and child explicit.
Performance considerations
Batch processing
Items are processed in concurrent batches, so runtime is governed by the number of batches and the slowest child in each batch.
Cost optimization
LLM costs can add up quickly with ForEach loops. Example:
- Collection: 1,000 items
- Agent cost per item: $0.05
- Total cost: $50

To control costs:
- Filter the collection before iteration (use a Condition node to remove irrelevant items)
- Use cheaper models for simple tasks in the child use case
- Set max_items to prevent runaway costs
- Monitor child execution costs in the analytics dashboard
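The arithmetic behind that example, plus the effect of a max_items cap (the cap value here is hypothetical):

```python
# Worked version of the cost example above.
items = 1_000
cost_per_item = 0.05                      # dollars per agent call, per the example
total = items * cost_per_item
print(total)                              # 50.0 -> the $50 quoted above

max_items = 200                           # hypothetical safety limit
capped = min(items, max_items) * cost_per_item
print(capped)                             # 10.0 -> worst-case spend with the cap
```
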
Next steps
Parallel node
Compare ForEach with Parallel execution patterns
Condition node
Use conditions within ForEach loops
Sub Use Case
Call sub-workflows within ForEach loops
Memory system
Access loop variables and results