
Purpose

The ForEach node loops over a collection and executes a set of activities for each item. Use it to process arrays, lists, or any iterable data structure within your workflows. It is essential for batch processing, document collections, and any scenario where the same logic must be applied to multiple items. ForEach nodes bring iteration to visual workflows, letting you process dynamic collections without hard-coding the number of items.

How it works

The ForEach node executes a child use case (sub-workflow) for each item in a collection. Items are processed in configurable batches for parallel throughput.
1. Resolve source collection

The ForEach node resolves the source path to an array in the variable store. This could be a list of documents, customers, transactions, or any array of data produced by a previous node.
2. Validate configuration

The node validates that the source is an array, that the child use case exists in the same project, and that the item count is within limits (default max: 1,000 items, hard max: 10,000).
3. Map inputs for each item

For each item, inputs are mapped to the child use case using one of two modes:
  • Passthrough — the entire item is passed under an item_variable key
  • Field mapping — specific fields from each item are mapped to child input fields
4. Execute in batches

Items are processed in concurrent batches (default batch size: 10). Each batch runs child workflows in parallel via Temporal.
5. Collect results

Results from all child executions are collected. Error handling is configurable: continue processing remaining items, or fail fast on the first error.
ForEach nodes execute a child use case for each item — not inline activities. This means you need to create a separate use case (workflow) that processes a single item, then reference it from the ForEach node.
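The five steps above can be sketched in code. This is a minimal illustration, not the platform's implementation: run_child stands in for launching a child use case (the real system dispatches Temporal child workflows, not threads), and the batching and error-handling behavior follows the description above.

```python
from concurrent.futures import ThreadPoolExecutor

def run_foreach(items, run_child, batch_size=10, on_item_error="continue"):
    # Sketch of the ForEach loop: items are split into batches, each batch
    # runs in parallel, and batches execute sequentially.
    results, errors = [], []
    for start in range(0, len(items), batch_size):
        batch = items[start:start + batch_size]
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            futures = [pool.submit(run_child, item) for item in batch]
            for offset, future in enumerate(futures):
                try:
                    results.append(future.result())
                except Exception as exc:
                    if on_item_error != "continue":
                        raise  # fail fast on the first error
                    errors.append({"index": start + offset, "error": str(exc)})
    return {
        "results": results,
        "succeeded": len(results),
        "failed": len(errors),
        "errors": errors,
    }
```

The result dictionary mirrors the collected output described in step 5: per-item results in order, plus counts and error details.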

Configuration

Configure a ForEach node with the source collection, child use case, and input mapping.

Source (collection path)

The source parameter specifies the dot-notation path to an array in the variable store. Example source paths:
input.documents          → Array from trigger input
extract-tool.rows        → Array produced by a tool node
agent-xyz.items          → Array produced by an agent node
input.customers          → Array of customer objects
The source must resolve to an array. The ForEach node validates this at execution time.
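Conceptually, resolving a dot-notation path is a walk through nested dictionaries in the variable store, followed by an array check. A minimal sketch (the function name and error messages are illustrative, not the platform's API):

```python
def resolve_source(variable_store: dict, path: str) -> list:
    # Walk the dot-notation path, e.g. "extract-tool.rows".
    value = variable_store
    for key in path.split("."):
        if not isinstance(value, dict) or key not in value:
            raise KeyError(f"Path segment {key!r} not found in variable store")
        value = value[key]
    # The resolved value must be an array.
    if not isinstance(value, list):
        raise TypeError(f"Source {path!r} did not resolve to an array")
    return value
```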

Child use case

The use_case_id specifies which use case (workflow) to execute for each item. The child use case must be in the same project. Create a separate workflow that processes a single item, then reference it from the ForEach node. This promotes reusability — the same child use case can be used by multiple ForEach nodes or called directly.

Input modes

ForEach supports two mutually exclusive input modes for passing data to child workflows: passthrough and field mapping (via input_mapping, shown in the examples below).

Passthrough mode

The entire item is passed to the child use case under a variable key.
{
  "source": "extract-tool.rows",
  "use_case_id": "process-row-workflow",
  "item_variable": "row"
}
The child use case receives {"row": <entire_item>} as input. If item_variable is not specified, it is automatically derived from the source path:
  • rows → row
  • customers → customer
  • entries → entry
  • items → item
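The derivation above amounts to naive English singularization of the last path segment. A sketch of that rule (an assumption — the platform's exact singularization rules may differ):

```python
def derive_item_variable(source: str) -> str:
    # Take the last segment of the dot path and singularize it naively.
    name = source.rsplit(".", 1)[-1]
    if name.endswith("ies"):
        return name[:-3] + "y"   # entries -> entry
    if name.endswith("s"):
        return name[:-1]         # rows -> row
    return name
```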

Static inputs

Pass constant values to every child execution alongside the item data:
{
  "source": "input.documents",
  "use_case_id": "process-document",
  "item_variable": "document",
  "static_inputs": {
    "verification_level": "standard",
    "output_format": "json"
  }
}
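In passthrough mode, the child's input is effectively the item under its variable key merged with the static inputs. A minimal sketch (build_child_input is a hypothetical helper, not a platform function):

```python
def build_child_input(item, item_variable, static_inputs=None):
    # The item goes under its variable key; static inputs are
    # added unchanged to every child execution.
    child_input = {item_variable: item}
    child_input.update(static_inputs or {})
    return child_input
```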

Batch size and concurrency

Items are processed in concurrent batches:
{
  "batch_size": 10,          // Items per batch (default: 10)
  "max_items": 1000,         // Safety limit (default: 1000, max: 10000)
  "child_timeout_minutes": 30 // Per-child timeout (default: 30)
}
Within each batch, child workflows run in parallel via Temporal. Batches are processed sequentially. Processing time: ceil(items / batch_size) × time_per_slowest_child_in_batch
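The processing-time formula can be expressed directly. A small sketch of the estimate (a rough upper bound, assuming each batch is gated by its slowest child):

```python
import math

def estimated_minutes(num_items, batch_size, slowest_child_minutes):
    # Batches run sequentially; within a batch, children run in parallel,
    # so each batch takes roughly as long as its slowest child.
    num_batches = math.ceil(num_items / batch_size)
    return num_batches * slowest_child_minutes
```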

Error handling

Configure how to handle errors during iteration. With on_item_error set to "continue", the node keeps processing remaining items even if some fail:
{
  "on_item_error": "continue"
}
Use when: item failures are independent. Process what you can and collect errors at the end. The alternative is to fail fast, aborting the loop on the first error.

Missing field handling

When using field mapping mode, configure how to handle missing fields in items:
Strategy            Behavior
"null" (default)    Missing fields resolve to null
"error"             Missing fields cause the item to fail
"skip"              Items with missing fields are skipped

ForEach output

After all items are processed, the ForEach node stores its results in the variable store. The output includes:
  • Results array — Output from each child execution, in order
  • Success/failure counts — How many items succeeded or failed
  • Error details — For failed items, error messages and which items failed
Downstream nodes can access these results using the ForEach node’s ID in the variable store.
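For illustration, assuming the output is stored under the ForEach node's ID (the node ID and key names below are hypothetical), a downstream node might see something like:

```python
# Hypothetical variable-store snapshot after a ForEach node with ID "process-docs".
variable_store = {
    "process-docs": {
        "results": [{"status": "stored"}, {"status": "stored"}],  # per-item outputs, in order
        "succeeded": 2,
        "failed": 1,
        "errors": [{"index": 2, "error": "child timeout"}],
    }
}

# A downstream node reads the results via the ForEach node's ID.
foreach_output = variable_store["process-docs"]
successful_results = foreach_output["results"]
```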

Examples

Example 1: Process batch of documents

Scenario: Process multiple uploaded documents, extracting data from each. First, create a child use case called “Process Single Document” that handles one document. Then configure the ForEach:
{
  "source": "input.documents",
  "use_case_id": "process-single-document",
  "item_variable": "document",
  "batch_size": 5,
  "on_item_error": "continue",
  "child_timeout_minutes": 10
}
The child use case “Process Single Document” receives {"document": {"url": "...", "type": "..."}} and runs its own workflow: extract → validate → store. Result: All documents processed in parallel batches of 5, with failures collected at the end.

Example 2: Evaluate candidates with field mapping

Scenario: Evaluate job candidates using a reusable evaluation workflow.
{
  "source": "agent-xyz.candidates",
  "use_case_id": "evaluate-candidate",
  "input_mapping": {
    "candidate_name": "name",
    "resume_url": "resume.url",
    "years_experience": "experience.years"
  },
  "static_inputs": {
    "evaluation_criteria": "technical_lead",
    "department": "engineering"
  },
  "batch_size": 10,
  "on_item_error": "continue"
}
Each candidate item’s fields are mapped to the child use case’s expected inputs. Static inputs like evaluation_criteria are passed to every child execution.

Example 3: Dynamic field mapping from an agent

Scenario: An agent analyzes a data source and determines the field mappings dynamically.
{
  "source": "extract-tool.rows",
  "use_case_id": "process-row",
  "input_mapping_source": "mapping-agent.field_mappings",
  "on_missing_field": "skip",
  "batch_size": 20
}
The mapping-agent produces a field mapping dict like {"customer_name": "col_a", "email": "col_c"}, which is then used to map each row to the child use case’s inputs. Rows missing required fields are skipped.

Advanced patterns

ForEach + Parallel for multi-step item processing

Create a child use case that uses Parallel nodes internally:
Child Use Case: "Evaluate Candidate"
  1. Parallel Node:
     ├─ Agent: "Technical assessment"
     ├─ Agent: "Experience evaluation"
     └─ Agent: "Culture fit"
  2. Agent: "Synthesize scores"
  3. Tool: "Store evaluation"
Then use ForEach to process all candidates through this pipeline:
{
  "source": "input.candidates",
  "use_case_id": "evaluate-candidate",
  "batch_size": 5
}

Chained ForEach with filtering

Use a Condition node after a ForEach to filter results, then feed into another ForEach:
1. ForEach: "Validate all documents" → produces results array
2. Agent: "Filter valid documents from results"
3. ForEach: "Process valid documents" (using filtered array)

ForEach with Sub Use Case children

The child use case itself can contain Sub Use Case nodes, enabling deep composition:
ForEach → Child Use Case: "Process Order"
  └─ Sub Use Case: "Validate Payment"
  └─ Sub Use Case: "Update Inventory"
  └─ Sub Use Case: "Send Confirmation"

Best practices

The child use case should do one thing well — process a single document, evaluate one candidate, handle one transaction. Keep it focused and testable independently.
Instead of relying on the default derived name, explicitly set item_variable to a descriptive name: document, customer, transaction, candidate.
The default batch size of 10 works well for most cases. Consider:
  • Lower (3-5) for expensive child workflows or rate-limited APIs
  • Higher (15-20) for lightweight child workflows
  • API rate limits and resource availability
When items are independent (processing one doesn’t affect others), use on_item_error: "continue" to maximize throughput. Review failures after completion.
Always set max_items to prevent unexpected large collections from consuming excessive resources. The default limit is 1,000 items.
Before connecting a child use case to a ForEach node, test it independently with sample items. This catches issues early.
When child use cases expect specific input fields, use input_mapping instead of passthrough. This makes the contract between ForEach and child explicit.
ForEach child workflows are independent Temporal executions. Each child has its own variable store — they cannot access the parent’s variables or each other’s state.

Performance considerations

Batch processing

Items are processed in concurrent batches:
Items: [A, B, C, D, E, F, G, H, I, J]
Batch size: 5
Time per item: 1 minute

Batch 1: A, B, C, D, E (parallel) → 1 minute
Batch 2: F, G, H, I, J (parallel) → 1 minute
Total time: ~2 minutes (not 10 minutes)

Cost optimization

LLM costs can add up quickly with ForEach loops. For example:
  • Collection: 1,000 items
  • Agent cost per item: $0.05
  • Total cost: $50
Optimization strategies:
  • Filter collection before iteration (use a Condition node to remove irrelevant items)
  • Use cheaper models for simple tasks in the child use case
  • Set max_items to prevent runaway costs
  • Monitor child execution costs in the analytics dashboard

Next steps

Parallel node

Compare ForEach with Parallel execution patterns

Condition node

Use conditions within ForEach loops

Sub Use Case

Call sub-workflows within ForEach loops

Memory system

Access loop variables and results