Parallel Block YAML Schema
YAML configuration reference for Parallel blocks
Schema Definition
type: object
required:
- type
- name
- inputs
- connections
properties:
type:
type: string
enum: [parallel]
description: Block type identifier
name:
type: string
description: Display name for this parallel block
inputs:
type: object
required:
- parallelType
properties:
parallelType:
type: string
enum: [count, collection]
description: Type of parallel execution
count:
type: number
description: Number of parallel instances (for 'count' type)
minimum: 1
maximum: 100
collection:
type: string
description: Collection to distribute across instances (for 'collection' type)
maxConcurrency:
type: number
description: Maximum concurrent executions
default: 10
minimum: 1
maximum: 50
connections:
type: object
required:
- parallel
properties:
parallel:
type: object
required:
- start
properties:
start:
type: string
description: Target block ID to execute inside each parallel instance
end:
type: string
description: Target block ID after all parallel instances complete (optional)
success:
type: string
description: Target block ID after all instances complete (alternative format)
error:
type: string
description: Target block ID for error handling
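As a quick orientation, the sketch below shows minimal configurations that satisfy the required fields for each parallelType; the block IDs and the collection reference are illustrative, not part of the schema.
# Count-based: run a fixed number of instances
simple-parallel:
  type: parallel
  name: "Simple Parallel"
  inputs:
    parallelType: count
    count: 3
  connections:
    parallel:
      start: child-task
# Collection-based: distribute items across instances
items-parallel:
  type: parallel
  name: "Items Parallel"
  inputs:
    parallelType: collection
    collection: <start.items>
  connections:
    parallel:
      start: child-task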
Connection Configuration
Parallel blocks use a special connection format with a parallel section:
connections:
parallel:
start: <string> # Target block ID to execute inside each parallel instance
end: <string> # Target block ID after all instances complete (optional)
error: <string> # Target block ID for error handling (optional)
Alternative format (legacy):
connections:
success: <string> # Target block ID after all instances complete
error: <string> # Target block ID for error handling (optional)
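For a concrete picture, a filled-in connections block might look like the following sketch; the target block IDs are illustrative:
connections:
  parallel:
    start: process-item      # runs inside each parallel instance
    end: aggregate-results   # runs once after all instances finish
    error: handle-failure    # runs if an instance fails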
Child Block Configuration
Blocks inside a parallel block must have their parentId set to the parallel block ID:
parallel-1:
type: parallel
name: "Process Items"
inputs:
parallelType: collection
collection: <start.items>
connections:
parallel:
start: process-item
end: aggregate-results
# Child block inside the parallel
process-item:
type: agent
name: "Process Item"
parentId: parallel-1 # References the parallel block
inputs:
systemPrompt: "Process this item"
userPrompt: <parallel.currentItem>
model: gpt-4o
apiKey: '{{OPENAI_API_KEY}}'
Examples
Count-Based Parallel Processing
worker-parallel:
type: parallel
name: "Worker Parallel"
inputs:
parallelType: count
count: 5
maxConcurrency: 3
connections:
parallel:
start: worker-task
end: collect-worker-results
worker-task:
type: api
name: "Worker Task"
parentId: worker-parallel
inputs:
url: "https://api.worker.com/process"
method: POST
headers:
- key: "Authorization"
value: "Bearer {{WORKER_API_KEY}}"
body: |
{
"instanceId": <parallel.index>,
"timestamp": "{{new Date().toISOString()}}"
}
connections:
success: worker-complete
Collection-Based Parallel Processing
api-parallel:
type: parallel
name: "API Parallel"
inputs:
parallelType: collection
collection: <start.apiEndpoints>
maxConcurrency: 10
connections:
parallel:
start: call-api
end: merge-api-results
call-api:
type: api
name: "Call API"
parentId: api-parallel
inputs:
url: <parallel.currentItem.endpoint>
method: <parallel.currentItem.method>
headers:
- key: "Authorization"
value: "Bearer {{API_TOKEN}}"
connections:
success: api-complete
Complex Parallel Processing Pipeline
data-processing-parallel:
type: parallel
name: "Data Processing Parallel"
inputs:
parallelType: collection
collection: <data-loader.records>
maxConcurrency: 8
connections:
parallel:
start: validate-data
end: final-aggregation
error: parallel-error-handler
validate-data:
type: function
name: "Validate Data"
parentId: data-processing-parallel
inputs:
code: |
const record = <parallel.currentItem>;
const index = <parallel.index>;
// Validate record structure
if (!record.id || !record.content) {
throw new Error(`Invalid record at index ${index}`);
}
return {
valid: true,
recordId: record.id,
validatedAt: new Date().toISOString()
};
connections:
success: process-data
error: validation-error
process-data:
type: agent
name: "Process Data"
parentId: data-processing-parallel
inputs:
systemPrompt: "Process and analyze this data record"
userPrompt: |
Record ID: <validatedata.recordId>
Content: <parallel.currentItem.content>
Instance: <parallel.index>
model: gpt-4o
temperature: 0.3
apiKey: '{{OPENAI_API_KEY}}'
connections:
success: store-result
store-result:
type: function
name: "Store Result"
parentId: data-processing-parallel
inputs:
code: |
const processed = <processdata.content>;
const recordId = <validatedata.recordId>;
return {
recordId,
processed,
completedAt: new Date().toISOString(),
instanceIndex: <parallel.index>
};
Concurrent AI Analysis
multi-model-parallel:
type: parallel
name: "Multi-Model Analysis"
inputs:
parallelType: collection
collection: |
[
{"model": "gpt-4o", "focus": "technical accuracy"},
{"model": "claude-3-5-sonnet-20241022", "focus": "creative quality"},
{"model": "gemini-2.0-flash-exp", "focus": "factual verification"}
]
maxConcurrency: 3
connections:
parallel:
start: analyze-content
end: combine-analyses
analyze-content:
type: agent
name: "Analyze Content"
parentId: multi-model-parallel
inputs:
systemPrompt: |
You are analyzing content with a focus on <parallel.currentItem.focus>.
Provide detailed analysis from this perspective.
userPrompt: |
Content to analyze: <start.content>
Analysis focus: <parallel.currentItem.focus>
model: <parallel.currentItem.model>
apiKey: '{{OPENAI_API_KEY}}'
connections:
success: analysis-complete
Parallel Variables
Inside parallel child blocks, these special variables are available:
# Available in all child blocks of the parallel
<parallel.index> # Instance number (0-based)
<parallel.currentItem> # Item for this instance (collection type)
<parallel.items> # Full collection (collection type)
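A minimal sketch of a child function block using these variables; the block and parent IDs are illustrative:
tag-item:
  type: function
  name: "Tag Item"
  parentId: items-parallel
  inputs:
    code: |
      const item = <parallel.currentItem>;   // item assigned to this instance
      const index = <parallel.index>;        // 0-based instance number
      const items = <parallel.items>;        // full collection
      return { index, total: items.length, item };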
Output References
After a parallel block completes, you can reference its aggregated results:
# In blocks after the parallel
final-processor:
inputs:
all-results: <parallel-name.results> # Array of all instance results
total-count: <parallel-name.count> # Number of instances completed
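As a sketch, a downstream function block could consume the aggregated output of the pipeline example above; the reference names assume the same pattern as the snippet above:
final-aggregation:
  type: function
  name: "Final Aggregation"
  inputs:
    code: |
      const results = <data-processing-parallel.results>;  // array of per-instance results
      const count = <data-processing-parallel.count>;      // number of completed instances
      const succeeded = results.filter(r => r && r.recordId).length;
      return { count, succeeded };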
Best Practices
- Use an appropriate maxConcurrency to avoid overwhelming external APIs
- Ensure parallel operations are independent of one another
- Include error handling for robust parallel execution (see the sketch after this list)
- Test with small collections first
- Monitor rate limits for external APIs
- Use collection type for distributing work, count type for fixed instances
- Consider memory usage with large collections
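As a rough illustration of the concurrency and error-handling advice above, a conservative configuration might look like this; the limit and block IDs are illustrative:
rate-limited-parallel:
  type: parallel
  name: "Rate-Limited Parallel"
  inputs:
    parallelType: collection
    collection: <start.requests>
    maxConcurrency: 3          # keep well under the external API's rate limit
  connections:
    parallel:
      start: call-external-api
      end: summarize-results
      error: notify-failure    # route failures to a handler instead of losing them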