Parallel

The Parallel block is a container block in Sim that allows you to execute multiple instances of blocks concurrently for faster workflow processing.

The Parallel block supports two types of concurrent execution: count-based and collection-based.

Parallel blocks are container nodes that execute their contents multiple times simultaneously, unlike loops which execute sequentially.

Overview

The Parallel block enables you to:

Distribute work: Process multiple items concurrently

Speed up execution: Run independent operations simultaneously

Handle bulk operations: Process large datasets efficiently

Aggregate results: Collect outputs from all parallel executions

Configuration Options

Parallel Type

Choose between two types of parallel execution:

Count-based Parallel - Execute a fixed number of parallel instances:


Use this when you need to run the same operation multiple times concurrently.

Example: Run 5 parallel instances
- Instance 1 ┐
- Instance 2 ├─ All execute simultaneously
- Instance 3 │
- Instance 4 │
- Instance 5 ┘

Collection-based Parallel - Distribute a collection across parallel instances:


Each instance processes one item from the collection, and all instances run simultaneously.

Example: Process ["task1", "task2", "task3"] in parallel
- Instance 1: Process "task1" ┐
- Instance 2: Process "task2" ├─ All execute simultaneously
- Instance 3: Process "task3" ┘
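Inside the container, a Function block could read its per-instance values like this (a sketch, not the exact runtime API; the `input` object is mocked here, assuming Sim exposes the `parallel` references listed under Inputs and Outputs):

```javascript
// Sketch: how an inner Function block might read its per-instance inputs.
// In Sim, `input` would be provided at runtime; it is mocked here.
const input = {
  parallel: {
    currentItem: "task2",               // item assigned to this instance
    index: 1,                           // 0-based instance number
    items: ["task1", "task2", "task3"], // full collection
  },
};

const { currentItem, index } = input.parallel;
const result = `Instance ${index} processed ${currentItem}`;
console.log(result);
```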

How to Use Parallel Blocks

Creating a Parallel Block

  1. Drag a Parallel block from the toolbar onto your canvas
  2. Configure the parallel type and parameters
  3. Drag a single block inside the parallel container
  4. Connect the block as needed

Accessing Results

After a parallel block completes, you can access aggregated results:

  • <parallel.results>: Array of results from all parallel instances

Example Use Cases

Batch API Processing

Scenario: Process multiple API calls simultaneously

  1. Parallel block with collection of API endpoints
  2. Inside parallel: API block calls each endpoint
  3. After parallel: Process all responses together
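A Function block after the parallel could then separate successful responses from failures (a sketch; the response shape and `input.parallel.results` contents are mocked for illustration):

```javascript
// Sketch: a Function block after the parallel collecting all API responses.
// `input.parallel.results` is mocked; in Sim it holds one entry per instance.
const input = {
  parallel: {
    results: [
      { endpoint: "/users",  status: 200, body: { count: 12 } },
      { endpoint: "/orders", status: 200, body: { count: 47 } },
      { endpoint: "/stock",  status: 503, body: null },
    ],
  },
};

// Keep successful responses and flag failures for retry.
const ok = input.parallel.results.filter((r) => r.status === 200);
const failed = input.parallel.results.filter((r) => r.status !== 200);
console.log(`${ok.length} succeeded, ${failed.length} failed`);
```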

Multi-Model AI Processing

Scenario: Get responses from multiple AI models

  1. Collection-based parallel over a list of model IDs (e.g., ["gpt-4o", "claude-3.7-sonnet", "gemini-2.5-pro"])
  2. Inside parallel: the Agent block's model is set to the current item from the collection
  3. After parallel: Compare and select best response
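The comparison step could look like this in a Function block (a sketch: the result shape and the "prefer the longest answer" heuristic are illustrative, not Sim's API or a recommended scoring rule):

```javascript
// Sketch: comparing model outputs collected by the parallel.
// `input.parallel.results` is mocked with one entry per model instance.
const input = {
  parallel: {
    results: [
      { model: "gpt-4o",            answer: "Short reply." },
      { model: "claude-3.7-sonnet", answer: "A longer, more detailed reply." },
      { model: "gemini-2.5-pro",    answer: "Medium length reply here." },
    ],
  },
};

// Naive heuristic for illustration: prefer the most detailed answer.
const best = input.parallel.results.reduce((a, b) =>
  b.answer.length > a.answer.length ? b : a
);
console.log(best.model);
```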

Advanced Features

Result Aggregation

Results from all parallel instances are automatically collected:

```javascript
// In a Function block after the parallel
const allResults = input.parallel.results;
// Returns: [result1, result2, result3, ...]
```

Instance Isolation

Each parallel instance runs independently:

  • Separate variable scopes
  • No shared state between instances
  • Failures in one instance don't affect others
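The failure-isolation behavior can be sketched as follows (the processing function and result shape are illustrative, and the instances are simulated with a plain `map`):

```javascript
// Sketch: each instance handling its own errors so one failure
// doesn't poison the aggregated results.
function processItem(item) {
  if (item === "bad") throw new Error(`cannot process ${item}`);
  return { item, status: "done" };
}

// Simulate three isolated instances over a collection.
const items = ["a", "bad", "c"];
const results = items.map((item) => {
  try {
    return processItem(item);
  } catch (err) {
    // The failure is recorded, not propagated to sibling instances.
    return { item, status: "error", message: err.message };
  }
});
console.log(results);
```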

Limitations

Container blocks (Loops and Parallels) cannot be nested inside each other. This means:

  • You cannot place a Loop block inside a Parallel block
  • You cannot place another Parallel block inside a Parallel block
  • You cannot place any container block inside another container block

Parallel blocks can only contain a single block. If multiple blocks are connected to each other inside a parallel, only the first block executes.

While parallel execution is faster, be mindful of:

  • API rate limits when making concurrent requests
  • Memory usage with large datasets
  • Maximum of 20 concurrent instances to prevent resource exhaustion
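To stay under the 20-instance cap with a larger dataset, you could split the collection into batches before feeding it to the Parallel block (a sketch; the `chunk` helper is ours, not part of Sim):

```javascript
// Sketch: splitting a large collection into chunks of at most 20 items,
// so each Parallel run stays under the 20-instance limit.
function chunk(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

const endpoints = Array.from({ length: 45 }, (_, i) => `/item/${i}`);
const batches = chunk(endpoints, 20);
console.log(batches.map((b) => b.length)); // batch sizes: 20, 20, 5
```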

Parallel vs Loop

Understanding when to use each:

| Feature | Parallel | Loop |
| --- | --- | --- |
| Execution | Concurrent | Sequential |
| Speed | Faster for independent operations | Slower but ordered |
| Order | No guaranteed order | Maintains order |
| Use case | Independent operations | Dependent operations |
| Resource usage | Higher | Lower |

Inputs and Outputs

Inputs:

  • Parallel Type: Choose between 'count' or 'collection'

  • Count: Number of instances to run (count-based)

  • Collection: Array or object to distribute (collection-based)

References available inside each instance:

  • parallel.currentItem: Item for this instance

  • parallel.index: Instance number (0-based)

  • parallel.items: Full collection (collection-based)

Outputs:

  • parallel.results: Array of all instance results, available in blocks after the parallel

Best Practices

  • Independent operations only: Ensure operations don't depend on each other
  • Handle rate limits: Add delays or throttling for API-heavy workflows
  • Error handling: Each instance should handle its own errors gracefully
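One simple throttling pattern is to stagger instance start times by index (a sketch, assuming `input.parallel.index` is available inside the instance as described above; the `input` object and delay value are illustrative):

```javascript
// Sketch: staggering instances with a per-index delay to soften API
// rate limits. `input` is mocked; in Sim it is provided at runtime.
const input = { parallel: { index: 3 } };

const DELAY_MS = 250; // gap between instance start times
const waitFor = input.parallel.index * DELAY_MS;

// In a real Function block you would await the delay before calling the API:
// await new Promise((resolve) => setTimeout(resolve, waitFor));
console.log(`Instance ${input.parallel.index} waits ${waitFor} ms`);
```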