
Parallel Agent Execution with CrewAI

Author: Venkata Sudhakar

ShopMax India's product launch team needs to simultaneously analyze competitor pricing in Mumbai, assess customer sentiment from reviews, and check inventory levels across Delhi warehouses. Running these three tasks sequentially wastes time. CrewAI's async_execution flag lets all three agents work in parallel, cutting total analysis time to roughly that of the slowest single task rather than the sum of all three.

CrewAI supports parallel task execution by setting async_execution=True on independent tasks. Each async task runs in its own thread while the Crew collects all results before moving to dependent tasks. Tasks that must consume the output of parallel tasks are left synchronous and placed after the async group. This gives fine-grained control over which parts of the workflow parallelize safely.

The example below shows three ShopMax India analyst agents running their tasks in parallel and printing the combined output after all three finish.


Running the crew produces output like the following:

Pricing: ShopMax Rs 28,990 vs Flipkart Rs 29,500 vs Amazon Rs 29,990
Sentiment: Battery life praised; Bluetooth range flagged in 18% of reviews
Inventory: Connaught Place 42 units, Lajpat Nagar 17 units, Saket 31 units

In production, set async_execution=True only on tasks with no data dependencies between them. Tasks that consume outputs from the parallel group should run synchronously after all async tasks complete. Add rate limiting when multiple agents call the same LLM API in parallel to avoid hitting token-per-minute limits, especially during high-volume ShopMax sale campaigns.
