Shared Memory and Context Passing Between Agents
Author: Venkata Sudhakar
ShopMax India's recommendation engine uses three agents in sequence: one fetches a customer's recent purchases, another builds a buyer profile, and a third generates personalized product suggestions. Each agent reads from and writes to a shared state object that persists across all three steps, eliminating redundant database calls and keeping the full context available to every agent in the pipeline.
LangGraph's TypedDict state acts as the shared memory. Every node receives the full current state and returns only the fields it updates. The graph merges these partial updates automatically - downstream agents always see the latest values written by upstream ones. This approach avoids passing large context objects manually between function calls and makes each agent independently testable.
The example below shows a three-node pipeline where a history agent, a profile agent, and a recommendation agent each enrich the shared context before passing control to the next node.
Running the pipeline prints the following recommendations:
1. Bose QuietComfort 45 Headphones - Rs 29,900
2. Sony LinkBuds S Earbuds - Rs 14,990
3. JBL Xtreme 3 Portable Speaker - Rs 22,499
In production, replace the simulated history fetch with a real database call keyed on customer_id. Add LangGraph's SqliteSaver checkpointer so the shared context survives process restarts mid-pipeline. For customers with large purchase histories, summarize the list before passing it to downstream agents to stay within LLM context window limits and reduce token costs.