Parameterised Stress Testing Across ADK Agent Categories
Author: Venkata Sudhakar
ShopMax India has multiple agent categories with different performance requirements. Order status lookups are fast and lightweight, while complaint handling involves longer context and more tool calls. Parameterised stress testing lets a single test function cover all categories by running with a different concurrency target and latency budget per category, eliminating the need to write a separate stress test for each agent type.
Each category config dict specifies the target concurrency, expected latency in milliseconds, and maximum allowed error rate. pytest.mark.parametrize runs the same test body once per config, with the category name used as the test ID so failures are easy to identify. asyncio.gather fires all of a category's requests concurrently, so the measured latency reflects true concurrent behaviour rather than sequential execution.
The example below runs a 4-category stress test where each category gets a concurrency level and latency budget appropriate to its ShopMax India agent type.
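A minimal sketch of such a test follows. The config values and the mock_category_call helper are illustrative stand-ins: in a real suite the mock would be replaced by an ADK runner call, and the latency_ms values would come from staging measurements.

```python
import asyncio
import time

import pytest

# Illustrative per-category budgets; tune these from staging metrics.
CATEGORY_CONFIGS = [
    {"name": "order_status",       "concurrency": 20, "latency_ms": 10, "max_error_rate": 0.0},
    {"name": "stock_check",        "concurrency": 15, "latency_ms": 15, "max_error_rate": 0.0},
    {"name": "returns_processing", "concurrency": 10, "latency_ms": 25, "max_error_rate": 0.0},
    {"name": "complaint_handling", "concurrency": 5,  "latency_ms": 40, "max_error_rate": 0.0},
]


async def mock_category_call(latency_ms: int) -> str:
    """Stand-in for a real ADK runner call; sleeps to simulate agent latency."""
    await asyncio.sleep(latency_ms / 1000)
    return "ok"


def run_category_stress(config: dict) -> dict:
    """Fire `concurrency` requests in parallel, return avg latency and error count."""

    async def one_call() -> float:
        start = time.perf_counter()
        await mock_category_call(config["latency_ms"])
        return (time.perf_counter() - start) * 1000  # per-call latency in ms

    async def fire_all():
        # All calls for this category run concurrently on one event loop.
        return await asyncio.gather(
            *(one_call() for _ in range(config["concurrency"])),
            return_exceptions=True,
        )

    results = asyncio.run(fire_all())
    errors = sum(1 for r in results if isinstance(r, Exception))
    latencies = [r for r in results if not isinstance(r, Exception)]
    avg_ms = sum(latencies) / len(latencies) if latencies else 0.0
    return {"avg_ms": avg_ms, "errors": errors}


# The category name becomes the test ID, so a failure reads as
# test_category_stress[complaint_handling] rather than an opaque index.
@pytest.mark.parametrize("config", CATEGORY_CONFIGS, ids=lambda c: c["name"])
def test_category_stress(config):
    stats = run_category_stress(config)
    error_rate = stats["errors"] / config["concurrency"]
    print(f"{config['name']}: concurrency={config['concurrency']}, "
          f"avg_ms={stats['avg_ms']:.1f}, errors={stats['errors']}")
    assert error_rate <= config["max_error_rate"]
```

Run it with `pytest -s` so the per-category summary lines are printed. Using asyncio.run inside a synchronous test keeps the example free of the pytest-asyncio plugin; a real suite already using that plugin could mark the test async instead.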
It gives the following output:
order_status: concurrency=20, avg_ms=10.3, errors=0
stock_check: concurrency=15, avg_ms=15.2, errors=0
returns_processing: concurrency=10, avg_ms=25.4, errors=0
complaint_handling: concurrency=5, avg_ms=40.6, errors=0
.... (4 passed in 0.09s)
In production, replace mock_category_call with real ADK runner calls and tune the latency_ms and concurrency values using metrics collected from your staging environment. Add a max_avg_latency field to each config so the assertion covers the error rate and the latency budget independently. Run this parameterised suite as a nightly regression job so that a new tool added to the complaint_handling agent does not silently inflate the latency budget for the faster order_status category.
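One possible shape for that extended check, with the hypothetical max_avg_latency field added to the config and the two budgets asserted separately:

```python
# Hypothetical extended config: latency budget asserted independently
# of the error-rate budget.
config = {
    "name": "complaint_handling",
    "concurrency": 5,
    "latency_ms": 40,
    "max_error_rate": 0.01,
    "max_avg_latency": 60,  # ms; illustrative budget, not a measured value
}


def check_budgets(avg_ms: float, errors: int, config: dict) -> None:
    """Assert error rate and latency separately so one cannot mask the other."""
    error_rate = errors / config["concurrency"]
    assert error_rate <= config["max_error_rate"], (
        f"{config['name']}: error rate {error_rate:.2%} exceeds budget"
    )
    assert avg_ms <= config["max_avg_latency"], (
        f"{config['name']}: avg {avg_ms:.1f}ms exceeds {config['max_avg_latency']}ms budget"
    )
```

Keeping the two assertions independent means a latency regression fails the test even when every call succeeds, and an error spike fails it even when the surviving calls are fast.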