Automated Test Data Generation for ADK Agents Using an LLM
Author: Venkata Sudhakar
Writing test cases manually for every ShopMax India query type is time-consuming and biased toward scenarios the developer has already thought of. Using an LLM to generate test data produces a broader, more diverse set of inputs covering edge cases, regional variations, and unusual phrasings that manual test writing often misses. The generator can be run on demand to refresh the test suite whenever a new agent category is added.
The generator sends a structured prompt to the LLM asking it to return a JSON array of test case objects. Each object contains a query, a list of required keywords expected in a good response, and a difficulty level. The result is parsed and validated to ensure the structure matches the expected schema before being used in downstream tests.
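A minimal sketch of that flow is shown below. The wording of GENERATOR_PROMPT_TEMPLATE, the injected call_llm callable, and the helper names are assumptions for illustration, not the project's exact code.

```python
import json

# Assumed wording; the real template should describe the agent category
# and the exact JSON schema the downstream tests expect.
GENERATOR_PROMPT_TEMPLATE = """
You are generating test data for the ShopMax India "{category}" agent.
Return ONLY a JSON array of {count} objects, each with:
  "query": a realistic customer query,
  "required_keywords": keywords a good response must contain,
  "difficulty": one of "easy", "medium" or "hard".
"""

REQUIRED_FIELDS = {"query", "required_keywords", "difficulty"}
VALID_DIFFICULTIES = {"easy", "medium", "hard"}


def generate_test_cases(call_llm, category: str, count: int = 3) -> list[dict]:
    """Ask the LLM for test cases and validate the parsed structure.

    `call_llm` is an injected callable (prompt -> raw text), so the generator
    can be mocked in tests and swapped for a real model client in production.
    """
    prompt = GENERATOR_PROMPT_TEMPLATE.format(category=category, count=count)
    cases = json.loads(call_llm(prompt))

    if not isinstance(cases, list):
        raise ValueError("Generator must return a JSON array")
    for case in cases:
        missing = REQUIRED_FIELDS - set(case)
        if missing:
            raise ValueError(f"Test case missing fields: {missing}")
        if case["difficulty"] not in VALID_DIFFICULTIES:
            raise ValueError(f"Invalid difficulty: {case['difficulty']}")
        if not case["required_keywords"]:
            raise ValueError("Each test case needs at least one required keyword")
    return cases
```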
The example below mocks the LLM generator and validates that three auto-generated order tracking test cases have the correct fields, valid difficulty levels, and at least one required keyword each.
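One possible shape for that test follows, assuming the generator sketch above is saved as test_data_generator.py. The ShopMax India queries and keywords in the canned response are illustrative, not the project's actual fixtures.

```python
import json

from test_data_generator import (  # assumed module name for the sketch above
    REQUIRED_FIELDS,
    VALID_DIFFICULTIES,
    generate_test_cases,
)

# Canned model output standing in for a real LLM call.
FAKE_LLM_RESPONSE = json.dumps([
    {
        "query": "Where is my order #12345?",
        "required_keywords": ["order", "tracking", "status"],
        "difficulty": "easy",
    },
    {
        "query": "My parcel shows delivered but I haven't received it in Pune.",
        "required_keywords": ["delivered", "investigation", "refund"],
        "difficulty": "medium",
    },
    {
        "query": "My order was split into three shipments and one is stuck at customs. What happens now?",
        "required_keywords": ["shipment", "customs", "timeline"],
        "difficulty": "hard",
    },
])


def test_generated_order_tracking_cases():
    # Mock the LLM: ignore the prompt and return the canned JSON above.
    cases = generate_test_cases(lambda prompt: FAKE_LLM_RESPONSE,
                                category="order tracking", count=3)

    print(f"Generated {len(cases)} test cases: {[c['difficulty'] for c in cases]}")

    assert len(cases) == 3
    for case in cases:
        # Correct fields, valid difficulty level, at least one required keyword.
        assert REQUIRED_FIELDS.issubset(case)
        assert case["difficulty"] in VALID_DIFFICULTIES
        assert len(case["required_keywords"]) >= 1
```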
Running the test with pytest -s gives the following output:
Generated 3 test cases: ['easy', 'medium', 'hard']
. (1 passed in 0.01s)
In production, call the real model with GENERATOR_PROMPT_TEMPLATE to build a fresh test bank for every new agent feature. Store the generated JSON to a file and commit it to source control so the test suite is reproducible without re-calling the LLM on every CI run. Add a schema validation step using Pydantic to catch malformed outputs from the generator before they cause cryptic failures in downstream test cases.
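One way that schema check might look is sketched below, assuming Pydantic v2. The GeneratedTestCase model, the load_test_bank helper, and the file layout are illustrative names, not part of the original setup.

```python
import json
from pathlib import Path
from typing import Literal

from pydantic import BaseModel, Field, ValidationError


class GeneratedTestCase(BaseModel):
    """Schema for one LLM-generated test case."""
    query: str = Field(min_length=1)
    required_keywords: list[str] = Field(min_length=1)
    difficulty: Literal["easy", "medium", "hard"]


def load_test_bank(path: str | Path) -> list[GeneratedTestCase]:
    """Load a committed test bank, failing fast on malformed entries."""
    raw = json.loads(Path(path).read_text())
    try:
        return [GeneratedTestCase.model_validate(item) for item in raw]
    except ValidationError as exc:
        # Fail here, at load time, instead of letting a bad entry cause
        # cryptic assertion failures in downstream test cases.
        raise ValueError(f"Malformed test bank at {path}: {exc}") from exc
```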