Testing ADK Agent with LiteLLM for Model-Agnostic Tests
Author: Venkata Sudhakar
LiteLLM provides a unified interface over multiple LLM providers - Gemini, OpenAI, Anthropic, and local models - letting ADK agents switch the underlying model without changing tool code. ShopMax India uses this for model-agnostic testing: the same agent test suite runs against a local Ollama model in development, Gemini Flash in CI, and Gemini Pro in staging, catching prompt-sensitivity regressions before they affect customers in Chennai and Hyderabad.
LiteLLM's completion() function accepts a model string like 'gemini/gemini-1.5-flash' or 'ollama/mistral' and returns a standard response object. In ADK agent tests, you patch litellm.completion with a mock that returns a canned response, then run assertions against the agent's tool dispatch and session state without making any real LLM calls. The parametrize decorator swaps the model string so the same test body covers every provider.
The example below patches litellm.completion to return a canned product recommendation, runs the ADK agent tool dispatch against it, and parametrizes the test across two model strings to demonstrate model-agnostic coverage.
Running the suite with pytest gives the following output:
[gemini/gemini-1.5-flash] Tool dispatch OK: Sony WH-1000XM5 Headphones - Rs 28000
[ollama/mistral] Tool dispatch OK: Sony WH-1000XM5 Headphones - Rs 28000
2 passed in 0.13s
In CI, set the LITELLM_MODEL environment variable and read it in the pytest fixture so the test matrix is driven by config rather than hardcoded parametrize lists. Use litellm.utils.validate_environment() in a setup step to confirm that the required API keys are present for the chosen model before the batch runs. This prevents misleading test failures caused by missing credentials rather than actual agent bugs.