
ADK Agent Test Reporting with pytest-html

Author: Venkata Sudhakar

ShopMax India's engineering team needs visibility into which agent test cases are passing and failing across different query categories. pytest-html generates a rich HTML report after each test run that can include custom metadata, per-test attachments, and response details. This makes it easy to share test results with product managers and QA teams without requiring them to read terminal output.

pytest-html adds two key integration points: conftest.py hooks to set the report title and environment metadata, and the extra fixture that individual tests use to attach custom text or JSON blocks to their report row. Each parametrized test case gets its own row in the report, with the attached query and response visible on click for quick debugging.

The example below shows a conftest.py that configures the report title and project metadata, plus a parametrized test that attaches query and response details to each report row via the extra fixture.


Running the suite with pytest --html=report.html produces the following output:

Q: Track order ORD-7821 Mumbai
A: Order ORD-7821 shipped, arrives by 26 Apr.
Q: Samsung TV stock Delhi
A: Samsung 55-inch TV in stock at Delhi warehouse.
Q: Cancel ORD-9012 Bangalore
A: Order ORD-9012 cancelled. Refund in 5-7 days.
... (3 passed in 0.04s)
Generated HTML report: report.html

In production, run the report generation step in CI after every agent test suite execution and publish report.html as a build artifact. Add token count, latency, and category fields to the config._metadata dict so the report header shows the test environment at a glance. Use the extra fixture to attach the full agent JSON response for failed tests so that engineers in Hyderabad or Chennai can debug failures without re-running tests locally.
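One way to attach the full agent payload only on failure is a report hook in conftest.py. The sketch below assumes each test stores its raw agent response on the test item as agent_response (an illustrative attribute name, set from inside a test via the built-in request fixture with request.node.agent_response = payload):

```python
# conftest.py -- attach the full agent JSON payload to the report row
# of any failed test (assumes tests set `item.agent_response`; that
# attribute name is an illustrative convention, not a pytest built-in)
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Only decorate the test-call phase, and only on failure
    if report.when == "call" and report.failed:
        payload = getattr(item, "agent_response", None)
        if payload is not None:
            from pytest_html import extras  # pytest-html attachment builders
            # pytest-html 3.x reads report.extra; 4.x reads report.extras
            report.extras = getattr(report, "extras", []) + [
                extras.json(payload, name="Agent response")
            ]
```

Attaching only on failure keeps report.html small while still giving remote engineers the evidence they need to debug without re-running the suite.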
