
Fiddler AI urges operational visibility and guardrails for generative and agentic models

July 23, 2025 | Oversight and Reform: House Committee, Standing Committees - House & Senate, Congressional Hearings Compilation, Legislative, Federal


This article was generated by AI to summarize key points from the meeting. AI makes mistakes, so for full details and context, please refer to the video of the full meeting. Please report any errors so we can fix them.

Amit Parker, co‑founder and COO of Fiddler AI, told the Oversight roundtable that agencies deploying predictive, generative and agentic AI need a "purpose built neutral AI command center" to provide operational visibility, detect hallucinations and prevent bias.

Parker said deployed models can be opaque and can produce incorrect assertions known as "hallucinations." He showed a sonar image in which a model mislabeled a plane on the seabed as a ship because it focused on the sonar shadow. "If AI is not trained and deployed correctly, you can also introduce bias in decisions which can have an outsized impact," Parker said, citing credit lending as an example and recommending fairness metrics and audit trails to comply with statutes such as the Equal Credit Opportunity Act and the Fair Housing Act.
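Parker did not specify which fairness metrics he had in mind, but one widely used measure for lending decisions is the disparate-impact ratio: the approval rate for a protected group divided by that of a reference group, with values below roughly 0.8 (the "four-fifths rule") commonly treated as a flag for review. A minimal illustrative sketch, not Fiddler's actual implementation:

```python
# Disparate-impact ratio over binary loan-approval outcomes.
# Hypothetical toy data; the 0.8 threshold follows the common
# "four-fifths rule" heuristic, not any specific statute's text.

def selection_rate(outcomes):
    """Fraction of positive (approved) outcomes."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of selection rates; values near 1.0 indicate parity."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Toy outcomes: 1 = approved, 0 = denied
protected = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]   # 30% approval rate
reference = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approval rate

ratio = disparate_impact_ratio(protected, reference)
print(round(ratio, 3))  # 0.429 — below 0.8, so this model would warrant review
```

An audit trail of the kind Parker described would log this ratio per model version alongside the underlying decisions, so a shift below the threshold can be traced to specific inputs.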

Fiddler demonstrated dashboards that track model accuracy, data drift, disparate‑impact metrics and per‑interaction audit trails. The platform flags potential jailbreaks and hallucinations in real time and provides explainability tools (for example, colorized pixel maps showing which image areas influenced a model’s output). Parker said Fiddler works with enterprise and federal clients, including a named Navy program working on seabed imagery, and aligned its recommendations to five ethical AI principles.
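The article says the dashboards track "data drift" without naming a metric. One common drift statistic (illustrative only, not necessarily what Fiddler uses) is the Population Stability Index, which compares a feature's binned distribution at training time against live traffic:

```python
# Population Stability Index (PSI) between a baseline distribution and a
# production distribution over the same bins. Common rule of thumb:
# < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 large shift.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """Sum of (actual - expected) * ln(actual / expected) over bins.

    eps clamps empty bins so the logarithm stays finite.
    """
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e = max(e, eps)
        a = max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]    # bin fractions at training time
production = [0.40, 0.30, 0.20, 0.10]  # drifted live-traffic fractions
print(round(psi(baseline, production), 4))  # 0.2282 — moderate-to-large shift
```

A monitoring dashboard would typically compute this per feature on a rolling window and raise a real-time alert when the score crosses a threshold.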

Committee members asked whether human oversight could persist as models become more autonomous; Parker said human trust is necessary for sustained adoption and that observability is a prerequisite for safe scaling. Other speakers, including Anthropic representatives, agreed on the need for third‑party testing and governance structures.

Additional details: Fiddler described specific dashboard features (trace/span views for agentic workflows, root-cause analysis and real-time alerting) and said the company has worked with large financial institutions to surface and remediate bias in production models.
