Expert witness warns unregulated AI could pose catastrophic risk

May 01, 2026 | Senate Committee on Health, Education, Labor, and Pensions, Senate Committees, U.S. Senate, Legislative, Federal


This article was created by AI summarizing key points discussed. AI makes mistakes, so for full details and context, please refer to the video of the full meeting.

A witness testifying before the hearing warned that rapidly developing, unregulated artificial intelligence could lead to catastrophic outcomes for humanity.

At the hearing, the witness said, “No. They’re not exaggerating it,” in response to whether experts such as Nobel laureate Geoffrey Hinton were overstating extinction risks. The witness added, “I think it’s likely to be a lot higher than 20% risk that we basically end civilization as we know it,” citing a recent paper his team published at the NeurIPS conference as the basis for that judgment.

The Chair framed the exchange by asking whether the rapid and uncontrolled development of AI poses a substantial threat to the human race and whether the 10–20% figure cited by some researchers might be an exaggeration. The witness responded that, based on recent research, those estimates were not an overstatement and that companies openly aiming for “superintelligence” — systems that could outperform humans at many tasks and control robots — increase the potential for systemic harm if not constrained.

Doctor Zhang, introduced by the Chair as participating from Beijing, said the world currently lacks “scientific evidence and [a] practical way to keep superintelligence safe enough” and warned that the global environment is not prepared to treat superintelligence as a controllable tool. Doctor Zhang added later in the exchange, “Without scientific evidence of how to secure ourselves, it’s really dangerous to do this, for the way that we are doing for the current AI.”

Doctor Tsai twice affirmed the core concern when asked directly by the Chair, indicating agreement that the risk of loss of control is real. The transcript records Doctor Tsai’s brief confirmations but does not include further elaboration in the provided excerpt.

The witnesses’ testimony emphasized two linked points: that some researchers’ public estimates of existential risk from AI may understate the danger, and that the scientific and practical methods needed to guarantee safety against hypothetical superintelligent systems are not yet established. No formal motion or vote on policy was recorded in the provided excerpt.

The discussion closed with the witnesses reiterating caution about continuing current development approaches without validated safety mechanisms; no legislative action or regulatory decision was documented in this transcript excerpt.
