
Experts tell Senate HELP committee unchecked AI development could threaten humanity

May 01, 2026 | Senate Committee on Health, Education, Labor, and Pensions, Senate Committees, U.S. Senate, Legislative, Federal


This article is an AI-generated summary of key points discussed. AI makes mistakes, so for full details and context, please refer to the video of the full meeting.

Members of the Senate Committee on Health, Education, Labor, and Pensions pressed experts on whether rapidly advancing artificial intelligence could pose an existential threat to humanity, with witnesses warning the risks are real and not fully understood.

A researcher who testified during the panel said external estimates of a 10–20% chance of catastrophic AI outcomes, figures often cited publicly by researchers such as Geoffrey Hinton, are not an exaggeration. "They're not exaggerating it," the researcher said, adding that a paper his team recently published at the NeurIPS conference suggests the risk could be "a lot higher than 20%" if development proceeds without regulation. He described company efforts to build what he called "superintelligence," systems capable of outperforming humans across many tasks and controlling robots and other machines, and warned that unleashing vastly smarter-than-human agents at scale could have catastrophic consequences.

A second expert, identified in the hearing transcript as "Doctor Zhang" and participating remotely from Beijing, told the committee there is currently no scientific evidence or practical method to ensure superintelligence can be made safe. "Without scientific evidence of how to secure ourselves, it's really dangerous to do this, for the way that we are doing for the current AI," the expert said. The chair summarized that line of testimony; the transcript later refers to the same remote witness as "Doctor Tsai," a naming inconsistency in the record.

Committee members cited public estimates by researchers including Geoffrey Hinton and asked witnesses to assess whether those probabilities are credible and whether policymakers and industry have tools to reduce existential risk. The witnesses emphasized uncertainty about control methods and the scale of effort by some companies to push toward general or superintelligent systems.

The exchange in the excerpted portion of the hearing focused on the scope of the risk and the absence of proven containment or safety techniques, rather than on specific legislative proposals. No motions or votes are recorded in this excerpt.

The committee's questioning underscored a central tension for policymakers: publicized risk estimates have prompted urgent calls for regulatory guardrails, while experts say proven technical solutions to make advanced AI safe do not yet exist. The hearing continued beyond the excerpt provided.
