Experts and industry groups debate Massachusetts proposals for model audits, trust fund and high‑risk AI rules

September 11, 2025 | 2025 Legislature MA, Massachusetts


This article was created by AI summarizing key points discussed. AI makes mistakes, so for full details and context, please refer to the video of the full meeting.

State and industry witnesses — including engineers, insurers, legal experts and trade groups — addressed a set of bills that would set state standards for AI model training, auditing, risk management and use in healthcare and insurance.

Supporters urged clear requirements for security, model documentation, independent audits, human review and alignment with federal guidance. Opponents and industry groups warned that overly broad definitions could sweep in longstanding predictive tools and discourage startups, and they urged sector‑specific approaches and safe‑harbor mechanisms.

A Senate sponsor introduced S37 and related proposals, describing a framework for model governance that would require annual safety assessments, public posting of model security protocols, third‑party audits and incident reporting to the attorney general. "This bill proposes a framework to govern AI model training," the sponsor told the committee.

Health and life‑sciences witnesses said rules should preserve human clinical judgment. Malik Patel, an engineer working in life sciences, described proposals that would require developers of covered models to implement cybersecurity protections, independent annual audits and continuous monitoring, and urged alignment with FDA "good machine learning practice" and the NIST AI Risk Management Framework.

The property‑casualty industry, represented by Christopher Stark of the Massachusetts Insurance Federation, supported flexible regulatory tools and pointed to the Division of Insurance’s existing bulletin on AI as an example. He warned that some draft definitions of "artificial intelligence systems" in H94 and H97 could be so broad as to capture decades‑old predictive models and asked the committee to clarify language and exempt legitimate risk‑mitigation uses such as cybersecurity and certain safety tools.

AI attorney John Weaver urged policymakers to weigh targeted private enforcement and sanctions carefully. "Private rights of action allow governments to crowdsource enforcement," he said, but he cautioned that drafting must guard against frivolous litigation and asked the committee to consider mechanisms requiring plaintiffs to meet defined thresholds.

Industry trade groups urged a balanced approach. The Massachusetts High Technology Council and Chamber of Progress recommended aligning state rules with federal frameworks, protecting trade secrets where appropriate and avoiding measures that would drive firms out of state. Several witnesses suggested sandboxing and sectoral rules rather than a single omnibus statute.

The committee received technical and policy testimony across several bills and asked for additional written comments and drafting suggestions. No formal votes were taken at the hearing.

Witnesses also suggested adopting NIST and FDA guidance as compliance anchors and asked for explicit carve‑outs for existing regulated activities and safety‑critical use cases.
