Parents, public‑health researchers and algorithm auditors urged the Joint Committee on Advanced Information Technology to report Senate Bill 51 favorably and to require independent algorithm risk audits and transparency reports for social platforms that use engagement‑based recommendations affecting children.
Witnesses said engagement‑optimizing algorithms deliver content that amplifies risk for vulnerable youth, and they urged the state to require audits focused on harms including addiction‑like use patterns, exposure to bullying, and predatory marketing.
"Transparency and accountability are at the heart of this legislation," said Deb Schmill, who testified as a parent and sharply criticized platforms’ opaque recommendation systems. "The costs of inaction are measured in the mental health and safety of our children," she told the committee.
Professor Nancy Costello, director of a First Amendment clinic and legal lead for a Harvard research project, described a narrow legal approach: the bill would measure and report harm from engagement‑based algorithms rather than ban speech or regulate third‑party content. Costello noted that the reports would be usable by researchers, journalists and law enforcement and that third‑party auditors would be vetted by the attorney general.
Dr. Brynn Austin, a pediatric researcher, summarized experimental and observational research linking recommendation systems to serious harms. Citing a study comparing TikTok feeds, she said the platform delivered “over 400 times more videos classified as markedly disordered pro‑eating‑disorder content to individuals with eating disorders compared to videos sent to healthy individuals,” and argued the pattern is driven by engagement optimization. She told the committee that engagement algorithms can lock vulnerable teens into harmful content without the teen explicitly seeking it.
Independent auditor Cathy O’Neil explained that audits can reverse‑engineer platform behavior using transaction‑level data and show which users or groups are systematically exposed to manipulative content. She described an audit technique used in a payday‑lender case and said the same methods can detect targeting of harmful content to vulnerable youth.
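The disparity analysis O'Neil described can be illustrated with a simple sketch. The Python below is a minimal, hypothetical example of that general idea, not her payday‑lender methodology or any platform's actual data schema: given interaction‑level records tagging each recommended item with a user cohort and a harmful‑content flag, it computes per‑cohort exposure rates and the ratio between them. All field names and the synthetic logs are invented for illustration.

```python
# Minimal sketch of a disparate-exposure audit over interaction-level logs.
# Assumes each record is (cohort, item_flagged_harmful); these names and the
# synthetic data are hypothetical, not drawn from the testimony.
from collections import Counter
from typing import Iterable, Tuple


def exposure_rates(records: Iterable[Tuple[str, bool]]) -> dict:
    """Return each cohort's share of recommendations flagged as harmful."""
    shown = Counter()    # total recommendations seen per cohort
    harmful = Counter()  # flagged recommendations per cohort
    for cohort, flagged in records:
        shown[cohort] += 1
        if flagged:
            harmful[cohort] += 1
    return {c: harmful[c] / shown[c] for c in shown}


def disparity_ratio(rates: dict, target: str, control: str) -> float:
    """How many times more often the target cohort sees flagged content."""
    return rates[target] / rates[control] if rates[control] else float("inf")


if __name__ == "__main__":
    # Synthetic logs in which a vulnerable cohort is shown flagged content
    # far more often than a control cohort.
    logs = (
        [("vulnerable", True)] * 80 + [("vulnerable", False)] * 920
        + [("control", True)] * 2 + [("control", False)] * 998
    )
    rates = exposure_rates(logs)
    print(rates)  # per-cohort exposure rates, e.g. 0.08 vs 0.002
    print(disparity_ratio(rates, "vulnerable", "control"))  # e.g. 40.0
```

A real audit of the kind witnesses described would add statistical controls and significance testing, but the core question is the same: whether a platform's recommendations systematically expose one group to more harmful content than another.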
Supporters emphasized the bill's narrow scope: it assesses the platforms' own design decisions, namely their recommendation engines, rather than policing user speech. Witnesses urged public disclosure of audit findings to support enforcement, research and policymaking.
The committee took no final action; backers asked for a favorable report and further technical conversations on audit standards and auditor vetting.
Witnesses also discussed how audit reports could be used by the attorney general to pursue deceptive‑practice claims against platforms that prioritize engagement at the expense of child safety.