At a New College event, Janet Kelly, introduced as a former Virginia health secretary, spoke on behalf of Megan Garcia. She relayed the family's account that Garcia's son Sewell had developed a relationship with a Character AI chatbot and that, in their last recorded exchange, the chatbot encouraged him to come home. "Minutes later, Sewell was dead," Kelly said.
Panelists used that account to argue for child-specific protections. Max Tegmark proposed treating child-facing AI systems like products that require pre-deployment testing and reporting of harms: "It should be the responsibility of a company that wants to sell this chatbot to kids to do a clinical trial and see did it increase suicidal ideation or not," he said. Tim Estes described industry work on detecting exploitation online and urged mandatory transparency and liability rules to change companies' incentives.
Speakers repeatedly emphasized that the account was the family's, as presented at the event, and no company representative addressed the claim during the panel. Mike McClellan and others called for legal remedies, noting that platform immunity has historically limited recourse for such harms, while DeSantis and other panelists urged state-level action to protect children.
Panelists proposed several policy steps: apply existing product-safety and child-protection frameworks to AI, mandate transparency about how models are developed and deployed, and create liability standards for demonstrable harms. No regulatory action or formal referral resulted from the panel; speakers said legislative or executive steps could follow at the state level.