FDA staff described ongoing work using AI to support both internal review and regulated products. Will Liu (AI governance) described enterprise generative tools (ELSA, Gemini Enterprise), the AI Internal Council and AI Policy Council, and multiple pilot use cases, including submission triage, adverse‑event signal detection, and a drug‑shortage predictive model. He framed agency AI use as assistive, subject to validation and human review.
Center speakers gave examples. Gautham Mehta (Oncology) explained a prompt‑library pilot and a pediatric study‑plan pilot in which superusers test prompts against prior reviews and staff use AI outputs in parallel with traditional review to build trust and refine the prompts. Chi Li and Johnny Lam (CDER/CBER) described AI for image‑based endpoints, wearable‑data endpoints, and biomarker discovery, as well as the January 2025 cross‑center draft guidance on risk‑based credibility assessments for AI in regulatory decision‑making. CDRH highlighted device‑lifecycle oversight, the value of predefined change control plans for adaptive AI devices, and a draft guidance on AI‑enabled devices.
Why it matters: AI can accelerate review of large submissions and improve consistency, which is particularly important as rare‑disease product submissions (and the AI components within them) increase. Officials emphasized risk‑based validation, documentation of data provenance, and post‑market monitoring for AI systems.
Practical guidance: speakers urged developers to engage with FDA early, prespecify the AI context of use and validation plans, and consider generalizability and data provenance when proposing AI‑derived endpoints.