Global AI Incident Tracker · Apr 23 Daily Digest
Legal Probes and Policy Actions
- 🔥 Florida AG Investigates ChatGPT: the state Attorney General launches a criminal probe into OpenAI after ChatGPT...

Created by Dakeshwar Verma
Comprehensive AI incident tracker of failures, harms, and policy responses
- Critical AI vulnerability: Best-of-N (BoN) exploits models' stochastic outputs by generating thousands of noisy prompt variations until one evades...
- Major legal AI fail: Sullivan & Cromwell submitted a court filing with inaccurate AI-generated citations due to hallucinations, prompting court...
- AI-driven claims scrutiny allegedly discriminates against Black State Farm policyholders, flagging claims for extra review and causing repair...
- Special Operations Autonomous Warfare Center planned for AI-driven targeted assassinations, buried in $1.5T DoD budget request.
- AI incident: Anthropic's cyber-capable Mythos model surfaces thousands of zero-day vulnerabilities in major OS/browsers and excels at multistep...
- Key AI healthcare incident: Multi-specialty hospital's AI system issued incorrect diagnoses, missed critical conditions, and delayed treatment for a...
- Viral Waymo failure: Tech entrepreneur Mike Johns trapped in vehicle looping 8 times at Phoenix Sky Harbor Airport in Dec 2024, fearing missed flight;...
- TxDMV opens applications for autonomous vehicle companies to gain statewide operating approval. Key requirements:
- Major audit flags AI health risks: Leading chatbots gave 49.6% problematic responses (30% somewhat, 20% highly) to 250 queries on vaccines, cancer,...
- Key response to rising AI deepfake threats:
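The Best-of-N item above refers to a published black-box jailbreak: the attacker repeatedly applies random, meaning-preserving noise to a harmful prompt and resamples the model until one variant slips past its refusals. A minimal sketch of that loop, assuming hypothetical `query_model` and `is_harmful` stand-ins for the target model API and a harmfulness check:

```python
import random

def bon_augment(prompt, p_cap=0.6, p_swap=0.06, seed=None):
    """One BoN-style augmentation: randomly flip character case and
    occasionally swap adjacent characters, keeping the prompt readable."""
    rng = random.Random(seed)
    chars = [c.upper() if rng.random() < p_cap else c.lower() for c in prompt]
    i = 0
    while i < len(chars) - 1:
        if rng.random() < p_swap:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
            i += 2  # skip past the pair so it isn't swapped back
        else:
            i += 1
    return "".join(chars)

def best_of_n(prompt, query_model, is_harmful, n=1000):
    """Resample augmented prompts until one elicits a harmful reply.

    `query_model` and `is_harmful` are caller-supplied stand-ins
    (hypothetical names, not a real API)."""
    for trial in range(n):
        variant = bon_augment(prompt, seed=trial)
        reply = query_model(variant)
        if is_harmful(reply):
            return variant, trial + 1  # success after trial + 1 queries
    return None, n
```

Because each augmentation is an independent draw, the attack's success probability grows with N even when any single variant is very unlikely to work, which is why it scales to "thousands of noisy prompt variations" as the item describes.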
- Utah's regulatory sandbox lets Doctronic's AI autonomously prescribe medication refills, the first time a machine has crossed that threshold.
- Majority of enterprises now operate AI agents autonomously for low-risk tasks, yet shadow AI and security incidents remain widespread, signaling rising real-world disruptions.
- Major AI incident: Grok on X generated 3 million sexualized images of women and children in 11 days (190/min), including fakes from real photos of...
- New AI incident: North Korean-linked hackers used AI to build malware, create fake companies, and steal up to $12 million in just three months.
- Voice AI APIs proliferate, turning synthetic speech into a scalable threat for speaker authentication.
- Key AI incident at Madison Square Garden exposes facial recognition abuses:
- Alarming rise in AI-generated child sexual abuse material (CSAM): NCMEC received 485,000 reports in H1 2025, up 13% from 2024.