Failed Companies Selling Slack/Email Archives for AI Training Data
Failed companies are selling old Slack chats and email archives to train AI, fueling privacy exposure and data poisoning risks from underground markets. HN traction: 20 points.

Created by Peter Felber
Timely AI security news, technical threats, governance updates, and real‑world incident analysis
Explore the latest content tracked by AI Security Pulse
Failed companies are selling old Slack chats and email archives to train AI, fueling privacy exposure and data poisoning risks from underground markets. HN traction: 20 points.
Vercel supply chain attack struck via Context AI OAuth, exposing risks in SaaS integrations—critical lessons for hardening OAuth access in AI systems.
Trend alert: North Korean actors like Sapphire Sleet use AI to bypass software flaws in supply chain compromises.
Critical update for enterprise AI: OCC, Fed, and FDIC issued principles-based guidance on April 17 for managing model risks in banks over $30B...
DHS briefing shocks House lawmakers on how jailbroken frontier models—stripped of guardrails—provide step-by-step terror plans in seconds.
-...
Anthropic's Claude Mythos Preview uncovered 271 vulnerabilities in Firefox, patched in version 150.
Enterprise AI agents on Google Cloud gain robust security via Check Point's AI Defense Plane integration:
Agentic AI introduces 10 data exfiltration pathways, from poisoned tool descriptions to agent memory attacks, that bypass traditional security controls. Enterprises must rethink defenses for these stealthy enterprise threats.
AI regulation pits stakeholder inclusion against crisis agility:
Key AI security checklist for pros tackling core threats:
Prioritize these to protect frontier and enterprise AI systems.
Industry data and tools highlight security as the #1 barrier to scaling agentic AI:
Ongoing research uncovers lasting weaknesses despite model advances:
Despite AI's positive uses, its misuse in cyberattacks and disinformation campaigns poses a national security threat, heightening insider risks.
Automated AI vulnerability discovery is reversing enterprise security costs that traditionally favor attackers, flipping the economics of vuln hunting in favor of defenders.
Meta to capture employee mouse movements and keystrokes for AI training, exposing privacy risks in major labs' internal data harvesting. Ignites massive buzz with 692 points on Hacker News.
Governance mirage grips enterprises: 72% claim multiple 'primary' AI platforms amid sprawl from vendors like Microsoft and OpenAI, yet 1/3 lack...
6 AI security incidents rocked April 7-21, 2026—from data leaks to supply chain attacks—with step-by-step paths.