AI Safety & Governance Digest

**Enterprise governance: NY RAISE Act, IASCA cert, Int'l AI Safety Report, PAI 2025, TRUMP AI Act, US corps war crimes, MOUs, NIST RMF, RAND reporting, IBM playbook, CA EO, OpenAI safety net/Fellowship/Altman scandals, US-China collab, JAG TribunalAI**

**Key Questions**

What is the OpenAI Safety Fellowship?

The OpenAI Safety Fellowship is a pilot program that supports independent research on AI safety and alignment, with a focus on evaluations and robustness. It aims to bring fresh external researchers into safety work and rebuild a talent pipeline following cuts to OpenAI's superalignment efforts.

What does the US Navy JAG propose for military tribunals?

The US Navy Judge Advocate General's Corps (JAG) has proposed using TribunalAI to replace human panelists in military tribunals, with the goal of faster trials and verdicts. The proposal promises efficiency gains but raises concerns about bias and propaganda risks, highlighting the tension between speed and fairness in AI governance.

What is the NIST AI Risk Management Framework 1.0?

The NIST AI Risk Management Framework (AI RMF) 1.0 is a voluntary framework for identifying, assessing, and managing risks from AI systems; per the digest's sources, 72% of S&P companies disclose alignment with it. Practitioners use it to structure risk mitigation systematically and to track compliance.
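The AI RMF 1.0 organizes its guidance around four core functions: Govern, Map, Measure, and Manage. A minimal sketch of how a practitioner might track which functions each AI system has addressed (the system names and statuses below are hypothetical):

```python
# Track coverage of the NIST AI RMF 1.0 core functions (Govern, Map,
# Measure, Manage) across an organization's AI systems.
# The systems and their statuses below are invented for illustration.

RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def coverage_gaps(system_controls: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return, per system, the RMF core functions not yet addressed."""
    return {
        system: set(RMF_FUNCTIONS) - addressed
        for system, addressed in system_controls.items()
    }

controls = {
    "chatbot": {"Govern", "Map"},
    "fraud-scoring": {"Govern", "Map", "Measure", "Manage"},
}
gaps = coverage_gaps(controls)
print(gaps["chatbot"])        # the two functions still missing
print(gaps["fraud-scoring"])  # empty set: fully covered
```

A gap report like this is one simple way to turn the framework's functions into a compliance-tracking artifact.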

How does the IBM Agentic AI Governance Playbook perform?

The IBM playbook addresses governance of agentic AI, covering analytics, automation, and IT infrastructure, but 40% of implementations reportedly fail key benchmarks. The result underscores the need for corporate AI governance that goes beyond ethics statements.

What is the status of US-China AI safety collaboration?

US-China collaboration on AI safety continues despite economic competition, spurred in part by reported vulnerabilities in DeepSeek models, including a 12x higher rate of malicious outputs and a 94% jailbreak success rate. Practitioners track these figures as part of monitoring global threats.
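A jailbreak rate like the 94% cited above is typically just the fraction of red-team attempts that elicited a disallowed output. A minimal sketch, with invented attempt data:

```python
# Hypothetical sketch: computing a jailbreak success rate from red-team
# evaluation results. The attempt outcomes below are invented.

def jailbreak_rate(attempts: list[bool]) -> float:
    """Fraction of attempts that elicited a disallowed output."""
    if not attempts:
        return 0.0
    return sum(attempts) / len(attempts)

# 47 of 50 red-team attempts succeeded
attempts = [True] * 47 + [False] * 3
print(f"{jailbreak_rate(attempts):.0%}")  # prints "94%"
```

Real evaluations differ in how "success" is judged and how attempts are sampled, so headline rates are only comparable within a single methodology.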

What does the Partnership on AI (PAI) 2025 report cover?

The PAI 2025 annual report highlights progress on advancing AI safety and responsibility globally, detailing community expansion and concrete actions toward trustworthy AI systems. It aligns with broader governance efforts such as people-first industrial policy.

What risks are associated with US corporations and AI in 2026?

Reports allege that US corporations used AI in ways that aided war crimes, potentially rendering them military targets. This underscores governance failures in enterprise use of AI and reinforces the emphasis on compliance and risk management to avoid such liabilities.

What is the RAND report on AI incident reporting?

The RAND report designs incident reporting systems for harms from general-purpose AI, structured across seven dimensions. Such structured, systematic reporting helps practitioners track compliance, monitor global threats, and mitigate AI risks.
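To make the idea of structured incident reporting concrete, here is a hypothetical sketch of an incident record. The digest does not list the RAND report's seven dimensions, so the field names below are invented placeholders, not the report's actual schema:

```python
# Hypothetical structured record for an AI incident report.
# Field names are illustrative placeholders, NOT the RAND report's
# actual seven reporting dimensions.

from dataclasses import dataclass, field

@dataclass
class IncidentReport:
    incident_id: str
    system_name: str                 # which general-purpose AI system
    harm_description: str            # what harm occurred
    severity: str                    # e.g. "low" / "medium" / "high"
    affected_parties: list[str] = field(default_factory=list)
    detected_at: str = ""            # ISO-8601 timestamp, if known
    mitigation: str = ""             # remediation taken, if any

report = IncidentReport(
    incident_id="INC-0001",
    system_name="example-llm",
    harm_description="model produced unsafe instructions",
    severity="medium",
)
print(report.incident_id, report.severity)
```

The value of any such schema is that reports become comparable and aggregable, which is what distinguishes systematic incident reporting from ad hoc disclosure.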

In brief:

- NY RAISE Act transparency requirements
- IASCA certification
- International AI Safety Report: fraud and cyber risks
- PAI 2025 annual report
- Anthropic-Australia MOU
- RAND seven-dimension incident reporting
- NIST AI RMF 1.0 (72% of S&P companies disclose)
- IBM agentic AI governance playbook (40% of implementations fail benchmarks)
- California executive order on procurement and safety
- OpenAI external Safety Fellowship (evaluations, robustness, talent pipeline after superalignment cuts)
- US-China safety collaboration amid DeepSeek vulnerabilities (12x malicious outputs, 94% jailbreak rate)
- US Navy JAG proposal for TribunalAI to replace panelists for faster trials (efficiency vs. bias/propaganda risks)
- People-first industrial policy

Practitioners track compliance, military risks, HDP delegation, and global threats.

Sources (26)
Updated Apr 8, 2026