Regulatory guidance and frameworks for AI in government and finance
Government & Financial AI Governance
The regulatory and governance landscape for artificial intelligence (AI) and cybersecurity within government and financial sectors is reaching a critical inflection point. Building on earlier mandates and frameworks, recent developments underscore a growing emphasis on supply chain transparency, shadow AI risk management, secure AI client design, and precise procurement controls—all vital to securing AI deployments amid rapidly evolving threats and operational complexities.
Evolving Government Cybersecurity Mandates and AI Governance
The Cybersecurity and Infrastructure Security Agency (CISA) continues to lead with aggressive directives aimed at shoring up the security posture of federal and state networks:
- Emergency Directive (ED) 26-03 mandates immediate mitigation of critical vulnerabilities in Cisco SD-WAN systems, a backbone technology for government network infrastructure. This directive highlights the ongoing vulnerability surface introduced by essential network components supporting AI and other digital services.
- Binding Operational Directive (BOD) 26-02 elevates lifecycle management of edge devices, compelling agencies to inventory, remove, or replace unsupported or insecure hardware. This is pivotal as edge computing increasingly supports AI inference and data processing close to the source, exposing new attack vectors if unmanaged.
Together, these directives mark a shift toward proactive lifecycle risk management—recognizing that AI integrations depend heavily on secure, well-maintained edge infrastructure.
Complementing these mandates, the National Institute of Standards and Technology (NIST) has reinforced its central role in AI governance:
- The Open Security Controls Assessment Language (OSCAL) framework now enables state and local governments to automate and standardize security control assessments. This streamlines compliance reporting and provides auditors with transparent, evidence-based assurance—critical for agencies adopting AI tools that demand rigorous privacy and security postures.
- The upcoming 2026 update to the Financial Services AI Risk Management Framework (AI RMF) embeds Privacy-Enhancing Technologies (PETs) and aligns AI risk management with the broader NIST Privacy Framework, reinforcing operational resilience and ethical AI deployment in financial institutions.
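To illustrate the kind of automation OSCAL enables, the sketch below tallies finding statuses from a heavily simplified, hypothetical assessment-results document. Real OSCAL documents follow NIST's published schemas; the structure and field names here are illustrative assumptions only, not the actual OSCAL model.

```python
import json

# Simplified, hypothetical OSCAL-style assessment-results document.
# Real OSCAL defines a much richer schema; these fields are illustrative.
ASSESSMENT = json.loads("""
{
  "assessment-results": {
    "results": [
      {
        "title": "Q3 control assessment",
        "findings": [
          {"control-id": "ac-2", "status": "satisfied"},
          {"control-id": "au-6", "status": "not-satisfied"},
          {"control-id": "si-4", "status": "satisfied"}
        ]
      }
    ]
  }
}
""")

def summarize(doc: dict) -> dict:
    """Tally finding statuses so auditors get a quick compliance snapshot."""
    tally: dict = {}
    for result in doc["assessment-results"]["results"]:
        for finding in result["findings"]:
            tally[finding["status"]] = tally.get(finding["status"], 0) + 1
    return tally

print(summarize(ASSESSMENT))  # {'satisfied': 2, 'not-satisfied': 1}
```

Because the document is machine-readable, the same tally can feed dashboards or compliance reports without manual transcription, which is the core of the evidence-based assurance OSCAL aims to provide.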
Government Chief Information Officers (CIOs) are also urged to adopt a Zero Trust architecture, prepare for post-quantum cryptography, and integrate AI-specific safeguards—forming a triad of modernization imperatives to secure the digital ecosystem underpinning AI solutions.
Treasury Department’s Expanded AI Guidance for Financial Services
The U.S. Department of the Treasury has issued updated guidance that emphasizes transparency, bias mitigation, and operational resilience in AI deployments across the financial sector. This guidance aligns with regulators’ growing concerns over AI’s impact on financial markets, customer protections, and systemic risk.
Key recommendations include:
- Incorporating ethical AI principles into risk management
- Ensuring compliance with existing financial laws while navigating AI-specific challenges
- Enhancing governance frameworks to include AI lifecycle management and risk assessment
This guidance is designed to prepare financial institutions for an environment where AI is both a critical innovation driver and a source of novel vulnerabilities.
New Challenges Highlighted: Supply Chain Risks and Shadow AI
Recent high-profile developments reveal deeper complexities in AI governance:
- The Pentagon’s abrupt cutoff of Anthropic technology vendors exposed how little many organizations understand their AI supply chains and vendor dependencies. The episode revealed that most enterprises lack even a basic map of their AI dependencies, prompting urgent calls for improved supply chain transparency and due diligence in AI vendor management.
- The rise of Shadow AI—AI tools adopted and used by employees outside official IT channels—has introduced a glaring security blind spot. Shadow AI risks include unmanaged data exfiltration, inadvertent exposure of sensitive information, and the erosion of formal governance controls. As one security analyst put it, "When everyone becomes a data leak waiting to happen, traditional perimeter defenses become moot."
These revelations underscore that beyond technology controls, organizational policies, user training, and continuous monitoring are indispensable to managing AI risks.
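Continuous monitoring for shadow AI can start with something as simple as scanning egress or proxy logs for traffic to known AI service endpoints that are not on a sanctioned list. The sketch below is a minimal illustration of that idea; the domain list, approved-endpoint list, and log format are all assumptions for the example, not a reference to any specific product or policy.

```python
# Minimal shadow-AI detection sketch: flag outbound requests to AI
# services that are not on the organization's approved list.
# Domain lists and log format are illustrative assumptions.

AI_SERVICE_DOMAINS = {
    "api.openai.com", "chat.openai.com",
    "claude.ai", "api.anthropic.com",
    "gemini.google.com",
}
APPROVED = {"api.anthropic.com"}  # hypothetical sanctioned endpoint

def flag_shadow_ai(proxy_log_lines):
    """Return (user, domain) pairs hitting unapproved AI services.

    Each log line is assumed to look like: 'timestamp user domain'.
    """
    hits = []
    for line in proxy_log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed entries
        _, user, domain = parts
        if domain in AI_SERVICE_DOMAINS and domain not in APPROVED:
            hits.append((user, domain))
    return hits

log = [
    "2026-01-15T09:01Z alice api.anthropic.com",
    "2026-01-15T09:02Z bob chat.openai.com",
    "2026-01-15T09:03Z carol intranet.example.gov",
]
print(flag_shadow_ai(log))  # [('bob', 'chat.openai.com')]
```

Detection alone is not governance: flagged usage should feed into the policy, training, and vendor-review processes described above rather than simply being blocked.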
Concrete Tools for AI Procurement and Secure Deployment
In response to these emerging risks, new practical resources and security findings are shaping how government and financial institutions approach AI adoption:
- A newly released RFP template for AI usage control and AI governance offers a standardized approach to procurement, ensuring that vendors commit to transparency, security, and compliance requirements. This tool helps agencies embed governance expectations directly into contracts, mitigating risks arising from opaque AI supply chains or uncontrolled AI functionalities.
- Security researchers uncovered a critical vulnerability in Perplexity’s AI-powered Comet browser, in which malicious calendar invites could be used to access local files. This flaw highlights the emerging attack surface in AI client interfaces—particularly AI browsers and chat clients—and the urgent need for secure design principles and continuous assurance throughout AI lifecycles.
Best Practices for AI Governance in State and Local Governments
As generative AI adoption accelerates, states and municipalities are advised to adopt:
- Data Loss Prevention (DLP) tools to monitor and control sensitive information flow
- Prompt filtering and sanitization to reduce accidental exposure of confidential data to AI systems
- Browser and session isolation techniques to contain AI-driven interactions within controlled environments
Adopting these controls as part of a comprehensive AI governance framework helps maintain privacy, security, and trust in public-sector AI applications.
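The prompt filtering and sanitization control above can be sketched with simple pattern-based redaction applied before a prompt leaves the organization. The example below is a minimal illustration under that assumption; production DLP deployments use far richer, context-aware detectors than these three regex patterns.

```python
import re

# Minimal prompt-sanitization sketch: redact common sensitive patterns
# (SSNs, card-like numbers, email addresses) before a prompt is forwarded
# to an external AI service. Patterns are illustrative, not exhaustive.

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED-CARD]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
]

def sanitize_prompt(prompt: str) -> str:
    """Apply each redaction pattern in turn and return the cleaned prompt."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt

raw = "Check status for jane.doe@example.gov, SSN 123-45-6789."
print(sanitize_prompt(raw))
# Check status for [REDACTED-EMAIL], SSN [REDACTED-SSN].
```

In practice this kind of filter sits in a gateway or browser-isolation layer, so that every AI-bound request passes through it regardless of which client an employee uses.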
Significance and Outlook
The maturing regulatory environment for AI and cybersecurity across government and finance sectors reflects a comprehensive, multi-layered approach:
- Mandates (CISA ED 26-03, BOD 26-02) enforce immediate and lifecycle security controls for critical infrastructure supporting AI.
- Frameworks (NIST OSCAL, AI RMF 2026) provide standardized, automated tools for compliance and risk management.
- Guidance (Treasury AI guidelines) promotes responsible, ethical AI use aligned with regulatory expectations.
- New challenges (Pentagon vendor cutoff, Shadow AI, AI client vulnerabilities) expose gaps in supply chain transparency, user behavior management, and secure AI design.
- Procurement tools (RFP templates) enable agencies to embed governance and security controls into AI acquisition processes.
Collectively, these developments highlight that AI security and governance are no longer abstract ambitions but immediate operational imperatives. Success will depend on integrating robust technical controls, transparent supply chain oversight, vigilant user governance, and continuous assurance mechanisms.
As AI becomes increasingly embedded in government functions and financial services, these regulatory and guidance frameworks will shape how trust, security, and compliance are maintained at scale, ensuring AI’s transformative potential is harnessed safely and responsibly.