Gamified Systems Radar

Norms and debates around AI’s use in defense, governance, and information operations

AI Governance, Military Use and PsyWar

As artificial intelligence continues to permeate critical sectors such as defense, governance, and information operations, the landscape is increasingly fraught with ethical, security, and geopolitical challenges. Central to these debates are questions about the appropriate norms for deploying AI in sensitive environments, the potential for misuse, and the need for effective international regulation.

Engagement and Backlash Around AI in Defense

Major AI companies like OpenAI and Anthropic are navigating complex relationships with government and military entities. Recently, OpenAI announced a “compromise” with the Pentagon, allowing U.S. military access to its technologies in classified settings—a move that Anthropic feared could set a problematic precedent for AI’s role in defense. OpenAI CEO Sam Altman defended this decision, acknowledging that “the optics don’t look good,” yet emphasizing the importance of integrating AI into national security frameworks.

This industry-government engagement is increasingly contentious. Many AI firms grapple with the ethical implications of their technology's use in warfare and surveillance, raising dual-use dilemmas: tools designed for innovation can also be exploited for harm. As the piece "No one has a good plan for how AI companies should work with the government" argues, the risk of misuse or escalation looms large, especially as governments seek to leverage AI for cognitive warfare and propaganda.

AI in Cognitive Warfare, Propaganda, and Governance

AI’s capacity to influence public perception and manipulate narratives has sparked significant concern. Prediction markets and AI-driven forecasting tools are increasingly being drawn into geopolitical influence operations. For instance, Nasdaq’s recent filings for prediction market-style options on the Nasdaq-100 could redefine risk management, but they also open new avenues for market manipulation.

A notable example involves Polymarket, where bets on geopolitical events, such as the health or stability of Iran’s leadership, can amplify or distort public sentiment. An account linked to George Cottrell, a confidant of Nigel Farage, placed a $550,000 wager on the likelihood of military action against Iran, illustrating how prediction markets can be weaponized for influence operations.
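The mechanics behind such wagers are simple, which is part of what makes large positions attention-grabbing. The sketch below is illustrative only: it does not use Polymarket's actual API or fee model, and the price used is a hypothetical figure, not the market's real quote at the time of the reported bet.

```python
# Illustrative sketch of binary prediction-market arithmetic.
# Assumption: a YES share pays $1 if the event occurs, $0 otherwise,
# so its trading price approximates the crowd's implied probability.
# The stake matches the figure in the article; the price is hypothetical.

def implied_probability(yes_price: float) -> float:
    """Price per $1-payout YES share, read as an implied probability."""
    return yes_price

def shares_bought(stake: float, yes_price: float) -> float:
    """Number of YES shares a given stake buys; each pays $1 if YES."""
    return stake / yes_price

stake = 550_000.0   # reported wager size
price = 0.50        # hypothetical YES price (50 cents per share)

print(implied_probability(price))   # market-implied chance of the event
print(shares_bought(stake, price))  # gross $ payout if it resolves YES
```

A large buy at one price can itself move the quote, which is why observers worry that the displayed "probability" doubles as a narrative signal rather than a neutral forecast.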

Beyond markets, state actors are actively deploying AI-powered propaganda. The “invisible battlefield” of cognitive warfare involves manipulating perceptions and beliefs through sophisticated AI-driven disinformation, which complicates efforts to maintain truth and stability in geopolitical conflicts.

Ethical and Normative Challenges

The deployment of AI in these domains raises pressing questions about ethical standards, transparency, and oversight. While some nations are investing in regional cloud networks and decentralized infrastructure, aimed at reducing dependence on Western-controlled systems and safeguarding against cyber threats, regulatory fragmentation remains significant. Jurisdictions such as Dubai have begun enforcing licensing for AI and crypto firms, illustrating the push for oversight, but global consensus on norms remains elusive.

Articles like "AI Governance Briefing" and "Will We Govern AI, or Will AI Govern Us?" highlight the urgency of establishing robust governance frameworks. Without clear international standards, there is a risk that AI could be weaponized not only in direct conflict but also in information operations, cyber espionage, and narrative shaping—all of which threaten democratic processes and global stability.

The Path Forward

The evolving landscape underscores the urgent need for international cooperation on AI safety protocols, cybersecurity, and ethical standards in defense and governance. As OpenClaw—a zero-click exploit capable of hijacking developer AI agents—demonstrates, vulnerabilities are rapidly emerging, emphasizing the importance of robust safeguards.

At the same time, regulatory efforts are fragmented. While some nations push for more oversight, others resist, citing concerns over sovereignty and economic competitiveness. The proliferation of AI-enhanced influence operations and disinformation campaigns further complicates these efforts.

In summary, the norms surrounding AI’s use in defense and governance are still in flux. The international community faces the challenge of balancing technological innovation with ethical responsibility, security, and trust. Establishing clear norms and effective regulation will be critical in preventing misuse, managing geopolitical tensions, and ensuring that AI serves as a force for stability rather than chaos. As AI continues to evolve, so too must our collective efforts to govern its deployment ethically and responsibly.

Updated Mar 7, 2026