The Use and Misuse of AI in Defense, Security, and Online Harm: Navigating Ethical Challenges and Geopolitical Tensions
As artificial intelligence continues to evolve rapidly, its application in defense and security contexts has become a double-edged sword—offering unprecedented strategic advantages while posing significant ethical and systemic risks. The current landscape reveals a complex interplay between technological innovation, geopolitical disputes, and societal harms that demand careful scrutiny.
Military and Defense Applications: Ethical Dilemmas and Geopolitical Disputes
AI's integration into military systems has sparked intense debate over ethics, safety, and sovereignty. Autonomous weapons, surveillance platforms, and decision-support systems promise enhanced capabilities but also raise profound concerns about accountability and unintended escalation. A salient example is the recent dispute between Anthropic, a leading AI firm, and the U.S. government. In 2026, President Donald Trump ordered federal agencies to "IMMEDIATELY CEASE" using Anthropic's AI systems, citing ethical concerns and security risks, particularly around military misuse and foreign dependencies. Anthropic rejected the directive and defended its safety standards and ethical commitments, a standoff that highlights the tension between private innovation and government oversight.
This conflict exemplifies a broader issue: as AI tools are cleared for classified military operations, such as the Pentagon's approval of Elon Musk's Grok chatbot for classified tasks, questions of regulation, liability, and security grow more pressing. Reports indicate that agencies remain concerned about Grok's safety and reliability in sensitive military contexts, raising fears of unintended escalation and security vulnerabilities.
Furthermore, disputes over AI supply chains, with the U.S. Defense Department emphasizing resilience and the mitigation of foreign interference, underscore the strategic importance of technological sovereignty. Defense officials' labeling of companies such as Anthropic as a "supply chain risk" exemplifies the growing focus on security vulnerabilities stemming from dependence on external AI providers.
Cognitive Warfare and Disinformation: Emerging Threats
Beyond conventional military applications, AI is increasingly exploited in cognitive warfare—a form of psychological operations that manipulate perceptions, spread disinformation, and influence societal stability. Articles such as “The Invisible Battlefield: AI, Cognitive Warfare, and the Battle for Your Mind” detail how advanced AI tools amplify disinformation campaigns and social manipulation, threatening democratic institutions and national security.
Disinformation, often propagated through deepfake videos and emotion AI, erodes public trust and can destabilize societies. The threat is compounded by the absence of comprehensive international norms, leaving fertile ground for misuse by state and non-state actors.
Exploitation and Online Harm: Societal Risks Amplified by AI
The misuse of AI extends into the realm of online harm, where abuses are becoming more pervasive and sophisticated:
- Child Exploitation: AI's ability to generate synthetic media at scale has fueled a surge in child sexual abuse material, complicating detection and prosecution. Authorities warn that AI-facilitated exploitation presents new legal and technological challenges requiring specialized safeguards.
- Deepfakes and Disinformation: Realistic deepfake videos threaten public trust and democratic processes. As verification technologies lag behind, societies face increased risks of manipulation and misinformation campaigns.
- Gender-Based Violence: Deepfake revenge porn and harassment bots exacerbate online gendered violence, prompting urgent calls for regulatory safeguards.
- Mass Surveillance and Privacy Erosion: AI-driven mass surveillance and data harvesting enable pervasive monitoring that threatens civil liberties and personal privacy.
Ethical and Leadership Challenges
In response to these threats, discussions around ethical AI and responsible leadership have gained prominence. Initiatives emphasize the importance of transparency, public engagement, and adaptive regulation to maintain societal trust amid the proliferation of disinformation and online harms. Notably, debates continue over who should define the moral values embedded within AI systems—raising questions about cultural diversity, power imbalances, and global equity.
The Path Forward
The current landscape underscores the necessity for harmonized, enforceable, and ethically grounded governance frameworks. International cooperation is crucial to develop global standards that balance innovation with security and ethics. Building trust among governments, industry, civil society, and international organizations is essential to prevent misuse and conflict escalation.
Key steps include:
- Establishing international norms to regulate AI in military and cognitive warfare.
- Developing transparent supply chains and security protocols to safeguard critical AI infrastructure.
- Implementing technological safeguards against exploitation, deepfakes, and disinformation.
- Promoting ethical standards that respect cultural diversity and human rights while mitigating power imbalances.
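To make the third step above concrete, one family of technological safeguards is cryptographic content provenance: media is tagged with an authentication code at the point of capture or publication, so any later alteration (for instance, a deepfake edit) fails verification. The sketch below is a minimal, hypothetical illustration using an HMAC from the Python standard library; real provenance standards such as C2PA use public-key signatures and richer metadata, and the function names here are invented for the example.

```python
import hashlib
import hmac
import secrets

# Illustrative sketch only: a publisher tags media bytes with an HMAC,
# and a verifier later checks the tag. A shared secret key stands in
# for the public-key signatures real provenance systems would use.

def sign_media(key: bytes, media: bytes) -> str:
    """Return a hex authentication tag binding the media to the key."""
    return hmac.new(key, media, hashlib.sha256).hexdigest()

def verify_media(key: bytes, media: bytes, tag: str) -> bool:
    """Constant-time check that the media still matches its tag."""
    return hmac.compare_digest(sign_media(key, media), tag)

key = secrets.token_bytes(32)  # held by the signing and verifying parties
original = b"frame bytes of an authentic video"
tag = sign_media(key, original)

assert verify_media(key, original, tag)             # untouched media verifies
assert not verify_media(key, original + b"x", tag)  # any alteration fails
```

The design point is that verification is cheap and deterministic, whereas detecting a well-made deepfake by inspection is neither; provenance shifts the burden from spotting fakes to proving authenticity.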
In conclusion, as AI becomes deeply embedded in defense and security domains, maintaining ethical integrity, security, and public trust remains paramount. The decisions made today will shape whether AI serves as a force for stability and societal benefit or becomes a catalyst for division, conflict, and harm in an interconnected world.