Use of AI agents and AI security tools in vulnerability discovery, infrastructure management, and their emerging attack surface
AI Agents and Security Tooling Risks
The rapid, dual-use adoption of AI agents in cybersecurity is reshaping both defensive and offensive operations, transforming vulnerability discovery, infrastructure management, and the attack surface itself. Recent developments point to an accelerating arms race: AI-powered security tools strengthen proactive defense, while adversaries harness AI agents to automate complex attack workflows, creating new risks and governance challenges that demand adaptive responses.
Accelerating AI-Driven Vulnerability Discovery and Automated Patching
Building on earlier innovations such as OpenAI’s Codex Security and browser-integrated assistants like Claude for Firefox, the latest AI-based security products have matured to deliver automated vulnerability detection, prioritization, and patching at scale. These tools leverage advanced language models and contextual understanding to scan massive codebases and infrastructure configurations with speed and precision previously unattainable by human teams alone.
Key advantages include:
- Continuous and proactive vulnerability monitoring that reduces exposure windows by quickly identifying and validating emerging threats.
- Context-aware patch generation that minimizes operational disruptions by tailoring fixes to specific software environments.
- Integration with CI/CD pipelines and cloud orchestration platforms, enabling rapid deployment of remediations.
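The prioritization step in such a pipeline can be sketched as a simple severity-based triage pass over scanner findings. This is a minimal illustration, not any specific product's API: the `Finding` schema, the exposure weighting, and the sample CVE identifiers are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One vulnerability reported by a scanner (hypothetical schema)."""
    cve_id: str
    cvss_score: float  # CVSS base score, 0.0-10.0
    exposed: bool      # reachable from untrusted input?

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order findings so severe, externally exposed issues are remediated first.

    Externally exposed findings are weighted ahead of internal-only
    findings with a similar CVSS score (the +2.0 weight is illustrative).
    """
    return sorted(
        findings,
        key=lambda f: f.cvss_score + (2.0 if f.exposed else 0.0),
        reverse=True,
    )

findings = [
    Finding("CVE-2024-0001", 9.8, False),
    Finding("CVE-2024-0002", 7.5, True),
    Finding("CVE-2024-0003", 9.1, True),
]
for f in prioritize(findings):
    print(f.cve_id, f.cvss_score, f.exposed)
```

In practice the triage key would fold in many more signals (exploit availability, asset criticality, patch maturity); the point of the sketch is that exposure context, not raw score alone, drives the patch queue.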
However, recent insights underscore the critical need for rigorous human oversight and validation. Over-reliance on AI-generated outputs risks introducing incorrect patches or false positives, which can inadvertently open new attack vectors or cause system instability. Security leaders emphasize that AI tools should augment—not replace—expert analysis to maintain trustworthiness and accuracy.
Nation-State and Criminal Actors Harness AI Agents to Automate Attack Infrastructure
On the offensive front, threat actors have increasingly adopted AI agents to automate and optimize attack infrastructure management, significantly enhancing operational agility and scale:
- Investigations reveal that nation-state groups—including North Korean cyber units—and sophisticated criminal organizations employ AI agents to orchestrate credential harvesting, manage command-and-control (C2) servers, and dynamically rotate attack infrastructure such as proxy networks and phishing domains.
- The emergence of AI-enhanced malware like the Hive0163 ransomware group’s Slopoly variant illustrates how AI integration boosts malware persistence and adaptability by autonomously adjusting tactics in response to defensive measures.
- AI-driven automation supports rapid large-scale phishing and spear-phishing campaigns, generating highly personalized social engineering lures that increase victim engagement and compromise rates.
- Recent presentations at the Mobile Hacking Conference and discussions in the Initial Access podcast highlight the growing role of AI in mobile and edge device exploitation. AI-assisted pentesting tools now simulate attacks at scale, exposing new vulnerabilities in distributed and resource-constrained environments.
This shift presents formidable challenges for defenders:
- Automated infrastructure rotation and rapid campaign scaling evade traditional detection and interdiction methods.
- The opacity of AI decision-making in attack workflows complicates attribution and forensic investigations.
- Autonomous AI agents may inadvertently escalate attack impact, increasing collateral damage and legal liability.
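One way defenders can still surface automated infrastructure rotation is a fast-flux-style heuristic: flag a domain whose resolved IP set churns unusually fast within a short observation window. The sketch below runs on synthetic DNS observations; the window and threshold values are assumptions, and a production detector would also weigh record TTLs, ASN diversity, and allowlists for legitimate CDNs.

```python
from collections import defaultdict

def flag_rotating_domains(observations, window_hours=24, ip_threshold=10):
    """Flag domains that resolved to many distinct IPs inside one window.

    observations: iterable of (domain, ip, hour) tuples, where `hour`
    is an offset from the start of monitoring.
    """
    ips_in_window = defaultdict(set)
    for domain, ip, hour in observations:
        if hour < window_hours:
            ips_in_window[domain].add(ip)
    return {d for d, ips in ips_in_window.items() if len(ips) >= ip_threshold}

# Synthetic data: one domain rotating through 12 IPs, one stable domain.
obs = [("bad.example", f"203.0.113.{i}", i) for i in range(12)]
obs += [("stable.example", "198.51.100.7", h) for h in range(12)]
print(flag_rotating_domains(obs))  # expect {'bad.example'}
```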
Emerging Attack Surface: AI-Generated Code, Patching Risks, and Novel Exploitation Vectors
The widespread adoption of AI tools—both defensive and offensive—has expanded the attack surface in complex ways:
- AI-generated patches and code updates that are improperly validated can introduce novel supply-chain vulnerabilities, undermining software integrity and trust.
- Adversary-controlled AI agents increasingly exploit vulnerabilities in cloud orchestration and CI/CD environments, embedding themselves more deeply and persistently within enterprise ecosystems.
- The proliferation of AI-generated social engineering content—including personalized phishing emails and scam messages flagged by platforms like Meta—requires heightened vigilance and user education.
- Edge devices and mobile platforms, traditionally less monitored, are becoming prime targets for AI-assisted attacks due to their often weak security postures and the rise of AI-powered pentesting tools that expose overlooked flaws.
Governance Risks and the Need for AI-Aware Security Policies
The incorporation of AI agents into cybersecurity workflows introduces multifaceted governance challenges:
- Over-reliance on AI outputs without continuous human validation risks operational errors and security gaps.
- The lack of transparency in autonomous AI attack workflows hinders threat attribution and complicates incident response.
- AI-generated code and configuration changes require enhanced compliance frameworks to ensure alignment with security policies and regulatory requirements.
- Enterprises and Managed Service Providers (MSPs) must implement integrated AI governance controls, balancing automation benefits with risk mitigation.
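One building block of such governance controls is policy-as-code: automatically rejecting AI-proposed configuration changes that violate baseline rules before a human reviewer even sees them. This is a minimal sketch; the rule schema and the two example policies are illustrative assumptions, not a real policy engine's format.

```python
def check_firewall_change(rule: dict) -> list[str]:
    """Return policy violations for a proposed firewall rule (empty = compliant).

    Two illustrative baseline policies:
      1. No ingress traffic open to the entire internet.
      2. Management ports (SSH 22, RDP 3389) must not be exposed.
    """
    violations = []
    if rule.get("direction") == "ingress" and rule.get("source") == "0.0.0.0/0":
        violations.append("ingress open to 0.0.0.0/0")
    if rule.get("port") in (22, 3389):
        violations.append(f"management port {rule['port']} exposed")
    return violations

# An AI-proposed change that a policy gate should block outright.
proposed = {"direction": "ingress", "source": "0.0.0.0/0", "port": 22}
print(check_firewall_change(proposed))
```

Real deployments typically express such rules in a dedicated policy engine rather than application code, but the pattern is the same: machine-generated changes pass through machine-enforced policy before human review.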
Security thought leaders like ESET’s Tony Anscombe stress that MSPs need to maintain strict operational oversight while integrating AI tools, preventing governance gaps that adversaries could exploit.
Defensive Strategies: Embracing AI While Mitigating Risks
To navigate this evolving landscape, organizations must adopt a balanced, multi-layered approach that leverages AI’s strengths while addressing its vulnerabilities:
- Deploy AI-enhanced vulnerability scanning and patching tools (e.g., Codex Security) with mandatory expert review to avoid erroneous fixes.
- Develop advanced telemetry and analytics capable of detecting AI-driven attack infrastructure patterns, including anomalous automation signals and rapid infrastructure changes.
- Incorporate AI-powered incident response automation to accelerate mitigation while preserving human-in-the-loop decision-making.
- Expand training programs focusing on AI-enabled social engineering, deception tactics, and emerging attack vectors targeting mobile and edge devices to raise awareness across technical and non-technical staff.
- Enforce AI governance and compliance policies specific to AI-generated code and configuration changes, ensuring traceability, accountability, and alignment with regulatory standards.
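The telemetry point above can be illustrated with a timing-regularity heuristic: human-driven activity tends to have irregular gaps between events, while scripted agents often act at near-constant intervals, so a very low coefficient of variation in inter-event times is one weak signal of automation. The sketch uses synthetic timestamps and an assumed cutoff; production systems would combine many such signals rather than rely on any single one.

```python
import statistics

def automation_score(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-event gaps; near 0 suggests scripted activity."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(gaps)
    return statistics.stdev(gaps) / mean if mean > 0 else 0.0

def looks_automated(timestamps: list[float], cv_cutoff: float = 0.1) -> bool:
    """Require a minimum sample size before judging regularity."""
    return len(timestamps) >= 5 and automation_score(timestamps) < cv_cutoff

bot = [i * 30.0 for i in range(20)]          # events exactly every 30 seconds
human = [0.0, 4.0, 31.0, 40.0, 95.0, 120.0]  # irregular, bursty gaps
print(looks_automated(bot), looks_automated(human))  # True False
```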
Meta’s recent deployment of AI tools to identify and flag scam messages serves as a model for proactive AI defense against AI-augmented social engineering threats.
Conclusion: Navigating the AI-Enabled Cybersecurity Frontier
The rapid dual-use adoption of AI agents and AI-powered security tools marks a pivotal moment in cybersecurity. These technologies enable faster, more scalable vulnerability management and infrastructure operations while simultaneously empowering adversaries with unprecedented automation capabilities and stealth.
As the AI-enabled attack surface expands—spanning cloud, mobile, edge, and supply chains—organizations must implement integrated AI governance frameworks, continuous validation processes, and adaptive defense strategies. Success will hinge on collaborative efforts across industry sectors, investments in AI-aware security architectures, and a commitment to maintaining human expertise at the core of AI-powered cybersecurity operations.
By embracing these principles, enterprises and service providers can better safeguard critical infrastructure and software ecosystems against the growing complexity and speed of AI-driven cyber threats.