[Template] OpenClaw Watch

Specific vulnerabilities, ClawHub poisoning, supply-chain attacks, and formal security advisories

OpenClaw Security Incidents & CVEs

OpenClaw, the creator-driven AI agent platform widely adopted by developers and enterprises, continues to navigate an increasingly hostile cybersecurity landscape. Since Operation DoppelBrand came to light in early 2026, a coordinated supply-chain and marketplace-poisoning attack, the ecosystem has faced a steady stream of sophisticated adversarial tactics. Recent developments reveal deeper infiltration vectors, newly disclosed vulnerabilities, and unresolved governance challenges, alongside an industry response built on multi-layered defenses and operational transparency.


Deepening Complexity of Operation DoppelBrand and Supply-Chain Intrusions

Operation DoppelBrand has proven far more pervasive and persistent than initially understood. The attackers' exploitation of the ClawHub marketplace, a repository of over 5,000 AI skills, remains at the core of the breach: 1,184 malicious skills were identified as carriers of polymorphic infostealers linked to roughly 2.3 million stolen credentials, including SSH keys, OAuth tokens, and cryptocurrency wallet secrets.

Key escalations since the initial discovery include:

  • AMOS Infostealer Campaign Intensifies: Initially propagated through fake troubleshooting tips in ClawHub skill comments, the AMOS campaign has evolved. Attackers now deploy more convincing social engineering tactics, such as deceptive password dialogs embedded in AI interactions, causing users to unwittingly self-infect and enabling prolonged credential harvesting. This refinement demonstrates the adversaries’ adaptive use of human trust exploitation within AI-driven workflows.

  • Cline CLI 2.3.0 Contamination Expands: The open-source AI coding assistant, integral to many developer CI/CD pipelines, has suffered further malicious payload injections beyond the original breach. The contamination stealthily propagates backdoored OpenClaw components into downstream projects, highlighting the dangers of supply-chain compromise in developer tooling.

  • Prompt Injection Attacks Become More Sophisticated: Attackers have advanced prompt injection techniques by embedding unauthorized instructions inside AI agent contexts. These semantic manipulations enable stealth data exfiltration and workflow disruption that bypass traditional detection mechanisms, revealing a new dimension of AI-targeted attacks that exploit agent autonomy itself.

  • ClawJacked WebSocket Hijacking Vulnerability: Newly disclosed high-severity flaws, collectively termed ClawJacked, allow malicious websites to hijack local OpenClaw AI agents via vulnerable WebSocket connections. This attack vector potentially enables unauthorized command execution, data theft, or lateral movement within compromised environments, emphasizing the need for stricter runtime isolation and communication safeguards.

  • Cryptocurrency Content Ban and Marketplace Moderation: In response to a surge of CLAWD token scams and crypto-related fraud, OpenClaw’s moderation team has instituted a comprehensive ban on cryptocurrency-related content within ClawHub. This decisive move aims to preserve marketplace integrity and curtail abuse that undermines user trust.

These developments depict a relentless, multi-vector offensive targeting marketplace vetting processes, developer workflows, AI runtime environments, and user interactions, underscoring adversaries’ adaptability in a rapidly maturing AI ecosystem.


Formal Security Advisories, Vulnerability Disclosures, and Vendor Responses

The cybersecurity community has rallied with detailed disclosures, expanded CVE listings, and vendor advisories that have guided remediation efforts:

  • Expanded CVE Landscape:

    • CVE-2026-27484 (Sandbox Escape): Tenable’s latest analysis confirms attackers exploit edge case container breakout vectors, prompting intensified sandbox hardening.
    • CVE-2026-27486 (Authentication Bypass): SentinelOne reports additional attack paths via CLI process cleanup flaws, highlighting ongoing risks in session management.
    • CVE-2026-27487 (OS Command Injection): The National Vulnerability Database (NVD) updated mitigation advice, especially regarding OAuth token sanitization to prevent command injection.
    • CVE-2026-26326 (Information Disclosure): Telemetry misconfigurations remain a concern, with guidance emphasizing encrypted, immutable logging pipelines.
    • Six Newly Discovered Critical Vulnerabilities: Independent audits by Endor Labs revealed privilege escalation and input validation flaws, broadening the recognized attack surface.
  • Vendor and Industry Recommendations:

    • Microsoft Security Advisories: Microsoft strongly advises against running OpenClaw agents on enterprise or personal workstations without hardware-enforced isolation. Their updated guidelines promote dedicated hardware enclaves and zero-trust runtime environments to mitigate lateral threat propagation and credential theft.
    • NCC Group’s Influential Report: The NCC Group’s "Securing Agentic AI: What OpenClaw Gets Wrong and How to Do It Right" continues to shape platform hardening strategies, advocating for zero-trust sandboxes, immutable telemetry, and rigorous marketplace vetting.
    • OpenClaw Vendor Releases:
      • SecureClaw Sandbox v2026.2.21: Enhances container isolation and mitigates sandbox escape attempts.
      • Kilo Gateway v2026.2.23: Introduces a hardened secure network layer that prevents man-in-the-middle and injection attacks between agents and external APIs.
    • OpenClaw Daily Security Bulletins: Launched in mid-2026, these daily updates provide near real-time visibility into emerging threats, patches, and best practices, fostering proactive community engagement.
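The OS command-injection class behind CVE-2026-27487 typically arises when attacker-influenced strings, such as OAuth tokens, are interpolated into a shell command line. The sanitization guidance above amounts to two habits: reject tokens outside the expected Bearer charset, and build an argument vector rather than a shell string, so the shell never parses the token at all. A sketch (the helper name and charset check are illustrative assumptions, not OpenClaw code):

```python
# Sketch: build an argv list for an outbound call carrying an OAuth token.
# With an argument vector and shell=False, metacharacters in the token can
# never become shell commands.
import re

# Roughly the RFC 6750 token68-style charset; tighten to your token format.
TOKEN_RE = re.compile(r"^[A-Za-z0-9._~+/=-]+$")

def bearer_argv(url: str, token: str) -> list[str]:
    """Validate the token, then return one argv entry per argument."""
    if not TOKEN_RE.fullmatch(token):
        raise ValueError("OAuth token contains characters outside the Bearer charset")
    return ["curl", "-sS", "-H", f"Authorization: Bearer {token}", url]
```

The resulting vector would be passed to `subprocess.run(argv)` with the default `shell=False`; a token like `x"; rm -rf ~; echo "` is rejected by the charset check, and even an unvalidated token would remain a single inert argument.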

Emerging Governance Needs: Permission Systems and Human Oversight

Amid the technical mitigations, the governance dimension of autonomous AI agents has gained prominence. A February 2026 Medium article by JIN highlights the insufficiency of prompt engineering alone to constrain AI agent side effects. Instead, it argues for production-grade permission systems that enforce contextual, permissioned control over agents’ potentially impactful actions in production environments.

Key governance insights include:

  • Permissioned Side Effect Governance: AI agents require explicit, auditable permission frameworks to authorize actions such as data access, network requests, or system modifications, preventing unauthorized or harmful behaviors even if prompt injections occur.

  • Human-In-The-Loop (HITL) Controls: Balancing autonomy with human oversight ensures high-risk operations undergo contextual approval, improving safety without unduly hindering agent productivity.

  • Operator Training and Incident Drills: Institutionalizing security education and simulation exercises prepares platform operators and MSPs to recognize, respond to, and mitigate emergent threats effectively.
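The permissioned-governance and HITL ideas above can be sketched as a small enforcement layer: each side-effecting tool declares a required permission, every decision is written to an audit log, and high-risk calls are additionally routed through a human approver. All names here (`Permission`, `AgentContext`, `guarded`) are hypothetical, not an OpenClaw API:

```python
# Sketch of permissioned side-effect governance with a human-in-the-loop hook.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Callable

class Permission(Enum):
    READ_DATA = auto()
    NETWORK = auto()
    MODIFY_SYSTEM = auto()  # high-risk: also requires human approval

@dataclass
class AgentContext:
    granted: set                              # explicitly granted permissions
    audit_log: list = field(default_factory=list)
    approve_fn: Callable[[str], bool] = lambda action: False  # HITL hook, deny by default

def guarded(perm: Permission, high_risk: bool = False):
    """Decorator enforcing explicit grants (and human approval) on side effects."""
    def wrap(fn):
        def inner(ctx: AgentContext, *args, **kwargs):
            if perm not in ctx.granted:
                ctx.audit_log.append(("denied", fn.__name__))
                raise PermissionError(f"{fn.__name__} requires {perm.name}")
            if high_risk and not ctx.approve_fn(fn.__name__):
                ctx.audit_log.append(("rejected_by_human", fn.__name__))
                raise PermissionError(f"{fn.__name__} rejected by operator")
            ctx.audit_log.append(("allowed", fn.__name__))
            return fn(ctx, *args, **kwargs)
        return inner
    return wrap

@guarded(Permission.MODIFY_SYSTEM, high_risk=True)
def delete_file(ctx: AgentContext, path: str) -> str:
    return f"deleted {path}"  # stand-in for the real side effect
```

The key property is that a prompt injection can at most request an action; the grant check and the approval hook sit outside the model's control, so ungranted or unapproved side effects fail closed and leave an audit trail.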


Mitigation Strategies: A Comprehensive Defense-In-Depth Approach

OpenClaw’s response now encompasses a multi-layered security posture blending technical hardening with governance and operational best practices:

  • Zero-Trust Sandboxed Runtimes: AI skills execute within cryptographically enforced, strictly isolated containers, limiting the blast radius of any compromise and containing malicious payloads.

  • Immutable, Signed Telemetry Pipelines: Tamper-evident real-time anomaly detection and logging enable early threat detection and robust forensic analysis, protecting against telemetry poisoning.

  • Encrypted Credential Vaults with Automated Rotation: Frequent automated credential renewal reduces the window for stolen secrets to be exploited.

  • Adaptive Multi-Factor Authentication (MFA): AI-driven behavioral analytics dynamically assess access attempts, blocking anomalous or suspicious activities to reduce credential abuse.

  • Marketplace Vetting Enhancements: A rigorous vetting pipeline combines AI-based anomaly detection, static and dynamic code analysis, manual expert review, and cryptographic provenance verification to prevent malicious skills from entering ClawHub.

  • Network Hardening with Kilo Gateway: This new secure network layer enforces encrypted and authenticated communications between AI agents and external services, mitigating injection and man-in-the-middle threats.

  • Human-In-The-Loop Governance Frameworks: Permission systems and approval workflows regulate agent side effects, balancing autonomy with necessary human oversight.

  • Hardware Isolation Recommendations: Users are urged to deploy OpenClaw agents exclusively on dedicated hardware (e.g., Raspberry Pi variants, Apple M4 Macs), isolating AI workloads from sensitive environments.

  • Content Moderation and Crypto Scam Enforcement: Strict bans on cryptocurrency-related content suppress illicit activity and protect marketplace integrity.
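One concrete layer of the marketplace vetting pipeline described above is static analysis of submitted skill code. A toy sketch below walks a skill's Python AST and flags calls and imports commonly abused by infostealers; the pattern lists and function name are illustrative assumptions, and a real pipeline would combine this with dynamic analysis, manual review, and provenance checks as noted above:

```python
# Sketch: one static-analysis pass over a submitted skill's source.
import ast

SUSPICIOUS_CALLS = {"eval", "exec", "compile", "__import__"}
SUSPICIOUS_MODULES = {"subprocess", "socket", "ctypes"}

def flag_skill_source(source: str) -> list[str]:
    """Return human-readable findings for a reviewer; empty list means clean."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            fn = node.func
            if isinstance(fn, ast.Name) and fn.id in SUSPICIOUS_CALLS:
                findings.append(f"line {node.lineno}: call to {fn.id}()")
        elif isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name.split(".")[0] in SUSPICIOUS_MODULES:
                    findings.append(f"line {node.lineno}: imports {alias.name}")
        elif isinstance(node, ast.ImportFrom):
            if (node.module or "").split(".")[0] in SUSPICIOUS_MODULES:
                findings.append(f"line {node.lineno}: imports from {node.module}")
    return findings
```

Pattern matching of this kind is easy to evade (string-built imports, encoded payloads), which is exactly why the pipeline layers it with dynamic analysis and cryptographic provenance rather than treating it as sufficient on its own.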


Current Status and Forward Outlook

OpenClaw’s security posture, while still challenged by persistent and evolving threats, has markedly improved through concerted platform hardening, formal vulnerability disclosure, and community-vendor collaboration. The launch of OpenClaw Daily security bulletins exemplifies a shift toward transparency and collective vigilance.

Nonetheless, advanced prompt injection attacks, supply-chain contamination, and runtime hijacking risks such as ClawJacked illustrate that continuous security innovation and rigorous governance remain indispensable.

The OpenClaw saga offers a critical blueprint for future AI agent platforms, underscoring that openness and innovation must be tightly coupled with robust, multi-layered security architectures and operational discipline. As autonomous AI agents become increasingly embedded in critical workflows, embedding permissioned governance, hardware isolation, and zero-trust principles from the outset will be key to securing the AI-driven future.


Selected References for Further Reading

  • Operation DoppelBrand, OpenClaw Exfiltration, and AI-Generated Passwords - Blumira Briefings
  • ClawHavoc Poisoned OpenClaw’s ClawHub with 1,184 Malicious Skills, Enabling Data Theft and Backdoor Access
  • Securing Agentic AI: What OpenClaw Gets Wrong and How to Do It Right | NCC Group
  • OpenClaw v2026.2.23 Release Analysis: Kilo Gateway, Moonshot/Kimi Vision Video, and Security Hardening
  • Microsoft warns OpenClaw unsafe on PCs | Cybernews
  • CVE-2026-27487 : OS Command Injection Vulnerability Detail
  • Fake troubleshooting tip on ClawHub leads to infostealer infection - Help Net Security
  • AI coding assistant Cline compromised to create more OpenClaw chaos
  • OpenClaw vulnerabilities exposed by AI-powered code scanner - Endor Labs Report
  • OpenClaw Security Risks: Identity Isolation and Safe Evaluation - Microsoft Guidance
  • OpenClaw Daily Update: Advanced Security Features in Agent ...
  • Why Your AI Agent Needs a Permission System, Not Just Better Prompts: Production-Grade Side Effect Governance in OpenClaw | JIN | Medium, Feb 2026
  • ClawJacked Flaw Lets Malicious Sites Hijack Local OpenClaw AI Agents via WebSocket

The OpenClaw narrative stands as a cautionary yet instructive tale on the frontiers of AI agent security, championing a future where innovation and rigorous security governance co-evolve to safeguard trust in autonomous AI ecosystems.

Updated Feb 28, 2026