ClawHub Malware Supply-Chain Attacks

Malicious Skills and Infostealers Distributed Through ClawHub and Related OpenClaw Supply-Chain Compromises

The AI security landscape in 2026 is increasingly dominated by sophisticated supply-chain attacks targeting popular marketplaces like ClawHub and the OpenClaw ecosystem. These attacks facilitate the widespread distribution of malicious skills and infostealers, posing severe threats to organizations and individual users alike.

Compromised ClawHub Marketplace and Malicious Skill Uploads

ClawHub, the primary marketplace for AI skills and plugins, has become a significant attack surface. Recent investigations reveal that over 1,180 malicious skills are hosted on the platform, many masquerading as legitimate tools such as social media automators, data fetchers, or utility plugins. These malicious skills serve as infection vectors for deploying advanced malware families, including Moltbot, ClawdBot, AtomStealer, and Atomic Stealer.

Key functionalities of these malware families include:

  • Credential and API key exfiltration
  • Remote hijacking of AI agents
  • Manipulation of behavioral models and decision parameters

By embedding malicious code into seemingly benign or trusted plugins, attackers effectively bypass traditional security measures. These compromised skills often exploit vulnerabilities in platform dashboards, comment sections, and plugin repositories, making detection increasingly difficult.

Recent articles, such as "ClawHub hosts over 1,100 malicious skills capable of stealing SSH keys," highlight the scale of this threat. The infiltration of the marketplace not only risks individual systems but also threatens the integrity of entire AI ecosystems through large-scale supply-chain poisoning.
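A static pre-install scan can catch the crudest of these infostealer behaviors before a skill ever runs. The sketch below is minimal and assumption-laden: the indicator patterns (SSH key paths, credential file references, outbound-call primitives) are illustrative rather than a vetted signature set, and the `scan_skill` helper is hypothetical, not part of any ClawHub tooling.

```python
import re
from pathlib import Path

# Illustrative indicators of infostealer behavior: references to private
# SSH keys, cloud credentials, raw network exfiltration, and a common
# obfuscation step. A production scanner would use a maintained ruleset.
SUSPICIOUS_PATTERNS = [
    r"\.ssh/id_(rsa|ed25519)",      # reads private SSH keys
    r"\.aws/credentials",           # reads cloud credentials
    r"(requests\.post|urlopen)\(",  # outbound network call
    r"base64\.b64encode",           # frequent exfil obfuscation step
]

def scan_skill(package_dir: str) -> dict:
    """Return {file_path: [matched patterns]} for Python files in a skill package."""
    findings = {}
    for path in Path(package_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        hits = [p for p in SUSPICIOUS_PATTERNS if re.search(p, text)]
        if hits:
            findings[str(path)] = hits
    return findings
```

Pattern matching of this kind is noisy on its own; it is most useful as a triage filter feeding manual review or behavioral analysis.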

Exploitation of Critical Active CVEs and WebSocket Flaws

Adding to the complexity, several critical vulnerabilities identified in 2026 are actively exploited by threat actors:

  • CVE-2026-26323 and CVE-2026-26327 involve command injection, misconfigurations, and insecure process management, enabling arbitrary code execution, persistent backdoors, and security bypasses.
  • The ClawJacked WebSocket flaw represents a significant attack vector, allowing malicious websites to hijack local OpenClaw AI agents, creating persistent control channels.

These vulnerabilities have been exploited in widespread incidents, including the ClawdBot / OpenClaw data leaks, which exposed vast amounts of user data. Recent updates to models like Claude have inadvertently expanded attack surfaces through remote control features and automated scheduled tasks, which threat actors leverage for remote hijacking and long-term persistence.

Infostealer Campaigns and Large-Scale Skill Abuse

One of the most alarming developments is the rise of infostealer campaigns, particularly those centered on the AMOS stealer and similar malware families. Attackers use malicious skills as initial infection vectors, embedding covert command-and-control (C2) channels within comment sections, dependencies, or prompt injections.

Notable tactics include:

  • Comment-based C2 channels enabling stealthy communication
  • Prompt injection techniques to manipulate agent behavior or modify core model parameters
  • Evasion of CAPTCHA and Cloudflare bot protections to maintain persistent C2 communication

A recent survey of 500 ClawHub skills found that 10% were dangerous or malicious, with many designed explicitly for data exfiltration or system hijacking. Among these, the top OpenClaw skill was malware disguised as a Twitter writing bot, backed by a botnet that amplifies its malicious activities.
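The comment-based C2 and prompt-injection tactics described above can be flagged, at least in their naive forms, with simple signature matching over skill content. The sketch below is illustrative only: the patterns are examples of the genre, not real community detection signatures, and the `flag_injection` helper is a hypothetical name.

```python
import re

# Illustrative prompt-injection and comment-embedded C2 markers. Real
# signatures would be community-maintained and continuously tuned.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|any) (previous|prior) instructions", re.I),
    re.compile(r"disregard your system prompt", re.I),
    re.compile(r"<!--\s*c2:", re.I),  # hypothetical beacon hidden in a comment
]

def flag_injection(text: str) -> list[str]:
    """Return the (string) patterns that match the given skill or comment text."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

Determined attackers paraphrase or encode such strings, so signature matching should be paired with runtime behavioral monitoring rather than relied on alone.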

Theft of Core ‘Soul’ Files and Implications

A particularly concerning trend is the targeted exfiltration of ‘soul’ files—the core behavioral models, decision-making parameters, and configuration files of AI agents. These components often contain API secrets, environment variables, and credentials, making their theft potentially devastating.

By weaponizing or disabling these core components, adversaries can deeply manipulate, hijack, or disable AI systems, with implications ranging from organizational security breaches to systemic AI ecosystem destabilization.
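Because 'soul' files frequently bundle API secrets, environment variables, and credentials, scanning agent configuration for plaintext secrets before deployment can shrink the blast radius of any exfiltration. A rough sketch; the patterns are common heuristics (an AWS access-key prefix, a quoted API key assignment, a PEM private-key header), not an exhaustive detector, and `find_secrets` is a hypothetical helper.

```python
import re

# Rough heuristics for plaintext secrets in agent configuration files.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(config_text: str) -> list[str]:
    """Return the names of secret heuristics that match the configuration text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(config_text)]
```

Any hit is a prompt to move that value into a dedicated secrets manager and rotate it, so a stolen 'soul' file carries no usable credentials.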

Response Measures and Defensive Strategies

In response to these evolving threats, the security community has issued guidance and tools for mitigation:

  • Provenance validation: Cryptographically sign skill packages and plugins to verify authenticity.
  • Static and behavioral analysis: Use tools like VirusTotal and ClawCare to detect malicious code early.
  • Secrets management: Encrypt and rotate API keys and configuration files regularly.
  • Runtime monitoring: Detect prompt injections, agent hijacking, and covert C2 channels.
  • Platform hardening:
    • Restrict public access to dashboards and plugin repositories
    • Enforce least privilege and network segmentation
  • Prompt patching and vulnerability management: Apply security updates immediately for CVEs such as CVE-2026-26323, CVE-2026-26327, and the ClawJacked WebSocket flaw.
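The first measure, provenance validation, can be approximated even without full signing infrastructure by pinning a digest of each skill package at install time and re-verifying it before every load. The sketch below is a minimal stand-in: real deployments would use asymmetric signatures from the publisher rather than bare hash pinning, and the function names here are illustrative.

```python
import hashlib
import hmac

def package_digest(package_bytes: bytes) -> str:
    """SHA-256 digest of a skill package archive."""
    return hashlib.sha256(package_bytes).hexdigest()

def verify_package(package_bytes: bytes, pinned_digest: str) -> bool:
    """Check the package against a digest pinned from a trusted source.

    hmac.compare_digest gives a constant-time comparison, avoiding
    timing side channels during verification.
    """
    return hmac.compare_digest(package_digest(package_bytes), pinned_digest)
```

Hash pinning only proves the package has not changed since pinning; publisher signatures are needed to establish who produced it in the first place.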

Broader Implications and Future Outlook

Despite recent patches, attackers continuously adapt, exploiting supply-chain vulnerabilities and stealing core ‘soul’ files for malicious ends. The scale and sophistication of these operations underscore the urgent need for collaborative defense efforts, community-driven detection signatures, and real-time behavioral monitoring.

The ongoing crisis emphasizes that trust in AI ecosystems hinges on robust security practices, transparent provenance, and collective vigilance. As adversaries refine their tactics, the AI community must prioritize early detection, rapid response, and secure deployment to safeguard the integrity and safety of AI-driven automation.

In summary, the distribution of malicious skills via ClawHub and OpenClaw not only facilitates large-scale infostealer campaigns but also threatens the foundational trust in AI systems. Addressing these challenges requires a concerted, multi-layered approach, emphasizing security best practices, community collaboration, and continuous vigilance.

Updated Mar 1, 2026