OpenClaw Security Crisis: Critical Vulnerabilities in the Viral AI Agent

Introduction: The Rise and Fall of the "Easy AI" Dream

In January 2026, OpenClaw (originally known as Clawdbot, then briefly Moltbot) exploded onto the tech scene as the open-source AI personal assistant everyone had been waiting for. Self-hosted, privacy-focused, and incredibly powerful, it promised to give everyday users their own AI agent capable of managing emails, browsing the web, executing code, and automating tasks across dozens of platforms. Within weeks, it had been installed on thousands of machines worldwide.

Then the security researchers started digging.

What they found was alarming: exposed control panels, plaintext credentials, authentication bypasses, prompt injection vulnerabilities, and a supply chain attack waiting to happen. Google Cloud's VP of Security Engineering, Heather Adkins, issued a stark warning: "Don't run Clawdbot." Some researchers went further, calling it "infostealer malware disguised as an AI personal assistant."

This article provides a comprehensive analysis of OpenClaw's security vulnerabilities, the architectural decisions that created them, real-world exploitation examples, and critical recommendations for anyone who has deployed this tool.

Understanding OpenClaw's Architecture

To understand why OpenClaw became a security nightmare, you need to understand what makes it different from other AI assistants. Unlike ChatGPT or Claude, which run in sandboxed cloud environments, OpenClaw operates directly on your machine with full system access.

This means OpenClaw can:

  • Read and write any file on your system
  • Execute shell commands with your user privileges
  • Install packages from npm, pip, and other registries
  • Control web browsers and interact with websites
  • Send messages across Telegram, Slack, Discord, and email
  • Access and manage your API keys for dozens of services (validate yours with our API Key Validator)

This level of access is what makes OpenClaw powerful. It's also what makes it catastrophically dangerous when security controls fail.

The project's own FAQ acknowledges there is "no perfectly secure setup when operating an AI agent with shell access." But the reality is far worse than this disclaimer suggests. The tool ships without guardrails by default, a deliberate design decision that prioritizes ease of deployment over security.

Critical Vulnerability #1: Exposed Control Panels

Security researcher Jamieson O'Reilly from Dvuln made the first major discovery. Using internet scanning tools like Shodan, he found hundreds of OpenClaw gateway instances exposed to the public internet with critical misconfigurations. You can perform similar reconnaissance using our Reconnaissance Scanner to check your own exposure.

Among the systems he manually examined:

  • Eight were completely open with no authentication, granting full command execution access to anyone who connected
  • 47 instances had working authentication but were still publicly accessible
  • Dozens more fell somewhere in between, with partial security controls

The root cause? An authentication bypass vulnerability that occurs when the OpenClaw gateway is placed behind an improperly configured reverse proxy. Many users, following deployment guides that prioritized convenience over security, inadvertently exposed their entire AI assistant to the internet.

In two documented cases, the WebSocket handshake granted immediate access to configuration data containing Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, and months of conversation histories. Attackers didn't need sophisticated exploits; they simply connected to publicly exposed endpoints.
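O'Reilly's findings fall into a few natural risk tiers based on how a gateway answers an unauthenticated probe. The sketch below illustrates that triage; the status-code semantics are our own assumption for demonstration, not OpenClaw's documented behavior.

```python
# Illustrative triage of an unauthenticated probe against a gateway endpoint.
# The status-code mapping is an assumption for demonstration purposes,
# not OpenClaw's actual API behavior.

def classify_exposure(status_code: int, body_has_config: bool = False) -> str:
    """Map an unauthenticated HTTP response to a rough risk category."""
    if status_code == 200 and body_has_config:
        return "critical: config readable without auth"
    if status_code == 200:
        return "open: no authentication required"
    if status_code in (401, 403):
        return "auth enforced, but still publicly reachable"
    return "inconclusive: verify manually"
```

Note that even the middle tier (auth enforced) is not safe: a publicly reachable control plane remains exposed to credential stuffing and future authentication bypasses.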

Critical Vulnerability #2: Plaintext Credential Storage

Hudson Rock security researchers uncovered another fundamental flaw: OpenClaw stores sensitive credentials in plaintext Markdown and JSON files on the local filesystem.

The configuration file typically contains:

  • API keys for AI providers (Anthropic, OpenAI) - check if yours are exposed with our JavaScript Secrets Scanner
  • OAuth tokens for connected services
  • Gateway authentication tokens
  • Brave Search API keys
  • Integration credentials for Telegram, Slack, Discord

This design decision created an immediate target for infostealer malware. Families like Redline, Lumma, and Vidar have already updated their targeting rules to harvest OpenClaw configuration directories. If your machine has ever been compromised by an infostealer, assume your OpenClaw credentials are in criminal hands. Check if your credentials have been leaked using our Dark Web Scanner.
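Because the credentials sit in plaintext Markdown and JSON, even a naive pattern scan finds them, which is precisely what infostealers automate. The sketch below shows how little effort that takes; the Anthropic (`sk-ant-`) and OpenAI (`sk-`) prefixes are real conventions, but the Telegram token shape and the file-type filter are assumptions for illustration.

```python
import re
from pathlib import Path

# Key-shape patterns: 'sk-ant-' and 'sk-' prefixes are real provider
# conventions; the Telegram bot-token shape is an approximation.
SECRET_PATTERNS = [
    re.compile(r"sk-ant-[A-Za-z0-9_-]{10,}"),    # Anthropic API key
    re.compile(r"sk-[A-Za-z0-9]{20,}"),          # OpenAI-style key
    re.compile(r"\d{8,10}:[A-Za-z0-9_-]{30,}"),  # Telegram bot-token shape
]

def find_plaintext_secrets(root: Path) -> list[tuple[str, str]]:
    """Return (file, truncated_match) pairs for likely plaintext credentials."""
    hits = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".json"}:
            continue
        text = path.read_text(errors="ignore")
        for pattern in SECRET_PATTERNS:
            for match in pattern.findall(text):
                # Truncate so the audit report itself doesn't leak full keys.
                hits.append((str(path), match[:12] + "…"))
    return hits
```

Running a check like this against your own config directory is a reasonable first audit step; anything it flags should be rotated, not merely hidden.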

The irony is painful: users adopted OpenClaw partly for privacy reasons, wanting to keep their AI conversations off corporate servers. Instead, they created a centralized repository of their most sensitive credentials, stored without encryption, ready for exfiltration.

Critical Vulnerability #3: Prompt Injection Attacks

Prompt injection represents perhaps the most insidious threat to OpenClaw deployments. Unlike traditional web application vulnerabilities that exploit code flaws, prompt injection exploits the AI model itself, tricking it into executing malicious instructions hidden in seemingly innocent content.

Matvey Kukuy, CEO of Archestra AI, demonstrated the severity of this vulnerability with a chilling proof of concept. By sending a specially crafted email to an OpenClaw instance connected to an email account, he was able to extract an OpenSSH private key from the target machine in under five minutes.

The attack worked because OpenClaw's email integration allows the AI to read incoming messages and take actions based on their content. Kukuy's email contained instructions disguised as a legitimate request, which the AI dutifully followed, retrieving and sharing sensitive system files. Test your email security posture with our Email Security Scanner.

The fundamental problem is architectural: OpenClaw cannot reliably distinguish between legitimate user requests and malicious instructions embedded in external data sources. When the AI has full system access and processes untrusted input from emails, web pages, and messages, prompt injection becomes a reliable attack vector.

OpenClaw instances connected to X (formerly Twitter) proved particularly vulnerable. External users discovered they could craft specific prompts in replies that would cause the AI to leak private information from the connected account.
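One generic mitigation pattern for this class of attack is provenance tracking: record where each instruction originated and require human sign-off before untrusted content can trigger a sensitive action. The sketch below illustrates the policy in the abstract; the action and source names are hypothetical, and this is not a feature OpenClaw ships.

```python
from dataclasses import dataclass

# Hypothetical action and source names, for illustration only.
SENSITIVE_ACTIONS = {"read_file", "run_shell", "send_message", "install_package"}
TRUSTED_SOURCES = {"owner_chat"}  # direct input from the operator

@dataclass
class Request:
    action: str
    source: str  # where the instruction originated: owner_chat, email, web, reply

def requires_human_approval(req: Request) -> bool:
    """Deny by default: sensitive actions driven by untrusted content need sign-off."""
    return req.action in SENSITIVE_ACTIONS and req.source not in TRUSTED_SOURCES
```

Under this policy, Kukuy's proof of concept would have stalled at an approval prompt: the instruction to read a private key arrived via email, an untrusted channel, so the file read could not proceed autonomously.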

Critical Vulnerability #4: Supply Chain Poisoning

O'Reilly didn't stop at finding exposed instances. He published a proof-of-concept supply chain attack targeting ClawdHub, the community skills library that extends OpenClaw's functionality.

His attack was elegant in its simplicity:

  1. Upload a seemingly useful skill to ClawdHub
  2. Artificially inflate the download count to over 4,000
  3. Watch as developers from seven countries install the poisoned package

The skill O'Reilly uploaded was intentionally benign, a research demonstration rather than an actual attack. But his proof of concept showed he could have executed arbitrary commands on any OpenClaw instance that installed his skill.

This vulnerability is particularly dangerous because of how OpenClaw handles dependencies. When agents can install packages autonomously and execute shell commands, a compromised skill doesn't just affect the AI assistant; it gains full access to the underlying system. Organizations should regularly scan their repositories for secrets using our GitHub Organization Scanner and Git Deep Scan tools.
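A crude static check catches the most obvious cases of the behavior a skill scanner looks for. The toy sketch below flags dangerous primitives in a skill's source; the pattern list is our own assumption about what an implant typically needs, and it is no substitute for a real scanner such as Cisco's.

```python
import re

# Primitives a benign skill rarely needs but an implant almost always does.
# The pattern list is illustrative, not exhaustive.
DANGEROUS_PATTERNS = {
    "shell execution": re.compile(r"\b(os\.system|subprocess\.(run|Popen|call))\b"),
    "dynamic eval": re.compile(r"\b(eval|exec)\s*\("),
    "outbound exfil": re.compile(r"\b(requests\.post|urllib\.request\.urlopen)\b"),
}

def scan_skill_source(source: str) -> list[str]:
    """Return the categories of dangerous behavior found in a skill's code."""
    return [name for name, pat in DANGEROUS_PATTERNS.items() if pat.search(source)]
```

A clean result from a check like this proves very little (obfuscation defeats it trivially), but a positive hit on an innocuous-looking "weather" skill is a strong signal to stop before installing.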

Cisco's security team ran their Skill Scanner tool against OpenClaw using a known-vulnerable skill called "What Would Elon Do?" The results were damning: nine security findings, including two critical and five high-severity issues.

The Rebranding Shell Game: From Clawdbot to Moltbot to OpenClaw

The project's security issues were compounded by a chaotic period of forced rebranding. After receiving a cease and desist letter (reportedly related to trademark concerns), Clawdbot renamed itself to Moltbot. When that name also drew legal attention, it became OpenClaw.

This rebranding created confusion that benefited attackers. Users searching for security information about "Clawdbot" might miss advisories published under "Moltbot" or "OpenClaw." Documentation became fragmented across multiple project names. And the rapid changes suggested organizational instability that didn't inspire confidence in the project's security practices.

Throughout these transitions, the core security vulnerabilities remained unaddressed. The architecture was the same, the default configurations were the same, and the risks were the same; only the name had changed.

Active Exploitation in the Wild

These vulnerabilities aren't theoretical. Security researchers have documented active exploitation across multiple attack vectors:

  • Credential Harvesting: Compromised instances are being used to extract API keys, which are then sold on dark web marketplaces or used to rack up charges on victims' accounts. Monitor exposure with our Data Breach Scanner
  • Cryptocurrency Theft: Attackers have used prompt injection to extract private keys and seed phrases from machines running OpenClaw
  • Lateral Movement: Exposed OpenClaw instances provide attackers with a foothold for further network penetration
  • Data Exfiltration: Conversation histories containing sensitive personal and business information have been stolen from exposed gateways

SOC Prime and Bitdefender have both published threat intelligence reports tracking what they describe as an "OpenClaw epidemic" across cloud providers. The scale of compromise suggests this is not a targeted attack but opportunistic scanning and exploitation.

Why Did This Happen?

The security failures in OpenClaw stem from a fundamental tension in the AI agent space: the features that make these tools useful are the same features that make them dangerous.

Eric Schwake of Salt Security identified the core problem: "A significant gap exists between the consumer enthusiasm for OpenClaw's one-click appeal and the technical expertise needed to operate a secure agentic gateway."

OpenClaw was designed for ease of deployment. Non-technical users could spin up instances and integrate sensitive services without encountering any security friction. There were no enforced firewall requirements, no credential validation, no sandboxing of untrusted plugins, and no AI safety guardrails.

This design philosophy directly contradicts decades of security best practice: defense in depth, the principle of least privilege, and secure defaults were all sacrificed for user convenience. Learn more about proper security testing methodology in our Web Application Penetration Testing Guide.

The result was predictable: users who didn't understand the risks deployed OpenClaw with dangerous configurations, and attackers were ready to exploit them.

Recommendations for Affected Users

If you have deployed OpenClaw (or its earlier incarnations Clawdbot/Moltbot), security experts recommend immediate action:

Immediate Steps

  1. Disconnect Immediately: Take your instance offline until you can properly secure it
  2. Revoke All Credentials: Rotate every API key, token, and password that was accessible to OpenClaw
  3. Assume Compromise: If you ran with default configurations on a public network, treat your system as compromised
  4. Audit Access Logs: Review any available logs for signs of unauthorized access

If You Must Continue Using It

  1. Force Authentication: Enable OAuth 2.1 or equivalent for all connections, never treat it as optional
  2. Bind to Localhost: Never expose the gateway to public networks unless absolutely necessary
  3. Implement Sandboxing: Run OpenClaw in a Docker container or VM with strict isolation
  4. Use Access Control: Implement allowlists for tools, groups, and external senders
  5. Human in the Loop: Require manual approval for sensitive actions rather than full autonomy
  6. Scan Skills: Audit any third-party skills for malicious code before deployment
  7. Monitor Continuously: Implement logging and alerting for suspicious activity

Scan Your Infrastructure

Use SecurityInfinity's comprehensive scanning tools to assess your security posture.

The Bigger Picture: AI Agent Security

OpenClaw's security failures highlight risks that extend far beyond a single project. As AI agents become more capable and more integrated into our digital lives, the attack surface they create will only grow.

The industry needs to learn from this incident:

  • Secure by Default: AI agents must ship with security controls enabled, not optional
  • Least Privilege: Agents should request only the permissions they need, not full system access
  • Sandboxing: Execution environments must be isolated to contain potential compromises
  • Input Validation: Robust defenses against prompt injection must be built into agent architectures
  • Supply Chain Security: Plugin ecosystems need verification, signing, and security scanning

The dream of an AI assistant that can do anything on your behalf is compelling. But "anything" includes catastrophic security failures. Until the industry develops mature security practices for agentic AI, users should approach these tools with extreme caution.

Stay informed about emerging vulnerabilities by monitoring the CVE Database and reading our Cybersecurity Blog.

Conclusion

OpenClaw represents a cautionary tale for the AI agent era. The project offered genuine utility, a powerful AI assistant that could automate tasks and manage digital life. But its security architecture was fundamentally flawed, prioritizing ease of use over protection.

The result was predictable: widespread compromise, credential theft, and a scramble to contain the damage. Users who sought privacy and control over their AI interactions instead found themselves more vulnerable than ever.

As we move into an age of increasingly autonomous AI agents, OpenClaw's failures must inform how we build, deploy, and secure these systems. The convenience of one-click deployment cannot come at the cost of basic security hygiene. The power of full system access must be matched by robust isolation and access controls.

For now, heed the experts' warnings. If you value your security, think twice before running OpenClaw. And if you're concerned about your organization's security posture, get a free security risk assessment from SecurityInfinity.
