OpenClaw AI Agents Expose 28K+ Systems to Hacker Takeover

Metro Loud

Recent analysis has uncovered more than 40,000 internet-exposed OpenClaw deployments, including 28,663 unique IP addresses hosting accessible control panels. These AI agents, built for task automation, are often granted sweeping permissions that leave critical systems open to remote attack.

Massive Exposure and Remote Code Execution Risks

SecurityScorecard researchers identified 40,214 OpenClaw instances directly accessible online, many lacking basic safeguards. About 63% of these deployments show signs of remote code execution vulnerabilities, enabling attackers to seize control of host machines without user involvement.

Three high-severity vulnerabilities, scored 7.8 to 8.8 on the CVSS scale, affect these systems. Public exploit code exists for all three, lowering the barrier for hackers to compromise exposed setups.

Among the exposures, 549 instances are linked to previous breaches, while 1,493 are tied to additional known flaws, amplifying the risk. Deployments cluster heavily on major cloud and hosting platforms, pointing to widespread insecure configuration patterns.

OpenClaw: Capabilities and Permission Pitfalls

OpenClaw, previously Moltbot and Clawdbot, functions as a personal AI agent for scheduling meetings, sending emails, and handling tasks. The core issue lies in the broad access users grant these agents without robust security measures.

“The math is simple: when you give an AI agent full access to your computer, you give that same access to anyone who can compromise it,” stated Jeremy Turner, VP of Threat Intelligence at SecurityScorecard.

Many configurations include personal or company names, exposing user identities and making those users prime targets. The agents are granted permissions to post content, read emails, access files, and interact across connected systems.
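The safer alternative to blanket access is an explicit allowlist of agent capabilities. The sketch below illustrates the idea in generic Python; `ALLOWED_ACTIONS` and `perform` are hypothetical names for illustration, not part of OpenClaw's actual API.

```python
# Minimal sketch of a least-privilege capability gate for an AI agent.
# These names are illustrative assumptions, not OpenClaw interfaces.

ALLOWED_ACTIONS = {"read_calendar", "draft_email"}  # explicit, narrow allowlist

def perform(action: str, payload: dict) -> str:
    """Refuse anything outside the allowlist instead of granting broad access."""
    if action not in ALLOWED_ACTIONS:
        raise PermissionError(f"action {action!r} not permitted")
    return f"executed {action}"

print(perform("draft_email", {"to": "alice@example.com"}))

# A file-deletion request is rejected rather than silently executed:
try:
    perform("delete_files", {"path": "/tmp"})
except PermissionError as err:
    print(err)
```

The point is the default: anything not explicitly granted is denied, so a compromised agent cannot reach files, funds, or messaging channels it was never meant to touch.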

Expert Warnings on AI Agent Dangers

“In practice, because it was written by AI, security wasn’t a dominating feature in the development process,” Turner noted. “For those using agentic AI systems, careful review of integrations and permissions is essential.”

A compromised agent could transfer funds, delete files, or dispatch malicious messages, all appearing legitimate. “The risk isn’t that these systems are thinking for themselves,” Turner added. “It’s that we’re giving them access to everything.”

Turner compared it to handing a laptop to a stranger, warning that untrusted interfaces could trigger harmful actions.

Growing Adoption vs. Security Gaps

AI agents are being woven into daily workflows faster than security practices can keep up. Users often approve sweeping access, opening the door to data leaks, unintended operations, and loss of control. OpenClaw has also been observed executing actions beyond its explicit instructions.

Microsoft recommends against running it on personal or enterprise devices, and Chinese officials have banned its office use over data exposure concerns. Certain flaws enable theft of sensitive data, and compromised instances have been used to distribute malware via GitHub.

“Don’t blindly install these on systems tied to your personal data,” Turner advised. “Implement isolation, test thoroughly, and verify before full trust.”
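Isolation of the kind Turner describes can be approximated with a locked-down container. The commands below are a hedged sketch, not an official deployment recipe; the image name `example/agent-image` is a placeholder, since no sanctioned OpenClaw image is named in the research.

```shell
# Sketch: run an agent in an isolated container instead of on the host.
# Create a network with no route to the internet:
docker network create --internal agent-net

# Run with a read-only filesystem, no Linux capabilities, resource limits,
# and only the data directory it actually needs (mounted read-only):
docker run --rm \
  --network agent-net \
  --read-only \
  --cap-drop ALL \
  --memory 512m --pids-limit 100 \
  -v "$PWD/agent-data:/data:ro" \
  example/agent-image:latest
```

If the agent exposes a control panel, bind it to loopback only (for example `-p 127.0.0.1:8080:8080`), never `0.0.0.0`, so it is not among the 28,663 panels reachable from the open internet.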
