OpenClaw: Security in the Age of Autonomous Agents
A detailed risk analysis for OpenClaw deployments. How to protect your AI assistant from exploits, prompt injection, and supply chain attacks.
February 16, 2026
The project formerly known as Clawdbot and Moltbot has firmly settled under the OpenClaw name. But with growing popularity comes growing scrutiny from the security community.
Vectra AI published a detailed breakdown of the risks involved with deploying autonomous agents. The core message: OpenClaw isn’t just a chatbot — it’s a shadow superuser running inside your system.
Primary Attack Vectors
Misconfiguration
The most common issue by far. Exposing the Control UI to the public internet without a VPN or SSH tunnel is equivalent to handing attackers a remote control to your server.
Prompt Injection
Because the agent reads emails, web pages, and external content, a malicious actor can embed instructions inside content the agent processes — effectively hijacking it without any direct access to your server.
Supply Chain Attacks
Fake VS Code extensions and malicious “skills” in public repositories are increasingly common vectors. Attackers publish skills that appear legitimate but contain data exfiltration or remote execution capabilities.
How to Secure Your OpenClaw
Never expose ports directly
Use Tailscale, WireGuard, or SSH tunneling to keep the gateway on localhost. No port should face the public internet.
Don’t run as root
The agent should operate in an isolated environment with the minimum permissions needed to do its job.
Require confirmation for sensitive actions
Enable mandatory human approval before executing shell commands or writing files. Add this to your config:
security:
requireConfirmation:
shellCommands: true
fileWrites: true
networkRequests: false
Use allowlists
Restrict which users and channels can issue commands to the agent:
security:
allowedUsers:
- 123456789 # Your Telegram user ID
Vet skills before installing
Only install skills from sources you trust. Read the SKILL.md and any shell scripts before running them.
OpenClaw gives you powerful automation capabilities — but that power requires a security-first mindset. Any capability you grant the assistant can potentially be weaponized if unauthorized access is obtained.
Source: Vectra AI Blog