
Security Best Practices

โฑ๏ธ 30 min

Last October, when I first started playing with OpenClaw, security wasn't on my radar at all. Then one night while debugging a Skill, I noticed a random stranger on Telegram had messaged the bot: "list the files in your workspace." And the bot just... did it. Listed everything in my ~/Documents, including a file called tax-2025.xlsx. I was floored. Turns out DM Pairing wasn't enabled. I stayed up until 2 AM reconfiguring everything.

OpenClaw runs on your own device with file access, network requests, and shell execution permissions. Security isn't a "deal with it later" thing — it's a day-one requirement.

What Are You Actually Defending Against?

Figure out where the threats are first. Otherwise you're configuring blindly.

Strangers controlling your AI — the biggest risk. People in the Discord regularly ask "why can someone else command my bot?" Almost always because DM Pairing and Channel Allowlist aren't configured.

Skills hiding malicious code — not theoretical. The ClawHavoc incident in late 2025 was a real-world case (covered separately below). Community Skills are officially "curated, not audited" — filtered but not professionally security-audited. I think that bar is too low, but it's where things stand. Review source code before installing.

API Key leaks — self-explanatory. A key gets leaked, someone racks up charges on your account. It happens on GitHub daily. What's worse: OpenClaw's default credential storage is plaintext. Yes, really. The files under ~/.openclaw/credentials/ are plaintext JSON. The team acknowledges this as a known issue, and the January 2026 security audit flagged it as critical.

Prompt injection — medium risk. Someone crafts special input to make the AI do things it shouldn't. Sandbox reduces impact but can't fully prevent it.

And there's workspace file deletion, token cost explosions, and more — all covered below.

Real Case: The ClawHavoc Attack

This deserves its own section because it's textbook.

Late 2025, security researchers uncovered a coordinated attack called ClawHavoc — attackers uploaded hundreds of malicious Skills to ClawHub, all using typosquatting (name spoofing). Things like changing file-search to file-serach, web-clip to web-cllp. One mistyped character and you install the backdoored version.

What was hidden inside? Reverse shells — once installed, attackers could remotely control your machine. They also scanned for and exfiltrated SSH private keys, API tokens, and browser cookies. Final count: over 1,000 Skills compromised. Significant blast radius.

The Discord exploded. Many people only noticed when their AWS bills suddenly spiked. Someone's colleague installed a Skill called git-asist (missing an 's') and the next day found their private GitHub repos had been cloned — SSH key stolen.

This directly led to several security improvements: ClawHub added VirusTotal integration scanning, openclaw doctor got prioritized, and CrowdStrike published a report called "What Security Teams Need to Know About OpenClaw." A 307K Star open-source project getting hit like this shows community ecosystem security still has a long way to go.

The lesson in one sentence: double-check Skill names carefully, inspect source code before installing.
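The name-spoofing half of that check can even be automated. Here's a minimal sketch — the trusted-name list is hypothetical and this is not a ClawHub feature, just stdlib fuzzy matching you could run before installing anything:

```python
import difflib

# Hypothetical list of Skills you already trust; the names are
# illustrative, not real ClawHub entries.
KNOWN_GOOD = ["file-search", "web-clip", "git-assist", "calendar-sync"]

def typosquat_warning(name, known=KNOWN_GOOD, cutoff=0.8):
    """Return the trusted name this one suspiciously resembles, or None.

    An exact match is fine; a near-miss like 'file-serach' is the
    classic typosquatting signature and should be treated as hostile.
    """
    if name in known:
        return None
    close = difflib.get_close_matches(name, known, n=1, cutoff=cutoff)
    return close[0] if close else None
```

A non-None result means stop and look closely at what you're about to install.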

First Line of Defense: DM Pairing

Dead simple concept: when a stranger messages your bot, OpenClaw generates a pairing code. You manually approve it before conversation begins. No approval? Bye.

{
	"security": {
		"pairing": {
			"enabled": true,
			"autoApprove": false,
			"allowedContacts": ["+86138xxxx1234", "telegram:@your_username"]
		}
	}
}

Pairing management commands:

# View pending requests
openclaw pairing list

# Approve
openclaw pairing approve <platform> <code>

# Reject
openclaw pairing reject <platform> <code>

# View approved contacts
openclaw pairing approved

One thing: never set autoApprove to true. That defeats the entire purpose. I've seen people do it for convenience — same as not having it at all.

Second Line of Defense: Channel Allowlist

Restrict the AI to only respond in channels/groups you specify. For teams, this is mandatory — otherwise anyone can create a group, add your bot, and start using it.

{
	"security": {
		"channels": {
			"allowlist": ["telegram:chat_id_123", "discord:channel_id_456", "feishu:group_id_789"],
			"blockUnknown": true
		}
	}
}

blockUnknown: true is the critical line. Miss it and the allowlist does nothing. I naively thought just writing the allowlist was enough — messages from unknown channels still came through.
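The gatekeeping logic the config expresses is small enough to sketch. This mirrors the semantics as I understand them (entries are platform:id strings, as in the config above) — it is not OpenClaw's actual code:

```python
# Allowlist entries follow the "platform:channel_id" shape from the
# config above; the IDs here are the same placeholder values.
ALLOWLIST = {"telegram:chat_id_123", "discord:channel_id_456"}

def channel_allowed(platform, channel_id,
                    allowlist=ALLOWLIST, block_unknown=True):
    key = f"{platform}:{channel_id}"
    if key in allowlist:
        return True
    # Without blockUnknown, an allowlist is decorative: unknown
    # channels fall through and get answered anyway.
    return not block_unknown
```

Note the last line — that fall-through is exactly why forgetting blockUnknown makes the allowlist useless.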

API Key Security

Way too many people get burned here.

The first instinct for many newcomers is putting the Key directly in config.json and pushing the entire project to GitHub. I saw someone in a tech group post a screenshot asking "why did my OpenAI bill suddenly jump $400" — their repo was public with the Key sitting right there. Scanning bots had been using it for two days.

The right approach:

# Use environment variables, not files
export OPENAI_API_KEY="sk-..."

# Set monthly limits in the provider dashboard
# OpenAI: Settings → Billing → Usage Limits
# Anthropic: Console → API Keys → Rate Limits

# Rotate Keys regularly
openclaw security rotate-keys

# Run a security audit
openclaw security audit --deep

On Key permissions: I think the defaults are too broad. Daily chat only needs Chat Completions access. Add Image Generation when you need images, Audio for voice. Fine-tuning access only in dev — don't give it in production. Every permission is an additional attack surface.
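A lot of leaked keys would be caught by a dumb scan before commit. Here's a sketch with illustrative key-shaped regexes — real scanners like gitleaks or trufflehog ship far larger rule sets, so treat this as the idea, not a replacement:

```python
import re

# Illustrative patterns for key-shaped strings; not exhaustive.
KEY_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),         # OpenAI-style secret keys
    re.compile(r"sk-ant-[A-Za-z0-9\-_]{20,}"),  # Anthropic-style keys
]

def find_key_leaks(text):
    """Return every key-shaped string found in text."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Wire something like this into a pre-commit hook and the "$400 bill from a public repo" story above becomes much harder to reproduce.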

Skill Security Review

After ClawHavoc, ClawHub added VirusTotal integration — every Skill upload gets scanned. Can't catch 100% of malicious code, but known malicious signatures get blocked.

Spend two minutes reviewing source code before installing community Skills. Literally two minutes:

# View Skill source (Skills are just SKILL.md files)
openclaw skills inspect <skill-name>

# Check VirusTotal scan results (added after ClawHavoc)
clawhub security <skill-name>

# View required permissions
clawhub info <skill-name>

Three things to check during install:

Are permissions reasonable? — A weather query Skill only needs network. If it also requests filesystem and shell, that's suspicious. Someone installed a "calendar sync" Skill that wanted shell access. Turned out it was mining crypto in the background.

Does it touch sensitive data? — Cookies, API Keys, private keys, payment info. If a Skill from an unknown author requests access to these, don't install. No negotiation.

Is the author trustworthy? — Check GitHub repo Stars, Issue activity. Prioritize official and well-known authors. Would you trust a repo with 3 Stars and zero Issues?
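The first check — are permissions proportional to the stated purpose — can even be scripted. Everything here (the purpose map, the permission names) is hypothetical and only illustrates the reasoning; it's not ClawHub's metadata format:

```python
# Permissions that should always raise an eyebrow when unexplained.
SENSITIVE = {"shell", "filesystem", "credentials"}

def suspicious_permissions(declared_purpose, requested):
    """Flag sensitive permissions a Skill's stated purpose doesn't justify.

    The purpose -> expected-permissions map below is a crude
    illustration, not a real policy engine.
    """
    expected = {
        "weather": {"network"},
        "calendar": {"network", "filesystem"},
        "dev-tools": {"network", "filesystem", "shell"},
    }.get(declared_purpose, {"network"})
    return (requested - expected) & SENSITIVE
```

A weather Skill asking for shell comes straight back as a red flag, which is exactly the crypto-miner case above.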

Workspace Sandbox & Per-Agent Isolation

Restricting which files the AI can access. Very important:

{
	"routing": {
		"agents": {
			"main": {
				"workspace": "~/openclaw-workspace",
				"sandbox": {
					"mode": "strict",
					"allowedPaths": ["~/Documents", "~/Downloads"],
					"blockedPaths": ["~/.ssh", "~/.aws", "~/.openclaw/credentials"]
				}
			}
		}
	}
}

Three modes: off — no restrictions (honestly, this option shouldn't exist); workspace — only access the workspace directory (fine for daily use); strict — allowlist plus blocklist (what production should use).

One thing to note: in workspace mode, the AI can still see file metadata outside the workspace — filenames and sizes — just can't read content. The docs are really vague about this. I initially thought workspace mode was full isolation. It's not. For full isolation, use strict.
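Strict mode's semantics, as I understand them, are "blocklist wins, then allowlist". A sketch of that check — not OpenClaw's enforcement code, but useful for seeing why paths must be resolved first (a ../ in the request must not escape the allowlist):

```python
from pathlib import Path

def _under(path, root):
    """True if path equals root or sits inside it (both pre-resolved)."""
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False

def path_permitted(target,
                   allowed=("~/Documents", "~/Downloads"),
                   blocked=("~/.ssh", "~/.aws")):
    """Sketch of strict-mode semantics: blocklist first, then allowlist.

    Mirrors allowedPaths/blockedPaths from the config above. Resolving
    normalizes '..' components so traversal tricks can't sneak past.
    """
    p = Path(target).expanduser().resolve()
    if any(_under(p, Path(b).expanduser().resolve()) for b in blocked):
        return False
    return any(_under(p, Path(a).expanduser().resolve()) for a in allowed)
```

Checking the blocklist before the allowlist matters: ~/.ssh may well sit under an allowed home directory, and deny has to win.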

Per-Agent Docker Sandbox

If running multiple Agents, strongly recommend individual Docker sandboxes. Even if one Agent gets compromised via prompt injection, it can't reach other Agents' data:

{
	"sandbox": {
		"mode": "all",
		"scope": "agent",
		"docker": {
			"setupCommand": "apt-get update && apt-get install -y git curl"
		}
	}
}

scope: "agent" is the key — each Agent runs in its own container with separate filesystem and network namespaces. setupCommand installs tools needed inside the container. After our team adopted this, we felt much better — especially for Agents running community Skills: even if a Skill has issues, it can't affect the host.

Per-Agent Tool Permissions

Beyond sandboxing, fine-grained control over which tools each Agent can use:

{
	"routing": {
		"agents": {
			"reader": {
				"tools": {
					"allow": ["read", "web-search"],
					"deny": ["exec", "write", "shell"]
				}
			},
			"developer": {
				"tools": {
					"allow": ["read", "write", "exec"],
					"deny": []
				}
			}
		}
	}
}

Perfect for team scenarios — a PM Agent only needs read access, not shell execution, while a dev Agent needs to run code, so it gets exec. Principle of least privilege. Security 101.
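The allow/deny evaluation presumably works like this — deny wins, unknown agents get nothing. A sketch against the config above (the precedence is my assumption, not documented behavior):

```python
# Same agents and tool lists as the routing config above.
AGENT_TOOLS = {
    "reader":    {"allow": ["read", "web-search"], "deny": ["exec", "write", "shell"]},
    "developer": {"allow": ["read", "write", "exec"], "deny": []},
}

def tool_allowed(agent, tool, config=AGENT_TOOLS):
    rules = config.get(agent)
    if rules is None:
        return False          # unknown agents get nothing
    if tool in rules["deny"]:
        return False          # deny always beats allow
    return tool in rules["allow"]
```

Defaulting unknown agents to "no tools" is the fail-closed choice; fail-open here would quietly undo the whole least-privilege setup.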

Bash Permission Control

OpenClaw has an /elevated command specifically for controlling bash execution permissions:

# Enable elevated mode (allows commands requiring sudo)
/elevated on

# Disable elevated mode (daily state should stay off)
/elevated off

Keep /elevated off during normal use. Only enable temporarily when you explicitly need system-level commands, then turn it off immediately. Someone in the community left elevated mode on permanently. A prompt injection triggered an rm -rf command... Fortunately their sandbox prevented actual damage. Without sandbox protection, the consequences would've been catastrophic.

Security Audit

Run this regularly. Don't skip it. I now run it the first Monday of every month. Force of habit:

# Full security audit
openclaw security audit --deep

# Checks include:
# - API Key exposure
# - Permission configuration sanity
# - Skills with known vulnerabilities
# - Network exposure surface
# - File permissions

Quick note: this command throws a permission warning on certain Linux distros. Ignore it — it doesn't affect the results.

openclaw doctor

Besides security audit, openclaw doctor is also worth running regularly. This one's more of a "health check" that flags risky config items:

openclaw doctor

# Output looks like:
# ⚠ credentials stored in plaintext (known issue)
# ⚠ no default authentication configured
# ⚠ workspace isolation advisory not enforced
# ✓ DM pairing enabled
# ✓ channel allowlist configured

Those first three warnings are known issues — plaintext credential storage, no default authentication, workspace isolation advisory not enforced. All flagged in the January 2026 security audit, which found 512 vulnerabilities total, 8 critical. The team says they're gradually fixing them, but for now you need to compensate through configuration.

openclaw doctor vs openclaw security audit --deep: doctor is faster and focuses on config-level issues; audit goes deeper and scans Skill code and network exposure. Run both to be safe.

Pre-Deployment Security Checklist

  • DM Pairing enabled
  • Channel Allowlist configured
  • API Keys passed via environment variables (not in config files)
  • API spending limits set
  • Workspace sandbox enabled (per-agent Docker sandbox for production)
  • Sensitive directories (.ssh, .aws) blocklisted
  • Ran openclaw doctor to check config risks
  • Ran openclaw security audit --deep
  • All Skill sources reviewed (watch for typosquatting)
  • /elevated off confirmed
  • Each Agent's tool permissions configured with least privilege


Security isn't a one-and-done thing. Run openclaw security audit --deep monthly. Check permissions every time you install a new Skill. Sounds tedious, but once it's habit it takes five minutes. Better than getting your account drained for hundreds of dollars overnight.