OpenClaw, a personal AI assistant platform, presents certain atypical threats to an organization’s security posture. This blog illustrates the nature of these threats as well as how CISOs can address them.
If your organization hasn’t had a “someone installed OpenClaw (formerly Clawd and Moltbot) on a work machine” moment yet, you probably will. Not because OpenClaw is uniquely evil, but because it’s a clean example of a bigger shift within the AI space: agentic tools that can read your data, execute actions, and keep long-lived context are crossing from novelty into mainstream behavior faster than most enterprise security programs can adapt.
CISOs need to understand that OpenClaw is not “another AI app,” it’s a privileged automation runtime. It should be treated the way you would treat an unsanctioned remote admin tool + plugin ecosystem + secrets vault because that’s what it becomes within a corporate environment. For now, think of this as a new class of tools that are Autonomous Computer Operators (ACO’s).
What OpenClaw Changes in the Enterprise
“Chat” becomes “control”
OpenClaw-class agents don’t just generate text, they can run shell commands, read/write files, execute scripts, drive a browser, and automate calendars/email. These actions in turn collapse several traditional boundaries, including:
- User intent vs. executed action
- Untrusted input vs. trusted tool invocation
- One-time prompt vs. durable state memory
The attack surface is “whatever the agent can touch”
OpenClaw integrates with chat platforms and can be triggered remotely through those channels. This means an attacker doesn’t need a new exploit class, they just need a way to get malicious instructions into the agent’s input stream (messages, web content, third-party skills, etc.).
Persistent memory turns prompt injection into a delayed-action problem
The “prompt injection” narrative is usually immediate: bad input → bad output. But with persistent memory, the prompt injection becomes bad input → stored context → later action. In other words, the “payload” can be planted today and “detonate” next week when the agent’s goal, tool availability, or context lines up. For the appsec crowd, this is like the difference between reflective and stored XSS.
Skills/plugins show a new facet of supply-chain risk
A fast-growing ecosystem of community “skills” is a gift to productivity and to attackers. Risks range from poisoned skills to weak trust boundaries and insufficient governance around third-party integrations.
How OpenClaw Enters Enterprise Networks
OpenClaw operates as a shadow agent on enterprise endpoints. Typically, a developer, admin, or power user will install OpenClaw on a corporate laptop “just to save time.” OpenClaw then quickly accumulates:
- Browser cookies/session tokens
- API keys for LLM providers
- Chat platform tokens
- Access to inboxes, files, tickets, docs
At this point, the installer has effectively created a new privileged identity that lives outside IAM controls and outside an organization’s approved tooling. If misconfigured, OpenClaw instances can be reachable from the Internet with weak/missing authentication and unsafe configurations.
From an enterprise lens, that’s not “AI risk.” That’s basic exposure management, except the exposed service may have direct ability to execute actions and access stored secrets.
What OpenClaw Is and Is Not
OpenClaw is not inherently malicious, as some remote access tools are. An organization does not need to prove mal-intent. It needs to focus on managing the blast radius. Weaponizing these this class of AI tools can be supply chain nightmares.
Whether the initial codebase is benign is less relevant than the fact that OpenClaw can:
- Run with high privilege
- Touch sensitive data
- Be influenced by untrusted inputs
- Exfiltrate via normal channels
- Use a plugin ecosystem
This combination is enough to justify controls.
What CISOs Should Do
Inventory and detect (for your team and your SOC)
Use Microsoft Defender for Cloud Apps to find and block OpenClaw:
- Add a custom app for official domains (openclaw[.]ai) and unofficial ones (moltbot[.]you, clawbot[.]ai, and clawdbot[.]you)
- Create an App discovery policy that alerts when:
- the custom “OpenClaw” app is first seen, or
- usage crosses a threshold (users/devices/volume)
MDCA discovery policies are created under Policies → Policy management → Shadow IT → “Create policy” → “App discovery policy.” MDCA can’t directly tell you that “a process named OpenClaw is running,” but it can show you:
- Which users/devices are talking to OpenClaw-related domains (official or fake)
- Whether usage is expanding (Shadow IT trendline)
- Which devices you should pivot to in Defender for Endpoint for host-based validation
Your organization and also do a quick threat hunt, as follows:
DeviceProcessEvents
where Timestamp > ago(30d)
| where ProcessCommandLine has_any ("openclaw", "moltbot", "clawdbot", "clawbot")
or FileName in~ ("openclaw", "node", "npm", "pnpm", "bun")
| project Timestamp, DeviceName, AccountName, FileName, ProcessCommandLine,
InitiatingProcessFileName
| order by Timestamp desc
What the runtime looks like on a machine:
- It’s installed via npm as openclaw@latest and runs on Node ≥ 22.
- The gateway is commonly run on port 18789.
- The “Control UI” is normally on loopback: http://127.0.0.1:18789/.
- The onboarding flow can install a background daemon/service.
- Credentials and auth profiles are stored under ~/.openclaw/… (including OAuth credentials and per-agent auth profiles).
- A gateway token is generated and stored in gateway.auth.token.
Set policy and action
This is not about saying “don’t use AI.” It’s about not connecting agents to corporate mailboxes/files unless sanctioned, not forwarding unknown messages/links into an agent that can act and assuming untrusted content can steer actions (especially with persistent memory). Your options are:
- Block
- Block installation via application control/allowlisting
- Treat it like unsanctioned RPA + remote admin tooling
- Contain
- Only allow OpenClaw in a managed sandbox (VDI/isolated VM / hardened container host)
- Prohibit direct access to corporate credentials on user endpoints
- Separate “agent service accounts” with tight scopes and strong monitoring
- Adopt
- If you go this route, your bar should include: least privilege, secrets management, policy enforcement between input→reasoning→action, audit logs, and admin controls.
Monitor and Tabletop
Add or tune detections for suspicious process trees, new listening ports, and abnormal egress. Use cloud discovery and blocking tools like Microsoft Defender for Cloud Apps to prevent future installs.
Tabletop: “agent installed on finance user’s laptop → tokens stolen → BEC + data leak”
Bottom line
OpenClaw and the platforms it will inspire will continue to entice enterprise users looking to improve efficiency. If your controls assume “AI generates text,” you’re behind. If your controls assume “AI can take actions, store state, ingest untrusted inputs, and be extended by third parties,” you’re thinking at the right level.
At a minimum, give your organization a clear policy stance, contain agent runtimes to managed environments, move secrets out of endpoints, and tighten egress + plugin governance. These controls may seem “boring,” but that’s usually why they work.
Chris Salerno
Chris leads SRA’s 24x7 CyberSOC services. His background is in cybersecurity strategy based on NIST CSF, red and purple teams, improving network defenses, technical penetration testing and web applications.
Prior to shifting his focus to defense and secops, he led hundreds of penetration tests and security assessments and brings that deep expertise to the blue team.
Chris has been a distinguished speaker at BlackHat Arsenal, RSA, B-Sides and SecureWorld.
Prior to Security Risk Advisors, Chris was the lead penetration tester for a Big4 security practice.





