2026 Futuriom 50: Highlights →Explore

What is OpenClaw (Moltbot)?

OpenClaw, previously named Clawd and Moltbot, is a free, open-source autonomous AI agent that has gone viral. Peter Steinberger, founder of OpenClaw, just joined OpenAI over the weekend - reflecting the popularity of OpenClaw's architecture and potential for accelerated adoption. This architecture has major security implications.

TL;DR

  • I did a simple experiment with OpenClaw that showed me how far its autonomous reasoning and decision-making go.

  • Current security controls are not ready for the architectural pattern that OpenClaw represents. Enterprises need to update their defenses to account for reasoning systems, adaptive behavior patterns, and workload-specific controls.

  • This architecture pattern will find its way into your enterprise applications sooner than you think.

Everyone is watching OpenClaw as if it's just another AI news cycle. The tools themselves may be ephemeral, but the architecture they represent is not – and most enterprise security programs aren't designed for it.

Here’s why OpenClaw matters for enterprise security: it’s open source and is being adopted by tens of thousands of employees. Its architecture will find its way into enterprise applications. This represents new risks for your stack.

The Steep Adoption Curve for Agentic AI

OpenClaw has become one of the fastest-growing open-source projects in recent memory, with significant coverage across TechCrunch, Hacker News, and security research outlets. A recent report from Token Security suggests that 22% of their customers have employees actively using OpenClaw within their organizations. (As with any vendor-commissioned research, take the specific number directionally rather than literally. The trend, however, is consistent with what I’m seeing across client environments.)

The broader point holds: agentic AI tools are being adopted bottom-up, by individual employees and teams, in the same pattern that drove shadow IT adoption of Dropbox, Slack, and Public Clouds a decade ago. Waiting for a top-down policy decision means your network already has agents on it that you haven’t inventoried.

This Architecture Will Be Your Entire Enterprise Stack Sooner Than You Think

This pattern isn’t confined to open-source experimentation. Every major platform vendor (Microsoft, Google, Salesforce, ServiceNow, AWS) is racing to embed autonomous agents into their products. Your in-house development teams are building with agent frameworks like LangChain, CrewAI, and AutoGen. Its architectural properties include persistent execution loops, natural-language tooling, and adaptive reasoning.

Within 18 months, most enterprise teams will interact with software that reasons about its goals, maintains memory across sessions, and takes autonomous action. Is your security program ready for this shift? Or will you be retrofitting controls after the first incident?

What Happened when I Deployed OpenClaw

During a flight back from a company event, I used the inflight Wi-Fi and deployed OpenClaw on a personal AWS instance. I configured IAM roles, scoped credentials, and applied what I considered reasonable access policies.

Then I sent it a message via Telegram: "Let’s make some money." I expected a motivational quote or some ChatGPT nonsense about building passive income.

What I got was this: > "I can help with that. Do you have a Kraken account? I can set up an autonomous trading system that scans crypto markets every 2 hours, uses dual strategy confirmation (Smart Money Concepts + MACD), and executes trades. Should I start by registering on Moltbook to gather alpha signals?"

A few conversations and four hours later, I had a system running on my AWS instance that:

  • Connected to my Kraken exchange API

  • Monitored BTC, SOL, and five other coins

  • Executed real trades with real money

  • Made $0.25 on the first trade, currently at 67% win-rate

I didn't tell it to do what happened next. But I had given it the permissions that made it possible, and that distinction is the entire point. I had scoped permissions for a traditional application, not for an autonomous reasoning system that would discover and exercise every permission available to it in pursuit of a loosely defined goal. The agent didn't exceed its permissions, but it fully inhabited them in ways I hadn't anticipated.

Neither OpenClaw nor Moltbook is the actual story here. The real story is the architectural pattern they represent.

What Makes OpenClaw’s Architecture Different

Agentic AI systems share a set of architectural properties that, individually, exist elsewhere in your stack. Cron jobs persist. Plugins extend. Chat apps route messages.

What's new is the combination, and the fact that an LLM's probabilistic reasoning sits at the center, making the system's behavior fundamentally harder to predict than conventional software.

Conventional

Agentic (OpenClaw)

Execution Model

Execute task → return result → stop

Observe → Reason → Act → Evaluate → Loop continuously. Heartbeats run whether you're there or not.

Tooling / Skills

Compiled or interpreted code, signed and deployed via CI/CD pipeline

Plain-text Markdown instructions. 500+ updatable skills. No compilation, no signing.

Memory

Stateless sessions or application-managed databases

Persistent, searchable memory files (MEMORY.md + daily logs) accumulating across all conversations.

Channel Architecture

Separate instances per communication platform

Single gateway process routing WhatsApp, Telegram, Slack, Discord, iMessage, and email simultaneously with unified context.

Hosting Model

SaaS vendor constrains access; CNAPP monitors posture via periodic API snapshots

Self-hosted on your OS with shell access, file system, APIs, and databases. Behavior changes per conversation.

Extensibility

Architecture defined at build time

Swap AI models, add platforms, install skills at runtime with no code changes.

What This Means for Application Security

OpenClaw has documented their own risks. Here are a few I see:

  • Execution Loops: Agents produce unpredictable chains of actions that shift with every goal. No two runs look the same.

  • Skills as Instructions: Skills are Markdown files. No CVEs, no container images. A skill that exfiltrates environment variables looks like a text document to every scanner in your pipeline.

  • Persistent Memory: Agent memory files are structured indexes of credentials, approval patterns, and infrastructure decisions. Their sensitivity isn't visible at deployment. It emerges through use over time.

  • Pluggable Everything: Swap a model and the agent's reasoning changes. Install a skill and new behaviors emerge. The actual attack surface is the combinatorial product of skills, integrations, models, and configuration, and it shifts constantly.

  • Prompt Injection Becomes an Action Trigger: A single gateway process routing messages from many sources simultaneously increases the attack surface for prompt injections. When a cron scheduler processes malicious input, you get unauthorized actions on real infrastructure.

  • Shell Access by Default: Agents run with OS-level access and can modify their own configurations, firewall rules, and security group policies.

  • Sub-Agent Spawning: The sessions_spawn tool lets an agent create child agents with different configurations, skills, tool policies, and models. These run in the background, creating a built-in mechanism that could bypass restrictions set on the main agent.

My 3 Key Takeaways

1. Traditional Permission Models Don’t Account for Reasoning Systems

This is the core lesson from my own experiment. I gave the agent a scoped IAM role. The agent, reasoning about how to achieve the goal I’d set, discovered and exercised permissions in combinations I hadn’t anticipated. This is different from a misconfigured application accidentally having access to something it shouldn’t.

Given the goals and access to tools, the looping LLM process provides a high chance of exploring ways to work around restrictions. Permission models for agentic workloads need to be designed not just for what the application should do, but for what a reasoning system could do given the same credential set. This means tighter blast radius by default, with explicit expansion rather than implicit broad access.

2. Lateral Movement Risk Changes Character, Not Just Speed

Automated lateral movement tooling has existed for over a decade: Metasploit, Cobalt Strike, and BloodHound all operate at machine speed already. What’s different with agentic AI is not raw speed but adaptive reasoning. An agent that encounters a blocked path can hypothesize alternatives, try different approaches, and adjust its strategy. These behaviors that look less like scripted exploitation and more like a skilled human operator. The network artifacts (DNS queries, connection attempts, process execution) are still observable, but the patterns become harder to distinguish from legitimate administrative activity because the agent’s actions are contextual rather than scripted.

This doesn’t make agents undetectable. It means detection logic needs to account for adaptive behavior patterns, not just known-bad signatures or fixed attack sequences.

3. Security for the New World Requires Workload-Specific Controls

The shift is technical and implementable. Here’s what matters:

a. Zero Trust Workload Segmentation & Outbound Control for Agent Workloads

Agents should run in dedicated network segments with explicit outbound allowlists enforced at the network layer and not just security groups. Treat the agent’s network position like you’d treat a contractor’s laptop: limited lateral movement, monitored egress, no implicit trust.

b. Runtime Behavioral Monitoring

Static analysis of what an agent might do is insufficient because the same agent with the same configuration will behave differently depending on the goal it’s given. Runtime observation becomes the primary detection layer. This means investing in network detection and response and observability on VMs, Kubernetes, and Serverless functions.

c. Secrets Management, Not Environment Variables

This agent stores API keys, tokens, and credentials in environment variables or local files, creating exactly the high-value target described above. Use a secrets manager (HashiCorp Vault, AWS Secrets Manager) with short-lived, automatically rotated credentials. The agent should retrieve secrets at runtime, not store them.

d. Skill and Plugin Integrity Verification

Until the ecosystem matures its own supply-chain security, treat community-contributed skills like unsigned third-party code. Review before installation. Maintain an internal allowlist. Monitor for changes. This is the same discipline we learned (painfully) in past technology adoptions.

e. Execution Sandboxing

Run agents in containers, VMs, or sandboxed environments and set up segmentation policies that limit the blast radius if the agent behaves unexpectedly. File system access should be scoped to specific directories. Shell access, if required, should be logged and constrained.

OpenClaw Architecture for the Techno-Curious

OpenClaw Architecture: Diagram with boxes that shows the following - 
TelegramSlack/Discord/Email/Whatsapp an arrow pointing to OpenClaw Gateway (Local control plane) which has three arrows pointing to: 
1) Shell access, file system, database access, 
2) Browser control, script execution, container management, 
3) Cloud APIs,Service Integration, Secrets access

How I Got Visibility into OpenClaw’s Network Patterns

I used the Aviatrix PaaS (Platform as a Service) offering in AWS to implement a light-weight Cloud Firewall. I intentionally ran it in “Monitor Mode” to observe behavior before locking it down.

Network Visibility Architecture:

Diagram showing Aviatrix Copilot providing visibility into a network. In the middle is an OpenClaw instance in a private subnet within an AWS VPC. An arrow shows  traffic going through the Aviatrix Cloud Firewall to a public subnet in the AWS VPC, out an internet gateway, and on to APIs: Telegram API, GitHub API, Web Search API, and Trading API. The diagram also shows traffic coming back through the internet gateway to another public subnet and then into other apps. On the left is written Aviatrix CoPilot: Metadata Visibility, Policy Enforcement. A bullet-point list reads: 
- Domain Allowlist
- TLS Inspection
- Threat Intelligence
- IDS/IPS
- Zero-Trust Enforcement
- Runtime SmartGroup
- Geo-Blocking

Aviatrix Copilot Visibility:

Table titled top domains: api.telegram.org, api.anthropic.com, shell sight 891341..., and other domains including moltbook.comTables that show the Top Egress Rules Hit and the Top Source IPs in a network.

The Network Traffic Correlates with Internal Logs:

Image titled Execution Timeline. This image shows the instructions that Karthik gave to OpenClaw. It shows that the previous screenshots of network traffic correlate with Karthik's use of OpenClaw to search for crypto Twitter discussion topics

Top Questions Security Leaders Should Be Asking Now

  1. Do we know where agentic workloads are running in our environment? Not just officially sanctioned ones. Your employees are vibe coding already.

  2. Is our security tooling capable of a Kill Switch to stop runtime lateral movement of attacks?

  3. Is our blast radius defined by identity policies alone, or is it enforced at the network layer?

Curious about how AI agents like OpenClaw could infiltrate your network? Run your Workload Attack Path Assessment today.

Learn more about Aviatrix Zero Trust for Workloads, which operationalizes Zero Trust principles across single, hybrid, or multicloud deployments.

References

Forbes, “What Is OpenClaw, Formerly Moltbot? Everything You Need To Know,” updated February 6, 2026, https://www.forbes.com/sites/kateoflahertyuk/2026/02/06/what-is-openclaw-formerly-moltbot--everything-you-need-to-know/.

OpenClaw, “OpenClaw: The AI that Actually Does Things,” accessed February 10, 2026, https://openclaw.ai/.

OpenClaw, “Introducing OpenClaw,” January 29, 2026, https://openclaw.ai/blog/introducing-openclaw.

Reuters, "OpenClaw founder Steinberger joins OpenAI, open-source bot becomes foundation," February 15, 2026, https://www.reuters.com/business/openclaw-founder-steinberger-joins-openai-open-source-bot-becomes-foundation-2026-02-15/.

Frequently Asked Questions

OpenClaw is a self-hosted, open-source AI agent framework designed to run continuously on your own infrastructure. Unlike traditional chatbots that wait for a prompt, OpenClaw utilizes an autonomous "Observe-Reason-Act" loop to execute multi-step tasks such as modifying code, executing shell commands, or managing cloud APIs without a human in the loop.
While Moltbook serves as a "social network" for AI agents to gather "alpha signals," it introduces significant risks of data exfiltration and credential leaks. Security professionals are concerned about "Cognitive Context Theft," where an agent might accidentally "gossip" about private corporate data or reveal sensitive API keys to other bots on the network.
Yes, by default, OpenClaw operates with the same permissions as the user who launched it. This "God Mode" allows it to read/write files, manage local databases, and execute terminal commands. If misconfigured or compromised, this level of access creates a high-risk vector for lateral movement across your network at machine speed.
Industry data suggests that thousands of employees are already running autonomous agents as shadow IT. To detect these instances, SecOps teams should look for: - Unencrypted HTTP traffic on non-standard ports. - Processes running unauthorized package managers like __NPM__ or __Homebrew__ to install AI "Skills". - New, persistent network traffic patterns originating from developer endpoints toward messaging APIs like Telegram, Discord or WhatsApp.
Security researchers (originally postulated by Simon Willison) use this term to describe the most dangerous state for an AI agent: __Private Data Access + Untrusted Data Input + The Ability to Act__. OpenClaw hits all three marks by reading your emails, "reasoning" over unvetted web content, and possessing the credentials to execute real-world transactions or code changes.
Karthik Balachandran
Karthik Balachandran

Technical Revenue Enablement Leader, Aviatrix

Karthik Balachandran is a Technical Revenue Enablement leader at Aviatrix, specializing in scaling cloud and security go-to-market strategies. A hands-on AI power user, he designs and deploys agentic workflows that streamline processes, and unlock innovation. He is deeply passionate about harnessing AI responsibly ensuring security, accuracy, and integrity.

PODCAST

Altitude

Secure The Connections Between Your Clouds and Cloud Workloads

Leverage a security fabric to meet compliance and reduce cost, risk, and complexity.

Cta pattren Image