157,000 GitHub stars in three weeks. 42,000 exposed servers. Million-token afternoons. And a vision for what comes next.
Part I: The Promise That’s Captivating Users
OpenClaw has generated massive excitement in the AI community, drawing praise from figures like Andrej Karpathy, who called it “the most incredible sci-fi takeoff-adjacent thing,” and Elon Musk, who shared it publicly. Within weeks, it became the fastest-growing open-source AI project in history, amassing over 157,000 GitHub stars.
At its core, OpenClaw isn’t just another chatbot—it’s designed as a digital agent that runs on users’ machines, with the ability to perform real actions: opening browsers, clicking buttons, reading and writing files, executing shell commands, sending emails, and controlling smart home devices. Users can interact with it through familiar apps like WhatsApp or iMessage, making it feel like texting a tireless, highly capable assistant.
Reports from users highlight its appeal for everyday tasks, such as cleaning up cluttered downloads folders, triaging thousands of unread emails, or recalling saved recipes. Key features driving adoption include:
- Persistent Memory: Unlike ChatGPT-style tools that reset between sessions, OpenClaw retains preferences and context over time.
- Local-First Privacy: It operates entirely on personal hardware, keeping data on the user’s machine.
- Model Agnosticism: Compatible with various models like Claude, GPT-4, Gemini, or local ones via Ollama.
- Extensive Ecosystem: Over 100 preconfigured “AgentSkills” and 50+ integrations with platforms like Discord, Slack, and productivity tools.
This combination has hooked developers and enthusiasts alike, offering a glimpse of truly agentic AI. However, as user experiences shared online reveal, realizing this potential often involves significant challenges.
Part II: The Friction Users Face
Based on reports from forums, GitHub issues, Reddit threads, and security analyses, OpenClaw’s innovative capabilities come with substantial pain points. These aren’t isolated anecdotes but recurring themes in community discussions.
Security Challenges
Security emerges as a primary concern. The project’s maintainers have warned: “If you can’t understand how to run a command line, this is far too dangerous for you to use safely.”
Researchers have uncovered over 42,000 exposed OpenClaw control panels across 82 countries, often with weak or default credentials, leaving them vulnerable to unauthorized access. A key vulnerability lies in the Model Context Protocol (MCP), which assumes trust for local connections—but in modern setups with reverse proxies, this can allow hackers to inject malicious instructions via something as innocuous as a spam email, potentially compromising sensitive data like SSH keys without any user interaction.
The skills ecosystem adds another layer of risk: 386 malicious skills have been identified, designed to steal passwords and API keys. Users report instances where unreviewed, unsigned skills were installed based on faked popularity metrics, affecting developers across multiple countries.
To mitigate these, many users describe layering on firewalls, VPN tunnels, and reverse proxies—turning setup into a complex ordeal. As one user put it, installing OpenClaw feels less like adding an app and more like inviting an unvetted stranger into your digital home.
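To make the trust problem concrete, here is a deliberately minimal Python sketch. The function names and policies are hypothetical illustrations, not OpenClaw's actual code; they show why a "trust anything from localhost" rule collapses once a reverse proxy sits in front of the service, and what a shared-secret check looks like instead:

```python
import hmac

def naive_is_trusted(remote_addr: str) -> bool:
    """Trust policy many local-first tools ship with: loopback == safe."""
    return remote_addr == "127.0.0.1"

def hardened_is_trusted(auth_token: str, expected_token: str) -> bool:
    """Behind a reverse proxy, every request arrives from 127.0.0.1, so the
    source address proves nothing. Require a shared secret instead, compared
    in constant time to avoid timing leaks."""
    return hmac.compare_digest(auth_token, expected_token)
```

Under the naive policy, a proxied request from anywhere on the internet looks identical to a trusted local one; that is the gap the exposed control panels fall into.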
The Ban Wave from Providers
Anthropic and Google have been banning users who link flat-rate accounts (like Claude Pro or Gemini) to OpenClaw, often without warning, refunds, or reinstatement options. The reason? OpenClaw's operations can consume enormous resources: a simple query like "how are you?" might burn 30,000 tokens, and a full afternoon of use can rack up millions.
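The arithmetic behind those bans is easy to reproduce. This tiny helper uses an illustrative $3-per-million-token rate, which is my assumption and not any provider's actual price:

```python
def session_cost(queries: int, tokens_per_query: int, usd_per_million: float) -> float:
    """Rough session cost: total tokens times the per-million-token rate."""
    return queries * tokens_per_query * usd_per_million / 1_000_000

# At a hypothetical $3 per million tokens, one 30,000-token greeting
# costs $0.09, and an afternoon of 200 such exchanges (6M tokens) costs $18.
```

Flat-rate plans are priced for a human typing at human speed; an agent that spends millions of tokens per afternoon breaks that pricing model, which is the providers' stated complaint.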
Google DeepMind has cited “malicious usage” leading to service degradation, while Anthropic views it as unsustainable for their infrastructure. OpenAI hasn’t followed suit yet, possibly due to hiring OpenClaw’s creator, but users speculate this leniency may not last.
These bans have frustrated many, forcing them to seek alternatives or face unexpected account losses.
Local Model Limitations
For those opting for local models via Ollama to avoid API dependencies, challenges persist. Users frequently hit the 4K context window limit, far short of the 64K context OpenClaw ideally needs. Resource constraints on VRAM, RAM, and CPU lead to performance issues, lower reasoning quality, and inconsistent behavior across model quantizations.
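One workaround users describe is trimming conversation history so it fits the small local window. The sketch below is my own illustration, not OpenClaw code: it uses a crude four-characters-per-token heuristic and keeps only the most recent messages that fit, reserving room for the model's reply:

```python
def rough_tokens(text: str) -> int:
    """Crude token estimate: roughly one token per four characters."""
    return max(1, len(text) // 4)

def trim_history(messages: list[str], ctx_limit: int, reserve: int = 512) -> list[str]:
    """Keep the newest messages that fit within ctx_limit, leaving
    `reserve` tokens of headroom for the model's response."""
    budget = ctx_limit - reserve
    kept = []
    for msg in reversed(messages):          # walk newest to oldest
        cost = rough_tokens(msg)
        if cost > budget:
            break                           # older context no longer fits
        kept.append(msg)
        budget -= cost
    return list(reversed(kept))             # restore chronological order
```

At a 4K limit, an agent with a few long messages of history is already forgetting everything but its most recent exchange, which is why reasoning quality degrades so sharply compared with a 64K window.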
Running in virtual machines (VMs) for isolation is common, but users note it makes interactions feel “not native,” with hardware access feeling clunky. One developer shared that they abandoned OpenClaw to build a custom containerized solution, citing its lack of core security focus.
Setup often involves advanced configurations like model tiering (using cheaper models for simple tasks), sandboxing, and monitoring—turning what should be a tool into a demanding project.
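Model tiering, as users describe it, can be as simple as a heuristic router that sends cheap requests to a small model and hard ones to a frontier model. Everything in this sketch, including the model names and keyword list, is an illustrative assumption:

```python
# Hypothetical tier table: a small local model for chat, a frontier model for hard work.
TIERS = {"cheap": "llama3:8b", "premium": "claude-opus"}

HARD_KEYWORDS = ("refactor", "debug", "plan", "analyze")

def pick_model(prompt: str) -> str:
    """Route long or obviously complex prompts to the premium tier,
    everything else to the cheap tier."""
    looks_hard = any(kw in prompt.lower() for kw in HARD_KEYWORDS)
    return TIERS["premium"] if looks_hard or len(prompt) > 2000 else TIERS["cheap"]
```

Real setups users describe are more elaborate (confidence scoring, retries, fallbacks), but even this crude split can cut most of a session's cost, since the bulk of agent traffic is simple glue work.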
Hidden Costs
While OpenClaw is open-source and free to install, users emphasize the “invisible” expenses. Debugging and initial setup can consume $10 in API calls before any productive work begins. Ongoing use easily hits $30+ per month, with top-tier queries costing up to 64 cents each.
As one user described it: “This isn’t productivity—it’s a ‘geek tax’ for the AI giants.”
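A hard spending cap is one way users keep these invisible costs visible. This minimal sketch is hypothetical, not an OpenClaw feature: it refuses any call that would push the month's spend past a fixed budget:

```python
class BudgetGuard:
    """Track cumulative API spend and refuse charges past a monthly cap."""

    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def charge(self, usd: float) -> bool:
        """Record a charge if it fits the budget; return False to signal
        the caller to downgrade to a cheaper model or stop."""
        if self.spent + usd > self.cap:
            return False
        self.spent += usd
        return True
```

With a $30 cap, a run of 64-cent premium queries gets cut off before the bill surprises anyone, at the price of the agent occasionally refusing work near month's end.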
Platform Instability and Drama
The project’s history underscores its fragility. Originally named “Clawdbot,” it faced a legal threat from Anthropic (due to similarity to “Claude”), prompting renames first to “Moltbot” and then to “OpenClaw”—with GitHub handles squatted by bots in the process. A counterfeit cryptocurrency “$CLAWD” even emerged, raising $16M before crashing.
Users point out this reveals OpenClaw’s dependency on big tech interfaces: a single policy change or legal action could disrupt it entirely.
Part III: Why These Issues Matter
Despite the hurdles, OpenClaw’s brilliance shines through in user stories. Its persistent memory, model flexibility, and vast integrations enable feats like negotiating deals, automating meal plans, or coding autonomously. There’s even “Moltbook,” a platform where OpenClaw agents interact socially—posting, commenting, and upvoting independently.
This points to the future of agentic AI. Yet, as community feedback shows, it’s built on shaky foundations: security as an afterthought, unstable platform dependencies, high costs, and complex setups. The divide between its potential and practical usability limits it to tech-savvy users, leaving broader adoption out of reach.
Part IV: A Personal Vision—What Comes Next
Reflecting on these user-reported challenges, I’ve come to a realization: fixing OpenClaw’s issues isn’t about patching an app—it’s an operating system problem.
We’re grafting agentic AI onto OSes from a pre-AI era, focused on files and apps rather than goals and context. What we need is an Agentic Operating System, designed from the ground up with AI at its core.
Envision this:
- Unified Workspaces Over App Silos: No separate apps for photos, music, or documents. The OS creates ephemeral tools based on your intent, delivering results directly.
- Effortless Data Handling: Forget manual file management. Workspaces like “Project Chimera” automatically organize files, emails, and notes. Query: “Show me the latest investor version.”
- Built-In Security: Kernel-level identity, context, and permissions ensure isolation by default, not as an add-on.
- Smart Economics: The OS optimizes token use, model selection, and budgets to avoid surprises or bans.
- Proactive Assistance: It anticipates needs, like adding items to lists based on calendars or generating ideas when you’re stuck.
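As a toy illustration of the “kernel-level permissions” idea in the list above, a deny-by-default capability check might look like this. This is entirely speculative on my part and matches no existing operating system:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Identity and explicit grants the kernel holds for one agent."""
    agent_id: str
    grants: set[str] = field(default_factory=set)

def check(ctx: AgentContext, capability: str) -> bool:
    """Deny by default: an agent touches a resource only if that
    capability was explicitly granted to it."""
    return capability in ctx.grants
```

The contrast with today's setups is the point: instead of an agent inheriting the full authority of the user account it runs under, each action is tested against a small, explicit grant set.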
This isn’t far-fetched—academic papers are already outlining “Agent Operating Systems” with guarantees for scheduling, memory, security, and responsiveness, and some researchers describe this direction as the “architectural North Star” for AI infrastructure.
The pieces exist; execution is the gap.
Part V: A Call to Action
I’m not the builder here—just someone who synthesized these insights and pondered a better way. But perhaps you are.
If you’re an engineer frustrated by retrofitted security, a founder spotting the market in bridging “possible” to “practical,” a student eager to innovate, or anyone who thinks computing shouldn’t demand constant vigilance—this is your cue.
The friction is real, the need clear, the vision laid out. The next computing shift won’t come from API gatekeepers but from those who reimagine the OS with agency as its essence.
The future is agentic. Will it be fragile experiments or a robust system? I’m betting on the latter—and hoping someone reading this makes it happen.
If this sparks ideas, share it. The right insight, in the right hands, builds tomorrow.
Disclaimer
This article draws on publicly available user reports, GitHub discussions, security research, forum threads, Reddit posts, and X conversations about OpenClaw. The security incidents, ban experiences, cost estimates, and pain points described are aggregated from what multiple users and researchers have shared online as of February 2026. I have not personally run OpenClaw at scale or encountered every issue firsthand. The vision for an “Agentic Operating System” in Part IV is my own speculation and proposal, not an existing product or official roadmap from any project.

