OpenClaw security fears lead Meta, other AI firms to restrict its use

February 20, 2026
AI

Here’s something that caught my attention: big AI players are tightening the leash on a viral tool called OpenClaw. Last month, Jason Grad warned his startup team to steer clear, calling it unvetted and risky. Now a Meta executive has told staff to keep OpenClaw off their work laptops or risk losing their jobs. According to Paresh Dave at Wired, the software is seen as unpredictable and a potential privacy nightmare. Its creator, Peter Steinberger, launched it last November as a free, open-source project, and its popularity exploded recently as coders shared their experiences online. Last week, Steinberger joined OpenAI, which plans to keep OpenClaw open source and supported. So what’s the big deal? As Wired reports, the fear is that unrestrained AI agents like this could cause security breaches or chaos in sensitive environments. These tools are evolving fast, and the industry is still figuring out how to keep everyone safe while continuing to innovate.

Last month, Jason Grad issued a late-night warning to the 20 employees at his tech startup. “You’ve likely seen Clawdbot trending on X/LinkedIn. While cool, it is currently unvetted and high-risk for our environment,” he wrote in a Slack message with a red siren emoji. “Please keep Clawdbot off all company hardware and away from work-linked accounts.”

Grad isn’t the only tech executive who has raised concerns to staff about the experimental agentic AI tool, which was briefly known as MoltBot and is now named OpenClaw. A Meta executive says he recently told his team to keep OpenClaw off their regular work laptops or risk losing their jobs. The executive told reporters he believes the software is unpredictable and could lead to a privacy breach if used in otherwise secure environments. He spoke on the condition of anonymity in order to comment frankly.

Peter Steinberger, OpenClaw’s solo founder, launched it as a free, open source tool last November. But its popularity surged last month as other coders contributed features and began sharing their experiences using it on social media. Last week, Steinberger joined ChatGPT developer OpenAI, which says it will keep OpenClaw open source and support it through a foundation.
