2 min read
|
Saved February 14, 2026
|
Copied!
Do you care about this?
OpenClaw, a popular AI agent, has been linked to security issues due to malware found in numerous user-created add-ons on its ClawHub marketplace. Security researchers identified hundreds of malicious skills that trick users into downloading harmful software that can steal sensitive information. The platform's creator is implementing measures to mitigate these risks, but vulnerabilities remain.
If you do, here's more
OpenClaw, an AI agent that has gained significant traction recently, is now facing major security concerns. Researchers discovered malware hidden in numerous user-submitted "skill" add-ons on its ClawHub marketplace. Jason Meller from 1Password highlighted that the skill hub has become a target for attacks, with the most popular add-on acting as a vehicle for delivering malware.
The AI, which can manage tasks like calendar organization and flight check-ins, runs on user devices and connects through messaging platforms like WhatsApp and Telegram. However, many users grant OpenClaw extensive access to their devices, allowing it to read files and execute commands. This level of access, combined with the presence of malicious skills, amplifies security risks. Between January 27 and February 2, researchers identified 414 malicious add-ons, with some disguised as cryptocurrency trading tools, designed to steal sensitive information like API keys and passwords.
Meller pointed out that these malicious skills often come as markdown files containing harmful instructions. For example, one popular add-on directed users to a link that triggered a command to download infostealing malware. In response to these threats, OpenClaw's creator, Peter Steinberger, has implemented new measures. Users must now have a GitHub account that’s at least a week old to publish skills, and there's a reporting mechanism for potentially harmful skills. Despite these steps, the risk of malware infiltrating the platform remains a challenge.
Questions about this article
No questions yet.