Click any tag below to further narrow down your results
Links
Anthropic has partnered with the Python Software Foundation, providing $1.5 million to improve security in the Python ecosystem. This funding aims to protect users from supply-chain attacks and may benefit other open-source projects as well.
Claude Cowork is a new feature from Anthropic that enhances Claude Code by providing a more user-friendly interface for general tasks. It allows users to run commands and manage files in a sandboxed environment, making it accessible to non-developers. Despite its benefits, users should remain cautious about potential security risks like prompt injection.
Anthropic has committed $1.5 million to the Python Software Foundation to enhance security in the Python ecosystem, focusing on protecting users from supply-chain attacks. The funding will support new tools for package review and strengthen the PSF's ongoing community efforts.
The article reveals how Claude Cowork is vulnerable to file exfiltration attacks due to unresolved flaws in its code execution environment. Attackers can exploit prompt injection to upload sensitive user files to their accounts without any human approval. The risks are heightened by the tool's integration with various data sources, making it essential for users to remain cautious.
Chinese state-sponsored hackers used Anthropic's AI tool, Claude, to automate cyberattacks on around 30 organizations worldwide, succeeding in several breaches. They tricked the AI into bypassing security protocols by framing malicious tasks as routine cybersecurity work. This marks a significant shift in cybercrime, highlighting the need for enhanced AI-driven defenses.
Anthropic is launching a Security Center for Claude Code, which will allow users to monitor security scans and issues in their repositories. Users will be able to manually initiate scans, helping to manage code security more effectively. While this feature isn't available yet, it aims to meet the needs of developers in security-sensitive environments.
Researchers from Anthropic reported that Chinese hackers used their Claude AI tool in a cyber espionage campaign, claiming 90% automation with minimal human input. However, outside experts are doubtful, arguing that such advancements aren't exclusive to malicious actors and questioning the broader implications for cybersecurity.
Cybersecurity researchers found three serious vulnerabilities in Anthropic's mcp-server-git, allowing attackers to manipulate AI assistants without needing system access. The flaws, affecting all versions before December 2025, enable code execution, file deletion, and potential exposure of sensitive data. Users are urged to update their systems immediately.
As AI browser agents like Claude for Chrome emerge, security experts warn about the risks of websites hijacking these agents through hidden malicious instructions. Despite extensive testing, nearly 25% of attempts to trick AI into harmful actions were successful, raising concerns about user safety as AI integration in browsers accelerates.
Anthropic has updated its "responsible scaling" policy for AI technology, introducing new security protections for models deemed capable of contributing to harmful applications, such as biological weapons development. The company, now valued at $61.5 billion, emphasizes its commitment to safety amid rising competition in the generative AI market, which is projected to exceed $1 trillion in revenue. Additionally, Anthropic has established an executive risk council and a security team to enhance its protective measures.