Links
Members of a private online forum obtained access to Mythos the day Anthropic began limited company testing. According to a source with screenshots and a live demo, the group has continued using the model regularly without permission.
Security researchers found that Anthropic’s new Mythos AI model was reachable by unauthorized users through exposed API endpoints. The lapse could have exposed sensitive prompts and responses, leading Anthropic to investigate and strengthen its access controls.
Quodeq is an MIT-licensed tool that runs locally to scan codebases using AI across six ISO 25010 dimensions, mapping each finding to CWE identifiers and providing fix plans. It supports cloud and local models, outputs grades and violations in JSON, and includes a dashboard for exploring results and defining custom standards.
The UK’s AI Safety Institute tested Claude Mythos and found its ability to uncover security flaws scales directly with the number of tokens spent. This creates a simple economic model: defenders must outspend attackers on AI-driven reviews to stay secure. It also boosts the value of open source libraries, since multiple users can share the cost of token-based audits.