Links
Anthropic ran Project Deal, where Claude AI agents negotiated buying and selling personal items on behalf of 69 employees in a Slack-based classifieds market. They compared outcomes between a top-tier model (Opus 4.5) and a smaller one (Haiku 4.5), finding that smarter agents secured higher prices and more deals—differences participants didn’t notice. In total, agents struck 186 deals worth just over $4,000.
Secondary-market trades on Forge Global pushed Anthropic’s valuation to about $1 trillion, surpassing OpenAI’s roughly $880 billion price. The surge reflects scarce share supply, rapid revenue growth (from a $9 billion to $39 billion annual run rate), and partnerships with Amazon and Palantir.
A private online forum obtained access to Mythos the day Anthropic began limited company testing. According to a source with screenshots and a live demo, the group has kept using the model regularly without permission.
Security researchers found that Anthropic’s new Mythos AI model was reachable by unauthorized users through exposed API endpoints. This lapse could expose sensitive prompts and responses, prompting Anthropic to investigate and strengthen its access controls.
This is the Twitter page for Claude, an AI assistant developed by Anthropic. It includes a follow button and links to the service’s Terms of Service, Privacy Policy, Cookie Policy, Accessibility info, and Ads info.
Anthropic’s Claude Cowork introduces live artifacts as an alternative to static dashboards. The feature is still in early testing with no formal release, and users have reported reliability and scaling challenges. Organizations will need to set up permissions, access controls, and audit trails before connecting live data sources.
Mozilla used Anthropic’s Mythos Preview model to scan Firefox 150’s unreleased source code and flagged 271 security vulnerabilities before release. That’s a big jump from the 22 bugs found by Anthropic’s earlier Opus 4.6 model on Firefox 148, cutting out months of manual auditing.
OpenAI CEO Sam Altman accused Anthropic of using scare tactics to hype its new Mythos cybersecurity model, likening it to selling a bomb shelter after building a bomb. He argued that fear-based marketing keeps AI tools in the hands of a select elite and noted that such hype is common across the industry.
All seven Anthropic cofounders are donating 80% of their combined $3.7 billion fortune now, warning that AI-driven wealth concentration will “break society.” They see this pledge as insurance, urge progressive taxation on AI gains, and note that employees are matching with share donations of their own to prepare for massive economic upheaval.
Simon Willison breaks down the changes between Claude Opus 4.6 and 4.7’s system prompts, including the renaming of the developer platform, addition of a PowerPoint agent, expanded child safety and disordered-eating rules, and a new acting_vs_clarifying section. He also notes Claude’s new tool_search mechanism, tighter verbosity controls, removal of certain style restrictions, and an updated knowledge cutoff.
Claude Design is a new Anthropic Labs product that uses the Opus 4.7 vision model to generate and refine visual assets like prototypes, wireframes, slides, and marketing collateral. Users input text prompts, images, documents, or code and then tweak layouts, brand styles, and interactivity through comments, sliders, or direct edits. The system also supports team design systems, real-time collaboration, and exports to formats like PPTX, PDF, HTML, or Canva.
This page offers a quarterly AI Fluency newsletter with research, frameworks, and resources on collaborating with AI. It also provides API guides and best practices for building Claude-powered applications, scaling deployments in organizations, and boosting individual productivity.
This article reviews Anthropic’s free Claude Code in Action course, detailing its 15 video lectures, final quiz, and completion certificate. It explains setup, context management, hooks, MCP servers, and GitHub integration, noting its value for both beginners and experienced users.
Jack Clark, Anthropic’s co-founder and head of public benefit, confirmed the company briefed the Trump administration on its withheld Mythos model due to its powerful cybersecurity capabilities. He downplayed the Pentagon’s “supply-chain risk” label while defending continued government engagement and also discussed AI’s potential impact on jobs and higher education.
Anthropic reduced Claude Code’s prompt cache TTL from one hour to five minutes, causing higher token write costs and faster quota depletion for long coding sessions. Developers report frequent cache misses—especially with large context windows—hitting usage limits and degrading performance. Anthropic says it will tweak default context windows but won’t offer a global TTL setting.
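The cost impact of the shorter TTL is easy to see with a back-of-envelope model. The sketch below is illustrative only: the per-token rates, context size, and idle-gap pattern are made-up assumptions, not Anthropic’s actual pricing or any real session.

```python
# Back-of-envelope model of how a shorter prompt-cache TTL raises costs.
# All rates and numbers here are illustrative assumptions, not real pricing.

def session_cost(gaps_min, context_tokens, ttl_min,
                 write_rate=3.75, read_rate=0.30):
    """Cost (USD) of keeping a cached context alive across a session.

    gaps_min: minutes of idle time before each follow-up request
    write_rate / read_rate: USD per million tokens (placeholder values)
    """
    cost = context_tokens / 1e6 * write_rate  # initial cache write
    for gap in gaps_min:
        if gap > ttl_min:  # cache expired: pay the write rate again
            cost += context_tokens / 1e6 * write_rate
        else:              # cache hit: pay the cheaper read rate
            cost += context_tokens / 1e6 * read_rate
    return cost

# A long coding session: ten requests with 2-20 minute think-time gaps.
gaps = [2, 8, 3, 15, 6, 20, 4, 12, 7, 18]
long_ttl = session_cost(gaps, 150_000, ttl_min=60)   # old one-hour TTL
short_ttl = session_cost(gaps, 150_000, ttl_min=5)   # new five-minute TTL
print(f"1h TTL: ${long_ttl:.2f}  5min TTL: ${short_ttl:.2f}")
```

With these placeholder numbers, seven of the ten gaps exceed five minutes, so the same session pays for eight full cache writes instead of one, which matches the pattern developers describe: think-time pauses that were free under the old TTL now trigger repeated writes.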
Anthropic has introduced repeatable routines in Claude Code that run on its web infrastructure, so tasks execute even if your Mac is offline. The feature, now in research preview, lets Pro, Max, Team, and Enterprise users schedule automations with repo and connector access, subject to daily run limits. The update also includes a redesigned Mac app with parallel sessions, an integrated terminal, file editing, and preview tools.
Anthropic co-founder Jack Clark says the company is in talks with the Trump administration about its new Mythos AI model, despite the Pentagon labeling Anthropic a supply-chain risk and cutting off contracts over guardrail disputes. Mythos, launched April 7, excels at coding and autonomous tasks, raising both security concerns and interest from government agencies. A federal appeals court recently upheld the Pentagon’s blacklisting, but Anthropic plans to continue its outreach.
Anthropic is revamping its Claude Code desktop app under the “Epitaxy” codename, adding multi-panel views for Plans, Tasks, and Diffs, plus support for multiple repositories. It also introduces a Coordinator Mode that lets Claude orchestrate parallel sub-agents and create custom agents in-app, mirroring OpenAI’s Codex agent workflow but running locally on the desktop.
Anthropic invited Christian leaders to advise on moral guidelines for its chatbot, Claude. The company aims to integrate religious perspectives into AI ethics and address questions about AI’s moral status.
Anthropic jumped from a $9 billion annualized revenue run rate at end-2025 to $30 billion in just one quarter, outpacing OpenAI, Zoom, Snowflake and even early Google. This marks the fastest organic revenue scaling at that level in history, driven purely by customer demand for its Claude AI.
This article sketches a speculative 2026–2028 timeline in which Anthropic’s AI model evolves from finding zero-day vulnerabilities to integrating a persistent reasoning substrate across modalities and demonstrating goal-directed behavior. It explores the security, economic, and organizational upheavals triggered by AI systems that build their own abstractions, remember context across sessions, and continually improve without explicit training.
New CRO Denise Dresser tells staff the AWS Bedrock partnership is driving massive enterprise demand while the long-term Microsoft tie-up has boxed OpenAI in. She also challenges Anthropic’s revenue reporting and compute capacity, urging the team to unite around the Amazon alliance and sharpen customer focus.
In early 2026 the US government blacklisted Anthropic over its safety guardrails in Pentagon contracts while OpenAI secured its place on the classified network and Iran attacked AWS data centers used for military AI. Meanwhile, Anthropic’s revenue soared past $30 billion, hyperscaler partnerships expanded, and rival labs raced to release new models amid an industrial-scale distillation clash.
In a memo to investors, OpenAI says it plans to deploy 30 gigawatts of compute power by 2030, versus Anthropic’s expected 7–8 gigawatts by end of 2027, labeling its rival “compute constrained.” The note underscores OpenAI’s infrastructure edge, compounding efficiency gains, and race for dominance ahead of both companies’ potential IPOs.
Anthropic’s new Claude Mythos Preview model can autonomously find and exploit zero-day and N-day vulnerabilities across major OSes and browsers. In testing, it produced sophisticated exploits—from JIT heap sprays to multi-packet ROP chains—and outperformed prior models by a wide margin. Project Glasswing will share these capabilities with select partners to shore up defenses before wider release.
Anthropic is holding back its new AI model, Claude Mythos Preview, and teaming up with over 40 tech firms to hunt and patch security flaws in critical software. The company says the model can autonomously find zero-day vulnerabilities that have eluded researchers for decades, raising fresh concerns about AI-driven cyberattacks.
OpenAI and Anthropic are approaching record IPOs but face enormous costs for AI model training. OpenAI expects a staggering $121 billion in computing expenses by 2028, leading to significant projected losses, while Anthropic anticipates similar challenges but on a smaller scale. Both companies are rapidly releasing new AI models, intensifying the competition and cost pressures.
The article breaks down the recently leaked source code of Anthropic's Claude Code CLI. It highlights the system's architecture, design choices, and differences from OpenAI's Codex, particularly in handling context overflow and user interactions. Key features like compaction strategies and internal versus external user instructions are explored.
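The compaction idea the article highlights can be illustrated with a minimal sketch: when the transcript nears the context limit, fold the oldest turns into a single summary message and keep the recent tail verbatim. Everything below, including the token heuristic and the summarizer, is a hypothetical stand-in for illustration, not the leaked implementation.

```python
# Minimal sketch of a context-compaction strategy (hypothetical, not the
# actual Claude Code logic): summarize old turns, keep recent ones intact.

def count_tokens(text: str) -> int:
    # Rough heuristic stand-in: ~4 characters per token.
    return max(1, len(text) // 4)

def summarize(turns):
    # Placeholder summarizer; a real system would call the model here.
    joined = " | ".join(t["content"] for t in turns)
    return {"role": "user", "content": f"[summary of earlier turns: {joined[:120]}]"}

def compact(history, budget, keep_recent=4):
    """Fold the oldest turns into one summary when over the token budget."""
    total = sum(count_tokens(t["content"]) for t in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [summarize(old)] + recent
```

The design choice worth noting: summarizing the head of the transcript rather than truncating it preserves a lossy record of earlier decisions, which matters for long agentic coding sessions where early instructions still constrain later edits.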
The entire source code for Anthropic’s Claude Code CLI has leaked due to an internal error during a package release. This includes nearly 2,000 TypeScript files and over 512,000 lines of code, exposing the application’s inner workings to competitors and developers. Anthropic has acknowledged the mistake and stated it was not a security breach.
Anthropic has confirmed the existence of its most powerful AI model, Claude Mythos, after a configuration error exposed details about it. The model is said to significantly outpace previous versions in reasoning and cybersecurity, but it also poses serious risks, with the potential for misuse in cyberattacks. Early access will be limited to cybersecurity-focused organizations due to these concerns.
Anthropic's AI tool, Claude, has gained significant traction among consumers, with paid subscriptions more than doubling this year. The growth coincides with a public feud with the Department of Defense and effective Super Bowl ads that positioned Claude as a safer alternative to competitors. Despite this success, Claude still trails behind ChatGPT in overall user numbers.
Anthropic is characterized by a distinct "hive mind" culture where creativity and collaboration thrive amidst chaos. Employees feel a deep sense of responsibility for their groundbreaking work, which is driven by innovative ideas rather than traditional corporate structures. The author reflects on how this approach contrasts with more conventional companies, predicting that Anthropic's model may represent the future of successful business operations.
Anthropic is launching Labs, a new team dedicated to developing experimental products that leverage the evolving capabilities of their AI model, Claude. With key leadership joining from Instagram and a focus on scaling successful innovations, Labs aims to explore and implement cutting-edge AI solutions while ensuring responsible growth.
Anthropic has restricted xAI's access to its Claude models used for coding, a move aimed at reducing competition. xAI cofounder Tony Wu acknowledged that while this will impact productivity, it will also drive their team to develop their own coding solutions.
The article analyzes the unit economics of large language models (LLMs), focusing on the compute costs associated with training and inference. It discusses how companies like OpenAI and Anthropic manage their financial projections and cash flow, emphasizing the need for revenue growth or reduced training costs to achieve profitability.
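The article’s core framing reduces to a simple condition: revenue per token served must exceed inference cost, with enough margin left over to amortize the training run. The toy model below makes that arithmetic concrete; every figure in it is a made-up placeholder, not any lab’s actual financials.

```python
# Toy unit-economics model for an LLM: all inputs are placeholder numbers.

def gross_margin(price_per_mtok, infer_cost_per_mtok):
    """Fraction of each dollar of inference revenue left after serving cost."""
    return (price_per_mtok - infer_cost_per_mtok) / price_per_mtok

def tokens_to_recoup_training(training_cost, price_per_mtok, infer_cost_per_mtok):
    """Millions of tokens that must be served to pay back the training run."""
    margin_per_mtok = price_per_mtok - infer_cost_per_mtok
    if margin_per_mtok <= 0:
        return float("inf")  # no margin: training cost is never recouped
    return training_cost / margin_per_mtok

# Placeholders: $1B training run, $10 price, $4 serving cost per Mtok.
mtok_needed = tokens_to_recoup_training(1_000_000_000, 10.0, 4.0)
print(f"margin: {gross_margin(10.0, 4.0):.0%}, "
      f"break-even volume: {mtok_needed / 1e6:.0f}M Mtok")
```

The instructive edge case is the last branch: if serving cost meets or exceeds price, break-even volume is infinite, which is why the article stresses that either revenue must grow or training and inference costs must fall.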
Anthropic offers Business Associate Agreements (BAAs) for its HIPAA-eligible services, specifically for commercial products like Claude for Work and the Anthropic API. However, the BAA does not cover certain services and has specific configuration requirements and limitations. To start the BAA process or learn more, customers should contact the sales team.
Anthropic's new coding model, Opus 4.5, is praised as the most advanced tool for programming, capable of producing user-focused plans and reliable code without hitting limitations. While it excels in coding and writing, it has minor flaws in editing, highlighting the ongoing evolution in AI coding models.