ByteDance has introduced an AI coding assistant called Doubao-Seed-Code for 40 yuan per month, aiming to disrupt the market amid rising AI adoption in China. The model achieved a top score on the SWE-Bench Verified test, comparable to major systems like Anthropic's Claude Sonnet. This release follows Anthropic's recent restrictions on access for Chinese firms.
Anthropic and Microsoft have expanded their partnership, making Claude Sonnet 4.5, Haiku 4.5, and Opus 4.1 available in public preview on Microsoft Foundry. This integration allows developers to use Claude for coding, agent development, and office tasks while streamlining procurement processes within the Microsoft ecosystem.
Anthropic's Nicholas Carlini detailed how 16 Claude Opus AI agents developed a C compiler over two weeks with minimal supervision. They produced a 100,000-line Rust-based compiler capable of building a Linux kernel and handling major open source projects. The project highlights the challenges and advantages of using AI for coding tasks.
Anthropic closed a $30 billion funding round, bringing its valuation to $380 billion, more than double its valuation from last September. The company, founded by ex-OpenAI researchers, is focusing on enterprise AI tools like Claude and aims to expand its infrastructure and product offerings in a competitive market.
Anthropic's new coding model, Opus 4.5, is praised as its most advanced programming tool yet, capable of producing user-focused plans and reliable code without running into usage limits. While it excels at coding and writing, reviewers note minor weaknesses in editing, reflecting the ongoing evolution of AI coding models.
Apple is reportedly partnering with Anthropic to create an AI coding platform aimed at enhancing software development. This collaboration seeks to leverage Anthropic's expertise in artificial intelligence to streamline coding processes and improve developer productivity.
Lovable, a vibe-coding tool, reports that Claude 4 has reduced coding errors by 25% and increased speed by 40%. Anthropic's Claude Opus 4 has demonstrated strong performance in coding tasks, scoring 72.5% on SWE-bench and sustaining performance over extended sessions. Despite competition from Google's Gemini models, Claude 4 is noted for its coding efficiency and effectiveness, though opinions on its overall superiority are mixed.