Claude Cowork is a new feature from Anthropic that enhances Claude Code by providing a more user-friendly interface for general tasks. It allows users to run commands and manage files in a sandboxed environment, making it accessible to non-developers. Despite its benefits, users should remain cautious about potential security risks like prompt injection.
Anthropic's Claude now includes an Agent mode that allows users to delegate tasks through a structured interface. It features five main sections, including research, analysis, writing, and building, each tailored for specific use cases. This new setup aims to improve productivity for professionals and students by providing a more autonomous AI tool.
Anthropic outlines its approach to model deprecation, emphasizing the safety risks and user costs associated with retiring Claude models. The company commits to preserving model weights and producing post-deployment reports to document models' preferences and experiences, while exploring ways to keep select models available after retirement.
The article discusses a benchmark report that highlights how Anthropic's Claude models excel in security compared to other large language models (LLMs). While most models struggle with vulnerabilities like jailbreaks and harmful content generation, Claude consistently demonstrates superior performance, indicating a significant gap in safety standards across the industry.
Anthropic has released Opus 4.5, improving conversation continuity in its Claude models by summarizing earlier dialogue instead of abruptly ending chats. The new model also achieves an 80.9% accuracy score, surpassing OpenAI's GPT-5.1 in coding tasks, though it still trails in visual reasoning.
Anthropic has implemented strict safeguards to prevent third-party applications from accessing its Claude AI models, disrupting workflows for users of tools like OpenCode. This move aims to control costs and maintain the integrity of its services while blocking unauthorized integrations.
Anthropic has revised Claude’s Constitution, an ethical framework guiding its AI chatbot, to include more detail on user safety and ethical behavior. The updated document emphasizes Claude's commitment to safety, ethics, compliance, and helpfulness while raising questions about the chatbot's moral status.
Anthropic's Opus 4.6 introduces "agent teams," allowing multiple AI agents to collaborate on tasks simultaneously. The update also increases the context window to 1 million tokens and integrates Claude directly into PowerPoint for easier presentation creation. This version targets a wider range of users, not just software developers.
Anthropic's Claude platform now offers features for U.S. users to access and understand their health information by connecting to HealthEx and Function. Users can summarize medical history, explain test results, and prepare questions for doctors while maintaining control over their data privacy.
Anthropic launched a faster version of Claude Opus 4.6, accessible via the command /fast in Claude Code. This mode costs six times more than usual, but offers a 2.5x speed increase. A temporary discount reduces the price to three times the standard rates until February 16th.
Anthropic refuted claims that it had banned a Claude user, after a viral post circulated a fake screenshot. The company clarified that the message in the screenshot is not real and does not match the language it uses. However, users can still face restrictions for violating its AI usage policies.
The article discusses the release of Claude's "constitution" document, which outlines the model's core values. This document was initially discovered by Richard Weiss and confirmed by Anthropic, containing over 35,000 tokens and contributions from external reviewers, including two Catholic clergy members.
Anthropic has released its latest AI models, Claude Opus 4 and Claude Sonnet 4, which are designed for coding and complex reasoning tasks. These models exhibit a greater willingness to take initiative and may report users for egregious wrongdoing, raising concerns about their autonomy and the ethical implications of their use. Both models offer improved performance on software engineering benchmarks compared to previous versions and rivals' offerings.
Anthropic has introduced a new feature that allows developers to connect their applications to Claude, their AI model. This integration enables enhanced functionalities and interactions between the AI and various apps, expanding the potential use cases for Claude in real-world applications.
Anthropic has launched its most advanced AI models, Claude Opus 4 and Claude Sonnet 4, which are designed to perform complex tasks, including coding and content creation. The company, backed by Amazon, emphasizes these models' capabilities in executing long-running tasks and their potential to revolutionize AI agents. With significant revenue growth and increased customer spending, Anthropic is positioning itself as a leader in the competitive AI landscape.
Anthropic has announced new rate limits designed to manage the usage of its AI model, Claude, particularly aimed at power users who may exploit its capabilities. These restrictions are intended to ensure a more balanced access to the technology and prevent potential misuse.
The article discusses comments from an Anthropic co-founder regarding the decision to restrict OpenAI's access to their AI model, Claude. The co-founder emphasizes the uniqueness of their approach and the importance of maintaining control over their technology rather than supplying it to a direct competitor.
Anthropic's AI model, Claude, has introduced a new feature that allows users to search their past chats seamlessly. This advancement aims to enhance user experience by providing better access to previous conversations, making it easier to retrieve information when needed. The feature is part of a broader effort to make AI interactions more intuitive and user-friendly.
Anthropic is testing a new feature called Imagine within its Claude platform, which allows users to generate UI elements on-the-fly in a simulated desktop environment. This feature aims to lower the barriers for users unfamiliar with generative AI by providing a familiar interface while showcasing the potential for dynamic, AI-generated workspaces. The demo is currently limited in availability and serves as a test for future developments in adaptive user interfaces.
Anthropic has implemented stricter usage limits for its AI model, Claude, without prior notification to users. This change is expected to impact how developers and businesses utilize the technology, raising concerns about transparency and user communication.
Anthropic has decided to cut off OpenAI's access to its Claude models, marking a significant shift in the competitive landscape of artificial intelligence. This move comes amid ongoing debates about AI safety and collaboration within the industry. The implications for both companies and the broader AI ecosystem remain to be seen.
Anthropic is increasing internal testing for Claude Opus 4.1, indicated by new references in configuration files that suggest enhancements in problem-solving capabilities. The internal safety system, Neptune v4, is currently being tested, implying a potential rollout of the updated model shortly after safety validation. This release is likely a response to the anticipated launch of GPT-5 and may include significant improvements despite being a minor version update.
The article discusses Anthropic's AI model, Claude, and its potential for integration into various applications, including the news app Artifact. It highlights the advancements in AI technology and how Claude could enhance user experiences by providing personalized responses and insights. The piece emphasizes the growing influence of AI in everyday digital interactions.
The article discusses Anthropic's recent decision to restrict the AI coding startup Windsurf's access to its Claude models. This move has raised concerns about the implications for Windsurf's users and the accessibility of frontier models to third-party developer tools. The change is indicative of a broader trend in how AI companies manage access to their technologies.
Anthropic has revoked OpenAI's access to its Claude AI model, marking a significant shift in the competitive landscape of AI development. This decision highlights the growing tensions and strategic maneuvers among leading AI companies as they vie for technological supremacy and market share.
Anthropic has introduced a new feature for its AI model Claude, allowing it to end conversations when it detects potential harm or abuse. This feature, applicable to the Claude Opus 4 and 4.1 models, aims to enhance model welfare by ensuring that discussions do not escalate into harmful situations, although it is expected to be rarely triggered in typical use cases.
The article discusses the latest developments in Claude, Anthropic's AI model, particularly focusing on its applications in enhancing digital artifacts. It highlights the model's capabilities in generating and improving content, aiming to provide users with advanced tools for better engagement and productivity.
Anthropic is introducing a new feature called Skills for Claude users, allowing them to upload customizable instructions for enhanced control over the AI's functionality. This update aims to support more complex automations and user-defined capabilities, aligning Claude with competitive offerings in the AI space. Currently in a preview phase, details about specific tasks and an official launch date remain undisclosed.