Links
This article outlines a coding style guide based on the Fizzy codebase, 37signals' open-source project management tool. It details best practices, patterns, and design philosophies derived from actual production code, emphasizing a "Vanilla Rails" approach with minimal dependencies.
This article explores the emotional connection we have with technology, contrasting the thrill of hands-on experiences like coding or driving a sports car with the frustration of most software. It argues for a balance between efficiency and the joy of engaging with machines on a deeper level.
The article traces Stack Overflow's decline from vibrant community to struggling platform. It examines the impact of moderation policies, the rise of AI chatbots, and the emergence of agentic coding tools that have changed how developers seek help and share knowledge. Ultimately, it reflects on the loss of a once-valuable resource for programmers.
dbt Labs released a set of agent skills that enable AI coding agents to follow dbt best practices for analytics engineering. These skills help agents build models, troubleshoot issues, and understand complex workflows, making them more effective in data tasks. The skills are designed to evolve with community feedback and can be customized for specific organizational needs.
OpenAI released GPT-5.1, enhancing speed and efficiency for coding and agentic tasks. The model adapts its reasoning based on task complexity and introduces new tools like `apply_patch` for code editing and a shell tool for command execution. Developers can leverage extended prompt caching and a "no reasoning" mode for faster responses.
Kombai is a tool designed for frontend development, integrating deep browser access and an understanding of your codebase. It automates code generation, refactoring, and testing while adhering to best practices from numerous libraries. The tool is safe for enterprise use, ensuring it doesn't affect backend systems.
This article analyzes the vulnerabilities of the Model Context Protocol (MCP) used in coding copilot applications. It identifies critical attack vectors such as resource theft, conversation hijacking, and covert tool invocation, highlighting the need for stronger security measures. Three proof-of-concept examples illustrate these risks in action.
This article discusses the importance of context engineering in AI coding, emphasizing how it differs from traditional prompt engineering. It explores how effective context can enhance AI's performance within teams and outlines strategies for creating better workflows.
This article explores the implications of fully automated coding, where human involvement is minimal. It discusses how codebases could expand significantly due to the removal of developer time constraints and the challenges of specifying precise requirements for machine-generated software.
This article explains how to create a CLAUDE.md file to effectively onboard the Claude coding agent to your codebase. It emphasizes the importance of concise, relevant instructions and suggests organizing project-specific details separately to improve Claude's performance.
This article covers recent updates to Amp, a coding agent that enhances development by integrating advanced models. It highlights community feedback praising Amp's user experience and capabilities, along with changes such as the removal of custom commands and the introduction of new agent modes.
The article outlines a structured approach to using Claude Code for software development. It emphasizes separating research, planning, and implementation to enhance control over code quality and reduce errors. The author details specific techniques for annotating plans and managing tasks effectively.
The article outlines how to effectively integrate AI tools into a software development workflow. It emphasizes breaking down tasks, managing context, and refining approaches to leverage AI for better productivity. The author shares practical strategies and a structured cycle for using AI effectively in coding.
This article reviews the Amp coding agent, highlighting its unique features like thread storage, model selection, and message queueing. The author shares personal insights on how these features enhance productivity and efficiency in coding tasks.
This article outlines ten effective strategies to optimize Python code for better performance. It covers techniques like using sets for membership testing, avoiding unnecessary copies, and leveraging local functions to reduce execution time and memory usage. Each hack is supported by code examples and performance comparisons.
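Two of the techniques that blurb names, set-based membership testing and avoiding unnecessary intermediate copies, can be sketched as follows (a minimal illustration, not the article's own code):

```python
import timeit
from itertools import islice

# Membership testing: a set hashes lookups, a list scans linearly.
items_list = list(range(10_000))
items_set = set(items_list)

list_time = timeit.timeit(lambda: 9_999 in items_list, number=1_000)
set_time = timeit.timeit(lambda: 9_999 in items_set, number=1_000)
assert set_time < list_time  # set lookup is O(1); list scan is O(n)

# Avoiding unnecessary copies: a slice like big[:100] builds a new
# list, while islice iterates over the original without copying.
big = list(range(1_000_000))
total = sum(islice(big, 100))  # no intermediate list created
assert total == sum(big[:100])
```

The same pattern generalizes: prefer data structures whose lookup cost matches the access pattern, and prefer iterators over slices when the intermediate sequence is only consumed once.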
Novita AI presents a series of optimizations for the GLM4-MoE models that enhance performance in production environments. Key improvements include a 65% reduction in Time-to-First-Token and a 22% increase in throughput, achieved through techniques like Shared Experts Fusion and Suffix Decoding. These methods streamline the inference pipeline and leverage data patterns for faster code generation.
The article explores the addictive nature of working with AI agents in coding, highlighting how this can lead to poor quality contributions and a distorted sense of collaboration. It discusses the phenomenon of people forming unhealthy dependencies on these tools, resulting in sloppily produced code and frustrating experiences for maintainers.
This article explains the differences between skills, commands, and rules in AI coding tools. It emphasizes how skills provide optional expertise for agents, while commands are explicit user instructions, and rules are fixed and always apply. The author also discusses effective strategies for organizing these elements to optimize performance.
This article explores how certain developer behaviors lead to insecure software. It examines these behaviors through the lens of behavioral economics and proposes strategies to encourage better coding practices.
The article explains a method for enhancing AI-generated coding plans by leaving inline comments directly in the plan file. The author finds this approach more effective than using chat interfaces, as it encourages deeper engagement and leads to better results. This process helps avoid mistakes and keeps the reviewer accountable.
The article explains how to continue coding with Claude when you reach your usage limits by connecting to local open-source models. It provides step-by-step methods for using LM Studio and directly connecting to llama.cpp. The author recommends specific models and offers tips for managing performance expectations.
Z.ai announced GLM-4.7-Flash, a new AI model designed for local coding and various tasks like creative writing and translation. It offers high performance and efficiency, making it suitable for lightweight deployments. The model includes options for free usage and a high-speed paid version.
The article describes how Cate Hall's idea inspired the author to create an app that prompts users to consider better ways to do tasks. The app sends random notifications throughout the day, reminding the user to evaluate their current activity. The entire process of conceptualizing and building the app took under 15 minutes.
The author discusses a rapid transition from manual coding to using language models as coding agents. While this change improves productivity and creativity, it also raises concerns about the potential atrophy of manual coding skills and the quality of code generated by these models.
The article discusses the upcoming Grok 4.2 release, highlighting its integration with local and remote coding agents. Users will start by installing an npm package to run the Grok agent locally, with a dedicated web interface for configuration and environment management. Future updates may enhance remote capabilities and introduce Grok Code 2.
The author shares their experience of quickly replacing a broken SaaS service with LLM-generated code. They highlight the ease of building a simple solution tailored to their needs, while discussing the implications for SaaS products and software engineers.
This article discusses a new approach to integrating Stripe using one-shot coding agents. It provides practical guides and examples for developers looking to streamline their integration process. Alistair from the Leverage team shares insights on the implementation.
This article explores the evolving landscape of reinforcement learning (RL) environments for AI, drawing parallels with early semiconductor design challenges. It emphasizes the importance of verifying AI models' outputs and highlights the dominance of AI labs as early adopters of RL environments, particularly in coding and computer use. The future potential lies in long-form workflows that integrate various tools across sectors.
This article examines Meta's Llama AI models, focusing on their capabilities and limitations for developers. It tests Llama 3.2 on CRUD operations using the Svelte framework and discusses the setup process and comparisons with other AI models.
Layrr is an open-source tool that allows developers to design interfaces visually while editing their actual code in real-time. It integrates with any tech stack, enabling drag-and-drop design similar to Figma or Framer. Layrr is free to use, with no subscriptions or vendor lock-in.
This article provides a step-by-step guide for designers to use Claude Code, a tool that translates plain English instructions into code. It covers installation, project creation, and deployment, enabling designers to build apps without needing deep coding knowledge.
The article discusses the challenges of promoting the web platform in an era where AI and frameworks like React dominate web development. It highlights the rise of "vibe coders" who rely on AI for coding, often leading to suboptimal outcomes and a lack of innovation. Suggestions for fostering web-native solutions include teaching better prompting techniques and spotlighting projects that utilize the native web platform.
This article provides feedback on the Coding Agent package from the pi-mono project. It highlights the importance of user input and directs readers to the documentation for available qualifiers.
The article discusses the integration of AI in coding with Elixir, highlighting its strengths and weaknesses. While AI excels in productivity and code simplicity, it struggles with architectural decisions and debugging complex issues like concurrency. Ultimately, the author sees potential for improvement as AI learns from the codebase.
This article analyzes developers' workflows and frustrations, highlighting how time-consuming tasks related to documentation and proprietary code can be. It discusses survey results showing that while many developers use AI to assist with coding, they often find documentation and learning code bases to be more challenging and frustrating.
This article discusses challenges faced by AI agents when performing long tasks across multiple sessions without memory. It introduces a two-part solution using initializer and coding agents to ensure consistent progress, effective environment setup, and structured updates to maintain project integrity.
The article discusses the rapid advancements in AI, particularly in coding and reasoning capabilities, highlighting how tools like Claude can automate programming tasks and conduct experiments. It emphasizes the potential for AI to solve complex problems that were previously thought to be infeasible. The author reflects on the implications of these changes for the future of software development and reasoning.
The article discusses the limitations of AI agents in software development, highlighting that humans still write most of the code. Despite experimenting with various coding agents, the author found that AI's productivity gains were minimal and its outputs often missed critical details and context. Key issues include a loss of mental model and AI's inability to self-assess its performance accurately.
LingGuang, an AI coding app by Ant Group, gained over 2 million downloads within six days of its release. The app allows users to build personalized apps using simple language prompts but briefly crashed due to high traffic on its flash program feature.
This article outlines effective strategies for using coding agents in software development. It covers the importance of planning, managing context, and customizing agent behavior through rules and skills. Additionally, it highlights common workflows and how to extend agent capabilities for better results.
Qwen has launched Qwen3-Max-Thinking, a model aimed at solving difficult math and coding problems. It features a large context window and can perform complex reasoning tasks while integrating tool use and web searches. Developers can access it through Alibaba Cloud's Model Studio for both detailed analysis and quicker responses.
The article examines why many software projects fail, emphasizing that failures often stem from strategic missteps rather than poor execution. It contrasts the success of a flawed piece of acoustics software with the failure of a website project for the Australian Bureau of Meteorology, highlighting the importance of clear strategic goals and understanding user needs.
Anthropic has released Claude Opus 4.6, an upgraded AI model that enhances coding skills, multitasking, and reasoning capabilities. It features a 1M token context window and outperforms previous models and competitors in various evaluations, making it suitable for complex tasks in finance, coding, and document creation.
Philippe discusses using small language models for coding tasks, particularly with a Golang project called Nova. He outlines techniques for improving model performance through tailored prompts and a method called Retrieval Augmented Generation (RAG).
Townie v5 is an in-browser AI agent that integrates with your code editor, allowing you to perform a variety of coding tasks quickly. It can scaffold projects, manage files, and run code with instant deployment, making it a versatile tool for developers. The new version offers different modes to control how it interacts with your code.
Microsoft aims to replace its C and C++ codebase with Rust by 2030, leveraging AI to automate the translation process. They're hiring engineers to develop tools for this extensive project, which is part of a broader effort to improve software security and reduce technical debt. However, a recent update clarifies that this initiative is a research project, not a direct rewrite of Windows.
This article analyzes the strengths and weaknesses of GPT-5.1 Pro and Gemini 3 as AI tools for coding and problem-solving. While GPT-5.1 Pro excels in backend tasks and detailed research, Gemini 3 is preferred for speed and frontend work. The author emphasizes the need for better integration of GPT-5.1 Pro into development environments.
ChatGPT now runs Bash commands, executes code in multiple programming languages, and can download files directly into its environment. It can also install packages using pip and npm through a proxy, enhancing its functionality significantly. However, documentation on these updates remains sparse.
A serious vulnerability in React, identified as CVE-2025-55182, allows remote code execution by unauthenticated attackers. It affects multiple versions of React and related frameworks like Next.js, prompting security firms to issue patches and warnings of imminent exploitation.
This article reviews performance hints from a blog by Jeff Dean and Sanjay Ghemawat, emphasizing the importance of integrating performance considerations early in development. It discusses estimation challenges, the significance of understanding resource costs, and the complexities of making performance improvements in existing code.
Stack Overflow CEO Prashanth Chandrasekar discusses the significant impact of AI, particularly ChatGPT, on the platform and its community. He outlines how the company shifted focus, reallocating resources to adapt to AI's rise while maintaining user trust and engagement. The conversation reveals the ongoing struggle between AI usage and user skepticism.
This article outlines how to create better prompts for v0 to improve output quality and efficiency. It emphasizes three key inputs: product surface, context of use, and constraints, providing examples to illustrate their importance. By being specific in prompts, users can achieve faster generation times and cleaner code.
The article reviews Crush, an AI coding agent from Charm that operates in the terminal. The author details their experience using Crush to implement OpenGraph images, comparing it to other tools, and discusses its strengths and limitations, particularly in terms of cost and efficiency.
The article discusses how the effectiveness of large language models (LLMs) in coding tasks often hinges on the harness used rather than the model itself. By experimenting with different editing tools, the author demonstrates significant improvements in performance, highlighting the importance of optimizing harnesses for better results.
Eno Reyes, co-founder of Factory, discusses their approach to developing AI coding agents that emphasize high-quality code. Factory's platform integrates harness engineering to optimize code quality and offers tools for organizations to enhance their coding practices. The conversation highlights the importance of quality signals in software development and the potential of AI agents to improve productivity without sacrificing standards.
The article discusses the rise of AI coding agents that enable users to create personalized software solutions tailored to their specific needs. It highlights the author's experience in improving spam email management through a custom-built interface, demonstrating how these tools can save time and simplify tasks. The piece anticipates a shift away from generic software toward more bespoke applications as these technologies advance.
MiniMax-M2.5 is a large language model that enhances productivity in digital work environments, focusing on tasks like coding and office applications. It boasts improved efficiency and performance metrics compared to its predecessor, M2.1. The article also details various API relay service providers with discounts for users.
This article chronicles the development and impact of the Ralph Wiggum Technique created by Geoff Huntley, detailing key events from its inception in June 2025 to early 2026. It discusses the tool's unique approach to coding, the challenges faced, and lessons learned from various experiments with the technique.
The article discusses advancements in coding efficiency using AI agents, particularly focusing on improvements from GPT 5. It highlights a shift in the author's workflow, emphasizing reliance on AI for coding and the reduced need for manual intervention. The author compares different AI models and shares insights on their impacts on software development.
This article outlines how to create a coding agent using GPT-5.1 and the Agents SDK. It demonstrates setting up the agent to scaffold a new app based on user prompts and refine it using web searches and shell commands. The guide includes code examples for establishing a workspace and executing shell commands safely.
The author missed the word game Wordiest after switching to iOS, so they used AI tools to reverse engineer the game and create a new version called Wordiest Classic. After overcoming various technical challenges, they received approval from the original developer and launched the game on the Apple App Store.
Cline-bench aims to create accurate benchmarks for evaluating AI models on real software development tasks. It focuses on capturing complex, real-world engineering challenges rather than simplified coding puzzles. Open source contributions will help shape these benchmarks and improve AI coding capabilities.
Stack Overflow's user engagement has plummeted as AI tools like ChatGPT take over coding queries. However, the company has adapted by monetizing its extensive content library and now generates significant revenue from enterprise solutions and licensing deals. While the forum may be declining, the company's financial health is improving.
The article discusses how Claude Code's new native support for Language Server Protocol (LSP) changes the landscape for AI coding tools. This integration gives Claude access to sophisticated code understanding, threatening the business models of startups that aimed to enhance AI's code comprehension. The author reflects on their own project, which has become obsolete due to these rapid advancements.
SWE-Pruner is a tool designed for software development that reduces token costs and latency by selectively pruning irrelevant code. It uses a lightweight neural skimmer to retain critical lines based on task-specific goals, making it adaptable to various coding scenarios. The framework integrates with multiple LLMs and supports complex workflows.
This article evaluates various AI coding agents by sorting them into Hogwarts houses based on their performance in solving Advent of Code problems. It highlights differences in coding styles, solution accuracy, and problem-solving approaches among the agents. The findings suggest personality traits of each agent reflect their coding behaviors.
Kimi K2 Thinking is an advanced open-source reasoning model that excels in various benchmarks, achieving remarkable scores in tasks like coding and complex problem solving. It can perform hundreds of sequential tool calls autonomously, demonstrating significant improvements in reasoning and general capabilities. The model is now live on its website and accessible via API.
This article outlines an effective workflow for coding with AI, emphasizing the importance of planning, breaking work into manageable chunks, and providing context. It shares specific strategies for maximizing the benefits of AI coding assistants while maintaining developer accountability.
The article argues that the cost of managing technical debt is decreasing due to advancements in large language models (LLMs). It suggests that developers can afford to take on more technical debt now, as future improvements in coding models will help address these shortcuts. The author challenges traditional coding practices, advocating for a shift in how software engineers approach coding quality.
This article details the process of converting a large codebase from TypeScript to Rust using Claude Code. The author shares specific challenges faced during the porting, including issues with abstractions, bugs caused by language differences, and how they optimized interaction with the AI tool to improve results.
ByteDance has introduced an AI coding assistant called Doubao-Seed-Code for 40 yuan per month, aiming to disrupt the market amid rising AI adoption in China. The model achieved a top score on the SWE-Bench Verified test, comparable to major systems like Anthropic's Claude Sonnet. This release follows Anthropic's recent restrictions on access for Chinese firms.
The article discusses the release of SWE-1.5, a new coding agent that balances speed and performance through a unified system. It highlights the development process, including reinforcement learning and custom coding environments, which improve task execution and code quality. SWE-1.5 aims to surpass previous models in both speed and effectiveness.
Eric J. Ma discusses how to enhance coding agents by focusing on environmental feedback rather than just model updates. He introduces the AGENTS.md file for repository memory and emphasizes the importance of reusable skills to help agents learn from mistakes and improve over time.
The article introduces Ai2's Open Coding Agents, which allow developers to train coding models on their private codebases with a new method that simplifies data generation and reduces costs. The recent release of SERA-14B enhances this capability, making it easier to adapt coding agents for specific needs. The approach focuses on generating synthetic training data that reflects developer workflows rather than relying solely on correct coding examples.
This article explains how to set up OpenCode with Docker Model Runner for a private AI coding assistant. It covers configuration, model selection, and the benefits of maintaining control over data and costs. The guide also highlights coding-specific models that enhance development workflows.
David Heinemeier Hansson argues that while AI can generate code, it lacks the quality and understanding that junior developers bring to the table. He emphasizes that coding isn't just about writing—it's about problem-solving and system design, areas where AI struggles. The future of software development relies on nurturing human talent, not replacing it with AI.
This article details the Agent Skills Marketplace, which offers over 214,000 open-source skills for AI coding assistants. Users can search, filter, and find skills using categories, popularity, and AI semantics. The marketplace supports skills that comply with the open SKILL.md standard.
LinkedIn developed the Contextual Agent Playbooks & Tools (CAPT) to provide AI coding agents with essential organizational context. This framework allows these agents to access internal systems and execute workflows tailored to LinkedIn's unique environment, improving productivity for engineers.
The author draws parallels between coding and ceramics, emphasizing both as malleable mediums for ideas. As automation increases in software development, the focus shifts from routine coding to more creative, unconventional projects. The essence of craft remains valuable even as production work becomes automated.
The article discusses a workflow for using AI as a design partner in coding projects, rather than a quick code generator. It emphasizes the importance of thorough analysis, documentation, and incremental development to enhance clarity and maintainability. This approach helps catch issues early and improves overall code quality.
This article explains how to use the Benchmark module in Ruby to measure and report execution time for code snippets. It includes examples of different benchmarking methods and how to interpret the results. Instructions for installation and contribution to the module are also provided.
This article outlines how designers can leverage AI tools like Cursor and Claude Code to build web applications without needing extensive coding knowledge. It provides a step-by-step approach to creating projects, from setting up the tools to deploying live websites.
Google unveiled Gemini 3, an advanced AI model designed to enhance coding and development workflows. It supports agentic coding, multimodal understanding, and allows users to create complex applications with simple prompts. Key features include the new Google Antigravity platform and improved tools for document and video reasoning.
Google has released the Gemini 3 Flash model, which offers faster performance and improved coding capabilities compared to previous versions. It outperforms the older 2.5 Flash in several tests and is more cost-effective for developers. The model maintains its ability to generate interactive content and simulations.
The Codacy AI Risk Hub helps teams enforce secure coding practices for AI-generated code. It prevents vulnerabilities by tracking model usage, scanning for security risks, and managing hardcoded secrets across projects. This tool aims to maintain code quality while leveraging AI capabilities.
This article discusses Recursive Language Models (RLMs) as a solution to the problem of context rot in large language models. RLMs utilize a REPL environment to manage long contexts efficiently, enabling models to maintain performance even with extensive input data. The author highlights their potential for agent design and optimization while acknowledging current limitations.
Olmo 3 introduces advanced open language models with 7B and 32B parameters, focusing on tasks like long-context reasoning and coding. The release details the complete model lifecycle, including all stages and dependencies. The standout model, Olmo 3 Think 32B, claims to be the most capable open thinking model available.
Ramp created Inspect, a background coding agent that enhances developer productivity by providing a fully equipped sandboxed environment. It integrates various tools for both backend and frontend tasks, allowing efficient coding and testing, with a focus on speed and user agency.
This article outlines effective strategies for using AI coding assistants, emphasizing a structured approach to planning, context, and iterative development. The author shares insights from personal experience and community practices, highlighting the importance of detailed specifications and choosing the right models.
This article discusses the shift from valuing high-output engineers to recognizing the importance of those who focus on code quality and structure. With the rise of coding assistants, effective code management is becoming more challenging, leading to a demand for engineers who can curate and organize code thoughtfully. The author predicts that the future will celebrate these meticulous 0.1x engineers.
Anthropic has released Opus 4.5, improving conversation continuity in its Claude models by summarizing earlier dialogue instead of abruptly ending chats. The new model also achieves an 80.9% accuracy score, surpassing OpenAI's GPT-5.1 in coding tasks, though it still trails in visual reasoning.
This article examines how well AI models Claude Code and OpenAI Codex can identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. It reveals that while these models excel in simpler cases, they struggle with more complex authorization logic, leading to a high rate of false positives.
This article outlines a method for minimizing errors in coding through defensive epistemology. It emphasizes the importance of making explicit predictions before actions and learning from failures to refine one's understanding of reality versus models. The approach is designed to prevent compounding mistakes and improve decision-making in programming.
Cursor has acquired Graphite, a startup focused on AI-driven code review and debugging. The deal, valued significantly above Graphite's last $290 million valuation, aims to integrate Graphite's unique "stacked pull request" feature with Cursor's existing AI tools, improving the efficiency of code development and review.
This article outlines principles and methods for optimizing code performance, primarily using C++ examples. It emphasizes the importance of considering efficiency during development to avoid performance issues later. The authors also provide practical advice for estimating performance impacts while writing code.
The article details how an AI coding agent inadvertently led to an infinite recursion bug in a web application. A crucial comment was deleted during a UI refactor, resulting in a missing safety constraint that triggered browsers to freeze and crash. The author emphasizes the importance of tests over comments in an AI-augmented coding environment.
Pencil integrates design tools directly into your IDE, allowing engineers to create visual designs and generate code seamlessly. This tool aims to enhance productivity by eliminating the need to switch between different applications.
This article outlines a set of skills designed for AI coding agents, focusing on enhancing React, Next.js, and React Native applications. It includes performance optimization guidelines, UI code reviews, and deployment capabilities with Vercel. Each skill comes with specific rules and use cases for effective development.
Google Cloud has formed a multi-year partnership with Replit to enhance AI coding capabilities for enterprise users. Replit will integrate more Google models and expand its cloud services, aiming to redefine how teams collaborate on coding projects. Both companies see significant growth potential amid rising demand for AI-driven coding tools.
The article outlines the author's experiences with AI tools, particularly LLMs, in various aspects of software engineering. It covers coding, research, summarization, and writing, highlighting both the benefits and limitations of these technologies. The author shares personal insights and practical examples of how AI has changed their workflow.