Links
Greptile automates code review in GitHub and GitLab, providing context-aware comments on pull requests. Teams can customize coding standards and track rule effectiveness to improve code quality and speed up merges. It supports multiple programming languages and offers self-hosting options.
Aleks Volochnev discusses the complexities of reviewing AI-generated code compared to writing it. He highlights how automation in code creation has increased the burden of verification and understanding, necessitating better tools for code review. The article emphasizes the importance of integrating AI in the review process to maintain quality.
The article discusses how AI tools are changing software development, particularly in code reviews. While AI can speed up coding, it also creates a bottleneck as more code requires review, leading to increased pressure on senior engineers. Developers need to understand AI-generated code better to manage the complexities it introduces.
The article outlines a workflow for effectively reviewing pull requests (PRs) using AI coding assistants. It emphasizes the importance of human involvement in PR reviews, detailing steps to analyze changes, assess impacts, and provide feedback efficiently. The author shares tools and commands to enhance the review process while minimizing time spent.
This article details the development of Bugbot, an AI-driven code review agent that identifies bugs and performance issues in pull requests before they go live. It highlights the systematic approach taken to enhance Bugbot's accuracy, including multiple testing strategies and the introduction of a new resolution rate metric to measure effectiveness.
The article outlines various design issues in LLVM, including insufficient code review capacity, frequent API changes, and challenges with build times and testing. It emphasizes the need for better testing practices and more stable APIs to enhance user experience and contributor engagement.
This article details how to create an AI-powered automated code reviewer for Azure DevOps using Microsoft Foundry and Large Language Models. It covers the setup process, necessary scripts, and how to ensure the review outputs are structured for effective automation.
The article discusses how AI changes the landscape of code reviews, making the reviewer's job more complex. It outlines specific heuristics for assessing pull requests (PRs), focusing on aspects like design, testing, error handling, and the effort put in by the author. The author emphasizes the need for human oversight despite advances in AI review tools.
The article discusses how code review should evolve in the age of large language models (LLMs). It emphasizes aligning human understanding and expectations rather than merely fixing code issues, highlighting the importance of communication and reasoning skills over mechanical coding ability. The author argues that effective reviews should focus on shared system knowledge and high-level concepts.
Metis is an open-source tool developed by Arm to enhance security code reviews using AI. It leverages large language models for semantic understanding, making it effective in identifying vulnerabilities in complex codebases. The tool is extensible and supports multiple programming languages.
The article discusses how code review is becoming a significant bottleneck in software development. While generating code quickly is easier, ensuring its quality and reliability takes more time. It highlights the potential role of AI tools in addressing this challenge.
This article discusses how advancements in AI are shifting engineering roles. Traditional skills that defined senior engineers are now expected from all levels, as AI takes over implementation tasks. The focus is on maintaining context, effective planning, and enhancing code review practices.
This article explains how to use Continuous Claude, a CLI tool that automates the process of creating pull requests and improving code coverage by running Claude Code in a continuous loop. It allows for persistent context across iterations, enabling efficient handling of multi-step projects without losing track of progress.
The article discusses the growing competition in AI code review tools and emphasizes Greptile's unique approach. It highlights three key principles—independence, autonomy, and feedback loops—that shape their vision for the future of code validation.
The article discusses the benefits of using stacked pull requests to improve code review efficiency. Instead of creating large, unmanageable PRs, developers can submit smaller, dependent requests that can be reviewed and merged more easily. This approach helps maintain team velocity and enhances code quality.
The article explores how advancements in AI coding tools will reshape software engineering in 2026. It highlights shifts in infrastructure, testing practices, and the importance of human oversight as LLMs generate code. The author raises questions about the evolving roles of engineers and the implications for project estimates and build vs. buy decisions.
This article examines how traditional code reviews often miss critical bugs that lead to significant production failures, highlighting a $2.1 million loss caused by a simple validation error. It discusses the inefficiencies of the process, the high costs involved, and the increasing role of AI in optimizing code review tasks.
The article discusses how generative AI, especially coding agents, has made collaboration within software teams less efficient. It highlights issues like poorly structured PR descriptions, different types of bugs introduced by AI, and the ambiguity of authorship, which complicates knowledge sharing and code review. The author argues for a cultural shift to improve transparency around LLM usage in team settings.
Sentry's AI Code Review tool has identified over 30,000 bugs in just one month and sped up the code review process by 50%. The updates include clearer comments, actionable AI prompts, and a new feature that automates patch generation.
The author argues against traditional line-by-line code review, advocating for a harness-first approach where specifications and testing take priority. They draw on examples from AI-assisted coding and highlight the importance of architecture and feedback loops over direct code inspection. Caveats are noted for critical systems where code review remains essential.
This article explains how AI is changing the code review process, emphasizing the need for evidence of code functionality rather than just relying on AI-generated outputs. It contrasts solo developers’ fast-paced workflows with team dynamics, where human judgment remains essential for quality and security. The piece outlines best practices for integrating AI into development and review processes.
Datadog developed an LLM-powered tool called BewAIre to review pull requests for malicious activity in real time. The system processes code changes and classifies them, achieving over 99.3% accuracy while minimizing false positives. It addresses the challenges posed by the increasing volume of PRs and the sophistication of attacks.
This article discusses how AI is changing the code review process for both solo developers and teams. It emphasizes the need for evidence of working code, highlights the risks of relying too heavily on AI, and outlines best practices for integrating AI into code reviews while maintaining human oversight.
Codacy introduces a hybrid code review engine that enhances Pull Request feedback by identifying logic gaps, security issues, and code complexity. It automates the review process, letting developers ship code faster and with more confidence.
This article explains how Sentry's AI Code Review system uses production data to identify potential bugs in pull requests. It details the multi-step pipeline that filters code changes, drafts bug hypotheses, and verifies them to provide actionable feedback without overwhelming developers with false positives.
This article discusses the challenges and methods of verifying code generated by AI systems. It highlights the importance of precision in automated code reviews, the need for repo-wide tools, and how real-world deployment has shown positive outcomes in catching bugs and improving code quality.
This article outlines how Qodo developed a benchmark to evaluate AI code review systems. It highlights a new methodology that injects defects into real pull requests to assess both bug detection and code quality, demonstrating superior results compared to other platforms.
The article stresses the importance of software engineers providing code that they have manually and automatically tested before submission. It emphasizes accountability in code reviews and the use of coding agents to assist in proving code functionality. Developers should include evidence of their tests to respect their colleagues' time and efforts.
This tool color-codes code diffs based on the level of human attention they require. By replacing "github.com" with "0github.com" in a pull request URL, users can visualize which changes might need a closer look. It uses AI to analyze the code and generates a heatmap highlighting potential issues.
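As described above, the mechanism is just a hostname swap in the PR URL; a minimal sketch (the repository and PR number below are illustrative, not from the source):

```python
def to_heatmap_url(pr_url: str) -> str:
    """Rewrite a GitHub pull request URL to its 0github.com heatmap view."""
    # Replace only the first occurrence, so path segments are left untouched.
    return pr_url.replace("github.com", "0github.com", 1)

# Hypothetical PR URL for demonstration:
print(to_heatmap_url("https://github.com/octocat/hello-world/pull/42"))
# → https://0github.com/octocat/hello-world/pull/42
```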
This article emphasizes the responsibility of software engineers to deliver code that has been thoroughly tested and proven to work, both manually and automatically. It argues against the trend of relying on AI tools to submit untested code and stresses the importance of accountability in the development process.
This article discusses the importance of code reviews in web development. It highlights how code reviews improve code quality, foster team collaboration, and enhance learning among developers. The piece includes links to related topics and best practices.
This article discusses Unblocked, a code review tool that focuses on significant issues rather than trivial style problems. It uses your team's historical decisions and discussions to provide relevant feedback, ensuring that reviews are efficient and context-aware. Unblocked also offers actionable insights when CI fails and integrates with your existing workflows.
The article discusses how AI is transforming software development by generating code quickly but often producing low-quality output known as "AI slop." To address this issue, AI-powered code reviewers are emerging to ensure quality and security, changing the developer's role from coder to overseer. This shift highlights the need for skilled developers to manage AI tools effectively.
Seer is an AI debugging tool that helps developers identify and fix bugs during local development, code review, and production. It leverages Sentry's telemetry to provide context and automate root cause analysis, making it easier to catch issues early and streamline the debugging process. The service now offers unlimited use for a flat monthly fee.
This article discusses the concept of comprehension debt, which arises when teams rely on AI to generate code without fully understanding it. As AI produces large volumes of code quickly, engineers struggle to debug and maintain it later, leading to significant time losses. The piece emphasizes the importance of planning and collaboration with AI to mitigate these issues.
Sketch.dev experienced multiple outages caused by LLM-generated code that introduced a bug during a refactoring process, leading to infinite loops in error handling. Despite initial stability, the issues persisted until the offending code was reverted and clipboard support was added to improve code management. The incident highlights the need for better tooling to catch subtle errors during code reviews, especially when using LLMs for coding tasks.
Git notes are an underutilized feature in Git that allow users to attach metadata to commits without altering the original objects. While they can be useful for various purposes like tracking reviews and adding important information, their complex usability and lack of visibility have led to limited adoption. Despite their potential, Git notes remain largely overlooked in the developer community.
Gemini Code Assist enhances the code review process in GitHub by providing instant summaries, identifying bugs, and suggesting improvements, which allows developers to focus on more complex issues. With the integration of the advanced Gemini 2.5 model, feedback is more accurate and actionable, leading to higher code quality and increased developer satisfaction, as evidenced by early adopters like Delivery Hero.
Saša Jurić's talk at the Goatmire Elixir Conf emphasized the importance of effective code reviews and manageable pull requests (PRs). He advocated for returning complex PRs to authors for clarification, promoting smaller, story-driven commits that enhance understanding and collaboration among developers. Adopting these practices can significantly improve code quality and the review process.
Sentry integrates with pull requests to identify and resolve potential issues in code before deployment, leveraging error and performance data. It provides instant feedback, highlights impactful errors, and even generates unit tests to ensure robust code quality. This tool aims to streamline the development process by minimizing bugs and enhancing productivity.
The article discusses ways to improve the code review process, emphasizing the importance of clear communication, constructive feedback, and leveraging collaborative tools. It highlights common pitfalls in code reviews and suggests strategies for fostering a more productive and inclusive review environment. By implementing these practices, teams can enhance code quality and developer satisfaction.
Code reviews are essential for maintaining high-quality software and fostering a collaborative team environment. They help identify issues early, improve code quality, and enhance knowledge sharing among team members. A structured approach to code reviews can significantly benefit both individual developers and the overall project.
Claude Code is a GitHub action that enhances PRs and issues by intelligently responding to context and executing code changes. It supports multiple authentication methods and includes features like code review, implementation of fixes, and seamless integration with GitHub workflows. The setup process is streamlined for users, providing various automation patterns and a comprehensive guide for migration and usage.
SecureVibes is an AI-powered security system designed to detect vulnerabilities in codebases through a collaborative multi-agent architecture. Utilizing five specialized agents, it provides thorough security assessments, threat modeling, code reviews, and dynamic testing across multiple programming languages while offering customizable reporting options.
Sentry has launched a beta version of its AI-powered code review tool aimed at reducing production errors. This new feature leverages machine learning to assist developers in identifying and addressing issues within their code before deployment, enhancing overall software quality.
Amazon Q Developer introduces an interactive code review experience in GitHub that enhances developer productivity by providing inline answers and suggestions directly within pull requests. This feature streamlines the review process by offering concise summaries and reducing the time spent searching for context, ultimately enabling faster code merges and improved collaboration among teams.
Effective code review is essential for maintaining code quality and understanding long-term implications, especially as AI-generated code increases the volume and complexity of commits. Developers must adapt to a more senior-level mindset early in their careers due to the rapid output of AI tools, which can complicate traditional review processes. While AI can assist in code review by identifying patterns and style issues, it cannot replace the nuanced judgment of human reviewers, making collaboration between AI and developers crucial for maintaining code integrity.
The author discusses the concept of compounding engineering, where AI systems learn from past code reviews and bugs to improve future development processes. By utilizing AI like Claude Code, developers can create self-improving systems that enhance efficiency and reduce repetitive work, ultimately transforming how they approach coding and debugging.
Jules has launched significant updates including PR comments in GitHub, a stacked diff viewer, and native file upload, enhancing code review processes and project initialization for developers. These features aim to streamline feedback loops and simplify setup for new projects, positioning Jules as a proactive collaborator in the development workflow. The updates are part of a broader strategy to improve reliability and expand capabilities in upcoming launches.
Vibecoding significantly accelerated the development of a web crawler capable of crawling a billion pages in 24 hours, though it presented challenges in managing bugs and code quality. The author highlights the importance of balancing AI assistance with manual code review to mitigate risks, especially in high-stakes environments. Key insights include the exploration of various designs and the impact of AI on coding efficiency and experimental processes.
Atlassian has developed an ML-based comment ranker to enhance the quality of code review comments generated by LLMs, resulting in a 30% reduction in pull request cycle time. The model leverages proprietary data to filter and select useful comments, significantly improving user feedback and maintaining high code resolution rates. With ongoing adaptations and retraining, the comment ranker demonstrates robust performance across diverse user bases and code patterns.
The article discusses how to implement dynamic required reviewers in Azure DevOps pull requests, enabling teams to tailor their review processes based on specific criteria. This feature enhances collaboration and ensures that the right stakeholders are involved in code reviews, improving overall code quality and team efficiency.
The article explores the essential components of a pull request generator, detailing its significance in streamlining the code review process and enhancing collaboration among developers. It emphasizes the importance of automation and best practices in creating effective pull requests to improve software development workflows.
Pull request #6429 adds production kernels and a micro-benchmark for a mixture-of-experts MLP in the Triton programming language. The discussion also surfaces GitHub's restrictions on applying review suggestions, such as suggestions on closed pull requests or on deleted lines, illustrating the friction of managing code suggestions during review.
The article discusses two straightforward principles that can significantly enhance the effectiveness of code reviews. By focusing on clarity and constructive feedback, teams can improve their code quality and collaboration during the review process.
The article discusses the integration of Claude, an AI system developed by Anthropic, to automate security reviews in software development. By leveraging Claude's capabilities, teams can enhance their security processes, reduce manual effort, and improve overall code quality. This innovation aims to streamline security practices in the tech industry.
Non-programming leaders starting to contribute to code with LLMs can increase iteration speed and introduce diverse perspectives, but this also risks compromising the implicit architecture of the codebase. As more non-engineers make changes, maintaining design intent and code maintainability becomes a challenge, requiring developers to adapt their roles to focus on architectural oversight. Despite these risks, democratizing coding could lead to better solutions as more perspectives are included in the development process.