10 links tagged with all of: code-review + software-engineering
Links
The article discusses how AI tools are changing software development, particularly in code reviews. While AI can speed up coding, it also creates a bottleneck as more code requires review, leading to increased pressure on senior engineers. Developers need to understand AI-generated code better to manage the complexities it introduces.
The article discusses how code review should evolve in the age of large language models (LLMs). It emphasizes aligning human understanding and expectations rather than merely fixing code issues, highlighting the importance of communication and reasoning skills over mechanical coding ability. The author argues that effective reviews should focus on shared system knowledge and high-level concepts.
The article discusses how AI changes the landscape of code reviews, making the reviewer's job more complex. It outlines specific heuristics for assessing pull requests (PRs), focusing on aspects like design, testing, error handling, and the effort put in by the author. The author emphasizes the need for human oversight despite advances in AI review tools.
This article examines how traditional code reviews often miss critical bugs that lead to significant production failures, highlighting a $2.1 million loss caused by a simple validation error. It discusses the inefficiencies of the process, the high costs involved, and the increasing role of AI in optimizing code review tasks.
The article explores how advancements in AI coding tools will reshape software engineering in 2026. It highlights shifts in infrastructure, testing practices, and the importance of human oversight as LLMs generate code. The author raises questions about the evolving roles of engineers and the implications for project estimates and build vs. buy decisions.
The author argues against traditional line-by-line code review, advocating for a harness-first approach where specifications and testing take priority. They draw on examples from AI-assisted coding and highlight the importance of architecture and feedback loops over direct code inspection. Caveats are noted for critical systems where code review remains essential.
The article discusses how generative AI, especially coding agents, has made collaboration within software teams less efficient. It highlights issues like poorly structured PR descriptions, different types of bugs introduced by AI, and the ambiguity of authorship, which complicates knowledge sharing and code review. The author argues for a cultural shift to improve transparency around LLM usage in team settings.
This article outlines how Qodo developed a benchmark to evaluate AI code review systems. It highlights a new methodology that injects defects into real pull requests to assess both bug detection and code quality, demonstrating superior results compared to other platforms.
The article stresses that software engineers should submit code they have tested, both manually and automatically, before requesting review. It emphasizes accountability in code reviews and the use of coding agents to help demonstrate that code works as intended. Developers should include evidence of their tests to respect their colleagues' time and effort.
This article discusses the concept of comprehension debt, which arises when teams rely on AI to generate code without fully understanding it. As AI produces large volumes of code quickly, engineers struggle to debug and maintain it later, leading to significant time losses. The piece emphasizes the importance of planning and collaboration with AI to mitigate these issues.