A recent survey reveals that while 96% of engineers don't fully trust AI-generated code, only 48% consistently verify it before submission. This gap raises concerns about code quality and accountability in software development. The article discusses survey findings on AI usage, trust levels, and the importance of oversight.
GitHub is responding to the influx of low-quality AI-generated pull requests that burden maintainers. Product manager Camilla Moraes initiated a community discussion on potential solutions, including options to disable pull requests or improve review processes to address the challenges posed by AI contributions.
This article discusses how AI tools necessitate stricter coding practices to produce high-quality software. It emphasizes the importance of 100% code coverage, thoughtful file organization, and automated best practices to support AI in writing effective code.
This article explains the `satisfies` keyword in TypeScript, highlighting its role in type inference and safety. It contrasts `satisfies` with explicit type annotations, showing how it allows TypeScript to infer more specific types while ensuring assignments are valid.
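A minimal sketch of the contrast (the `palette` example here is invented, not taken from the article):

```typescript
// `satisfies` checks the value against a type without widening the
// inferred type; an explicit annotation would widen it.
type RGB = [number, number, number];

const palette = {
  red: [255, 0, 0],
  green: "#00ff00",
} satisfies Record<string, RGB | string>;

// Inference is preserved: TypeScript still knows `green` is a string,
// so string methods work without narrowing.
const upper = palette.green.toUpperCase();

// With `const palette: Record<string, RGB | string> = ...` instead,
// `palette.green` would have type `RGB | string`, and the call above
// would not compile.
```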
The author critiques the reliance on AI tools like LLMs for code generation, arguing that it undermines the essential thinking and problem-solving skills of developers. They compare generated code to fast fashion—appealing but often flawed—emphasizing the importance of accountability and understanding in software development.
This article analyzes the quality, security, and maintainability of code generated by leading AI models like GPT-5.2 High and Gemini 3 Pro using SonarQube. It presents findings on functional performance, complexity, concurrency issues, and security vulnerabilities across various models.
This article analyzes a report comparing AI-generated and human-written code, focusing on the higher incidence of issues in AI pull requests. Key findings show that AI code often has more critical errors, readability problems, and security vulnerabilities, highlighting the need for better review processes.
This article presents a leaderboard ranking various LLMs based on their performance in code quality, security, and maintainability. The analysis evaluates 4,444 Java programming assignments, providing metrics like pass rates and issue density for each model. Key insights include the top-performing models and their specific strengths.
The article discusses how 37signals achieves cleaner code through a focused engineering strategy built on small teams, strict scope management, and hiring top talent. Most companies struggle to adopt this approach because they assume more features mean more revenue. Ultimately, 37signals' success lies in its commitment to quality over quantity.
This article discusses the high noise levels in AI code review tools, which often generate more trivial comments than actionable insights. It introduces a framework to measure the signal-to-noise ratio, emphasizing the importance of focusing on critical issues to improve code quality and team efficiency.
This article emphasizes that AI-generated code often lacks the quality needed for sustainable software development. It argues for prioritizing code quality and architecture over speed and flashiness, highlighting that true software success involves ongoing maintenance and understanding of the codebase.
Linus Torvalds criticized a proposed helper function for being ambiguous and adding cognitive load instead of clarity. The article discusses how clearer naming conventions can improve code readability and reduce confusion. It suggests that while abstractions can be beneficial, they must be explicit to be effective.
Ugly code can hold hidden value, particularly when it reflects deep knowledge of a problem domain. Often, it contains insights that aren't documented elsewhere and can be more helpful than starting from scratch. Working with legacy code may be challenging, but it can reveal lessons that aren't immediately clear.
This article discusses how to enhance the effectiveness of large language models (LLMs) in software engineering by focusing on guidance and oversight. It emphasizes the importance of creating a prompt library to improve LLM outputs and the necessity of oversight to ensure quality and alignment in code decisions.
The author used an AI tool to repeatedly modify a codebase, aiming to enhance its quality through an automated process. While the AI added significant lines of code and tests, many of the changes were unnecessary or unmaintainable, leaving the core functionality largely intact but cluttered. The exercise highlighted the pitfalls of prioritizing quantity over genuine quality improvements.
This article discusses the issues caused by frozen test fixtures in large codebases, where changes can lead to false test failures. It emphasizes writing focused tests to prevent fixture dependency problems and explores effective strategies for maintaining both fixtures and factories.
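One common remedy is a small factory with per-test overrides, so each test declares only the data it depends on. A sketch with invented names (`User`, `buildUser`), not the article's own code:

```typescript
interface User {
  id: number;
  name: string;
  active: boolean;
}

let nextId = 1;

// Factory: sensible defaults, per-test overrides, fresh object every call
// (unlike a shared frozen fixture reused across many tests).
function buildUser(overrides: Partial<User> = {}): User {
  return { id: nextId++, name: "Test User", active: true, ...overrides };
}

// A focused test states only what it cares about; changes to unrelated
// fixture fields can no longer break it.
const inactive = buildUser({ active: false });
```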
This article outlines key lessons learned from a long career at Google, focusing on the importance of user-centric problem solving, collaboration, and clarity in engineering. It emphasizes that technical skills alone aren't enough; navigating people and processes is crucial for success.
The article argues that a focus on rapid feature delivery in tech has led to a decline in code quality and craftsmanship. It explores reasons behind this shift, such as perverse incentives, backlog pressure, and lower stakes in software delivery. The author expresses concern that conversations about craftsmanship have become rare in the industry.
The article introduces Levo's Principle, which states that an object's behavior should remain unchanged after construction. It discusses the pitfalls of allowing post-construction configuration and provides examples of how to maintain clarity and avoid bugs by adhering to this principle.
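The idea can be sketched with a hypothetical `Retrier` (not the article's example): configuration is finalized in the constructor, so an instance's behavior cannot drift afterwards.

```typescript
// Behavior fixed at construction: the field is readonly, there are no setters.
class Retrier {
  constructor(private readonly maxAttempts: number) {}

  attemptsAllowed(): number {
    return this.maxAttempts;
  }
}

// The style the principle warns against: any caller can silently
// reconfigure the object long after construction.
class MutableRetrier {
  maxAttempts = 3;
  setMaxAttempts(n: number): void {
    this.maxAttempts = n;
  }
}

const retrier = new Retrier(5);
```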
The article discusses how the rise of AI tools, particularly LLMs, has affected software engineering and data work. While some engineers are concerned about the declining quality of code, data professionals find value in these tools for generating quick, low-maintenance solutions. It emphasizes the need for careful evaluation of the new data generated by these systems.
The article discusses the emergence and persistence of disposable code in software development, highlighting its advantages and challenges. It emphasizes how disposable code can lead to faster iteration and innovation but also raises concerns about code quality and maintainability. The piece advocates for a balanced approach to incorporating disposable code into programming practices.
The article discusses the concept of "useless use of callbacks": wrapping a call in a callback that adds nothing over a direct function reference, particularly in JavaScript. It highlights how this practice can lead to more complex and less maintainable code, advocating for more straightforward alternatives.
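A small illustration of the pattern, with invented values:

```typescript
const values = ["1", "2", "3"];

// Useless callback: the arrow only forwards its argument.
const wrapped = values.map((v) => Number(v));

// Direct reference does the same thing. (This is safe with `Number`,
// which ignores extra arguments; `parseInt` would misread map's index
// argument as a radix, so there the wrapper is genuinely needed.)
const direct = values.map(Number);
```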
Managing dependencies in a React application requires careful attention to both direct and transitive dependencies to avoid unnecessary complexity and bloating. Techniques such as reading dependency source code, utilizing tools like Renovate and Knip, and analyzing package sizes are essential for maintaining a clean and efficient project. Ultimately, understanding the ecosystem and making informed choices can lead to better dependency management and reduced technical debt.
Web Codegen Scorer is a specialized tool designed to assess the quality of web code generated by Large Language Models (LLMs), enabling developers to make informed decisions about AI-generated code. It allows for the configuration of evaluations across different models and frameworks, offers built-in checks for various code quality metrics, and provides a user-friendly report viewer for analysis. The tool aims to improve the consistency and repeatability of measuring code generation performance compared to traditional trial-and-error methods.
AI-generated tests can create the illusion of thorough testing by merely reflecting existing code without validating its correctness, leading to a dangerous cycle of replacing critical thinking with automation. While these tools can be useful for documenting legacy code, they should not replace the intent behind testing, which is to ensure that code meets its intended functionality. Engineers must remain engaged in the testing process to maintain accountability and ensure quality.
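The failure mode can be shown with a hypothetical `applyDiscount` (not from the article): a test that mirrors the implementation passes regardless of whether the formula is right, while an intent test pins the expected behavior.

```typescript
function applyDiscount(price: number, percent: number): number {
  return price * (1 - percent / 100);
}

// Mirror test: restates the code, so it would still pass even if the
// formula itself were wrong.
const mirrorPasses = applyDiscount(200, 50) === 200 * (1 - 50 / 100);

// Intent test: asserts the externally meaningful expectation.
const intentPasses = applyDiscount(200, 50) === 100;
```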
Vibe coding, a practice where developers rely on intuition and personal feelings rather than structured methods, poses significant risks to code quality and project outcomes. Relying on this approach can lead to poor decision-making and inefficiencies, ultimately affecting the success of software development projects. Embracing more systematic coding practices is essential for delivering reliable and maintainable software.
Figma Sites, currently in beta, is criticized for generating overly complex and semantically flawed code, leading to multiple accessibility issues. The article highlights specific problems, such as non-standard navigation structures, redundant elements, and a lack of proper interactive components, questioning the tool's effectiveness at producing usable web content.
pyscn is a tool designed for structural analysis of codebases, enabling developers to maintain code quality through features like dead code detection, clone detection, and cyclomatic complexity analysis. It can be run without installation using commands like `uvx pyscn analyze .` and integrates with AI coding assistants via the Model Context Protocol (MCP). The tool supports various output formats, including JSON and HTML reports, and offers configuration options for custom analyses.
The article discusses the challenges and pitfalls of "vibe coding," a term that describes the practice of relying on intuition and feelings rather than structured programming principles and methodologies. It emphasizes the potential risks associated with this approach, including code quality and maintainability issues, and advocates for a more disciplined and methodical coding practice.
The article discusses the importance of maintaining a technical debt backlog in software development, emphasizing that it helps teams prioritize, track, and address technical debt effectively. By adopting a structured approach to managing technical debt, organizations can improve code quality and enhance overall project sustainability.
The article discusses how integrating JSDoc into the development workflow significantly improved code documentation and comprehension. By leveraging JSDoc, developers can generate useful documentation automatically, leading to more maintainable and understandable codebases. This practice also enhances collaboration among team members by providing clear insights into the code's functionality.
The article discusses the concept of "comprehension debt" in relation to code generated by large language models (LLMs). It highlights the risks associated with relying heavily on LLM-generated code, as developers may struggle to understand and maintain it, leading to potential long-term issues in software quality and sustainability. The author emphasizes the importance of fostering comprehension to mitigate these risks.
John Arundel shares ten essential commandments for writing effective Go code, emphasizing practices such as creating reusable packages, thorough testing, and prioritizing code readability. The guidelines also stress error handling, safe programming habits, and the importance of maintaining a clean environment while coding. By adhering to these principles, developers can enhance their code quality and overall efficiency.
Code smells in TypeScript, such as inadequate state management and untyped promise responses, can lead to maintainability and readability issues in a project. Utilizing AI code review tools can help identify and resolve these issues early, enhancing code quality and preventing technical debt. By addressing these code smells, development teams can focus on building features more efficiently.
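The untyped-response smell usually comes down to `any` leaking out of a boundary such as `JSON.parse`. A minimal sketch with an invented `Invoice` shape (a runtime validator such as zod would be stronger than the cast shown here):

```typescript
interface Invoice {
  id: string;
  total: number;
}

// Smell: JSON.parse returns `any`, so a downstream typo such as
// `parseInvoiceLoose(json).totl` compiles silently.
function parseInvoiceLoose(json: string) {
  return JSON.parse(json);
}

// Fix: state the expected shape at the boundary so the rest of the
// codebase is checked against it.
function parseInvoice(json: string): Invoice {
  return JSON.parse(json) as Invoice;
}

const invoice = parseInvoice('{"id":"A-1","total":42}');
```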
A case study on OpenVPN2 highlights the effectiveness of using CodeQL to manage and reduce a staggering 2,500 compiler warnings. The approach demonstrates how automated code analysis tools can enhance code quality and maintainability in complex projects, ultimately leading to more robust software development practices.
The article discusses common pitfalls and mistakes to avoid when writing JavaScript code, emphasizing the importance of best practices to improve code quality and maintainability. It highlights issues like variable scope, asynchronous programming, and the use of outdated functions that can lead to bugs and performance problems. Following these guidelines can help developers create more efficient and reliable applications.
Armin Ronacher reflects on the challenges of programming with inadequate tools and documentation, emphasizing the potential of programming agents to objectively measure code quality and developer experience. He discusses the importance of good test coverage, error reporting, ecosystem stability, and user-friendly tools, arguing that these factors impact both agents and human developers. By utilizing agents, teams can gain valuable insights into their codebases and improve overall project health.
The webpage provides an introduction to SonarSource's cloud offerings, detailing how users can get started with their tools for code quality and security analysis. It outlines the benefits of using SonarCloud and encourages developers to integrate their services into their workflows for improved software development processes.
The article discusses the importance of leveraging a type system in programming to enhance code quality and maintainability. It emphasizes how a well-structured type system can prevent errors and improve developer efficiency by providing clear documentation and better tooling support. Practical examples and benefits of using a type system are highlighted to encourage adoption among programmers.
The article discusses the implications of large language models (LLMs) on software development, highlighting the varying effectiveness of their use and the potential risks associated with their integration. It raises concerns about the possible future of programming jobs, the inevitable economic bubble surrounding AI technology, and the inherent unpredictability of LLM outputs. Additionally, it emphasizes the importance of understanding workflows and experimenting with LLMs while being cautious of their limitations and security vulnerabilities.
Symbiotic Security v1 integrates AI-driven code security directly into developers' IDEs, providing real-time detection, remediation, and educational insights for coding vulnerabilities. By automatically suggesting secure code replacements and facilitating interactive learning, it enhances developer productivity and ensures clean code from the outset. Teams have successfully mitigated thousands of vulnerabilities before they reach production, streamlining the development process.
High cognitive load in software development can hinder understanding and efficiency. To mitigate this, developers should focus on reducing extraneous cognitive load by simplifying code structures, preferring deep modules with simple interfaces over numerous shallow ones, and adjusting designs to minimize confusion and maximize clarity. Emphasizing the importance of cognitive load can lead to better software practices and more maintainable codebases.
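The deep-module idea can be sketched as a single narrow interface hiding several steps; `slugify` is a generic illustration, not an example from the article:

```typescript
// Deep module: one small interface, substantial logic behind it. The
// caller never composes the trim/normalize/replace steps individually.
function slugify(title: string): string {
  return title
    .trim()
    .toLowerCase()
    .normalize("NFKD")
    .replace(/[\u0300-\u036f]/g, "") // strip diacritics left by NFKD
    .replace(/[^a-z0-9]+/g, "-") // collapse everything else to "-"
    .replace(/^-+|-+$/g, ""); // trim leading/trailing "-"
}

const slug = slugify("  Héllo, Wörld!  ");
```

Splitting this into shallow helpers (`stripDiacritics`, `collapseSeparators`, …) that every caller must chain correctly is exactly the extraneous load the article argues against.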
The article discusses the concept of "stringly typed" programming, which refers to the practice of using strings for multiple types of data, leading to confusion and errors. It advocates for more robust type systems that enhance code clarity and reliability by avoiding string-based representations. The author highlights the importance of adopting better data structures for cleaner and more maintainable code.
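A minimal contrast, using an invented `Status` example:

```typescript
// Stringly typed: any string compiles, including a typo like "actve".
function isOpenLoose(status: string): boolean {
  return status === "open";
}

// Typed alternative: invalid states are unrepresentable.
type Status = "open" | "closed" | "archived";

function isOpen(status: Status): boolean {
  return status === "open";
}

// isOpen("actve") is now a compile-time error, whereas
// isOpenLoose("actve") silently returns false.
const openResult = isOpen("open");
```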
The article presents an analysis of the current state of AI in relation to code quality, highlighting key metrics and trends that impact software development practices. It emphasizes the importance of integrating AI tools to enhance code accuracy and efficiency, ultimately aiming for improved software outcomes.
The article emphasizes the importance of avoiding abstract code in programming, advocating for clarity and simplicity in code design. It suggests that clear, straightforward code enhances maintainability and collaboration among developers. The author argues that overly abstract code can lead to confusion and hinder the understanding of the underlying logic.
The rise of AI coding agents is transforming software development, leading to a shift where engineers spend more time reviewing AI-generated code than writing it. Predictive CI is proposed as a solution to enhance code quality by proactively generating tests and identifying potential issues, thus evolving traditional CI/CD practices to keep pace with AI advancements. Companies that adopt predictive CI early will gain a competitive edge in building reliable software.