Links
A recent survey reveals that while 96% of engineers don't fully trust AI-generated code, only 48% consistently verify it before submission. This gap raises concerns about code quality and accountability in software development. The article discusses survey findings on AI usage, trust levels, and the importance of oversight.
This article discusses how AI tools necessitate stricter coding practices to produce high-quality software. It emphasizes the importance of 100% code coverage, thoughtful file organization, and automated best practices to support AI in writing effective code.
GitHub is responding to the influx of low-quality AI-generated pull requests that burden maintainers. Product manager Camilla Moraes initiated a community discussion on potential solutions, including options to disable pull requests or improve review processes to address the challenges posed by AI contributions.
The author critiques the reliance on AI tools like LLMs for code generation, arguing that it undermines developers' essential thinking and problem-solving skills. The author compares generated code to fast fashion (appealing but often flawed) and emphasizes the importance of accountability and understanding in software development.
This article analyzes the quality, security, and maintainability of code generated by leading AI models like GPT-5.2 High and Gemini 3 Pro using SonarQube. It presents findings on functional performance, complexity, concurrency issues, and security vulnerabilities across various models.
This article analyzes a report comparing AI-generated and human-written code, focusing on the higher incidence of issues in AI pull requests. Key findings show that AI code often has more critical errors, readability problems, and security vulnerabilities, highlighting the need for better review processes.
This article emphasizes that AI-generated code often lacks the quality needed for sustainable software development. It argues for prioritizing code quality and architecture over speed and flashiness, highlighting that true software success involves ongoing maintenance and understanding of the codebase.
The author used an AI tool to repeatedly modify a codebase, aiming to enhance its quality through an automated process. While the AI added a substantial volume of code and tests, many of the changes were unnecessary or unmaintainable, leaving the core functionality largely intact but cluttered. The exercise highlighted the pitfalls of prioritizing quantity over genuine quality improvements.
The article presents an analysis of the current state of AI-assisted development as it relates to code quality, highlighting key metrics and trends that shape software development practices. It emphasizes integrating AI tools thoughtfully to improve code accuracy and efficiency, with the goal of better software outcomes.