AI-generated code poses a significant risk to the software supply chain because it frequently references dependencies that do not exist, names that attackers can register and then exploit in dependency confusion-style attacks. A recent study found that a majority of code samples generated by large language models contained these "hallucinated" dependencies, making it more likely that developers will unknowingly install malicious packages. The vulnerability underscores the need to verify the dependencies in AI-generated code before installing them.
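One practical form that verification can take is simply confirming that each dependency an AI assistant suggests actually exists on the registry before it is installed. The sketch below, which is illustrative rather than taken from any cited study, queries PyPI's public JSON API for each name; the example package list is hypothetical.

```python
"""Minimal sketch: confirm AI-suggested dependency names exist on PyPI
before installing them. The package names below are assumptions used
purely for demonstration."""
import urllib.request
import urllib.error


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI's JSON API knows the package, False on a 404."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (rate limiting, outages) need a human look


# Hypothetical dependency list extracted from AI-generated code;
# the last name is deliberately made up.
suggested = ["requests", "numpy", "flask-restful-auth-token"]
for pkg in suggested:
    status = "found" if package_exists_on_pypi(pkg) else "NOT FOUND - do not install"
    print(f"{pkg}: {status}")
```

Note that an existence check alone is not sufficient: a hallucinated name that an attacker has already registered will pass it, which is exactly the scenario described next.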
The spread of AI-powered code generation tools has fueled "slopsquatting," in which malicious actors register hallucinated package names suggested by AI models and use them to distribute malware. Security experts stress that developers should verify both the name and the contents of a package before installing it, and package registries such as PyPI continue to strengthen their defenses against the practice.
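Going a step beyond the existence check, a simple pre-install heuristic can flag names that do exist but look freshly registered or thinly maintained, two common signals of slopsquatted packages. The sketch below reads a package's PyPI metadata; the age and release-count thresholds are illustrative assumptions, not an established policy.

```python
"""Hedged sketch of a pre-install heuristic: flag PyPI packages that are
unusually new or have very few releases. Thresholds are assumptions chosen
for illustration only."""
import json
import urllib.request
from datetime import datetime, timezone


def pypi_metadata(name: str) -> dict:
    """Fetch the package's metadata from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)


def looks_suspicious(name: str, min_age_days: int = 90, min_releases: int = 3) -> bool:
    """Return True if the package is younger or thinner than the thresholds."""
    meta = pypi_metadata(name)
    releases = meta.get("releases", {})
    upload_times = [
        # fromisoformat() on older Pythons cannot parse a trailing "Z"
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in releases.values()
        for f in files
    ]
    if not upload_times:
        return True  # the name exists but has no uploaded files at all
    age_days = (datetime.now(timezone.utc) - min(upload_times)).days
    return age_days < min_age_days or len(releases) < min_releases


# A long-established package should come back False.
print(looks_suspicious("requests"))
```

Heuristics like this reduce risk but cannot replace reviewing a package's actual contents or pinning known-good versions with hashes.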