AI-generated code poses a significant risk to the software supply chain because it frequently references dependencies that do not exist; an attacker can preemptively register these fabricated names on a public package index and exploit them in dependency confusion attacks. A recent study found that a substantial share of code samples generated by large language models contained such "hallucinated" dependencies, raising the odds that a developer copying the output will unknowingly install a malicious package published under one of those names. This vulnerability underscores the need to verify every dependency an AI model suggests before installing it.
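One minimal safeguard is to check each suggested package name against the public index before installation. The sketch below assumes Python dependencies and queries PyPI's public metadata endpoint (https://pypi.org/pypi/&lt;name&gt;/json), where a 404 indicates the name is unregistered; the helper name check_dependencies is illustrative, not taken from the study.

```python
import urllib.error
import urllib.request

# Public PyPI metadata endpoint; a 404 means no package is registered
# under that name -- a red flag for a hallucinated dependency.
PYPI_URL = "https://pypi.org/pypi/{name}/json"

def check_dependencies(names):
    """Return the subset of package names that are not registered on PyPI."""
    missing = []
    for name in names:
        try:
            with urllib.request.urlopen(PYPI_URL.format(name=name), timeout=10):
                pass  # 200 OK: the name exists on the index
        except urllib.error.HTTPError as err:
            if err.code == 404:
                missing.append(name)
            else:
                raise  # surface unexpected server errors instead of guessing
    return missing

if __name__ == "__main__":
    # "requests" exists; the second name is a deliberately implausible example.
    suspects = check_dependencies(["requests", "definitely-not-a-real-pkg-xyz"])
    print("Unregistered (possibly hallucinated) packages:", suspects)
```

Note that this check only catches names that are not registered at all: a hallucinated name an attacker has already claimed will pass it. Pairing the existence check with signals such as download counts and maintainer history, and installing from pinned, hash-verified lockfiles, remains prudent.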