The article discusses the security vulnerabilities of local large language models (LLMs), particularly gpt-oss-20b, which are more easily tricked by attackers compared to larger frontier models. It details two types of attacks: one that plants hidden backdoors disguised as harmless features, and another that executes malicious code during the coding process by exploiting cognitive overload. The research highlights the significant risks of using local LLMs in coding environments.