3 links tagged with all of: security + llms
Links
Implementing guardrails around containerized large language models (LLMs) on Kubernetes is crucial for security and compliance. Key measures include setting resource limits, isolating workloads in dedicated namespaces, and enforcing access controls to mitigate the risks of running LLMs in production. Properly configured guardrails let organizations use LLMs while maintaining operational integrity.
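As a rough sketch of what such guardrails can look like in practice, the snippet below uses the official Kubernetes Python client to create an isolated namespace and a pod with hard resource limits. The namespace name, container image, and resource figures are illustrative assumptions, not values taken from the article.

```python
from kubernetes import client, config

# Assumes a reachable cluster; use load_incluster_config() when
# running inside the cluster itself.
config.load_kube_config()
core = client.CoreV1Api()

# Isolate the LLM workload in its own namespace ("llm-serving" is
# a placeholder name).
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="llm-serving"))
)

# Pod with hard resource limits so a runaway model cannot starve the node.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="llm-server", namespace="llm-serving"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="llm",
                image="registry.example.com/llm-server:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "2", "memory": "8Gi"},   # illustrative figures
                    limits={"cpu": "4", "memory": "16Gi"},
                ),
            )
        ]
    ),
)
core.create_namespaced_pod(namespace="llm-serving", body=pod)
```

Access controls, such as RBAC roles and network policies scoped to that namespace, would be layered on top of this baseline.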
Current approaches to securing large language models (LLMs) against malicious inputs remain inadequate, exposing significant vulnerabilities in both their design and their deployment. The article reviews these ongoing challenges and argues for improved strategies to mitigate the risks posed by harmful prompts.
The article examines the security vulnerabilities of local large language models (LLMs), particularly gpt-oss-20b, which attackers can trick far more easily than larger frontier models. It details two attacks: one plants hidden backdoors disguised as harmless features, and another executes malicious code during code generation by exploiting cognitive overload. The research underscores the significant risks of using local LLMs in coding environments.