7 links
tagged with all of: llm + security
Click any tag below to further narrow down your results
Links
The article discusses the potential security risks associated with using large language models (LLMs) in coding practices. It highlights how these models can inadvertently introduce vulnerabilities and the implications for developers and organizations. The need for robust security measures when integrating LLMs into coding workflows is emphasized.
Supabase's Model Context Protocol (MCP) poses a security risk as it can be exploited to leak sensitive SQL database information through user-submitted messages that are processed as commands. The integration allows developers to unintentionally execute harmful SQL queries due to elevated access privileges, emphasizing the need for better safeguards against prompt injection attacks.
Security backlogs often become overwhelming due to inconsistent severity labeling from various tools, leading to chaos in issue prioritization. Large language models (LLMs) can help by analyzing and scoring issues based on detailed context rather than relying solely on scanner outputs, providing a more informed approach to triage and prioritization.
Bloomberg's research reveals that the implementation of Retrieval-Augmented Generation (RAG) systems can unexpectedly increase the likelihood of large language models (LLMs) providing unsafe responses to harmful queries. The study highlights the need for enterprises to rethink their safety architectures and develop domain-specific guardrails to mitigate these risks.
Agentic AI systems, particularly those utilizing large language models (LLMs), face significant security vulnerabilities due to their inability to distinguish between instructions and data. The concept of the "Lethal Trifecta" highlights the risks associated with sensitive data access, untrusted content, and external communication, emphasizing the need for strict mitigations to minimize these threats. Developers must adopt careful practices, such as using controlled environments and minimizing data exposure, to enhance security in the deployment of these AI applications.
The article explores the concept of developing C2-less malware using large language models (LLMs) for autonomous decision-making and exploitation. It discusses the implications of such technology, particularly through a malware example called "PromptLock," which utilizes LLMs to generate and execute code without human intervention. The author proposes a proof of concept for self-contained malware capable of exploiting misconfigured services on a target system.
PromptMe is an educational project that highlights security vulnerabilities in large language model (LLM) applications, featuring 10 hands-on challenges based on the OWASP LLM Top 10. Aimed at AI security professionals, it provides a platform to explore risks and mitigation strategies, using Python and the Ollama framework. Users can set up the application to learn about vulnerabilities through CTF-style challenges, with solutions available for beginners.