3 links
tagged with all of: llm + vulnerabilities
Click any tag below to further narrow down your results
Links
The article discusses the potential security risks associated with using large language models (LLMs) in coding practices. It highlights how these models can inadvertently introduce vulnerabilities and the implications for developers and organizations. The need for robust security measures when integrating LLMs into coding workflows is emphasized.
The article provides an in-depth explanation of the Model Context Protocol (MCP), highlighting its role in enhancing the capabilities of large language models (LLMs) through improved context provision. It also conducts a detailed threat model analysis, identifying key security vulnerabilities and potential attack vectors associated with MCP's functionalities, such as sampling and composability.
PromptMe is an educational project that highlights security vulnerabilities in large language model (LLM) applications, featuring 10 hands-on challenges based on the OWASP LLM Top 10. Aimed at AI security professionals, it provides a platform to explore risks and mitigation strategies, using Python and the Ollama framework. Users can set up the application to learn about vulnerabilities through CTF-style challenges, with solutions available for beginners.