This article outlines key security vulnerabilities identified by NVIDIA's AI Red Team in large language model (LLM) applications. It highlights risks such as remote code execution from LLM-generated code, insecure access controls in retrieval-augmented generation (RAG), and data exfiltration through active content rendering, and it offers practical mitigation strategies for each issue.
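To make the remote-code-execution risk concrete: passing LLM output directly to `exec` or `eval` lets a prompt-injected model run arbitrary code. A common hardening step is to validate the generated code before execution. The sketch below is a hypothetical, minimal allowlist check (not NVIDIA's method) that accepts only pure arithmetic expressions by inspecting the parsed AST; the `is_safe_expression` name and node allowlist are illustrative assumptions.

```python
import ast

# Hypothetical allowlist: only AST node types for literal arithmetic.
# Anything else (names, calls, imports, attribute access) is rejected.
SAFE_NODES = (
    ast.Module, ast.Expr, ast.BinOp, ast.UnaryOp, ast.Constant,
    ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub,
)

def is_safe_expression(source: str) -> bool:
    """Return True only if the code parses to purely arithmetic nodes."""
    try:
        tree = ast.parse(source)
    except SyntaxError:
        return False
    return all(isinstance(node, SAFE_NODES) for node in ast.walk(tree))

# Arithmetic passes; anything touching names or calls fails.
print(is_safe_expression("2 + 3 * 4"))         # True
print(is_safe_expression("__import__('os')"))  # False
```

Real deployments typically go further, running even validated code in a sandboxed interpreter or isolated container rather than the host process.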