Researchers have developed AURA, a tool that injects fake data into knowledge graphs so that exfiltrated proprietary data is useless to attackers while the genuine data remains usable by authorized users. The method is designed to safeguard sensitive information in AI systems from theft and misuse.
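The summary does not describe AURA's internals, but the general idea of decoy injection can be sketched with a toy scheme: genuine triples carry a keyed tag that only authorized users can verify, while injected decoys do not. Everything below (the secret key, the `tag`/`build_protected_graph`/`authorized_view` helpers, the decoy ratio) is a hypothetical illustration under that assumption, not AURA's actual algorithm.

```python
# Toy decoy-injection sketch for a knowledge graph (illustrative only).
# Genuine triples get an HMAC computed with a secret held by authorized
# users; decoys get random tags, so an attacker who steals the graph
# cannot tell real facts from fakes.
import hashlib
import hmac
import random

SECRET_KEY = b"held-only-by-authorized-users"  # hypothetical key management


def tag(triple: tuple[str, str, str]) -> str:
    """Keyed MAC over a triple; only genuine triples receive a valid tag."""
    msg = "|".join(triple).encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()


def build_protected_graph(real_triples, decoys_per_real=3):
    """Mix genuine triples (valid tags) with decoys (random tags)."""
    graph = []
    for t in real_triples:
        graph.append((t, tag(t)))
        for _ in range(decoys_per_real):
            fake = (t[0], t[1], f"decoy-{random.randrange(10**6)}")
            graph.append((fake, hashlib.sha256(random.randbytes(16)).hexdigest()))
    random.shuffle(graph)
    return graph


def authorized_view(graph):
    """Authorized users recompute the MAC and keep only matching triples."""
    return [t for t, mac in graph if hmac.compare_digest(mac, tag(t))]


if __name__ == "__main__":
    real = [
        ("acme:compound-17", "hasMeltingPoint", "412K"),
        ("acme:compound-17", "synthesizedFrom", "acme:precursor-9"),
    ]
    protected = build_protected_graph(real)
    print(f"{len(protected)} triples stored, {len(authorized_view(protected))} genuine")
```

In this sketch the stored graph is mostly noise, yet an authorized client recovers the real triples exactly; how AURA actually distinguishes genuine from injected data is not specified in the summary above.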
Large Language Models (LLMs) are vulnerable to data poisoning attacks that require only a small, fixed number of malicious documents, regardless of the model's size or training data volume. This counterintuitive finding challenges existing assumptions about AI security, highlights significant risks for organizations deploying LLMs, and underscores the urgent need for robust defenses against such attacks.