3 min read
|
Saved February 14, 2026
|
Copied!
Do you care about this?
Augustus is a new security testing tool designed to identify vulnerabilities in large language models (LLMs), focusing on prompt injection and other attack vectors. Built in Go, it offers faster execution and lower memory usage compared to its Python-based predecessors. With over 210 vulnerability probes, it helps operators assess the security of various LLM providers efficiently.
If you do, here's more
Last month, Praetorian introduced Augustus, a tool designed to assess the security of large language models (LLMs) by identifying vulnerabilities, specifically prompt injection attacks. Unlike its predecessor, garak, which relies on Python, Augustus is built in Go, leading to faster performance and lower memory usage. It offers over 210 probes for testing against 28 LLM providers, including OpenAI and Anthropic. The tool allows for flexible output formats, making it suitable for integration into existing penetration testing workflows.
Data shows a significant security gap in LLMs: 86% of applications tested were vulnerable to prompt injection. Notable attack techniques include FlipAttack, which bypassed GPT-4o with a 98% success rate, and DeepSeek R1, which achieved a 100% bypass rate against various jailbreak prompts. Augustus aims to help users identify these vulnerabilities through its extensive testing capabilities, which include adversarial examples and data extraction techniques. Users can run tests easily, whether against local models or custom endpoints, and receive clear, actionable reports on their findings.
Augustus is part of a broader initiative called "The 12 Caesars," where Praetorian plans to release an open-source tool weekly. The project encourages community contributions, allowing users to enhance the tool by adding probes or suggesting features. The repository is open for anyone looking to experiment and engage with the security community around LLM testing.
Questions about this article
No questions yet.