Prompts used in large language models (LLMs) are emerging as critical indicators of compromise (IOCs) in cybersecurity, highlighting how threat actors exploit these technologies for malicious purposes. The article reviews a recent report from Anthropic detailing various misuse cases of the AI model Claude and emphasizes the need for threat analysts to focus on prompt-based tactics, techniques, and procedures (TTPs) for effective monitoring and detection. The author proposes the NOVA tool for detecting adversarial prompts tailored to specific threat scenarios.
+ prompts
cybersecurity ✓
ai ✓
threat-intelligence ✓
iocs ✓