5 links
tagged with all of: threat-intelligence + ai
Links
Prompts used in large language models (LLMs) are emerging as critical indicators of compromise (IOCs) in cybersecurity, highlighting how threat actors exploit these technologies for malicious purposes. The article reviews a recent report from Anthropic detailing various misuse cases of the AI model Claude and emphasizes the need for threat analysts to focus on prompt-based tactics, techniques, and procedures (TTPs) for effective monitoring and detection. The author proposes the NOVA tool for detecting adversarial prompts tailored to specific threat scenarios.
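The prompt-based detection idea can be sketched as a small rule engine that matches adversarial prompts against named patterns. This is a minimal illustration of the concept only; the rule names, patterns, and structure below are assumptions for demonstration, not NOVA's actual rule syntax.

```python
import re

# Hypothetical rule set for flagging adversarial prompts. The categories and
# regexes are illustrative assumptions, not rules from the NOVA tool itself.
RULES = {
    "jailbreak_roleplay": re.compile(
        r"ignore (all )?previous instructions|act as (DAN|an? unrestricted)",
        re.IGNORECASE,
    ),
    "malware_generation": re.compile(
        r"(write|generate) (a )?(keylogger|ransomware|reverse shell)",
        re.IGNORECASE,
    ),
}

def match_prompt(prompt: str) -> list[str]:
    """Return the names of all rules whose pattern appears in the prompt."""
    return [name for name, pattern in RULES.items() if pattern.search(prompt)]
```

Treating matched rule names as prompt-level TTPs lets analysts hunt and alert on them the same way they would on network or file-based IOCs.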
Google has launched Sec-Gemini v1, an experimental AI model aimed at enhancing cybersecurity by providing advanced reasoning capabilities and real-time threat knowledge to support cybersecurity workflows. The model reportedly outperforms other models on existing cybersecurity benchmarks and is available for research collaboration with select organizations, with the goal of shifting the balance in favor of defenders.
Warren is an open-source AI-powered security alert management system that automates alert triage by ingesting alerts from various sources, enriching them with threat intelligence, and filtering out noise. Key features include webhook-based ingestion, LLM-powered analysis, a React-based web UI, and flexible deployment options, making it suitable for enhancing incident response times and managing alerts effectively.
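The ingest-enrich-filter pipeline described above can be sketched in a few lines. The field names, sample threat-intel set, and filtering heuristic below are illustrative assumptions, not Warren's actual schema or logic.

```python
from dataclasses import dataclass

# Hypothetical alert model for a Warren-style triage pipeline.
@dataclass
class Alert:
    source: str       # e.g. the webhook or SIEM that emitted the alert
    indicator: str    # e.g. an IP address or file hash from the alert
    severity: str     # "low" | "medium" | "high"
    ti_hits: int = 0  # threat-intel matches filled in during enrichment

# Sample threat-intel feed (TEST-NET address and a made-up hash).
KNOWN_BAD = {"203.0.113.7", "f00dfeedfacecafe"}

def enrich(alert: Alert) -> Alert:
    """Annotate the alert with threat-intelligence context."""
    alert.ti_hits = 1 if alert.indicator in KNOWN_BAD else 0
    return alert

def triage(alerts: list[Alert]) -> list[Alert]:
    """Enrich every alert, then drop low-severity noise with no TI matches."""
    enriched = [enrich(a) for a in alerts]
    return [a for a in enriched if a.ti_hits > 0 or a.severity == "high"]
```

In the real system an LLM would sit behind the enrichment step to summarize and prioritize what survives the filter; the sketch only shows the deterministic skeleton.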
AI is transforming the cybercrime landscape by enhancing existing attack methods rather than creating new threats, making cybercriminal activities more efficient and accessible. The panel at RSA Conference 2025 emphasized the importance of adapting defense strategies to counter AI-driven attacks, highlighting the need for international cooperation and innovative security frameworks. As AI continues to evolve, both defenders and threat actors will need to adapt rapidly to the changing dynamics of cyber threats.
An analysis of over 2.6 million AI-related posts from underground sources reveals how threat actors are leveraging AI technologies for malicious purposes. The research draws on 100,000 tracked illicit sources and identifies five distinct use cases, including multilingual phishing and deepfake impersonation tools, offering rare visibility into how adversaries are adapting and innovating with AI.