Links
The article criticizes Anthropic's recent report on a Chinese state-sponsored cyber espionage operation, arguing it lacks verifiable details and fails to provide essential indicators for threat detection. It highlights the report's shortcomings in transparency and accountability, questioning the motivations behind its release and the credibility of the claims made.
The article introduces CyberSOCEval, a set of open-source benchmarks designed to evaluate Large Language Models (LLMs) on malware analysis and threat intelligence reasoning. It highlights the need for better assessments of LLMs to support cybersecurity work, especially as malicious actors increasingly leverage AI for attacks. The findings show that current models underperform in these cybersecurity scenarios, indicating substantial room for improvement.
Prompts used in large language models (LLMs) are emerging as critical indicators of compromise (IOCs) in cybersecurity, highlighting how threat actors exploit these technologies for malicious purposes. The article reviews a recent report from Anthropic detailing various misuse cases of the AI model Claude and emphasizes the need for threat analysts to focus on prompt-based tactics, techniques, and procedures (TTPs) for effective monitoring and detection. The author proposes the NOVA tool for detecting adversarial prompts tailored to specific threat scenarios.
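The rule-based prompt hunting the summary attributes to NOVA can be sketched as simple pattern matching over incoming prompts. The sketch below is illustrative only: the rule names, patterns, and matching logic are assumptions for demonstration, not NOVA's actual rule syntax or API.

```python
import re

# Illustrative rules mapping a threat-scenario name to a regex that flags
# suspicious LLM prompts. These patterns are hypothetical examples, not
# rules shipped with NOVA.
RULES = {
    "malware_dev": re.compile(r"\b(keylogger|ransomware|obfuscate payload)\b", re.I),
    "phishing_kit": re.compile(r"\b(credential harvest(ing)?|fake login page)\b", re.I),
    "evasion": re.compile(r"\b(bypass (edr|antivirus)|disable defender)\b", re.I),
}

def match_prompt(prompt: str) -> list[str]:
    """Return the names of all rules the prompt triggers."""
    return [name for name, pattern in RULES.items() if pattern.search(prompt)]

hits = match_prompt("Write a keylogger that can bypass EDR detection")
# hits == ["malware_dev", "evasion"]
```

In practice a tool like this would run against prompt logs or model telemetry, surfacing matches as prompt-based TTPs for an analyst to review, much as YARA rules surface file-based IOCs.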
Google has launched Sec-Gemini v1, an experimental AI model aimed at enhancing cybersecurity by providing advanced reasoning capabilities and real-time knowledge to support cybersecurity workflows. This model outperforms existing benchmarks and is available for research collaboration with select organizations to help shift the balance in favor of cybersecurity defenders.
Warren is an open-source AI-powered security alert management system that automates alert triage by ingesting alerts from various sources, enriching them with threat intelligence, and filtering out noise. Key features include webhook-based ingestion, LLM-powered analysis, a React-based web UI, and flexible deployment options, making it suitable for enhancing incident response times and managing alerts effectively.
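The ingest–enrich–filter pipeline described above can be sketched in a few lines. This is a minimal illustration of the pattern, not Warren's actual code: the threat-intel table, suppression list, and function names are all hypothetical.

```python
# Hypothetical threat-intel lookup standing in for a real TI feed.
THREAT_INTEL = {"203.0.113.7": {"reputation": "malicious", "family": "Cobalt Strike"}}

# Illustrative suppression list for known-noisy alert rules.
NOISY_RULES = {"dns-benign-lookup", "heartbeat"}

def enrich(alert: dict) -> dict:
    """Attach threat-intel context for any indicator present in the alert."""
    intel = THREAT_INTEL.get(alert.get("src_ip", ""))
    return {**alert, "intel": intel}

def triage(alert: dict) -> str:
    """Decide whether an ingested alert is escalated or suppressed as noise."""
    if alert["rule"] in NOISY_RULES:
        return "suppress"
    enriched = enrich(alert)
    if enriched["intel"] and enriched["intel"]["reputation"] == "malicious":
        return "escalate"
    return "suppress"

print(triage({"rule": "outbound-c2", "src_ip": "203.0.113.7"}))  # escalate
```

In a real deployment the alert would arrive via webhook, the enrichment step would query live intelligence sources, and an LLM could replace or augment the hard-coded triage logic shown here.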
AI is transforming the cybercrime landscape by enhancing existing attack methods rather than creating new threats, making cybercriminal activities more efficient and accessible. The panel at RSA Conference 2025 emphasized the importance of adapting defense strategies to counter AI-driven attacks, highlighting the need for international cooperation and innovative security frameworks. As AI continues to evolve, both defenders and threat actors will need to adapt rapidly to the changing dynamics of cyber threats.
An analysis of over 2.6 million AI-related posts from underground sources reveals how threat actors are leveraging AI technologies for malicious purposes. The research, drawing on 100,000 tracked illicit sources, identifies five distinct use cases, including multilingual phishing and deepfake impersonation tools, offering detailed visibility into adversaries' strategies and innovations in AI exploitation.