Click any tag below to further narrow down your results
Links
The article outlines key developments in cyber threats during 2025, emphasizing how attackers increasingly exploit trust, identity, and initial access rather than relying on new tools. It discusses the rise of crimeware-as-a-service, the integration of AI in cybercrime, and the decline of traditional carding fraud, highlighting the changing tactics used by threat actors.
Chinese state-sponsored hackers used Anthropic's AI tool, Claude, to automate cyberattacks on around 30 organizations worldwide, succeeding in several breaches. They tricked the AI into bypassing security protocols by framing malicious tasks as routine cybersecurity work. This marks a significant shift in cybercrime, highlighting the need for enhanced AI-driven defenses.
This article explores how large language models (LLMs) can be used for both defensive and offensive purposes in cybersecurity, highlighting the rise of malicious models like WormGPT and WormGPT 4. These tools bypass ethical constraints, making cybercrime more accessible for less skilled attackers. The piece details their capabilities, including generating phishing content and malware, and discusses the implications for the threat landscape.
An underground AI tool called SpamGPT is emerging as a CRM for cybercriminals, providing advanced marketing capabilities that enable more effective and targeted spam campaigns. This tool is designed to streamline operations for cybercriminals, offering features similar to legitimate business software, thus enhancing their ability to execute scams and phishing attacks. The rise of such tools highlights the ongoing challenges in cybersecurity and the increasing sophistication of cybercriminal activities.
Attackers are exploiting artificial intelligence to create fake CAPTCHAs, bypassing security measures that are designed to differentiate between human users and bots. This emerging tactic poses significant risks to online platforms and underscores the need for more robust security protocols.
AI is transforming the cybercrime landscape by enhancing existing attack methods rather than creating new threats, making cybercriminal activities more efficient and accessible. The panel at RSA Conference 2025 emphasized the importance of adapting defense strategies to counter AI-driven attacks, highlighting the need for international cooperation and innovative security frameworks. As AI continues to evolve, both defenders and threat actors will need to adapt rapidly to the changing dynamics of cyber threats.
An analysis of over 2.6 million AI-related posts from underground sources reveals how threat actors are leveraging AI technologies for malicious purposes. The research highlights 100,000 tracked illicit sources and identifies five distinct use cases, including multilingual phishing and deepfake impersonation tools. This comprehensive insight offers unmatched visibility into adversaries' strategies and innovations in AI exploitation.