4 min read | Saved February 14, 2026
Do you care about this?
Google reported that the North Korean group UNC2970 used its AI model, Gemini, for reconnaissance on high-value targets, including cybersecurity firms. The incident is part of a broader trend of hacking groups leveraging generative AI for malicious purposes, and Google is strengthening Gemini's safety measures in response.
If you do, here's more
Google revealed that UNC2970, a North Korea-linked hacking group, has been using its generative AI model, Gemini, for reconnaissance on high-value targets. By synthesizing open-source intelligence (OSINT), the group has been able to map out job roles and salary information at major cybersecurity and defense firms, blurring the line between legitimate research and malicious targeting. This tactic enhances their phishing operations, allowing them to create tailored personas for engaging potential victims.
UNC2970 is part of a broader trend where various hacking groups exploit Gemini. Other groups mentioned include UNC6418, which seeks sensitive credentials; Temp.HEX, focusing on individuals in Pakistan; and APT41, known for automating vulnerability analysis. Google’s team has noted that threat actors often pose as security researchers or CTF participants to bypass safeguards, prompting ongoing updates to Gemini's safety protocols.
Google also identified a malware variant called HONESTCUE, which calls the Gemini API to generate malicious code on the fly, leaving no traces on disk. Another threat is COINBAIT, an AI-generated phishing kit that masquerades as a cryptocurrency exchange to harvest credentials. In addition, Google has disrupted model extraction attacks, in which adversaries systematically query its proprietary AI model in an attempt to replicate its functionality. Security experts stress that keeping model weights private is not sufficient protection: every query-and-response interaction reveals something about a model's behavior, and enough of them can enable replication.