Click any tag below to further narrow down your results
Links
Google found a new malware called PROMPTFLUX that uses Visual Basic Script to modify itself by interacting with its Gemini AI model. This malware seeks to evade detection by generating obfuscated code and is still in the development phase, lacking the ability to compromise networks. Security experts debate its effectiveness and significance.
This article explores the use of AI models, particularly Claude Opus 4.6, to detect hidden backdoors in binary executables. While some success was noted, with a 49% detection rate for obvious backdoors, the approach remains unreliable for production use due to high false positives and limitations in analyzing complex binaries.
The Konni hacker group is targeting blockchain developers with AI-generated PowerShell malware. Their attacks involve sending malicious links via Discord that deliver a backdoor capable of compromising sensitive assets like API credentials and cryptocurrency. Researchers have identified the malware as being developed with AI assistance, indicating a shift in their tactics.
Researchers found a malicious npm package named eslint-plugin-unicorn-ts-2 that attempts to deceive AI security scanners. It contains a hidden prompt and exfiltrates sensitive data during installation, highlighting a new tactic in cybercrime where attackers manipulate AI to avoid detection.
Researchers have identified four new phishing kits—BlackForce, GhostFrame, InboxPrime AI, and Spiderman—that enable large-scale credential theft. These kits utilize advanced techniques, including AI automation and evasion strategies, to deceive users and bypass security measures.
This article outlines five key security features expected to dominate in 2026, including supply chain malware detection and AI-based vulnerability management. It also highlights three important capabilities that should be prioritized, such as advanced application detection and real-time AI threat modeling.
This article explores how large language models (LLMs) can be used for both defensive and offensive purposes in cybersecurity, highlighting the rise of malicious models like WormGPT and WormGPT 4. These tools bypass ethical constraints, making cybercrime more accessible for less skilled attackers. The piece details their capabilities, including generating phishing content and malware, and discusses the implications for the threat landscape.
Researchers from ESET have identified PromptLock, the first known AI-powered ransomware, which is currently a non-functional proof-of-concept. This prototype utilizes OpenAI's gpt-oss-20b model to generate malicious Lua scripts and operates within a controlled environment, highlighting the potential dangers of AI in cybercrime despite no active infections being reported.
AgentHopper, an AI virus concept, was developed to exploit multiple coding agents through prompt injection vulnerabilities. This research highlights the ease of creating such malware and emphasizes the need for improved security measures in AI products to prevent potential exploits. The post also provides insights into the propagation mechanism of AgentHopper and offers mitigations for developers.
Researchers at Mandiant have discovered a new malware strain dubbed "UNC6032," which utilizes AI-generated video content to deceive victims. The malware operates primarily through phishing campaigns, leveraging convincing videos to trick users into downloading malicious software. This highlights a growing trend in cyber threats where AI technology is exploited for malicious purposes.
A newly discovered malware prototype named "Skynet" attempts to manipulate AI tools by instructing them to ignore its malicious code. Although the malware's design is rudimentary and ineffective, it highlights emerging trends in the intersection of AI and cybersecurity, raising concerns about future evasion tactics.
Microsoft is developing an AI prototype called Project Ire, designed to autonomously reverse-engineer malware without human intervention. This initiative aims to enhance cybersecurity by quickly analyzing and understanding malicious software to improve defenses against cyber threats.
NYU researchers developed a proof-of-concept AI-powered ransomware, dubbed Ransomware 3.0, which utilizes large language models to create customized attacks targeting specific files on victim systems. The project unexpectedly gained attention when security analysts mistakenly identified it as a real threat, prompting discussions about the implications of AI in ransomware development. While the malware is not functional outside a lab setting, researchers warn that the techniques could inspire actual cybercriminals to create similar threats.
The takedown of DanaBot, a major Russian malware platform, demonstrates how agentic AI significantly reduced the time required for Security Operations Centers (SOCs) to analyze threats from months to weeks. By automating threat detection and response, agentic AI empowers SOC teams to better combat increasingly sophisticated cyber threats, showcasing its essential role in modern cybersecurity.
Microsoft has introduced an autonomous AI system named Project Ire that can reverse-engineer and identify malware without human intervention. This innovative approach marks a significant advancement in cybersecurity, automating processes traditionally performed by security experts. The company continues to prioritize security, launching initiatives like the Zero Day Quest to enhance its defenses.