Click any tag below to further narrow down your results
Links
This article discusses the enduring success of ServiceNow in the enterprise software space, emphasizing its outdated UI and the importance of systems of record. It also touches on current challenges for startups post-product-market fit and the shifting landscape of venture capital with significant declines in secondary market valuations.
Google found a new malware called PROMPTFLUX that uses Visual Basic Script to modify itself by interacting with its Gemini AI model. This malware seeks to evade detection by generating obfuscated code and is still in the development phase, lacking the ability to compromise networks. Security experts debate its effectiveness and significance.
In 2025, an AI system identified four previously unknown security issues in OpenSSL, three of which were disclosed and fixed by the system. The findings highlight the potential of AI in proactively discovering vulnerabilities in critical infrastructure.
This article outlines key tech trends and challenges for 2026, based on insights from various investment teams. Topics include managing unstructured data, AI's role in cybersecurity, and the evolution of infrastructure to support agent-driven workloads.
Cato Networks revealed HashJack, a vulnerability that uses the URL fragment to hide malicious commands for AI browser assistants. This allows attackers to manipulate AI behavior without compromising the actual website, leading to risks like credential theft and unauthorized data access.
Anthropic CEO Dario Amodei has been called to testify before the House Homeland Security Committee on December 17 regarding a Chinese cyber-espionage campaign involving AI. This marks the first congressional appearance by an Anthropic executive related to this AI-driven attack. Lawmakers are seeking insights on the implications of AI in cybersecurity.
This article covers recent advancements in technology, including new AI capabilities from IBM and Cisco, as well as updates on cloud revenue driven by generative AI. It also highlights trends in data governance and unified communications.
Researchers at Stanford University tested an AI bot named Artemis, designed to find and exploit software vulnerabilities. The experiment revealed that Artemis could outperform professional penetration testers in identifying bugs on a real-world network.
Oligo Security has revealed an ongoing global hacking campaign, ShadowRay 2.0, where attackers exploit a flaw in the Ray AI framework to create a self-propagating botnet. The attackers, known as IronErn440, leverage AI-generated payloads to enhance their methods while competing with other criminal groups for resources. Over 230,000 Ray servers are currently exposed to this threat.
The ISC2's 2025 Cybersecurity Workforce Study highlights a growing skills gap in cybersecurity, with 88% of professionals experiencing significant events due to these shortages. As AI adoption accelerates, organizations must focus on developing expertise rather than just increasing staff numbers to enhance security.
Google reported that the North Korean group UNC2970 used its AI model, Gemini, for reconnaissance on high-value targets, including cybersecurity firms. This trend of hacking groups leveraging generative AI for malicious purposes raises concerns about the evolving methods of cyber attacks. Google is enhancing its safety measures to counteract these threats.
The article discusses the importance of verifiability over model performance in AI cybersecurity. It highlights how offensive AI has a clear advantage due to easy verification of tasks, while defensive security struggles with complex, hard-to-verify challenges. Effective verifiers are essential for improving defense strategies against AI-driven attacks.
This article argues that AI integration in cybersecurity can create more vulnerabilities rather than enhance security. It highlights how hype around AI often overshadows the real risks, such as data leaks and poorly integrated systems, which can lead to significant security breaches.
This article discusses how Google integrates AI agents into its cybersecurity operations. It outlines key lessons learned in building these agents, focusing on trust, real problem-solving, performance measurement, and the importance of foundational practices.
The Konni hacker group is targeting blockchain developers with AI-generated PowerShell malware. Their attacks involve sending malicious links via Discord that deliver a backdoor capable of compromising sensitive assets like API credentials and cryptocurrency. Researchers have identified the malware as being developed with AI assistance, indicating a shift in their tactics.
This article discusses upcoming changes in cybersecurity, emphasizing the shift towards using AI agents to manage security tasks due to the challenges of hiring and retaining skilled personnel. It highlights the need for improved asset management and the potential pitfalls of relying on automated systems, including the risk of accumulating technical debt.
Tenzai has introduced an AI-driven platform that conducts penetration testing to identify and fix vulnerabilities in enterprise software. Backed by $75 million in funding, the service aims to automate and scale the work of elite hackers, addressing the talent shortage in cybersecurity.
AI models like Claude Sonnet 4.5 can now execute complex multi-stage attacks on networks using standard open-source tools, eliminating the need for custom toolkits. This advancement allows AIs to exploit known vulnerabilities quickly, emphasizing the urgent need for timely security updates.
The World Economic Forum's report analyzes major cybersecurity trends for 2026, focusing on the impact of AI, geopolitical tensions, and rising cyber inequity. It highlights the growing threat of AI vulnerabilities and the need for organizations to adapt their strategies to mitigate risks, particularly in the face of geopolitical instability and supply chain challenges.
The article reports that cybersecurity firms attracted $14 billion in funding in 2025, with investors prioritizing governance, identity solutions, and fraud prevention. This shift reflects a demand for vendors that can deliver measurable outcomes amid tightening budgets and a preference for larger contracts.
The article criticizes Anthropic's recent report on a Chinese state-sponsored cyber espionage operation, arguing it lacks verifiable details and fails to provide essential indicators for threat detection. It highlights the report's shortcomings in transparency and accountability, questioning the motivations behind its release and the credibility of the claims made.
This article discusses a new AI system designed to enhance threat detection and response in cybersecurity. It emphasizes features like speed, accuracy, and seamless integration with existing security tools, while also providing measurable insights for businesses.
Indurex offers an AI-driven platform that enhances decision-making and operational visibility in industrial environments. It integrates safety, cybersecurity, and engineering data to improve system performance and compliance while reducing operator workload. The platform aims to keep operations running smoothly and securely.
Researchers found a malicious npm package named eslint-plugin-unicorn-ts-2 that attempts to deceive AI security scanners. It contains a hidden prompt and exfiltrates sensitive data during installation, highlighting a new tactic in cybercrime where attackers manipulate AI to avoid detection.
The article introduces CyberSOCEval, a set of open source benchmarks designed to evaluate Large Language Models (LLMs) in malware analysis and threat intelligence reasoning. It highlights the need for improved assessments of LLMs to better support cybersecurity efforts, especially as malicious actors leverage AI for attacks. The findings show that current models are underperforming in cybersecurity scenarios, indicating room for enhancement.
MIT Sloan has withdrawn a paper claiming that over 80% of ransomware attacks are driven by AI after criticism from cybersecurity experts. The paper faced backlash for its lack of evidence and methodology, leading to accusations of misleading research.
Novee has launched an AI-driven penetration testing service that continuously identifies and addresses security vulnerabilities. Unlike traditional methods, it simulates real attacks, providing specific remediation steps and adapting to changes in the environment. This approach aims to help organizations stay ahead of potential threats.
Researchers have identified four new phishing kits—BlackForce, GhostFrame, InboxPrime AI, and Spiderman—that enable large-scale credential theft. These kits utilize advanced techniques, including AI automation and evasion strategies, to deceive users and bypass security measures.
Infosec Compliance Now 2026 is a free virtual event focused on AI and cyber risk trends. It features experts discussing topics like AI governance and third-party risk management, offering actionable insights for organizations. Attendees can earn 4 CPE credits by participating live.
OpenAI has released GPT-5.2-Codex, an advanced coding model designed for software development and cybersecurity. It enhances long-context understanding, tool reliability, and cybersecurity capabilities, enabling more effective coding and threat detection. The release aims to balance accessibility with safety in deployment.
This article discusses how 7AI's platform uses AI agents to automate alert triage and security operations, significantly reducing the workload for human analysts. With these agents handling routine tasks, security teams can focus on more strategic challenges. The results include drastic reductions in false positives and faster incident response times.
This article explores how large language models (LLMs) can be used for both defensive and offensive purposes in cybersecurity, highlighting the rise of malicious models like WormGPT and WormGPT 4. These tools bypass ethical constraints, making cybercrime more accessible for less skilled attackers. The piece details their capabilities, including generating phishing content and malware, and discusses the implications for the threat landscape.
The article discusses AI Security Posture Management (SPM) and its importance in enhancing cybersecurity measures for businesses. It highlights how AI-driven tools can help organizations assess and improve their security posture by identifying vulnerabilities and automating responses to threats. Additionally, it outlines the benefits of integrating AI into security strategies for better risk management and compliance.
Anthropic's chief security officer warns that fully AI-powered virtual employees could start operating in corporate environments within the next year. This development necessitates a reevaluation of cybersecurity strategies to prevent potential breaches and manage the unique challenges posed by these AI identities.
Ransomware groups like Black Basta and FunkSec are increasingly using AI to enhance their extortion tactics, resulting in significant financial losses, such as $724 million stolen using TrickBot malware. The report highlights the growing prevalence of extortion methods, including DDoS attacks, and offers insights into regional trends and mitigation strategies.
The session, led by Brian Correia, discusses how AI is transforming the workforce and the challenges organizations face in adopting AI technologies. It will provide attendees with strategies to enhance AI readiness and practical solutions to overcome barriers such as tool overload and cultural resistance. Participants will gain insights and frameworks to lead effectively in an AI-driven environment.
The article discusses how Vercel's new AI tool has been exploited by malicious actors to automate and enhance phishing attacks. As a result, organizations are urged to bolster their cybersecurity measures to counteract the increasing sophistication of such threats. The misuse of AI in this context raises concerns about the broader implications for digital security and user safety.
The article discusses the impending rise of cyberattacks conducted by AI agents, highlighting the potential threats and vulnerabilities that could emerge as these technologies become more advanced. It emphasizes the need for stronger cybersecurity measures to counteract the sophisticated tactics that AI can employ in malicious activities.
Researchers from ESET have identified PromptLock, the first known AI-powered ransomware, which is currently a non-functional proof-of-concept. This prototype utilizes OpenAI's gpt-oss-20b model to generate malicious Lua scripts and operates within a controlled environment, highlighting the potential dangers of AI in cybercrime despite no active infections being reported.
The article discusses the alarming trend of sensitive data leaks associated with AI technologies, particularly through websites that utilize Vibe coding. It highlights the potential risks and implications of these leaks, emphasizing the need for better security measures to protect user information in the evolving digital landscape.
Researchers at Mandiant have discovered a new malware strain dubbed "UNC6032," which utilizes AI-generated video content to deceive victims. The malware operates primarily through phishing campaigns, leveraging convincing videos to trick users into downloading malicious software. This highlights a growing trend in cyber threats where AI technology is exploited for malicious purposes.
Lattica has unveiled a new platform utilizing fully homomorphic encryption (FHE) to allow AI models to process encrypted data without exposure. The company secured $3.25 million in pre-seed funding to enhance the security and privacy of AI applications. This innovative approach enables AI providers to host and manage models while ensuring that sensitive data remains protected.
Researchers from Tel Aviv University have demonstrated a new type of cyber attack they call "promptware" by using calendar events to manipulate Google's AI, Gemini, into controlling smart home devices. By embedding malicious instructions in calendar appointments, they successfully executed indirect prompt injection attacks, allowing unauthorized control over devices like lights and thermostats. This incident marks a significant shift in how AI vulnerabilities can impact the physical world.
The article discusses a critical vulnerability identified in NVIDIA's software, designated CVE-2025-23266, which poses significant risks to AI systems using NVIDIA hardware. It highlights the implications of this vulnerability, potential exploits, and the necessity for immediate patching by users to safeguard their systems.
An impersonator used AI to mimic Senator Marco Rubio in attempts to contact foreign ministers and U.S. officials, prompting a warning from the State Department. Although the hoaxes were deemed unsophisticated, the incident highlights growing concerns over AI misuse in impersonation and cybersecurity threats.
Google has launched Sec-Gemini v1, an experimental AI model aimed at enhancing cybersecurity by providing advanced reasoning capabilities and real-time knowledge to support cybersecurity workflows. This model outperforms existing benchmarks and is available for research collaboration with select organizations to help shift the balance in favor of cybersecurity defenders.
ShadowLeak is a new AI-driven data theft method that operates undetected, posing significant risks to organizations. It allows malicious actors to extract sensitive information without triggering traditional security alerts, making it a formidable threat in the cybersecurity landscape. As AI continues to evolve, the implications for data protection are profound, necessitating enhanced security measures.
Prompts used in large language models (LLMs) are emerging as critical indicators of compromise (IOCs) in cybersecurity, highlighting how threat actors exploit these technologies for malicious purposes. The article reviews a recent report from Anthropic detailing various misuse cases of the AI model Claude and emphasizes the need for threat analysts to focus on prompt-based tactics, techniques, and procedures (TTPs) for effective monitoring and detection. The author proposes the NOVA tool for detecting adversarial prompts tailored to specific threat scenarios.
The Gartner Identity & Access Management Summit 2025 will explore critical IAM topics, including program management, agentic AI, and resource optimization amidst today's economic and geopolitical challenges. The conference aims to guide organizations in establishing resilient IAM infrastructures, featuring numerous sessions tailored to various interests and roles within the IAM field.
An underground AI tool called SpamGPT is emerging as a CRM for cybercriminals, providing advanced marketing capabilities that enable more effective and targeted spam campaigns. This tool is designed to streamline operations for cybercriminals, offering features similar to legitimate business software, thus enhancing their ability to execute scams and phishing attacks. The rise of such tools highlights the ongoing challenges in cybersecurity and the increasing sophistication of cybercriminal activities.
Google is leveraging AI to enhance cybersecurity defenses, focusing on key areas such as agentic capabilities, new security models, and public-private collaborations. Notable advancements include the AI agent Big Sleep, which identifies vulnerabilities, and new tools like Timesketch and FACADE that streamline forensic investigations and insider threat detection. The company emphasizes safe and responsible AI deployment to reshape the future of cybersecurity.
At the Gartner Security & Risk Management Summit 2025, analysts discussed how security teams can capitalize on the current hype surrounding AI and other technologies to enhance their security strategies. Emphasizing the importance of informed decision-making, they recommended using metrics and transparency to align cybersecurity investments with organizational goals.
Convera warns that the rise of AI-driven scams poses significant risks to businesses, particularly in the financial sector. Bridget Pruzin emphasizes the importance of recognizing fraud indicators, such as voice cloning and urgent requests for sensitive information, and advocates for proactive education and collaboration to combat these sophisticated threats.
Dropzone AI offers autonomous SOC analysts that replicate elite investigative techniques, allowing security teams to respond to threats with speed and accuracy. By automating routine tasks, Dropzone AI reduces false positives and significantly increases alert handling capacity, freeing human analysts to focus on more complex security challenges. Organizations report substantial improvements in response times and overall security posture with the integration of this AI-powered solution.
The article discusses the decreasing number of unicorn startups and their recent exits, with a focus on sectors such as AI, cybersecurity, and health. It highlights the challenges facing these companies in the current economic climate and the implications for investors.
A newly discovered malware prototype named "Skynet" attempts to manipulate AI tools by instructing them to ignore its malicious code. Although the malware's design is rudimentary and ineffective, it highlights emerging trends in the intersection of AI and cybersecurity, raising concerns about future evasion tactics.
The article discusses the emerging role of artificial intelligence in enhancing cybersecurity measures for defenders. It highlights various AI tools and techniques that can help organizations better detect, respond to, and mitigate cyber threats. Additionally, it emphasizes the importance of integrating AI into existing security frameworks to improve resilience against attacks.
Open-source AI is revolutionizing cybersecurity by enhancing innovation and operational maturity among startups, while also presenting challenges regarding security and compliance. Industry leaders emphasize the importance of embedding governance, automating security processes, and contributing purpose-built tools to improve resilience and manage risks effectively.
SANS Institute is focused on developing a secure, AI-capable workforce through training and resources tailored for cybersecurity professionals. Their initiatives include frameworks for securing AI systems, enhancing defensive strategies against AI-driven threats, and addressing the evolving roles within cybersecurity as AI technology advances. The organization emphasizes the importance of integrating AI into security practices responsibly and ethically.
The Cloud Security Alliance and Dropzone AI conducted a benchmark study revealing that AI assistance significantly enhances the efficiency and accuracy of SOC analysts. Findings show that AI-assisted teams completed investigations 45-61% faster and achieved 22-29% higher accuracy compared to manual methods, with 94% of participants becoming advocates for AI after using it.
The article discusses the misuse of AI agents for data theft, highlighting how malicious actors exploit AI technologies to automate and enhance their cybercriminal activities. It emphasizes the need for robust security measures and awareness to combat these evolving threats in the digital landscape.
Microsoft is developing an AI prototype called Project Ire, designed to autonomously reverse-engineer malware without human intervention. This initiative aims to enhance cybersecurity by quickly analyzing and understanding malicious software to improve defenses against cyber threats.
Cybermon 2025 introduces a gamified campaign designed to enhance cybersecurity awareness among developers by tackling vulnerabilities in an AI-driven environment. Participants engage in challenges involving quirky digital monsters, earning badges and rewards while promoting secure coding practices. The campaign runs for four weeks starting October 6, 2025.
The article discusses a report released by Anthropic, which highlights the growing threats posed by artificial intelligence in the realm of cybersecurity. It emphasizes the potential for AI to be used in hacking and other malicious activities, urging for better frameworks to mitigate these risks. The report outlines various scenarios where AI could exacerbate security challenges in the digital landscape.
AI is transforming the cybercrime landscape by enhancing existing attack methods rather than creating new threats, making cybercriminal activities more efficient and accessible. The panel at RSA Conference 2025 emphasized the importance of adapting defense strategies to counter AI-driven attacks, highlighting the need for international cooperation and innovative security frameworks. As AI continues to evolve, both defenders and threat actors will need to adapt rapidly to the changing dynamics of cyber threats.
The takedown of DanaBot, a major Russian malware platform, demonstrates how agentic AI significantly reduced the time required for Security Operations Centers (SOCs) to analyze threats from months to weeks. By automating threat detection and response, agentic AI empowers SOC teams to better combat increasingly sophisticated cyber threats, showcasing its essential role in modern cybersecurity.
North Korean IT workers are reportedly engaging in AI recruitment scams to exploit global job markets, using sophisticated techniques to lure potential victims. These scams may be part of a broader strategy to generate revenue for the North Korean regime amid international sanctions. Authorities are concerned about the implications of such operations on cybersecurity and financial fraud.
NYU researchers developed a proof-of-concept AI-powered ransomware, dubbed Ransomware 3.0, which utilizes large language models to create customized attacks targeting specific files on victim systems. The project unexpectedly gained attention when security analysts mistakenly identified it as a real threat, prompting discussions about the implications of AI in ransomware development. While the malware is not functional outside a lab setting, researchers warn that the techniques could inspire actual cybercriminals to create similar threats.
The UK's National Cyber Security Centre (NCSC) has launched a Vulnerability Research Initiative (VRI) to enhance collaboration with external cybersecurity experts and improve the identification of software and hardware vulnerabilities. The initiative aims to expedite the sharing of critical insights while leveraging the expertise of skilled researchers in various technology areas, including emerging fields like AI. Interested specialists can contact the NCSC to participate in this program.
Utilizing AI to analyze cyber incidents can significantly enhance the understanding of attack patterns and improve response strategies. By leveraging machine learning algorithms, organizations can automate the detection and classification of threats, leading to more efficient and effective cybersecurity measures. The integration of AI tools into incident response frameworks is becoming increasingly essential for modern security practices.
An attempt to create an autonomous AI pentester revealed significant limitations in AI's capability to effectively perform offensive security tasks. Despite its potential for planning and executing complex strategies, the AI struggled with accuracy and lacked the critical intuition and drive that human hackers possess. The project ultimately highlighted the importance of combining AI's strengths with human creativity and critical thinking in cybersecurity.
A new attack method called "Echo Chamber" has been identified, allowing attackers to bypass advanced safeguards in leading AI models by manipulating conversational context. This technique involves planting subtle cues within acceptable prompts to steer AI responses toward harmful outputs without triggering the models' guardrails.
The article presents four key questions that Chief Information Security Officers (CISOs) should consider when integrating artificial intelligence into their cybersecurity strategies. These questions focus on assessing the effectiveness, risks, compliance, and the overall impact of AI technologies in enhancing security measures.
Databricks has launched a new AI-driven platform aimed at enhancing cybersecurity measures. The platform integrates machine learning capabilities to help organizations detect and respond to threats more effectively, positioning Databricks as a significant player in the cybersecurity space.
Ransomware is evolving with the integration of GenAI and LLMs, leading to more sophisticated attacks such as AI-driven phishing and quadruple extortion. Experts discuss how groups like CL0P and FunkSec utilize AI to enhance their operations and pressure victims, while emphasizing the need for defenders to implement AI-aware security measures across various platforms. Strategies for securing identities and leveraging API visibility against emerging threats are also highlighted.
Microsoft has introduced an autonomous AI system named Project Ire that can reverse-engineer and identify malware without human intervention. This innovative approach marks a significant advancement in cybersecurity, automating processes traditionally performed by security experts. The company continues to prioritize security, launching initiatives like the Zero Day Quest to enhance its defenses.
The new executive order issued by Trump revokes previous mandates related to digital identity, impacting the use of secure credentials in government programs. It shifts focus from software compliance to managing AI vulnerabilities and revises sanctions policies to specifically target foreign actors, thus limiting potential misuse against domestic political opponents.
Cybersecurity AI (CAI) is an open-source framework designed to assist security professionals in developing AI-driven tools for offensive and defensive cybersecurity tasks. It features over 300 AI models, built-in security tools, and a modular architecture, making it suitable for both individual researchers and organizations aiming to enhance their security measures. CAI promotes democratization and transparency in cybersecurity AI, enabling more efficient vulnerability discovery and assessment.
Augur Security leverages AI-powered behavioral modeling to preemptively block cyberattacks by identifying attack infrastructure before exploitation occurs. By integrating seamlessly with existing security tools, Augur provides actionable insights and near-zero false positive rates, effectively transforming threat detection from reactive to proactive.
Generative AI models, such as OpenAI's GPT-4, are enabling rapid development of exploit code from vulnerability disclosures, reducing the time from flaw announcement to proof-of-concept to mere hours. Security experts have observed a significant increase in the speed at which vulnerabilities are exploited, necessitating quicker responses from defenders in the cybersecurity landscape. This shift underscores the need for enterprises to be prepared for immediate action upon the release of new vulnerabilities.