Click any tag below to further narrow down your results
+ cybersecurity
(11)
+ prompt-injection
(10)
+ risk-management
(8)
+ vulnerabilities
(8)
+ governance
(6)
+ compliance
(5)
+ software-development
(4)
+ community
(3)
+ cloud-security
(3)
+ vulnerability
(3)
+ vulnerability-management
(3)
+ threat-modeling
(2)
+ mcp
(2)
+ remote-code-execution
(2)
+ coding
(2)
Links
This article discusses the urgent need for security to be integrated into AI development processes. It highlights the unique risks posed by AI's unpredictable nature and stresses the importance of collaboration between AI developers and security teams to implement effective safeguards and testing methods.
Researchers at HiddenLayer found a flaw in the guardrails of popular AI models like GPT-5.1 and Claude. The EchoGram attack uses specific words to trick these safety systems, allowing harmful requests to bypass defenses or causing harmless requests to be flagged as dangerous.
This article details how to replicate a cyber espionage attack using Anthropic's Claude Code by jailbreaking the AI. It outlines the methods used to manipulate Claude into executing harmful operations, along with a step-by-step guide for setting up the environment and configurations needed for the attack.
This article discusses the security challenges of exposing AI workloads in Kubernetes, emphasizing the need for enhanced ingress security measures. It highlights various threats, such as resource exhaustion and prompt injection, and suggests using a specialized gateway like Calico Ingress Gateway with integrated WAF for better protection.
This webinar provides a technical demonstration on how to secure AI agents using Okta's tools. It builds on previous discussions about AI security, focusing on user authentication, API management, and access control. Participants will learn practical strategies to enhance the security of GenAI applications.
Moltbook is a social network for digital assistants, allowing them to interact and share skills. Built on OpenClaw, it uses a simple installation process and offers various functionalities, but raises security concerns due to the nature of its operations. The article discusses the creative uses of Moltbook and the potential risks involved with AI assistants.
Vectra AI offers a platform that enhances network detection and response by integrating observability across networks, identities, and cloud environments. It uses advanced AI to monitor behavior, detect threats in real-time, and streamline incident response. This approach aims to stop attacks before they escalate into breaches.
The article discusses how AI agents could spread harmful instructions, similar to the Morris worm that infected early Internet computers. These "prompt worms" exploit AI's nature of following commands, potentially leading to widespread security issues. Researchers warn that this new type of contagion could emerge as AI systems communicate with each other.
This whitepaper discusses the unique identity challenges posed by AI agents that operate beyond simple interactions. It highlights the limitations of traditional identity management systems and outlines necessary adjustments for security and scalability. Key topics include agent security during logins, the importance of human approvals, and strategies for securing AI workflows.
A recent investigation revealed over thirty vulnerabilities in major AI-integrated IDEs, exposing them to data theft and remote code execution. The flaws stem from how AI agents interact with existing IDE features, creating new attack vectors that attackers can exploit. Immediate mitigations are possible, but a fundamental redesign of IDEs is necessary for long-term security.
The Codacy AI Risk Hub helps teams enforce secure coding practices for AI-generated code. It prevents vulnerabilities by tracking model usage, scanning for security risks, and managing hardcoded secrets across projects. This tool aims to maintain code quality while leveraging AI capabilities.
This article discusses the security risks associated with AI agents, particularly prompt injection vulnerabilities. It introduces the "Agents Rule of Two," a framework designed to minimize risks by limiting the properties an agent can have in a session to avoid harmful outcomes.
This article presents a security scanner specifically designed for AI agent skills, capable of detecting issues like prompt injection and data exfiltration. It supports various analysis methods, including static and behavioral detection, and integrates with tools like VirusTotal and cloud providers.
Researchers have developed AURA, a tool that injects fake data into knowledge graphs, making stolen proprietary data useless to attackers while remaining accessible to authorized users. This method is designed to safeguard sensitive information in AI systems from theft and misuse.
The article details a security flaw in AI agent skills, demonstrated through a logic-based attack that uses an invisible instruction hidden in a PDF. This attack bypasses human review and platform safety measures, leading to potential phishing schemes. It highlights the need for improved governance over agent behavior rather than relying solely on static defenses.
The article reviews various AI-driven security tools that analyze source code for vulnerabilities, malicious code, and bugs. The author shares personal experiences testing these tools, highlighting their effectiveness and the challenges of finding reliable products in the market. Key recommendations include ZeroPath, Corgea, and Almanax based on their performance.
SAFE-MCP is a collaborative framework designed to enhance the security of AI agents by standardizing their connections to tools and APIs. Recently adopted by the Linux Foundation and the OpenID Foundation, it provides a living catalog of security tactics and mitigations tailored for AI environments. The framework encourages open collaboration among developers, researchers, and enterprises to address evolving security challenges.
AI-Infra-Guard (A.I.G) is a platform designed for scanning AI infrastructure vulnerabilities and assessing security risks in AI tools. It offers features like vulnerability scans, jailbreak evaluations, and API documentation for easy integration. The tool is open-source and intended for internal use by enterprises and individuals.
The article discusses the importance of securing AI agents as their use in organizations increases. It highlights risks like credential exposure and unintended behaviors, urging companies to adopt strict governance and management practices throughout the AI agent lifecycle. A unified identity platform is recommended to ensure proper oversight and control.
A serious vulnerability in ServiceNow's AI tools allows unauthenticated users to create backdoor admin accounts. Dubbed "BodySnatcher," this flaw highlights the risks of rapidly integrating AI features without proper security measures. ServiceNow has patched the issue, but potential risks remain due to custom configurations.
This article discusses Lumia's platform for managing AI usage in organizations. It focuses on monitoring employee interactions with AI, ensuring compliance with policies, and providing risk assessments. Key features include shadow AI analysis and control measures for autonomous agents.
This article explains how AI systems handle web links while protecting user data from exposure. It focuses on preventing URL-based data leaks through a mechanism that verifies if a URL is publicly accessible. The approach aims to keep users informed and in control when an unverified link is accessed.
ClawSec is a security toolkit for OpenClaw agents that installs and manages various protective skills. It offers features like integrity verification, automated security audits, and live CVE updates to safeguard against vulnerabilities.
Malicious actors can exploit default settings in ServiceNow's Now Assist AI to execute prompt injection attacks, allowing unauthorized access to sensitive data. These attacks leverage agent collaboration features, making it easy for attackers to manipulate benign requests into harmful actions without detection. Organizations must reassess their configurations to mitigate these risks.
Nullify uses AI to automate product security tasks, replacing multiple tools and minimizing the need for human intervention. It identifies vulnerabilities, triages issues, and facilitates fixes through integrations with platforms like Jira and GitHub. The system learns from its environment, continuously improving its effectiveness.
This article reviews key developments in large language models (LLMs) throughout 2025, highlighting trends such as reasoning, coding agents, and the rise of CLI tools. It details significant releases like Claude Code and the impact of agents on coding and search tasks. The author also discusses the implications of using LLMs in YOLO mode and the evolving landscape of AI applications.
The N-able Cyber Resiliency Summit focuses on the rising cyber threats faced by small and medium enterprises (SMEs) and the importance of developing robust cyber resilience strategies. Experts discuss the evolving threat landscape, the convergence of IT and SecOps, and the role of AI in modern security measures. Key topics include endpoint protection and the shifting role of backups in cybersecurity.
OpenNHP is an open-source toolkit designed to implement Zero Trust security in an AI-driven environment by utilizing cryptography and advanced protocols to conceal server resources and ensure data privacy. It introduces the Network-infrastructure Hiding Protocol (NHP) and Data-object Hiding Protocol (DHP), which together enhance security against rising AI-driven cyber threats. With a focus on proactive defense and rapid response strategies, OpenNHP addresses vulnerabilities effectively while providing a modular architecture for scalability and integration with existing security systems.
Comet, an AI assistant, faces the challenge of malicious prompt injection, which manipulates its decision-making without exploiting software bugs. To combat this, Perplexity employs a defense-in-depth strategy that includes real-time detection, user controls, and transparent notifications to maintain user trust and safety.
Mondoo's Agentic Vulnerability ManagementTM autonomously identifies, prioritizes, and remediates vulnerabilities across various IT infrastructures, significantly reducing vulnerabilities and speeding up remediation processes. By leveraging AI for continuous monitoring, Mondoo enhances security posture and compliance while allowing security teams to focus on strategic initiatives. The platform offers flexible deployment options and features a proactive approach to vulnerability management through its unique Mondoo flow.
The article discusses a security vulnerability known as prompt injection that can lead to remote code execution (RCE) in AI agents. It outlines the mechanisms of this exploit, the potential impact on AI systems, and the importance of implementing robust security measures to mitigate such risks. The findings underscore the need for vigilance in the development and deployment of AI technologies.
The Critical AI Security Guidelines draft offers a comprehensive framework for securing AI deployments, focusing on multi-layered security approaches, governance adaptations, and risk management. Public comments are encouraged to enhance the guidelines, fostering community engagement and collaboration in developing AI security standards.
Google is enhancing cybersecurity in the AI era by introducing tools like CodeMender, an AI-powered agent that autonomously fixes code vulnerabilities, and launching an AI Vulnerability Reward Program to encourage security research. They are also expanding their Secure AI Framework to address risks associated with autonomous AI agents, aiming to use AI to strengthen defenses against cyber threats.
OpenAI has made its first investment in the cybersecurity sector, signaling a strategic move to enhance its capabilities in addressing cyber threats. The investment aims to bolster the security of AI technologies and safeguard user data against emerging cyber risks.
As AI becomes integral to security operations, the speed of cyber threats demands a shift away from human oversight in tactical responses. Emphasizing the need for AI security over AI safety, the article advocates for a containment strategy that allows AI to innovate within strict boundaries to ensure accountability and mitigate risks.
Large Language Models (LLMs) are vulnerable to data poisoning attacks that require only a small, fixed number of malicious documents, regardless of the model's size or training data volume. This counterintuitive finding challenges existing assumptions about AI security and highlights significant risks for organizations deploying LLMs, calling for urgent development of robust defenses against such vulnerabilities.
SecureVibes is an AI-powered security system designed to detect vulnerabilities in codebases through a collaborative multi-agent architecture. Utilizing five specialized agents, it provides thorough security assessments, threat modeling, code reviews, and dynamic testing across multiple programming languages while offering customizable reporting options.
Pillar Security offers a comprehensive platform for managing security risks throughout the AI lifecycle, providing tools for asset discovery, risk assessment, and adaptive protection. The solution integrates seamlessly with existing infrastructures, enabling organizations to maintain compliance, protect sensitive data, and enhance the trustworthiness of their AI systems. With real-time monitoring and tailored assessments, Pillar aims to empower businesses to confidently deploy AI initiatives while mitigating potential threats.
Check Point has acquired Lakera to enhance its capabilities in AI-driven security solutions, aiming to build a unified AI security stack. This acquisition is part of Check Point's strategy to address evolving cybersecurity threats with advanced technology.
AI is transforming workplace productivity but introduces significant security challenges, as revealed by a survey of security leaders. Key issues include limited visibility into AI tool usage, weak policy enforcement, unintentional data exposure, and unmanaged AI, highlighting the urgent need for enhanced governance and security strategies to mitigate risks associated with AI adoption.
The article discusses the current state of AI security readiness among organizations, emphasizing the importance of developing robust security measures to protect against potential AI-related threats. It highlights the challenges and strategies companies face in implementing effective AI security protocols.
Rowhammer attacks pose a significant threat by allowing malicious actors to manipulate AI models through a single bit flip, potentially compromising their integrity and security. This vulnerability highlights the need for enhanced protections in the development and deployment of AI systems.
Octane Security provides AI-powered tools that help organizations identify and fix critical vulnerabilities in their code before they lead to costly hacks. By integrating into CI/CD pipelines, Octane enhances the security of software development, reduces the need for expensive audits, and improves overall confidence in code quality. Users have praised its efficiency, speed, and ability to uncover issues that traditional manual reviews might miss.
Arcanum presents a three-stage model for scaling AI adoption within security teams, emphasizing the importance of addressing privacy concerns and organizational trust before moving to task-level assistance, domain-level automation, and ultimately, organization-wide automation. The model outlines practical steps for integrating AI into security workflows, aiming for significant efficiency improvements over time.
A new zero-click vulnerability named 'EchoLeak' has been discovered in Microsoft 365 Copilot, allowing attackers to exfiltrate sensitive data without user interaction. Although Microsoft has fixed the issue and there is no evidence of real-world exploitation, the flaw highlights significant risks associated with AI-integrated systems and emphasizes the need for improved security measures against such vulnerabilities.
Organizations are rapidly adopting AI technologies without sufficient security measures, creating vulnerabilities that adversaries exploit. The SANS Secure AI Blueprint offers a structured approach to mitigate these risks through three key imperatives: Protect AI, Utilize AI, and Govern AI, equipping cybersecurity professionals with the necessary training and frameworks to secure AI systems effectively.
Aurascape offers an AI-native security architecture designed to enhance visibility and control over AI tool usage within enterprises, addressing gaps left by traditional security measures. It enables real-time detection and classification of AI interactions, ensuring sensitive data protection and compliance while fostering innovation. The platform empowers organizations to manage shadow AI and safeguard data without hindering AI adoption.
Secure Code Warrior has released a set of free AI Security Rules on GitHub to help developers ensure secure coding practices while using AI-assisted coding tools. These lightweight, adaptable rules serve as guidelines for safer defaults in projects, addressing common security flaws across web frontend, backend, and mobile applications. The initiative aims to enhance security in the fast-paced environment of modern software development.
DevSecCon25 is a virtual conference focused on the intersection of AI and security, highlighting the need for secure AI-driven software development. The event features keynotes, hands-on demos, and discussions led by industry experts, addressing the challenges and innovations in AI security while promoting community engagement and fun activities. Attendees will explore critical strategies for navigating the evolving landscape of AI technology while ensuring security and governance.
Fraim provides AI-powered workflows for security engineers to identify and manage vulnerabilities throughout the development lifecycle. It offers tools for risk flagging, code security analysis, and infrastructure-as-code analysis, enhancing visibility and focusing security resources on high-priority issues. The platform integrates seamlessly into CI/CD processes and supports customization for specific organizational needs.
Researchers from Check Point discovered a critical remote code execution vulnerability dubbed "MCPoison" in the Cursor AI coding tool, allowing attackers to alter approved Model Context Protocol (MCP) configurations to inject malicious commands. Cursor has since released an update to address the flaw, requiring user approval for any modifications to MCP Server entries, but the incident raises concerns about trust in AI-assisted development environments. Further vulnerabilities in AI platforms are expected to be reported by Check Point.
A user successfully utilized ChatGPT-4o to create a replica of his passport in just five minutes, raising significant concerns about the potential misuse of AI in generating fraudulent identification documents. This incident highlights the need for stronger security measures and monitoring of AI capabilities to prevent identity theft and other criminal activities.
The article provides an in-depth explanation of the Model Context Protocol (MCP), highlighting its role in enhancing the capabilities of large language models (LLMs) through improved context provision. It also conducts a detailed threat model analysis, identifying key security vulnerabilities and potential attack vectors associated with MCP's functionalities, such as sampling and composability.
Symbiotic Security v1 integrates AI-driven code security directly into developers' IDEs, providing real-time detection, remediation, and educational insights for coding vulnerabilities. By automatically suggesting secure code replacements and facilitating interactive learning, it enhances developer productivity and ensures clean code from the outset. Teams have successfully mitigated thousands of vulnerabilities before they reach production, streamlining the development process.
Google DeepMind has released a white paper detailing the security enhancements made to Gemini 2.5, focusing on combating indirect prompt injection attacks which pose cybersecurity risks. The article highlights the use of automated red teaming and model hardening to improve Gemini's defenses, ensuring the AI can better recognize and disregard malicious instructions while maintaining performance on normal tasks.
HackerOne has disbursed $81 million in bug bounties over the past year, reflecting a 13% year-over-year increase. The demand for AI security has surged, with AI vulnerabilities rising by over 200%, while traditional vulnerabilities like XSS and SQL injection are declining. A significant number of researchers are now utilizing AI tools to enhance their security testing efforts.
Dreadnode, co-founded by Will Pearce and Nick Landers, successfully transitioned from pre-seed funding to a $14M Series A round led by Decibel, thanks to their rapid growth and demand for their offensive AI security solutions. The founders navigated the fundraising process with a focus on building strong customer relationships first, which ultimately attracted investors on their own terms, allowing them to bypass several interim funding rounds.
Security questionnaires for AI vendors must evolve beyond traditional SaaS templates to effectively address the unique risks associated with AI systems. Delve proposes a new framework focusing on governance, data handling, model security, lifecycle management, and compliance to enhance trust and reliability in AI procurement. This approach aims to foster better communication between vendors and enterprises, ultimately leading to more secure AI solutions.