31 links
tagged with ai-security
Click any tag below to further narrow down your results
Links
OpenNHP is an open-source toolkit designed to implement Zero Trust security in an AI-driven environment by utilizing cryptography and advanced protocols to conceal server resources and ensure data privacy. It introduces the Network-infrastructure Hiding Protocol (NHP) and Data-object Hiding Protocol (DHP), which together enhance security against rising AI-driven cyber threats. With a focus on proactive defense and rapid response strategies, OpenNHP addresses vulnerabilities effectively while providing a modular architecture for scalability and integration with existing security systems.
Comet, an AI assistant, faces the challenge of malicious prompt injection, which manipulates its decision-making without exploiting software bugs. To combat this, Perplexity employs a defense-in-depth strategy that includes real-time detection, user controls, and transparent notifications to maintain user trust and safety.
Mondoo's Agentic Vulnerability ManagementTM autonomously identifies, prioritizes, and remediates vulnerabilities across various IT infrastructures, significantly reducing vulnerabilities and speeding up remediation processes. By leveraging AI for continuous monitoring, Mondoo enhances security posture and compliance while allowing security teams to focus on strategic initiatives. The platform offers flexible deployment options and features a proactive approach to vulnerability management through its unique Mondoo flow.
The article discusses a security vulnerability known as prompt injection that can lead to remote code execution (RCE) in AI agents. It outlines the mechanisms of this exploit, the potential impact on AI systems, and the importance of implementing robust security measures to mitigate such risks. The findings underscore the need for vigilance in the development and deployment of AI technologies.
The Critical AI Security Guidelines draft offers a comprehensive framework for securing AI deployments, focusing on multi-layered security approaches, governance adaptations, and risk management. Public comments are encouraged to enhance the guidelines, fostering community engagement and collaboration in developing AI security standards.
Google is enhancing cybersecurity in the AI era by introducing tools like CodeMender, an AI-powered agent that autonomously fixes code vulnerabilities, and launching an AI Vulnerability Reward Program to encourage security research. They are also expanding their Secure AI Framework to address risks associated with autonomous AI agents, aiming to use AI to strengthen defenses against cyber threats.
OpenAI has made its first investment in the cybersecurity sector, signaling a strategic move to enhance its capabilities in addressing cyber threats. The investment aims to bolster the security of AI technologies and safeguard user data against emerging cyber risks.
As AI becomes integral to security operations, the speed of cyber threats demands a shift away from human oversight in tactical responses. Emphasizing the need for AI security over AI safety, the article advocates for a containment strategy that allows AI to innovate within strict boundaries to ensure accountability and mitigate risks.
Large Language Models (LLMs) are vulnerable to data poisoning attacks that require only a small, fixed number of malicious documents, regardless of the model's size or training data volume. This counterintuitive finding challenges existing assumptions about AI security and highlights significant risks for organizations deploying LLMs, calling for urgent development of robust defenses against such vulnerabilities.
SecureVibes is an AI-powered security system designed to detect vulnerabilities in codebases through a collaborative multi-agent architecture. Utilizing five specialized agents, it provides thorough security assessments, threat modeling, code reviews, and dynamic testing across multiple programming languages while offering customizable reporting options.
Pillar Security offers a comprehensive platform for managing security risks throughout the AI lifecycle, providing tools for asset discovery, risk assessment, and adaptive protection. The solution integrates seamlessly with existing infrastructures, enabling organizations to maintain compliance, protect sensitive data, and enhance the trustworthiness of their AI systems. With real-time monitoring and tailored assessments, Pillar aims to empower businesses to confidently deploy AI initiatives while mitigating potential threats.
Check Point has acquired Lakera to enhance its capabilities in AI-driven security solutions, aiming to build a unified AI security stack. This acquisition is part of Check Point's strategy to address evolving cybersecurity threats with advanced technology.
AI is transforming workplace productivity but introduces significant security challenges, as revealed by a survey of security leaders. Key issues include limited visibility into AI tool usage, weak policy enforcement, unintentional data exposure, and unmanaged AI, highlighting the urgent need for enhanced governance and security strategies to mitigate risks associated with AI adoption.
The article discusses the current state of AI security readiness among organizations, emphasizing the importance of developing robust security measures to protect against potential AI-related threats. It highlights the challenges and strategies companies face in implementing effective AI security protocols.
Rowhammer attacks pose a significant threat by allowing malicious actors to manipulate AI models through a single bit flip, potentially compromising their integrity and security. This vulnerability highlights the need for enhanced protections in the development and deployment of AI systems.
A new zero-click vulnerability named 'EchoLeak' has been discovered in Microsoft 365 Copilot, allowing attackers to exfiltrate sensitive data without user interaction. Although Microsoft has fixed the issue and there is no evidence of real-world exploitation, the flaw highlights significant risks associated with AI-integrated systems and emphasizes the need for improved security measures against such vulnerabilities.
Arcanum presents a three-stage model for scaling AI adoption within security teams, emphasizing the importance of addressing privacy concerns and organizational trust before moving to task-level assistance, domain-level automation, and ultimately, organization-wide automation. The model outlines practical steps for integrating AI into security workflows, aiming for significant efficiency improvements over time.
Octane Security provides AI-powered tools that help organizations identify and fix critical vulnerabilities in their code before they lead to costly hacks. By integrating into CI/CD pipelines, Octane enhances the security of software development, reduces the need for expensive audits, and improves overall confidence in code quality. Users have praised its efficiency, speed, and ability to uncover issues that traditional manual reviews might miss.
Organizations are rapidly adopting AI technologies without sufficient security measures, creating vulnerabilities that adversaries exploit. The SANS Secure AI Blueprint offers a structured approach to mitigate these risks through three key imperatives: Protect AI, Utilize AI, and Govern AI, equipping cybersecurity professionals with the necessary training and frameworks to secure AI systems effectively.
Aurascape offers an AI-native security architecture designed to enhance visibility and control over AI tool usage within enterprises, addressing gaps left by traditional security measures. It enables real-time detection and classification of AI interactions, ensuring sensitive data protection and compliance while fostering innovation. The platform empowers organizations to manage shadow AI and safeguard data without hindering AI adoption.
Secure Code Warrior has released a set of free AI Security Rules on GitHub to help developers ensure secure coding practices while using AI-assisted coding tools. These lightweight, adaptable rules serve as guidelines for safer defaults in projects, addressing common security flaws across web frontend, backend, and mobile applications. The initiative aims to enhance security in the fast-paced environment of modern software development.
DevSecCon25 is a virtual conference focused on the intersection of AI and security, highlighting the need for secure AI-driven software development. The event features keynotes, hands-on demos, and discussions led by industry experts, addressing the challenges and innovations in AI security while promoting community engagement and fun activities. Attendees will explore critical strategies for navigating the evolving landscape of AI technology while ensuring security and governance.
Fraim provides AI-powered workflows for security engineers to identify and manage vulnerabilities throughout the development lifecycle. It offers tools for risk flagging, code security analysis, and infrastructure-as-code analysis, enhancing visibility and focusing security resources on high-priority issues. The platform integrates seamlessly into CI/CD processes and supports customization for specific organizational needs.
Researchers from Check Point discovered a critical remote code execution vulnerability dubbed "MCPoison" in the Cursor AI coding tool, allowing attackers to alter approved Model Context Protocol (MCP) configurations to inject malicious commands. Cursor has since released an update to address the flaw, requiring user approval for any modifications to MCP Server entries, but the incident raises concerns about trust in AI-assisted development environments. Further vulnerabilities in AI platforms are expected to be reported by Check Point.
A user successfully utilized ChatGPT-4o to create a replica of his passport in just five minutes, raising significant concerns about the potential misuse of AI in generating fraudulent identification documents. This incident highlights the need for stronger security measures and monitoring of AI capabilities to prevent identity theft and other criminal activities.
The article provides an in-depth explanation of the Model Context Protocol (MCP), highlighting its role in enhancing the capabilities of large language models (LLMs) through improved context provision. It also conducts a detailed threat model analysis, identifying key security vulnerabilities and potential attack vectors associated with MCP's functionalities, such as sampling and composability.
Symbiotic Security v1 integrates AI-driven code security directly into developers' IDEs, providing real-time detection, remediation, and educational insights for coding vulnerabilities. By automatically suggesting secure code replacements and facilitating interactive learning, it enhances developer productivity and ensures clean code from the outset. Teams have successfully mitigated thousands of vulnerabilities before they reach production, streamlining the development process.
Google DeepMind has released a white paper detailing the security enhancements made to Gemini 2.5, focusing on combating indirect prompt injection attacks which pose cybersecurity risks. The article highlights the use of automated red teaming and model hardening to improve Gemini's defenses, ensuring the AI can better recognize and disregard malicious instructions while maintaining performance on normal tasks.
HackerOne has disbursed $81 million in bug bounties over the past year, reflecting a 13% year-over-year increase. The demand for AI security has surged, with AI vulnerabilities rising by over 200%, while traditional vulnerabilities like XSS and SQL injection are declining. A significant number of researchers are now utilizing AI tools to enhance their security testing efforts.
Dreadnode, co-founded by Will Pearce and Nick Landers, successfully transitioned from pre-seed funding to a $14M Series A round led by Decibel, thanks to their rapid growth and demand for their offensive AI security solutions. The founders navigated the fundraising process with a focus on building strong customer relationships first, which ultimately attracted investors on their own terms, allowing them to bypass several interim funding rounds.
Security questionnaires for AI vendors must evolve beyond traditional SaaS templates to effectively address the unique risks associated with AI systems. Delve proposes a new framework focusing on governance, data handling, model security, lifecycle management, and compliance to enhance trust and reliability in AI procurement. This approach aims to foster better communication between vendors and enterprises, ultimately leading to more secure AI solutions.