Click any tag below to further narrow down your results
Links
This article introduces Opti, an AI-driven identity and access management (IAM) tool designed to enhance security and streamline processes. It emphasizes how Opti analyzes access behavior and automates risk remediation, aiming to reduce manual oversight and improve compliance.
This article discusses a security vulnerability in the Netty library related to SMTP command injection, allowing attackers to manipulate email sending. The flaw bypasses established email security protocols like SPF, DKIM, and DMARC. The author highlights the role of AI in discovering the vulnerability and generating a patch.
The article outlines recent updates in Azure Networking, focusing on enhancements in security, reliability, and scalability for AI and cloud applications. Key features include improved NAT Gateway architecture, advanced traffic management tools, and high-capacity connectivity options for organizations. It emphasizes Azure's role in supporting the next generation of cloud solutions.
The article discusses imper.ai, a startup that raised $28M to combat AI impersonation scams. Their technology detects and stops social engineering attacks in real time across various communication channels, analyzing signals like device fingerprints to identify threats. This aims to protect organizations from impersonation attempts and fraudulent requests.
cURL's maintainer, Daniel Stenberg, has shut down the project's bug bounty program due to an overwhelming number of low-quality, AI-generated submissions. He hopes this will encourage more meaningful bug reports while maintaining public accountability for poor submissions.
Sweet Security offers a comprehensive solution for cloud defense, leveraging AI to identify and prioritize vulnerabilities. It provides real-time visibility and rapid response to threats, helping organizations secure their environments without frequent scans. The platform also simplifies compliance and governance processes.
The article discusses the potential risks of AI skills that operate with system access, highlighting how they can execute harmful commands before any review. It emphasizes the importance of treating these skills as executable code, especially in environments where trust relationships exist, making lateral movement and persistence possible. Non-technical users need to be cautious when granting permissions to ensure security.
Google introduced Agent Sandbox, a new feature for Kubernetes that enhances security and performance for AI agents. It allows rapid provisioning of isolated environments for executing agent tasks, optimizing resource use while maintaining strong operational guardrails. GKE users can also leverage Pod Snapshots for faster start-up times.
Visa is launching a Trusted Agent Protocol to protect merchants from fraudulent bots during transactions with AI agents. This protocol uses cryptographic signatures to verify trusted agents and secure transactions, allowing AI to make purchases on behalf of consumers. It aims to enhance confidence in the agentic commerce ecosystem.
Researchers revealed a serious security flaw in Docker's Ask Gordon AI that allowed attackers to execute code and steal sensitive data. The vulnerability, called DockerDash, exploited unverified metadata in Docker images, which the AI treated as executable commands. Docker has fixed the issue in version 4.50.0.
The author reports a security vulnerability in Okta's nextjs-auth0 project and submits a patch, but the contribution is misattributed to another developer. Despite raising concerns, the maintainer acknowledges using AI for the commit, resulting in confusion and unresolved issues around proper credit. The author questions the reliability of AI tools and raises concerns about Okta's response to security vulnerabilities.
Microsoft announced new features at Ignite 2025, focusing on Azure Copilot, which automates cloud management tasks like migration and optimization. The updates also highlight advancements in Azure's AI infrastructure, enhancing performance and scalability across services.
This article reviews key Internet trends and patterns observed by Cloudflare in 2025, including the rise of generative AI, traffic growth, and mobile versus desktop usage. It also highlights security measures and the evolving landscape of bots and crawlers.
This article outlines Blumira's security operations platform, highlighting its key features like AI-powered threat analysis and real-time monitoring. It emphasizes the platform's user-friendliness and quick deployment, aiming to streamline security processes for IT teams.
This article explores how new diagnostic codes and AI-driven solutions are reshaping healthcare operations, from billing to patient care. It also discusses the convergence of cyber and physical security in public and private sectors, emphasizing the need for unified systems to enhance safety and efficiency.
Meta’s secure-by-default frameworks improve mobile security by wrapping risky OS and third-party functions, making security easier for developers. Generative AI helps automate the adoption of these frameworks across Meta's extensive codebase, ensuring consistent security without sacrificing developer speed.
This article explores the use of AI models, particularly Claude Opus 4.6, to detect hidden backdoors in binary executables. While some success was noted, with a 49% detection rate for obvious backdoors, the approach remains unreliable for production use due to high false positives and limitations in analyzing complex binaries.
Anthropic's report reveals that AI agents exploited vulnerabilities in smart contracts, simulating over $550 million in potential losses. They discovered new zero-day vulnerabilities, highlighting the urgent need for improved security measures in blockchain technology.
This article outlines various security risks associated with AI agents and their infrastructure, including issues like chat history exfiltration and prompt injection. It emphasizes the need for a comprehensive security platform to monitor and govern AI operations effectively.
Lima's second major release introduces support for AI workflows, expanding its functionality beyond containers. New features include plugin support, GPU acceleration for macOS, and tools for securely managing AI agents within a virtual machine. This update aims to improve the safety and usability of AI applications.
This webinar discusses the security challenges posed by non-human identities (NHIs) as companies adopt AI agents. Experts from Okta, Guidewire, and AWS will explore threats like prompt injection and data leakage while offering strategies for secure AI integration and identity management.
This article outlines how Dux AI helps organizations manage security vulnerabilities by identifying exploitable risks and applying quick mitigations. It emphasizes the importance of acting swiftly to protect against threats in a rapidly changing environment. Dux aims to streamline the remediation process, allowing teams to focus on critical issues.
Microsoft aims to replace its C and C++ codebase with Rust by 2030, leveraging AI to automate the translation process. They're hiring engineers to develop tools for this extensive project, which is part of a broader effort to improve software security and reduce technical debt. However, a recent update clarifies that this initiative is a research project, not a direct rewrite of Windows.
A security researcher revealed how attackers can exploit Anthropic's Claude AI by using indirect prompt injections to extract user data. By tricking Claude into uploading files to the attacker's account, sensitive information, including chat conversations, can be exfiltrated. The researcher reported this issue, but Anthropic initially dismissed it as a model safety concern.
This article offers a checklist to help platform engineers and SREs secure cloud and container workloads. It emphasizes the need for updated strategies in light of expanding attack surfaces and the integration of AI. The checklist covers asset inventory, vulnerability assessment, and compliance monitoring.
Chinese state-sponsored hackers used Anthropic's AI tool, Claude, to automate cyberattacks on around 30 organizations worldwide, succeeding in several breaches. They tricked the AI into bypassing security protocols by framing malicious tasks as routine cybersecurity work. This marks a significant shift in cybercrime, highlighting the need for enhanced AI-driven defenses.
Microsoft Copilot allows non-technical users to create AI agents easily, but this can lead to serious security vulnerabilities. A recent report shows how these agents can be manipulated to leak sensitive data and cause data exposure. The simplicity of deployment makes it easy for users to overlook necessary security measures.
Metis is an open-source tool developed by Arm to enhance security code reviews using AI. It leverages large language models for semantic understanding, making it effective in identifying vulnerabilities in complex codebases. The tool is extensible and supports multiple programming languages.
A Chinese state-sponsored group executed a sophisticated cyber espionage campaign using AI, significantly reducing human involvement. The AI tool, Claude Code, autonomously identified targets, exploited vulnerabilities, and extracted sensitive data, marking a new era in cyberattacks.
DigitalOcean has launched a 1-Click deployment for OpenClaw, an AI tool designed for continuous operation in secure environments. This deployment simplifies running and managing agentic AI while addressing key security and operational challenges.
This article discusses how AI is changing the code review process for both solo developers and teams. It emphasizes the need for evidence of working code, highlights the risks of relying too heavily on AI, and outlines best practices for integrating AI into code reviews while maintaining human oversight.
Satya Nadella's annual letter outlines Microsoft's focus on AI and innovation as it navigates a significant technological shift. The company achieved record financial performance while emphasizing the importance of security, quality, and the transformative potential of AI across various industries.
GitHub Agentic Workflows automate tasks in your repositories using AI. You can define workflows in markdown, and they integrate with GitHub features like Actions and Issues. The system prioritizes security with sandboxed execution and limited permissions.
This article examines how well AI models Claude Code and OpenAI Codex can identify Insecure Direct Object Reference (IDOR) vulnerabilities in real-world applications. It reveals that while these models excel in simpler cases, they struggle with more complex authorization logic, leading to a high rate of false positives.
This article explains how AI is changing the code review process, emphasizing the need for evidence of code functionality rather than just relying on AI-generated outputs. It contrasts solo developers’ fast-paced workflows with team dynamics, where human judgment remains essential for quality and security. The piece outlines best practices for integrating AI into development and review processes.
This article argues that traditional identity-based access control fails to secure delegation for AI agents. It advocates for capability systems that explicitly handle authority, allowing permissions to be derived and limited as tasks change. By focusing on the explicit transfer of authority, it aims to prevent common security issues like the "confused deputy" problem.
This article explains how Vectra AI helps identify security threats that move from AWS to on-premises and SaaS environments. It highlights the platform's capability to detect more high-risk threats faster and offers a chance to see a live demo with a security engineer.
Runlayer provides a platform that connects AI tools to enterprise systems while ensuring security and observability. It scans for vulnerabilities, controls access, and allows teams to share trusted resources easily. This helps prevent data leaks and manage AI usage effectively.
This article benchmarks GPT-5.1, Claude Opus 4.5, and Gemini 3 Pro for security operations tasks. GPT-5.1 and Opus 4.5 show improved accuracy and speed, while Gemini 3 Pro lags behind. The findings help teams choose the best AI model for automation in SecOps.
Sumo Logic has been named among the top five in Gartner's 2025 Critical Capabilities for Security Information and Event Management (SIEM). The report highlights the platform's advanced features, including AI-driven insights and threat detection, which help organizations modernize their security operations.
This article analyzes the quality, security, and maintainability of code generated by leading AI models like GPT-5.2 High and Gemini 3 Pro using SonarQube. It presents findings on functional performance, complexity, concurrency issues, and security vulnerabilities across various models.
The article discusses the security vulnerabilities associated with OpenClaw AI, particularly as companies increasingly integrate AI agents into their workflows. Experts warn about prompt injection risks and the potential for unauthorized access to sensitive data, emphasizing the need for companies to adopt strict security measures.
This article explores how AI agents, specifically Claude Code, streamline the threat hunting process in security operations. Using Model Context Protocol (MCP) servers, analysts can quickly gather evidence and prioritize threats for investigation, transforming a traditionally manual task into a more efficient workflow.
This article presents findings from a survey of over 1,100 developers examining their views on generative AI in coding. Key concerns include low trust in AI outputs, significant security risks, and the inconsistent verification of AI-generated code. The report also highlights how experience influences developers' interactions with AI tools.
A2UI is a protocol that allows AI agents to create interactive user interfaces without executing code, ensuring security by using only approved components. The system supports various frameworks and streams UI updates in real-time for a seamless user experience. It's currently in public preview and welcomes community contributions.
Docker Desktop 4.50 introduces significant improvements for developers, focusing on seamless debugging, enhanced security, and AI integration. Key features include free access to Docker Debug, enhanced IDE support, and enterprise-level controls for managing security policies. These updates aim to streamline workflows while maintaining productivity and compliance.
The article discusses the security challenges of AI agents, likening them to early e-commerce risks. It outlines necessary layers of security—like supply chain integrity and prompt injection defense—to make AI interactions trustworthy and safe.
Aikido Security has identified a vulnerability in GitHub Actions and GitLab CI/CD workflows that allows AI agents to execute malicious instructions, potentially leaking sensitive information. The flaw affects multiple companies and demonstrates how AI prompt injection can compromise software supply chains.
Dessix is a platform designed to enhance collaboration between humans and AI by creating a structured workspace. It organizes information dynamically, allowing users to focus on their thought processes while working seamlessly with AI. Key features include personalized workflows, contextual auto-extraction, and local data security.
This article introduces Sumo Logic's Dojo AI, a new approach to security operations that emphasizes resilience over reaction. It details how specialized AI agents streamline analyst workflows by summarizing alerts, generating queries, and providing context, allowing analysts to focus on significant threats rather than drowning in noise.
This article outlines a series of webinars focused on AI security. Participants will earn a certification that indicates their understanding of AI behavior, security risks, and best practices for safe AI adoption.
The 2025 Cloudflare Radar Year in Review outlines key Internet trends and patterns observed throughout the year, based on extensive network data. It covers traffic growth, AI usage, connectivity issues, and security threats, highlighting significant shifts in Internet services and user behavior. The report provides detailed insights through interactive charts and comparisons across various regions.
Shannon is an AI tool designed to autonomously conduct penetration tests on web applications. It identifies vulnerabilities by executing real exploits, not just alerts, helping teams secure their code continuously rather than waiting for annual tests. This approach closes the security gap that arises from frequent code deployment.
This article presents a security reference designed to help developers identify and mitigate vulnerabilities in AI-generated code. It highlights common security anti-patterns, offers detailed examples, and suggests strategies for safer coding practices. The guide is based on extensive research from over 150 sources.
TierZero offers AI production agents that streamline incident management, alerts, and support queries for engineering teams. By automating investigations and providing context-driven insights, it reduces the time engineers spend troubleshooting, allowing them to focus on development. The system aims to enhance efficiency while maintaining security through auditable processes.
Guillermo Rauch discusses the advancements in AI's ability to write complex software, questioning whether these developments indicate true super-intelligence. He outlines specific challenges for AI to tackle, such as identifying security vulnerabilities and rewriting compilers, as benchmarks for assessing AI's capabilities in software engineering.
Codacy introduces a hybrid code review engine that enhances Pull Request feedback by identifying logic gaps, security issues, and code complexity. It automates the review process, letting developers ship code faster and with more confidence.
The article discusses how companies are prioritizing AI budgets over traditional SaaS tools, driven by board expectations and market demand. It emphasizes the need for businesses to address data and process readiness before fully leveraging AI, while also highlighting the trend toward multi-product strategies in response to AI advancements.
Claude is being tested as a Chrome extension to enhance browser-based AI capabilities while addressing security risks like prompt injection. The pilot aims to gather feedback on safety and usability before a broader release, with participants having control over what Claude can do and access.
This article analyzes a report comparing AI-generated and human-written code, focusing on the higher incidence of issues in AI pull requests. Key findings show that AI code often has more critical errors, readability problems, and security vulnerabilities, highlighting the need for better review processes.
The article discusses the importance of treating AI agent memory as a critical database, emphasizing the need for security measures like firewalls and access controls. It highlights the risks of memory poisoning, tool misuse, and privilege creep, urging organizations to integrate memory management with established data governance practices.
OpenClaw is an open-source AI assistant platform that operates directly on your machine, integrating with popular chat apps like WhatsApp and Discord. This rebranded project emphasizes user control over data and infrastructure while introducing new features and enhanced security measures. The team is also expanding to manage growth and improve the platform.
This article discusses two critical vulnerabilities found in Chainlit, an open-source framework for chatbots. These flaws could allow attackers to access sensitive files and take over cloud accounts, highlighting the distinct security risks of interconnected AI systems.
The article discusses Stakpak's efforts to simplify DevOps by addressing the challenges developers face with infrastructure management. CEO George Fahmy highlights the shortcomings of current AI tools in automating tasks that developers dislike and outlines Stakpak's solutions for security, tool fragmentation, and knowledge sharing.
This article critiques traditional policy-based data loss prevention (DLP) methods, arguing they can't adapt to the complexity of modern data. It introduces ORION, a solution that uses AI agents to provide context-aware detection of data exfiltration incidents, improving accuracy and reducing false positives. ORION learns organizational data patterns and integrates various data sources for comprehensive protection.
Vega offers a solution for security operations without the need for data migration or complex setups. Its AI-powered analytics and detection provide immediate visibility across all data, enabling faster and more effective security responses. You maintain control over your data while benefiting from rapid onboarding.
Ashu Garg reviews last year's AI predictions and outlines new expectations for 2026. Key themes include the evolution of AI in enterprise settings, the rise of agent-based workflows, and increased security concerns as AI systems become more integrated into business processes.
PropelAuth offers a specialized authentication solution designed for B2B businesses, focusing on both human and AI user onboarding. It features customizable components, enterprise-grade security, and self-service setups to streamline user management. The platform supports various growth stages, making it suitable for startups to established enterprises.
This article analyzes the security of over 20,000 web applications generated by large language models (LLMs). It identifies common vulnerabilities, such as hardcoded secrets and predictable credentials, while highlighting improvements in security compared to earlier AI-generated code.
This article discusses Datadog's new feature that uses AI to classify vulnerabilities identified by Static Application Security Testing (SAST) as true or false positives. The aim is to streamline the review process, allowing teams to focus on genuine security risks while filtering out distractions.
This article analyzes Vercel's performance during Black Friday and Cyber Monday 2025, highlighting over 115 billion requests and 33.6% growth year-over-year. It details how Vercel managed traffic spikes efficiently through features like AI Gateway, Fluid compute, and Incremental Static Regeneration.
This article outlines how Context AI enhances business operations by automating workflows and integrating with existing tools. It emphasizes the platform's ability to learn from users, generate deliverables, and ensure security in deployment options. The deployment process is designed to be quick, taking less than a month from discovery to rollout.
Vijil provides a framework for building reliable, secure, and compliant AI agents. It addresses enterprise concerns about trust through hardened models, continuous testing, and adaptive defenses, helping organizations deploy AI solutions faster and with greater confidence.
This article outlines seven key habits for development teams using AI coding tools. It emphasizes the importance of managing both human and AI-generated code to avoid maintenance problems and technical debt. Following these guidelines helps ensure code quality and security.
Researchers discovered a vulnerability in ChatGPT that allows the exfiltration of user data, with the attack sending data directly from ChatGPT servers. This exploit, called ZombieAgent, builds on a previous attack known as ShadowLeak and demonstrates the ongoing security challenges in AI chatbots.
Manus Sandbox is a cloud-based virtual machine that runs isolated tasks for AI models. It securely stores files and executes operations without affecting local resources. Users can manage their tasks and collaborate while maintaining control over sensitive data.
GitHub Agentic Workflows automate repository tasks using AI, allowing users to create workflows in markdown instead of YAML. It integrates with GitHub features for improved efficiency, all while maintaining security through sandboxed execution and controlled permissions. The tool is still in early development, so caution is advised.
Aisy is an AI-driven tool that helps organizations manage and prioritize security data. It focuses on identifying root causes of issues, making it easier to address critical threats. The platform aims to cut through the noise of excessive data and highlight what truly matters.
This article discusses the security challenges of deploying AI and machine learning workloads on Oracle Kubernetes Engine and Oracle Cloud Infrastructure. It highlights the shared responsibility model for security and outlines strategies for protecting against evolving threats, including runtime detection and posture management.
This article outlines Google's advancements in Chrome's security, specifically addressing the risks associated with agentic browsing. It details measures like the User Alignment Critic, origin gating, and user confirmations to combat threats like indirect prompt injection and unauthorized actions. The goal is to ensure user safety while interacting with AI-driven features.
Tailscale's Aperture is an AI gateway that enhances visibility and security for coding agent usage in organizations. It simplifies access by eliminating the need for distributing API keys, using existing Tailscale identity connections instead. The alpha version aims to help companies monitor AI adoption and usage more effectively.
Google hired NCC Group to evaluate its Private AI Compute system, which aims to enhance mobile AI capabilities using cloud resources while maintaining user privacy. The review included two phases: an architecture assessment and a detailed security analysis of various components, involving ten consultants over 100 person-days.
Security researchers found serious vulnerabilities in Ollama and NVIDIA Triton Inference Server that could allow remote code execution. Although these flaws have been patched, they highlight growing security concerns around AI infrastructure and the shift in focus from model exploitation to infrastructure vulnerabilities.
SlopGuard identifies non-existent package dependencies and supply chain attacks caused by AI coding assistants. It automates trust scoring and detects issues like typosquatting and namespace squatting across multiple programming ecosystems. The tool is designed to require no API keys and has a high detection accuracy.
This article investigates the data sent by seven popular AI coding agents during standard programming tasks. By intercepting their network traffic, the research highlights privacy and security concerns, revealing how these tools interact with user data and potential telemetry leaks.
This article explores different sandboxing techniques for executing AI code safely. It discusses the limitations of containers, the advantages of gVisor and microVMs, and the importance of policy design to prevent data leaks. The author provides a decision-making framework to choose the right sandbox based on threat models and operational needs.
HashiCorp reflects on 2025, highlighting the challenges of cloud complexity faced by organizations across various sectors. Key themes include the need for unified automation, addressing identity sprawl, and leveraging AI to enhance infrastructure management and security.
This article reviews key trends and data from Internet traffic in 2025, highlighting growth in generative AI, social media, and mobile usage. It also covers developments in security, such as post-quantum encryption and email threats, while providing insights into browser and operating system market shares.
The report outlines how AI tools are increasing software supply chain risks by generating insecure code and importing vulnerable dependencies. It also highlights that most Model Context Protocol servers lack crucial safeguards, making them unreliable for enterprise use. Endor Labs urges organizations to treat AI-generated code as untrusted and apply the same security measures as they do for human-written code.
Ciphero.ai raised $2.5 million to develop an AI Verification Layer aimed at securing AI interactions. They are looking for individuals passionate about AI security to join their team.
This article discusses Airia, an enterprise AI platform designed for secure deployment and orchestration of AI agents. It focuses on addressing cybersecurity risks while enabling teams at all skill levels to build and manage AI solutions effectively. The platform aims to streamline AI adoption across various organizational functions.
This article discusses the risks of prompt injection attacks on AI browser agents and presents a benchmark for evaluating detection mechanisms. It highlights the challenges in creating effective security systems and introduces a fine-tuned model that improves attack detection while maintaining user experience.
This article covers a webinar discussing the OWASP Top 10 for Agentic Applications, a risk framework for AI agents. Experts will explain its creation, practical implications for production agents, and how to integrate this framework into security practices. Participants can ask questions and engage with the panel.
The article examines the security risks associated with the Model Context Protocol (MCP), which enables dynamic interactions between AI systems and external applications. It highlights vulnerabilities such as content injection, supply-chain attacks, and the potential for agents to unintentionally cause harm. The authors propose practical controls and outline gaps in current AI governance frameworks.
Slack's Security Engineering team details how they developed AI agents to enhance their investigation process for security alerts. The article outlines their transition from a basic prototype to a structured system that uses defined personas to streamline investigations and improve accuracy.
This article reflects on the cofounder's experiences at Val Town over three years, detailing the product's development, security challenges, and the incorporation of AI through the chatbot Townie. It discusses the complexities of building a startup in a rapidly evolving tech landscape and the struggle to achieve profitability.
StrongDM introduces Leash, an open-source tool designed to manage and secure the actions of AI agents. It enables real-time policy enforcement by monitoring agent behavior and applying context-aware rules, ensuring that these autonomous systems operate within defined limits.
This article outlines five key security features expected to dominate in 2026, including supply chain malware detection and AI-based vulnerability management. It also highlights three important capabilities that should be prioritized, such as advanced application detection and real-time AI threat modeling.
This article breaks down the GTG-1002 campaign, the first instance of an AI agent executing an intrusion chain. It highlights the strengths and weaknesses of AI in offensive security workflows and explains how XBOW helps defenders assess their vulnerabilities effectively.
This article discusses vulnerabilities in AI agent frameworks, particularly how they handle tool calls. It emphasizes the gap between theoretical security models and practical implementations, highlighting the risks of trusting LLM outputs without proper validation.