8 links tagged with all of: ai-security + risk-management
Click any tag below to further narrow down your results
+ compliance (4)
+ governance (3)
+ employee-monitoring (1)
+ collaboration (1)
+ penetration-testing (1)
+ devsecops (1)
+ software-development (1)
+ vulnerabilities (1)
+ coding (1)
+ meta (1)
+ agents (1)
+ prompt-injection (1)
+ threat-intelligence (1)
+ data-protection (1)
+ policy-enforcement (1)
Links
This article argues that security must be integrated into AI development processes from the outset. It highlights the unique risks posed by AI's unpredictable behavior and stresses the importance of collaboration between AI developers and security teams to implement effective safeguards and testing methods.
This article discusses the security risks associated with AI agents, particularly prompt injection vulnerabilities. It introduces the "Agents Rule of Two," a framework designed to minimize risks by limiting the properties an agent can have in a session to avoid harmful outcomes.
The Codacy AI Risk Hub helps teams enforce secure coding practices for AI-generated code. It works to prevent vulnerabilities by tracking model usage, scanning for security risks, and detecting hardcoded secrets across projects, with the aim of maintaining code quality while leveraging AI capabilities.
This article discusses Lumia's platform for managing AI usage in organizations. It focuses on monitoring employee interactions with AI, ensuring compliance with policies, and providing risk assessments. Key features include shadow AI analysis and control measures for autonomous agents.
Pillar Security offers a comprehensive platform for managing security risks throughout the AI lifecycle, providing tools for asset discovery, risk assessment, and adaptive protection. The solution integrates seamlessly with existing infrastructures, enabling organizations to maintain compliance, protect sensitive data, and enhance the trustworthiness of their AI systems. With real-time monitoring and tailored assessments, Pillar aims to empower businesses to confidently deploy AI initiatives while mitigating potential threats.
AI is transforming workplace productivity but introduces significant security challenges, according to a survey of security leaders. Key issues include limited visibility into AI tool usage, weak policy enforcement, unintentional data exposure, and unmanaged AI agents. These gaps underscore the need for stronger governance and security strategies to mitigate the risks of AI adoption.
Organizations are rapidly adopting AI technologies without sufficient security measures, creating vulnerabilities that adversaries exploit. The SANS Secure AI Blueprint offers a structured approach to mitigate these risks through three key imperatives: Protect AI, Utilize AI, and Govern AI, equipping cybersecurity professionals with the necessary training and frameworks to secure AI systems effectively.
Security questionnaires for AI vendors must evolve beyond traditional SaaS templates to effectively address the unique risks associated with AI systems. Delve proposes a new framework focusing on governance, data handling, model security, lifecycle management, and compliance to enhance trust and reliability in AI procurement. This approach aims to foster better communication between vendors and enterprises, ultimately leading to more secure AI solutions.