This article outlines various security risks associated with AI agents and their infrastructure, including issues like chat history exfiltration and prompt injection. It emphasizes the need for a comprehensive security platform to monitor and govern AI operations effectively.
The article discusses the importance of treating AI agent memory as a critical database, emphasizing the need for security measures like firewalls and access controls. It highlights the risks of memory poisoning, tool misuse, and privilege creep, urging organizations to integrate memory management with established data governance practices.
The article examines the security risks associated with the Model Context Protocol (MCP), which enables dynamic interactions between AI systems and external applications. It highlights vulnerabilities such as content injection, supply-chain attacks, and the potential for agents to unintentionally cause harm. The authors propose practical controls and outline gaps in current AI governance frameworks.
The article discusses the challenges that agentic artificial intelligence (AI) poses to the OODA loop (Observe, Orient, Decide, Act) framework. It highlights the complexities of integrating AI decision-making into human processes and the implications for security and governance. The author argues that a deeper understanding of these interactions is needed to manage AI systems effectively.