Malicious actors can exploit default settings in ServiceNow's Now Assist AI platform to carry out prompt injection attacks. These attacks leverage the platform's agent-to-agent communication capabilities, allowing unauthorized actions like copying sensitive data, modifying records, and escalating privileges. Aaron Costello from AppOmni notes that the issue arises not from a flaw in the AI itself, but from how default configurations are set up, making it easy for attackers to turn harmless requests into serious security breaches.
The underlying problem lies in how agents discover and collaborate with one another, a capability enabled by default settings. For example, a benign agent can be manipulated into recruiting a more powerful agent to perform harmful actions on its behalf. This all happens behind the scenes, often without the victim organization's knowledge. Two default configuration choices contribute to the risk: the default large language models support agent discovery, and agents are automatically grouped into the same team, where they can invoke one another.
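The recruitment pattern described above can be illustrated with a minimal sketch. Everything here is hypothetical (the class names, the injection marker, and the discovery logic are invented for illustration and are not ServiceNow's actual API); it only models the idea that default-on discovery within an automatically grouped team lets a manipulated low-privilege agent locate and enlist a privileged peer.

```python
# Hypothetical model of agent-to-agent discovery risk (not ServiceNow code).

class Agent:
    def __init__(self, name, privileged=False, discoverable=True):
        self.name = name
        self.privileged = privileged
        self.discoverable = discoverable  # discovery is on by default: the risk


class Team:
    """Agents grouped onto the same team can find each other by default."""

    def __init__(self, agents):
        self.agents = agents

    def discover_privileged(self):
        # Any member can locate privileged peers while discovery is enabled.
        return [a for a in self.agents if a.discoverable and a.privileged]


def handle_request(team, text):
    """A benign summarizer processes attacker-supplied text; an injected
    instruction embedded in that text steers it to recruit privileged peers."""
    if "IGNORE PREVIOUS INSTRUCTIONS" in text:  # simulated prompt injection
        return [a.name for a in team.discover_privileged()]
    return []  # normal requests never touch privileged agents


team = Team([
    Agent("summarizer"),
    Agent("record-editor", privileged=True),
])
```

The point of the sketch is that the "summarizer" itself does nothing dangerous; the breach comes from the combination of untrusted input and a default configuration that lets it reach a peer with far greater authority.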
After responsible disclosure of these vulnerabilities, ServiceNow acknowledged that the system operates as intended but has updated its documentation to clarify the associated risks. To counteract potential prompt injection threats, organizations should configure agents for supervised execution, disable certain autonomous features, segment agent responsibilities, and actively monitor agent behavior for any signs of misuse. Without careful oversight of these configurations, organizations using Now Assist are likely exposing themselves to significant risks.
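The supervised-execution recommendation can be sketched as an approval gate: a privileged action is queued for a human reviewer rather than executed autonomously. This is a conceptual illustration only; the class and method names are invented and do not correspond to ServiceNow's configuration surface.

```python
# Hypothetical sketch of supervised execution (names are illustrative).

class SupervisedAgent:
    def __init__(self, name):
        self.name = name
        self.pending = []  # actions held for human review

    def request_action(self, action, autonomous_allowed=False):
        """Queue privileged actions instead of running them autonomously."""
        if autonomous_allowed:
            return f"executed:{action}"
        self.pending.append(action)  # held until a human approves
        return f"pending:{action}"

    def approve(self, action):
        """A human reviewer releases (or implicitly denies) a queued action."""
        if action in self.pending:
            self.pending.remove(action)
            return f"executed:{action}"
        return "denied"
```

Under this gate, a recruited privileged agent cannot silently modify records: the injected request surfaces in the approval queue, where monitoring can catch it.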