2 min read | Saved February 14, 2026
Do you care about this?
Malicious actors can exploit default settings in ServiceNow's Now Assist AI to mount prompt injection attacks that expose sensitive data. Because the attacks ride on the platform's agent collaboration features, an attacker can turn a benign request into a harmful action without detection. Organizations must reassess their configurations to mitigate these risks.
If you do, here's more
Malicious actors can exploit default settings in ServiceNow's Now Assist AI platform to carry out prompt injection attacks. These attacks leverage the platform's agent-to-agent communication capabilities, allowing unauthorized actions like copying sensitive data, modifying records, and escalating privileges. Aaron Costello from AppOmni notes that the issue arises not from a flaw in the AI itself, but from how default configurations are set up, making it easy for attackers to turn harmless requests into serious security breaches.
The underlying problem lies in how agents discover and collaborate with one another, a behavior enabled by default settings that allow them to communicate. For example, a benign agent can be manipulated into recruiting a more powerful agent to perform harmful actions, all behind the scenes and often without the victim organization's awareness. Several default configuration choices contribute to the risk, including large language models that support agent discovery and the automatic grouping of agents into a shared team.
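The cross-agent hop described above can be sketched in a few lines. This is a hypothetical, simplified model (the agent names, registry, and `DELEGATE:` marker are illustrative, not ServiceNow's actual implementation); it only shows why default discoverability lets a low-privilege agent hand attacker-injected instructions to a high-privilege teammate.

```python
# Hypothetical sketch: default agent discovery letting a benign agent
# recruit a more powerful one. Not ServiceNow's real API or data model.

class Agent:
    def __init__(self, name, privileged=False):
        self.name = name
        self.privileged = privileged

# Default-style registry: every agent on the team is discoverable by every other.
TEAM = {a.name: a for a in [Agent("summarizer"),
                            Agent("record_admin", privileged=True)]}

def handle(agent, text):
    """A benign agent processes attacker-controlled text (e.g. a ticket body).

    An injected instruction can make it 'discover' and delegate to a more
    privileged teammate -- the cross-agent hop that default settings permit.
    """
    if text.startswith("DELEGATE:"):  # injected instruction, not a user request
        target_name, action = text[len("DELEGATE:"):].split("|", 1)
        target = TEAM[target_name]    # discovery succeeds by default
        return f"{target.name} (privileged={target.privileged}) executed: {action}"
    return f"{agent.name} summarized: {text}"

# A field the victim never reads triggers the escalation behind the scenes.
print(handle(TEAM["summarizer"], "DELEGATE:record_admin|copy sensitive records"))
```

The key point is the registry: nothing in the benign agent's own privileges limits what it can reach, because discovery and delegation are granted team-wide by default.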
After responsible disclosure of these vulnerabilities, ServiceNow acknowledged that the system operates as intended but has updated its documentation to clarify the associated risks. To counteract potential prompt injection threats, organizations should configure agents for supervised execution, disable certain autonomous features, segment agent responsibilities, and actively monitor agent behavior for any signs of misuse. Without careful oversight of these configurations, organizations using Now Assist are likely exposing themselves to significant risks.
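The "supervised execution" mitigation amounts to inserting a human approval gate between an agent's proposed action and its execution. A minimal sketch, assuming a simple callable approver (the function names and policy are illustrative, not ServiceNow settings):

```python
# Hypothetical sketch of supervised execution: a human approval gate between
# a proposed agent action and its execution. Names are illustrative only.

def supervised_execute(action, requested_by, approver):
    """Run `action` only if the approver signs off.

    `approver` is a callable returning True/False -- in a real deployment this
    would be an interactive human approval step, not an automatic policy.
    """
    if not approver(requested_by, action):
        return f"BLOCKED: {requested_by} -> {action}"
    return f"EXECUTED: {action}"

def cautious_approver(requested_by, action):
    # Stand-in policy: deny any cross-agent request touching sensitive data.
    # A real supervisor would surface the request to a human instead.
    return "sensitive" not in action

print(supervised_execute("update ticket status", "triage_agent", cautious_approver))
# prints "EXECUTED: update ticket status"
print(supervised_execute("copy sensitive records", "summarizer", cautious_approver))
# prints "BLOCKED: summarizer -> copy sensitive records"
```

The design point is that the gate sits outside the agents themselves, so an injected prompt cannot talk its way past it: no matter which agent was recruited, the action still stops at the approval step.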