Enterprises using AI agents need to take a close look at their security settings and the agentic architecture they are implementing. Last week, a new piece of research from SaaS and AI security provider AppOmni demonstrates how ServiceNow's platform AI agents can be weaponized through second-order prompt-injection attacks—all while the company's built-in protections remain enabled. That's because a software flaw isn't the cause of this vulnerability. The condition is exploited by turning intentional design against itself, and most organizations likely have no idea they're running.
The research tells an important story. AppOmni's chief security researcher, Aaron Costello, managed to trick seemingly benign AI agents into recruiting more powerful agents to execute unauthorized actions. He instructed agents to perform Create, Read, Update, and Delete operations on sensitive records and to exfiltrate information via email—all while ServiceNow's prompt injection protection was active.
The attack exploited ServiceNow's "agent discovery," a feature that allows AI agents to collaborate without being explicitly grouped. By default, agents deployed to the same virtual environment are assigned to the same team, are discoverable (Azure OpenAI LLM and Now LLM), and can invoke one another. When Costello embedded malicious instructions in a ticket description field—something a low-privileged user could do—agents accessed that field later and attempted to follow those instructions. Critically, because agents execute with the privileges of the user who initiated the interaction, a highly privileged administrator unknowingly executed the attacker's commands.
Costello wrote that after contacting the security team(s) at ServiceNow, he confirmed these agentic behaviors were intended and that the ServiceNow team updated the on-platform documentation to provide clarity.
That should catch the attention of security managers: ServiceNow confirmed these behaviors were intentional. The company updated documentation but didn't change the defaults. That means many customers are likely running in this configuration unless they've explicitly hardened their deployments.
This finding highlights a fundamental problem in enterprise security: the gap between how technology is designed to work and how it's safely operated. Organizations deploying AI agents are inheriting a complexity that traditional security frameworks weren't built to address.
Consider what's at stake. Now Assist powers helpdesk operations, asset management, incident response, and other business-critical workflows for thousands of enterprises. These agents can touch sensitive data, modify configurations, and initiate external communications. If threat actors understand how agent discovery works—and AppOmni has made that abundantly clear—this becomes exploitable at scale.
The remediation path is straightforward but highly manual. Organizations should consider configuring supervised execution mode for privileged agents (so humans review actions before they execute), turning off autonomous override properties, segmenting agent duties by team to limit lateral movement between agents, and implementing real-time monitoring for suspicious agent behavior patterns.
The deeper issue: as enterprises automate more sensitive tasks through AI agents, configuration security becomes as critical as code security. Prompt injection defenses and algorithmic safeguards matter less if agents have unfettered autonomy and broad cross-team communication privileges.
Andrew Storms, VP of Security at software distribution platform Replicated, has put guardrails in place within the organization.
"As operators of AI, there's a skill set one needs in knowing how to use these tools in the right way and how to control them within your organization. Part of our agentic security is centralizing AI control points into a repo that has all of the agent configs and all the gates and the roadblocks we've built," said Storms. That practice aligns with AppOmni's recommendation for centralized agent configuration and supervised execution modes.
This research, as well as the recent Salesforce Drift agentic AI incident, is a warning that enterprises are deploying powerful automation tools faster than the industry and security professionals can develop and deploy secure operational practices around them. Organizations using ServiceNow's Now Assist should consider treating agent configuration with the same rigor they'd apply to service account permissions or API token management. And remember that the default settings won't protect you: a secure configuration will.