AI & Automation · March 2026 · 6 min read

Your AI Agents Have More Access Than Most of Your Employees. And Nobody Is Managing Them.

By Morris Stern · Stern Technology Advisory

Every enterprise has a process for onboarding a new employee. Background check. Access request. Role-based permissions. Manager approval. Audit trail.

Now ask yourself: what is the process for onboarding a new AI agent?

In most organizations, the honest answer is that there is no process. An agent gets deployed with a shared API key, inherits whatever permissions the service account already has, and starts operating against production systems. No identity. No scoped access. No audit trail on what it actually does once it is running.

We would never give a new hire on day one unrestricted access to the ERP pricing engine, the warehouse management system, the CRM, and the order history database simultaneously. But that is exactly what most enterprises are doing with AI agents right now.

The data is starting to confirm what practitioners already suspect. A 2026 survey of 750 enterprise technology leaders conducted by Opinion Matters on behalf of Gravitee found that 88% reported confirmed or suspected AI agent security incidents in the past twelve months. Only 14.4% of agents went live with full security and IT approval. And only 22% of organizations treat AI agents as independent, identity-bearing entities. Separately, Saviynt’s 2026 CISO AI Risk Report found that 47% of CISOs observed agents exhibiting unintended or unauthorized behavior, while just 5% felt confident they could contain a compromised agent.

Whether the true incident rate is closer to 47% or 88%, the direction is clear. Agents are operating in production environments without the governance infrastructure that every other actor in the enterprise is subject to.

What this looks like in practice

I run technology for a 265-store retail enterprise. The agent use cases are not hypothetical for us. They are real and growing.

A pricing agent needs access to ERP pricing tables to adjust price tags based on margin targets, competitor data, and promotional calendars. An inventory agent connects to the warehouse management system to trigger replenishment or flag overstock conditions. A customer service agent pulls from the CRM and order history to resolve inquiries.

Each of these agents is useful. Each of them also touches systems that contain some of the most sensitive data in the business: cost structures, supplier terms, customer purchase history, and fulfillment details.

When these agents share credentials with human service accounts, you lose the ability to distinguish between a human action and a machine action in your logs. When they inherit overprivileged access, a single prompt injection can escalate into unauthorized writes across multiple systems. When there is no audit trail on tool invocations, you cannot reconstruct what happened after an incident.

This is not speculative. CyberArk Labs demonstrated exactly this pattern in a financial services deployment, where a malicious prompt embedded in a data field exploited an agent’s existing credentials to access sensitive information the agent was never intended to reach. The agent passed every identity check. It had valid credentials. The problem was that no one had scoped what those credentials should allow at the execution layer.

Security researchers call this the “confused deputy” pattern: a trusted program with high privileges gets tricked into misusing its own authority. VentureBeat recently covered a case involving a Meta AI agent that passed every identity check while executing actions it was never designed to perform. The credentials were valid. The behavior was not. And nothing in the existing security stack caught the difference.
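The fix for the confused deputy is an authorization check at the execution layer, not just at login. A minimal sketch of the idea, with an entirely hypothetical policy table and tool names: even after credentials pass, every individual tool call is evaluated against a per-agent allowlist.

```python
# Sketch of an execution-layer policy check. The policy table, agent ids,
# and tool names are illustrative, not tied to any specific product.

POLICY: dict[str, set[str]] = {
    "support-agent-01": {"crm.orders.read", "crm.tickets.write"},
}

def invoke_tool(agent_id: str, tool: str, credentials_valid: bool) -> str:
    if not credentials_valid:
        raise PermissionError("authentication failed")
    # Identity checks alone are not enough: a confused deputy passes them
    # with valid credentials. The scope check below is what catches it.
    if tool not in POLICY.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not scoped for {tool}")
    return f"executed {tool}"

print(invoke_tool("support-agent-01", "crm.orders.read", True))
try:
    # Valid credentials, out-of-scope action: blocked at execution time.
    invoke_tool("support-agent-01", "erp.pricing.write", True)
except PermissionError as e:
    print(e)
```

The point of the sketch is where the denial happens: after authentication succeeds, per call, based on what this specific agent is allowed to do.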

The industry just woke up

RSAC 2026 made it clear that the vendor community now recognizes this as a first-order problem.

Microsoft announced Agent 365, a control plane for agent identity, governance, and runtime security built into the existing Entra and Defender stack. Cisco extended zero trust to agents through AI Defense, including MCP policy enforcement and adaptive risk protection. CrowdStrike shipped runtime controls that evaluate agent behavior after authentication, not just at the point of access.

NIST launched an AI Agent Standards Initiative in early 2026, with a Request for Information specifically focused on authentication, authorization, and governance of agents in enterprise environments. The NCCoE published a concept paper on software and AI agent identity and authorization, signaling that federal expectations are formalizing quickly.

Bessemer Venture Partners published an analysis calling AI agent security the defining cybersecurity challenge of 2026. Their framework maps the attack surface across four layers: the endpoint, the API and MCP gateway, SaaS platforms, and the identity layer. Their conclusion is that most enterprises are applying their existing application security playbook to agents, and that playbook was never designed for autonomous actors.

These are not fringe signals. This is core enterprise infrastructure shifting to address a gap that has been growing since agents moved from demo to production.

What leaders should actually do

The temptation is to wait for vendors to solve this. That will not work. Vendor tooling addresses specific layers of the problem. The organizational and operational foundations have to come from inside the enterprise.

Build an agent identity registry. Every agent operating in your environment needs a managed identity, separate from human service accounts and separate from other agents. You need to know what exists before you can govern it. If you cannot answer “how many agents are running in production right now” with a specific number, you are not ready to scale.
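What a registry needs to capture is small: a unique identity, an accountable owner, a purpose, and the tools the agent may touch. A minimal sketch, with illustrative field names rather than any particular identity product's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of an agent identity registry. Fields and ids are
# illustrative assumptions, not a vendor schema.

@dataclass
class AgentIdentity:
    agent_id: str            # unique; never a shared human service account
    owner: str               # the accountable human team
    purpose: str             # what the agent is for
    allowed_tools: set[str]  # tools it may invoke
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentIdentity] = {}

    def register(self, identity: AgentIdentity) -> None:
        if identity.agent_id in self._agents:
            raise ValueError(f"duplicate agent id: {identity.agent_id}")
        self._agents[identity.agent_id] = identity

    def count_in_production(self) -> int:
        # Answers "how many agents are running right now" with a number.
        return len(self._agents)

registry = AgentRegistry()
registry.register(AgentIdentity(
    agent_id="pricing-agent-01",
    owner="merchandising-tech",
    purpose="adjust price tags to margin targets",
    allowed_tools={"erp.pricing.read", "erp.pricing.write"},
))
print(registry.count_in_production())  # 1
```

Even a table this thin answers the readiness question above: if you cannot enumerate it, you cannot govern it.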

Scope permissions with just-in-time access. Agents should operate with the minimum permissions needed for each specific task, granted at the moment of execution, not standing permissions inherited from a broadly privileged service account. A pricing agent that needs read access to margin tables does not need write access to the CRM. Scope it accordingly.
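One way to implement this, sketched below under the assumption of an in-memory grant store: mint a short-lived token scoped to the task at execution time, and check both expiry and scope on every use. In production this role is played by your identity provider, not a dictionary.

```python
import secrets
import time

# Hypothetical just-in-time grant store. Scope strings and the TTL
# are illustrative assumptions.

GRANTS: dict[str, tuple[str, frozenset[str], float]] = {}

def mint_token(agent_id: str, scopes: set[str], ttl_seconds: int = 300) -> str:
    """Issue a short-lived token scoped to one task, not a standing credential."""
    token = secrets.token_urlsafe(16)
    GRANTS[token] = (agent_id, frozenset(scopes), time.time() + ttl_seconds)
    return token

def authorize(token: str, required_scope: str) -> bool:
    grant = GRANTS.get(token)
    if grant is None:
        return False
    _, scopes, expires_at = grant
    # Both conditions must hold: the grant is still live and covers the scope.
    return time.time() < expires_at and required_scope in scopes

tok = mint_token("pricing-agent-01", {"erp.margin.read"})
print(authorize(tok, "erp.margin.read"))     # True: within scope
print(authorize(tok, "crm.contacts.write"))  # False: pricing agent cannot write CRM
```

The design choice that matters is the default: the agent holds nothing between tasks, and every grant expires on its own.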

Implement immutable audit trails on tool invocations. Every action an agent takes against a production system should be logged with the agent’s identity, the tool invoked, the inputs provided, and the output returned. This is not optional. Without it, incident response is guesswork.
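"Immutable" can be approximated even without specialized storage by making the log tamper-evident. A sketch, assuming a hash-chained append-only record where each entry commits to the one before it, so a retroactive edit breaks verification:

```python
import hashlib
import json
import time

# Sketch of a tamper-evident audit trail for tool invocations.
# Record fields are illustrative assumptions.

class AuditLog:
    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64

    def record(self, agent_id: str, tool: str, inputs: dict, output: str) -> None:
        entry = {
            "ts": time.time(),
            "agent_id": agent_id,  # the agent's own identity, never a shared account
            "tool": tool,
            "inputs": inputs,
            "output": output,
            "prev": self._last_hash,  # chain each entry to its predecessor
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self._records.append(entry)

    def verify(self) -> bool:
        # Recompute the chain; any edited entry invalidates every hash after it.
        prev = "0" * 64
        for entry in self._records:
            if entry["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
        return prev == self._last_hash
```

With the agent identity, tool, inputs, and output all on the record, incident response becomes replay instead of guesswork.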

Separate agent credentials from human service accounts. If your agents are authenticating with the same credentials as your human users or shared service accounts, you cannot distinguish between human and machine actions in your logs. That distinction matters for compliance, for incident response, and for understanding what your systems are actually doing.

Establish governance before scaling. The pattern across every failed agent deployment is the same: teams ship the agent first and figure out governance later. Reverse that. Define the access model, the monitoring requirements, and the escalation protocols before the agent touches production data.

The bigger picture

In my previous article, I argued that AI agents are only as effective as the data pipeline feeding them. That remains true. But data quality is only half the infrastructure equation.

The other half is trust. And trust requires identity.

An agent operating on real-time, well-governed data is powerful. An agent operating on real-time, well-governed data with scoped permissions, an auditable identity, and runtime policy enforcement is production-ready.

We are still in the early phase of building the infrastructure that makes agentic AI safe enough to trust at scale. The data layer came first. The identity layer is next. And the organizations that treat both as foundational, rather than as afterthoughts, will be the ones that actually realize the value everyone is promising.

The question is not whether your enterprise will deploy AI agents. It is whether your architecture treats them as the independent actors they are, or as invisible extensions of systems that were never designed for autonomous operation.


Working through this at your organization?

I advise technology leaders on the same decisions these articles describe. A 30-minute call is the fastest way to see if an engagement fits.

Or follow on LinkedIn for weekly writing.