
Multi-Agent Systems: Why Authorization Is the Hardest Problem in Enterprise AI


At the AI Agent Conference in New York, a session on multi-agent identity and authorization infrastructure tackled what speakers across the two-day event repeatedly named the most consequential unsolved problem in enterprise AI deployment: how do you govern what agents are allowed to do?

The session's framing was precise and worth starting with. AI agents are not human users. They're autonomous, probabilistic, non-human actors that operate dynamically across multiple enterprise systems simultaneously. Every assumption baked into existing enterprise identity and authorization infrastructure was built for humans. Agents break those assumptions systematically — and most organizations deploying agents haven't fully reckoned with what that means.

Agents Are Non-Human Actors and Your IAM System Wasn't Built for Them

Enterprise identity and access management systems were designed around a model that's been stable for decades: a human authenticates, gets a defined role, receives a corresponding set of permissions, and those permissions govern what they can access. Clean, auditable, understood.

Agents operate completely differently. They authenticate on behalf of users or systems, act autonomously across sessions, access multiple systems in sequence, synthesize information across permission boundaries, and perform actions that weren't explicitly anticipated when the permission model was designed. As the panel put it: "The hardest problem is authorization. Agents need constrained access."

The problem compounds with scale. A single agent accessing three enterprise systems and combining their outputs may be surfacing information that no individual in the organization is permitted to see in aggregate. The individual data sources are each permissioned correctly. The combination creates an exposure that the permission model doesn't address. Current IAM systems have no mechanism for governing derived information — they govern access to sources, not what can be inferred from combining them.

The Bright Line Between Probabilistic and Deterministic Systems

The most important architectural concept from the session — and one that kept surfacing across multiple talks at the conference — is what the panel called the "bright line" between probabilistic AI systems and deterministic infrastructure.

"You need a bright line. The deterministic layer matters. You need hard boundaries."

The architecture this implies: AI agents operate above the line. They reason, recommend, orchestrate, and make decisions. But critical enterprise infrastructure — permission systems, policy enforcement, audit trails, data systems — sits below the line and remains deterministic, policy-constrained, and auditable regardless of what the agent layer does.

The probabilistic nature of agents is not a bug — it's what makes them useful. But probabilistic systems making consequential decisions about what to access, modify, or act on in enterprise systems without deterministic guardrails beneath them is the failure mode that creates real risk. The agent reasons. The infrastructure enforces. Those functions have to remain separate.
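The separation described here can be made concrete with a minimal sketch: the agent layer proposes actions, and a deterministic policy layer below the "bright line" decides whether they run. All names (`Action`, `ALLOWED`, `enforce`) are illustrative, not from the session.

```python
# A minimal sketch of the "bright line": the agent reasons and recommends,
# but a deterministic allowlist decides what actually executes.
from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    agent_id: str
    verb: str      # "read", "write", "execute"
    resource: str  # e.g. "erp/invoices"


# Deterministic policy: fixed at deploy time, never modified by model output.
ALLOWED = {
    ("invoice-agent", "read", "erp/invoices"),
    ("invoice-agent", "write", "erp/drafts"),
}


def enforce(action: Action) -> bool:
    """Hard boundary: same input, same answer, every time, regardless of
    what the probabilistic agent layer above it decided to attempt."""
    return (action.agent_id, action.verb, action.resource) in ALLOWED


# The agent may *recommend* anything; only allowed actions pass the boundary.
assert enforce(Action("invoice-agent", "read", "erp/invoices"))
assert not enforce(Action("invoice-agent", "write", "crm/accounts"))
```

The design point is that `enforce` lives below the line: it is ordinary deterministic code, so it can be tested, audited, and reasoned about independently of any model behavior.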


Over-Permissioned Agents Are the Most Common Security Risk

The session identified the main security concerns that organizations are already running into with multi-agent deployments. Over-permissioned agents are at the top of the list. The path of least resistance when building agent systems is to give them broad access so they can accomplish whatever task they're assigned. The problem is that broad access creates blast radius — an agent behaving unexpectedly or being manipulated through prompt injection has a much larger potential impact if it has extensive system permissions than if its access is tightly scoped.

The prescription is minimum viable permissions: access scoped to the specific workflow the agent is designed to execute, with hard limits on what it can modify versus what it can only read. This is the principle of least privilege applied to non-human actors, a well-understood security concept that most agent deployments are currently ignoring.
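Least-privilege scoping of this kind can be expressed declaratively, one scope per workflow, with write access called out explicitly and everything else denied by default. This is a hedged sketch under assumed names (`AGENT_SCOPES`, `can`), not a specific product's API.

```python
# Per-workflow permission scopes: each agent gets only what its workflow
# needs, and read vs. write are separated explicitly. Names are illustrative.
AGENT_SCOPES = {
    "expense-report-agent": {
        "read":  {"hr/employees", "finance/expense-policies"},
        "write": {"finance/expense-drafts"},  # cannot touch approved records
    },
}


def can(agent: str, verb: str, resource: str) -> bool:
    # Default-deny: unknown agents, verbs, or resources are all refused.
    scope = AGENT_SCOPES.get(agent, {})
    return resource in scope.get(verb, set())


assert can("expense-report-agent", "read", "hr/employees")
assert not can("expense-report-agent", "write", "hr/employees")
assert not can("unknown-agent", "read", "hr/employees")  # default-deny
```

Keeping the scope declarative also bounds the blast radius the section describes: a manipulated agent can do no more than its scope permits, no matter what it is prompted to attempt.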

Related concerns the panel raised: unintended access through third-party integrations, uncontrolled actions in systems that weren't explicitly included in the agent's intended scope, and contextual authorization — cases where what an agent is permitted to do depends on situational factors that static permission models can't capture.

Authorization Has to Be Designed Into the Architecture, Not Retrofitted

The practical implication of everything the session covered is that authorization for multi-agent systems has to be a design-time decision, not a deployment-time afterthought. The panel's "hard boundaries" are significantly harder to establish after an agent is already running in production than before it's built.

The design questions that have to be answered before deploying any agent at enterprise scale: Who is the agent acting as? What systems can it access, and in what ways — read, write, execute? What permissions does it inherit from the user it's acting on behalf of, and what does it explicitly not inherit? Can it act independently of user context? What creates an audit trail for its actions, and who reviews it?

These are not primarily technical questions. They're governance and policy questions that require input from legal, compliance, security, and operations — not just the engineering team that built the agent. The organizations that treat agent deployment as a technical project and figure out governance later are the ones that will discover their authorization gaps through a compliance event rather than a design review.

Enterprise AI Requires Security Architecture at the Center

The session's closing argument was a reframe of what enterprise AI capability actually requires. Security architecture — identity, authorization, trust boundaries, audit infrastructure — is not a compliance layer added on top of an AI deployment. It's a prerequisite for safe deployment at scale.

"AI agents should recommend, reason, and orchestrate. Critical infrastructure remains deterministic, policy-constrained, and auditable."

The organizations that get this right early are building agent systems that can be trusted, scaled, and expanded over time. The ones that treat authorization as a secondary concern are building technical debt that becomes harder and more expensive to resolve with every additional agent they deploy.


This session was presented at the AI Agent Conference 2026 in New York, focused on identity, authorization, and deterministic infrastructure for multi-agent AI systems.