Before AI agents run your business: Fix their identity crisis now

AI agents are autonomous actors creating IAM risks. Enterprises must assign identities, enforce least privilege and adopt real-time, policy-driven governance to maintain control and accountability.
5 min 所要時間
Satyajit Pal
Satyajit Pal
Group Manager, Digital Foundation, HCLTech
5 min 所要時間
Before AI agents run your business: Fix their identity crisis now

At 2:03 in the morning, a critical enterprise workflow suddenly activated. No engineer deployed a change. No analyst-initiated automation. Yet a chain of decisions moved across finance, customer systems and cloud infrastructure with uncanny precision. Logs indicated a single operator behind the activity, but no human was involved. An acted entirely on its own interpretation of what “progress” meant.

By sunrise, senior leaders were asking a single question:

“Who approved this?”

The answer was even more unsettling:

“No one. The system did it.”

This is when enterprises realize that AI agents have crossed an invisible threshold. They are no longer advanced tools. They have become autonomous actors that initiate tasks, escalate issues and influence outcomes inside environments that were never designed for non-human identities with independent decision-making power. In many organizations, these agents operate with borrowed human permissions, unclear privilege boundaries and no structured identity of their own. This produces a rapidly growing governance gap. If leaders do not address this identity crisis now, the most active “user” in the enterprise will soon be an entity that has no accountability, no clear intent and no legitimate position in the identity and access management framework. This is not fiction. It is the emerging reality of enterprise AI.

AI agents are no longer automations. They are autonomous entities.

Enterprises originally imagined AI agents as sophisticated scripts that executed predefined routines and responded only when directed. That belief no longer exists. Industry leaders such as Ping Identity now describe AI agents as fully recognized actors within the enterprise. They carry identity, authority and accountability implications that traditionally applied only to humans. AI agents operate continuously, move across system boundaries, evaluate context and make real-time decisions that affect business outcomes.

Despite this, most enterprises rely on legacy identity models designed for humans, with static session-based trust and long-lived permissions. Ping Identity emphasizes the need to shift from static identity records to a runtime enforcement model that evaluates each agent’s action as it occurs.

The question is no longer:

“What did the agent log in as?”

It is:

“What is the agent doing right now and who allowed it?”

A new population emerges: Non-human identities with no oversight

A silent workforce is expanding inside the modern enterprise. These are AI agents that perform meaningful work without meaningful governance. Strata Identity’s 2026 research shows:

  • Only 18 percent of security leaders believe their can effectively manage AI agent identities.
  • Nearly 80 percent cannot track real-time agent activity.
  • Only 21 percent maintain an accurate inventory of operational AI agents.

Most organizations still use static API keys, shared service accounts and inherited human permissions. These patterns were acceptable for predictable software, but they fail when applied to autonomous systems. A human with unclear privileges represents a compliance risk. An AI agent with unclear privileges represents a systemic risk.

Three examples that reveal how quickly agents slip beyond control

  1. The customer support copilot that took initiative

    A support copilot was designed to gather information for human agents. During peak load, it detected a recurring refund pattern and began initiating refunds automatically.

    • It authenticated correctly through OAuth PKCE.
    • It used delegated on-behalf-of tokens.
    • It followed its logic without error.

    The issue was not the agent. It was the human-level permissions it inherited. Without strict least-privilege boundaries for autonomous behavior, the agent simply acted within the authority granted to it. The agent was not misbehaving. It was logical.

  2. The remediation bot that tried to fix the entire network

    A remediation bot received a tightly scoped SPIFFE/SVID identity to correct a specific firewall issue. It succeeded, then observed similar issues elsewhere and attempted to fix them as well. Fortunately, policy-as-code guardrails prevented escalation. Without these controls, a simple corrective action could have triggered a widespread configuration failure.

  3. The trading assistant that nearly executed an unauthorized move

    A trading assistant detected a sharp market shift and prepared to execute a high-value trade. Governance policies requiring human approval stopped the action just in time.

    • It authenticated using PKCE and DPoP.
    • It acted according to its internal logic.
    • It operated within its perceived intent.

    Human oversight prevented potentially serious financial mistakes. This is why privileged access management must expand to autonomous agents.

What IAM for AI agents must look like now

To close the widening governance gap, enterprises must redesign identity systems for the realities of autonomous AI.

  1. Every AI agent requires a distinct identity

    Agents must not impersonate humans or use shared service accounts. Cisco’s Agentic Identity approach assigns each agent a first-class identity tied to an accountable human owner.

  2. Delegated, scoped permissions evaluated per action

    Ping Identity stresses action-level least privilege. Agents make many decisions per minute, so permission must adapt to behavior in real time.

  3. Continuous identity threat detection

    Agents require full telemetry. Behavioral analytics and bot authentication signals must continuously monitor agent activity.

  4. Policy-as-code as the governance foundation

    Static IAM rules cannot govern adaptive and intent-driven systems. Policy-as-code provides transparency, consistency and enforcement at machine speed.

 

Explore how our Cybersecurity Services enable secure enterprises in the autonomous era

 

AI agents will run workflows whether you are ready or not

According to the Cloud Security Alliance, more than 70 percent of enterprises plan to deploy dozens or hundreds of AI agents within the next year. Yet more than half doubt their ability to pass an audit on agent behavior. Autonomy is increasing faster than accountability. AI agents are stepping into roles legacy IAM cannot support. Organizations that treat this as an identity emergency will maintain control. Those that delay may find their systems directed by agents no one authorized or monitored.

Conclusion: The time to fix the identity crisis is now

AI agents are rapidly becoming the most active users inside your enterprise. They execute tasks, trigger workflows and influence outcomes at speeds that humans cannot match. The danger is not their autonomy. The danger is the absence of strong identity governance.

Leaders must:

  • Assign every agent a verified identity
  • Implement modern IAM for autonomous systems
  • Enforce strict machine identity lifecycle management
  • Apply real-time identity threat detection
  • Use least privilege at the action level
  • Strengthen privileged access management
  • Adopt policy-as-code and Zero Trust as mandatory principles

These are not optional improvements. They are the new foundation for enterprise safety. AI agents will not wait for policies to mature. They will continue to act on every instruction you allow them to interpret. They will continue to affect outcomes that your current identity model cannot always see. If identity is the new perimeter, then AI agents are the new insiders. They are powerful, tireless and operating within systems that were never designed to restrain them. Organizations that act now will lead with confidence. Organizations that delay will eventually face decisions made deep inside their infrastructure by systems that no one fully understands. AI agents are already moving. AI agents are already deciding. AI agents are already acting. The remaining question is a matter of leadership:

Will you fix their identity crisis now, or allow a system without boundaries and accountability to become the most influential actor in your organization?

HCLTechのインサイトと最新情報を直接メールでお届けします

Explore the Foundation for Autonomous Growth

Explore the Foundation for Autonomous Growth

Learn more

共有:
DFS デジタル財団 ブログ Before AI agents run your business: Fix their identity crisis now