The veteran strategist perspective
In hushed boardrooms across APAC, the conversation has shifted. No longer are we debating whether to adopt AI. The question keeping CISOs and CTOs awake is far more fundamental: who exactly is responsible when an autonomous agent acts on behalf of the organisation?
In my work across APAC, a recurring pattern has become clear. Many organisations have rapidly scaled agentic AI systems into production — agents that autonomously review transactions, make decisions, and interact with core systems. What often starts as an efficiency success story can quickly turn into a governance nightmare.
A particularly dangerous scenario emerges when these agents are granted broad, persistent permissions without clear ownership, scoped boundaries, or robust monitoring. In several cases we have observed, a compromised agent — or one behaving in unintended ways — exploited supply chain vulnerabilities during model updates, leading to significant data exposure and regulatory consequences.
The root cause is almost always the same: powerful non-human entities operating with identities that lack proper lifecycle governance, accountability, and containment mechanisms.
This is the new reality of agentic AI. As we shift from chat tools to autonomous agents that pursue goals, use tools, and spawn sub-agents, we are multiplying non-human identities (NHIs) at an exponential rate. These now vastly outnumber human users in most enterprises. Yet our IAM frameworks remain stubbornly designed for people.
The bottom line is clear: AI agents need robust identity management at least as much as your employees. Non-human access is the next critical security frontier.
The Silent Explosion of Non-Human Identities
Traditional NHIs – service accounts, API keys, OAuth tokens, IoT certificates – have long been a blind spot, typically outnumbering human identities by 10:1 or more. Agentic AI is accelerating the problem dramatically.
Gartner predicts that by 2026, around 30% of enterprises will deploy AI agents that act with minimal human intervention. The 2026 Delinea Identity Security Report reveals that while 78% of leaders feel confident in their AI security, only 31% have proper governance for AI identities. Tenable’s Cloud and AI Security Risk Report 2026 highlights the resulting “AI exposure gap”: 86% of organisations introduce vulnerable third-party packages, and 65% expose critical assets through weak or forgotten credentials tied to AI workloads. Supply chain risks and poor identity controls create an invisible attack surface.
These are not abstract figures. They mean regulatory fines, operational disruption, eroded trust, and direct impacts to the P&L.
Why Human-Centric IAM Cannot Scale to Agents
I have spent two decades advising C-level executives on technology transformations across APAC. The pattern when new technologies emerge is consistent: we layer them onto existing controls and act surprised when they fail.
Traditional IAM focuses on human login events, RBAC, MFA, and employee lifecycle reviews. AI agents break these assumptions. They operate with long-running or event-driven sessions, require dynamic context-aware permissions, and often lack clear individual ownership. “It belongs to the AI team” is unacceptable when sensitive systems are involved. Model updates can alter behaviour in ways that violate the original intent, often without anyone monitoring for the change.
The hard truth is that treating agents as “just code” or legacy service accounts is architectural malpractice in 2026. Agents with tool-calling capabilities can autonomously pivot and escalate if boundaries are weak. Some leaders argue heavy governance stifles innovation and autonomy. I respect the concern, but experience shows that organisations achieving the highest ROI from agentic AI are those that establish secure identity foundations first. Unchecked autonomy frequently leads to costly incidents that destroy value.
The Mechanics of Secure Agent Identity
A robust agent identity framework must include five core elements:
1. Clear Ownership and Accountability
Every agent must have a designated human or team owner who is accountable for its behaviour and any resulting incidents.
2. Dynamic Least-Privilege Controls
Permissions should be scoped, time-bound, and context-aware. Just-in-time access and automatic expiry are essential.
3. Continuous Behavioural Monitoring
We need visibility into agent intent, tool usage, decision patterns, and behavioural drift — not just traditional logs.
4. Rapid Containment Mechanisms
Every production agent should have clearly defined “kill switches” and containment playbooks that can be triggered instantly.
5. Full Lifecycle Governance
From provisioning through regular attestation to secure decommissioning, agents require formal processes with periodic reviews.
Organisations that implement these controls early will be able to scale agentic AI with confidence. Those that delay will eventually face incidents that could have been prevented.
The age of autonomous agents is no longer coming — it is here. The question is whether we will govern their identities with the seriousness they deserve.