
The Agentic Operating Model: Why Automating Old Workflows Is the Wrong Goal

Published: at 12:00 AM

Most agentic AI pilots are asking the wrong question.

They ask, “Which human task can we automate?” That sounds sensible, but it usually leads to a disappointing answer: take the existing workflow, bolt an agent onto one step, and hope the old process becomes faster. It rarely does. The agent becomes a clever intern trapped inside a bad organisational chart.

The better question is harder: “How should this work be designed if autonomous execution were available from the start?”

That is the agentic operating model. It is not a tool rollout. It is a redesign of how work moves through the enterprise: where decisions are made, which systems exchange state, how exceptions are handled, who owns outcomes, and what evidence proves the workflow is working.

Frankly, if leaders treat agents as a skin on old business processes, they will automate yesterday’s waste at tomorrow’s speed.

Why Old Workflows Reject New Agents

Traditional enterprise workflows were designed for humans working through screens, queues, emails, meetings, and approvals. They assume people can interpret ambiguity, chase missing context, negotiate handoffs, and remember the unwritten rules. That is why so many processes are held together by spreadsheets, side conversations, and heroic middle managers.

Agents do not fix that by magic. They expose it.

I once advised a regional insurer where claims processing looked well documented on paper. In reality, every high-value case depended on a few experienced operations leads who knew when to call underwriting, when to escalate fraud review, and when to nudge legal. A pilot agent could summarise cases beautifully, but it could not move work safely because the real decision model was tribal knowledge. The workflow was not ready for autonomy.

This is the first operating-model lesson: before agents can execute work, the enterprise must make work legible.

That means defining inputs, outputs, owners, policies, thresholds, service levels, audit evidence, and failure modes. Without that discipline, an agent is just another participant in a confusing process.

The Market Is Moving From Assistance to Execution

The shift is not theoretical. Gartner predicted in August 2025 that 40% of enterprise applications would include task-specific AI agents by the end of 2026, up from less than 5% in 2025. In April 2026, Gartner went further, saying that by 2028 most enterprises would move away from assistive AI toward outcome-focused workflows, especially in approval-heavy and timing-sensitive areas.

That distinction matters. A copilot helps a human complete work. An agentic workflow allows software to execute parts of the work within defined authority.

Deloitte’s November 2025 analysis makes the same point through the lens of orchestration. It argues that enterprises will need multiagent systems that can interpret requests, design workflows, delegate tasks, coordinate work, and validate outcomes. Deloitte cites market estimates that the autonomous AI agent market could reach US$8.5 billion in 2026 and US$35 billion by 2030, with better orchestration potentially lifting the 2030 projection by 15% to 30%.

The hard truth is that the money is not in chat. The money is in operating leverage: fewer handoff delays, fewer rework loops, faster exceptions, and cleaner accountability.

Start With the Workflow, Not the Agent

The common enterprise reflex is to create an “agent catalogue” before redesigning the work. That is backwards.

Start with the value stream. Pick a workflow where speed, consistency, and judgement all matter: vendor onboarding, loan exception review, customer renewal risk, incident response, claims triage, procurement approval, or regulatory evidence collection. Then decompose it into modules.

Each module needs five things: a defined input and output, a named owner, explicit decision rights, an escalation path for exceptions, and an outcome metric that proves it worked.

Only then should leaders ask which modules belong to agents, deterministic automation, humans, or a hybrid pattern.

McKinsey’s September 2025 article on the agentic organisation captures this shift well. It argues that work and workflows should be reimagined as AI-first, with humans and IT systems selectively reintroduced into the design. That sounds radical, but it is practical. If an agent can gather evidence, prepare options, and execute low-risk steps, the human should not click through legacy screens just to preserve the theatre of control.

API-First Handoffs Replace Email-First Handoffs

Old workflows often pass work through email, spreadsheets, ticket notes, and meeting decisions. That is tolerable for humans because humans can infer context. It is dangerous for agents because unstructured handoffs create ambiguity.

An agentic operating model needs API-first handoffs. Each step should expose what the next participant needs: current state, source evidence, decision constraints, confidence score, action history, and escalation reason. The handoff should be machine-readable before it is manager-readable.

Think of it as replacing corridor conversations with operational contracts.

In one manufacturing transformation, I saw purchase approvals delayed not because policy was unclear, but because every system held a different slice of the truth. Procurement had supplier data, finance had budget status, legal had contract exceptions, and operations had urgency. An agent could not approve anything until the handoff model was redesigned around shared state.

The P&L impact is straightforward. Poor handoffs create latency. Latency creates working-capital drag, missed revenue, frustrated customers, and expensive management intervention.

Decision Rights Must Be Explicit

Agentic AI forces an uncomfortable question: who, or what, is allowed to decide?

Most organisations avoid this question by keeping humans nominally in charge while allowing systems to shape choices invisibly. That will not survive agentic execution. If an agent can trigger an email, update a record, approve a refund, prioritise a ticket, or route a case, it has decision influence. If it can commit inventory, change access, or file regulatory evidence, it has delegated authority.

Decision rights must therefore be explicit. Define what the agent can recommend, what it can execute, what requires human approval, and what is forbidden. Tie those rights to risk, value, reversibility, customer impact, and regulatory exposure.

AWS Prescriptive Guidance offers a useful principle for human-in-the-loop design: use human intervention when the cost of failure is higher than the cost of human review. That is exactly the economic lens leaders need. Do not put a human in the loop because it feels safer. Put a human in the loop where judgement materially reduces downside risk.

The bottom line: autonomy is not binary. It is a spectrum of delegated authority.
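That spectrum can be made concrete as a simple authority check that combines risk, reversibility, and the cost-of-failure versus cost-of-review lens described above. The thresholds and category names here are hypothetical, chosen only to show the shape of the rule:

```python
def autonomy_level(action_value: float, reversible: bool,
                   expected_failure_cost: float, human_review_cost: float) -> str:
    """Illustrative decision-rights check mapping an agent action to an
    authority level. All thresholds are hypothetical examples."""
    if not reversible and action_value > 10_000:
        return "forbidden"        # irreversible and high-value: agent may not act
    if expected_failure_cost > human_review_cost:
        return "human_approval"   # review is cheaper than a likely failure
    if action_value > 1_000:
        return "recommend_only"   # agent proposes; a human executes
    return "execute"              # low-risk and reversible: agent acts alone

print(autonomy_level(200, True, 5.0, 40.0))        # "execute"
print(autonomy_level(500, True, 120.0, 40.0))      # "human_approval"
print(autonomy_level(50_000, False, 900.0, 40.0))  # "forbidden"
```

The design choice worth copying is not the thresholds but the fact that they exist in one reviewable place, tied to the economics of the decision rather than to habit.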

Observability Becomes a Business Control

Traditional automation monitoring asks whether a job ran. Agentic observability asks a deeper set of questions: why did the agent choose that step, what evidence did it use, which tool did it call, where did confidence drop, who approved the exception, and what outcome followed?

IBM’s October 2025 Instana GenAI Observability announcement describes the operational challenge clearly: as LLM and agentic workflows move into production, teams must debug opaque AI pipelines, control unpredictable token costs, and maintain reliable customer experiences. IBM’s watsonx Orchestrate governance material also frames visibility as more than orchestration, emphasising the need to see what agents are doing across workflows, measure outcomes in real time, and enforce policies consistently.

For CIOs, this is not only an IT concern. Observability becomes audit evidence, cost control, customer protection, and management discipline. If a workflow misses its outcome, leaders need to know whether the failure came from bad data, a weak policy, a poor prompt, a model limitation, an unavailable system, or a human approval bottleneck.

Without that trace, agentic transformation becomes unmanageable theatre.

The Human Layer Changes Shape

The lazy narrative says agents replace people. The better operating model says agents change where people add value.

Deloitte predicts that businesses will move along an autonomy spectrum: humans in the loop, humans on the loop, and humans out of the loop, depending on task complexity, criticality, and workflow design. That is the right framing. Humans should not approve every small action. They should design policy, handle exceptions, review ambiguous cases, monitor drift, and own the outcome.

Microsoft’s 2025 Agent 365 announcement describes the governance problem from another angle: as agents multiply, enterprises need a control plane to manage and govern them at scale. ServiceNow’s January 2025 announcement used similar language, positioning its AI Agent Orchestrator as a control tower for managing agents across business workflows.

These product announcements are not proof that every platform is mature. They are proof that the enterprise problem is shifting. The scarce capability is no longer prompt writing. It is operating control over a mixed workforce of humans, automations, and agents.

Outcome Metrics Beat Activity Metrics

Agentic AI programmes fail when they celebrate activity. Number of agents launched. Number of tasks automated. Number of prompts run. Number of hours supposedly saved.

Those are weak metrics.

An agentic operating model should measure business outcomes: cycle time, exception rate, rework, approval latency, forecast accuracy, customer resolution time, compliance evidence completeness, cost per case, and defect escape rate. For high-risk workflows, measure reversibility and containment speed. If an agent makes a bad decision, how quickly can the organisation detect, stop, reverse, and learn from it?
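A few of those outcome metrics are trivial to compute once cases are recorded as structured data. A sketch with invented case records, purely to show that these are per-case facts rather than dashboard vanity numbers:

```python
from statistics import mean

# Illustrative case records:
# (cycle_hours, approval_wait_hours, had_exception, was_reworked)
cases = [
    (12.0, 2.0, False, False),
    (30.0, 8.0, True,  True),
    (18.0, 4.0, False, False),
    (45.0, 20.0, True,  False),
]

cycle_time = mean(c[0] for c in cases)                  # average end-to-end hours
approval_latency = mean(c[1] for c in cases)            # average wait on approvals
exception_rate = sum(c[2] for c in cases) / len(cases)  # share of cases escalated
rework_rate = sum(c[3] for c in cases) / len(cases)     # share redone afterwards

print(f"cycle={cycle_time}h approvals={approval_latency}h "
      f"exceptions={exception_rate:.0%} rework={rework_rate:.0%}")
```

The same records can answer the reversibility question: time from bad decision to detection, and from detection to containment, are just two more columns.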

Deloitte notes that only 28% of respondents in its 2025 Tech Value Survey believed their organisations had mature AI agent-related capabilities, compared with 80% for basic automation. It also cites estimates that more than 40% of agentic AI projects could be cancelled by 2027 because of cost, scaling complexity, or unexpected risks. That is a warning against pilot theatre.

The issue is not whether agents can do impressive things in demos. The issue is whether they improve the operating metrics that executives actually care about.

Build the Operating Model Before Scaling

Leaders should resist the urge to sprinkle agents across every department. Start with one end-to-end workflow and make it production-grade.

Define the work modules. Clean up the data handoffs. Assign decision rights. Create exception queues. Instrument the workflow. Establish rollback paths. Train the human supervisors. Set outcome metrics. Run controlled pilots. Then scale the pattern.

This may feel slower than launching a dozen agents. It is not. It is how enterprises avoid building a fast-moving pile of disconnected automation debt.

I have seen this movie before with robotic process automation. Companies automated screen clicks across broken processes, celebrated early savings, and then spent years maintaining brittle bots whenever applications, policies, or teams changed. Agentic AI will repeat that failure at a higher level of abstraction unless leaders redesign the work itself.

AI agents are not the operating model. They are participants in the operating model.

The winners will be the organisations that stop asking agents to imitate human work and start redesigning work for accountable autonomy. That means fewer cosmetic pilots, more explicit contracts, stronger observability, sharper decision rights, and a ruthless focus on outcomes. Anything less is just old process debt wearing an intelligent mask.

