
Agentic AI Needs a Data Constitution: Governance Patterns for Autonomous Systems


Last month, I sat in a strategy session with a regional bank’s technology leadership team in Singapore. They’d deployed an AI agent to handle routine customer inquiries—account balances, transaction histories, the straightforward stuff. It was working beautifully. Then someone asked the obvious next question: “What if we give it access to the loan decisioning system?”

The room went quiet. Not because anyone doubted the technology could handle it, but because nobody could articulate the rules. What data could the agent see? What actions could it take? Who would be accountable if it approved a loan it shouldn’t have—or declined one it should have approved? The legal team started scribbling notes. The compliance officer looked genuinely alarmed.

That conversation captures where most enterprises find themselves right now. The technology works. The use cases are compelling. But the governance architecture? It’s being invented on the fly, often after the agents are already in production.

Why Prompts Are Not Enough

Here’s the uncomfortable truth that the AI vendor marketing won’t tell you: better prompts don’t solve governance problems. You can craft the most elegant system prompt imaginable, instructing your agent to “always respect customer privacy” and “never access financial data without authorisation.” The agent will dutifully acknowledge these instructions—and then violate them the moment its underlying model hallucinates or encounters an edge case the prompt didn’t anticipate.

Agentic AI systems don’t read policy documents. They can’t “interpret the spirit” of a security policy written in legalese. They operate on whatever permissions and data access they’ve been granted at the infrastructure level. If an agent has read-write access to your CRM, it has read-write access to your CRM—regardless of what the prompt says about being careful with customer data.

This is why the industry is converging on a concept I’ve started calling the “data constitution”—a codified set of policies, embedded at the data layer itself, that defines what agents can see, what they can change, and what triggers require human intervention. Think of it as the difference between telling a new employee to “be careful with sensitive files” versus configuring their system permissions so they literally cannot access files outside their role.

The Architecture of Controlled Agency

The most sophisticated enterprises I’m advising are building what researchers call “controlled agency”—a model that enforces accountability while allowing agents to act independently within defined limits. This isn’t about constraining AI capabilities; it’s about making those capabilities safe enough to trust at scale.

The architecture has several critical components. First, every dataset—whether structured, unstructured, real-time, or model-generated—must carry its own semantics, lineage, and guardrails. This embedded context transforms the data layer from passive storage into an active intelligence layer that can contextualise information, enforce policy, audit decisions, and preserve traceability.

Second, policy must live in runtime, not in documents. The leading practitioners are writing policy as code—executable rules that attach directly to data pipelines and agent workflows. When an agent attempts to access customer financial records, the policy engine checks in real time: Does this agent have the appropriate credentials? Has the customer consented? Is this action within the approved scope for this use case? Does this trigger a human approval requirement?
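
To make that concrete, here is a minimal sketch of what a runtime policy check might look like. It is illustrative only — the agent name, the field names, and the allow/deny/escalate outcomes are my own assumptions rather than any particular policy engine's API — but it shows the shift from prose guidelines to executable rules:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    agent_id: str
    resource: str            # e.g. "customer.financial_records"
    action: str              # e.g. "read", "write"
    credentials_valid: bool
    customer_consented: bool
    approved_scope: set      # resources approved for this agent's use case

# Hypothetical policy, expressed as code rather than prose.
HIGH_IMPACT_ACTIONS = {"write", "delete", "transfer"}

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'deny', or 'escalate' for a single agent action."""
    if not request.credentials_valid:
        return "deny"        # wrong or expired credentials
    if request.resource not in request.approved_scope:
        return "deny"        # outside the approved scope for this use case
    if not request.customer_consented:
        return "deny"        # consent is a hard requirement, not a suggestion
    if request.action in HIGH_IMPACT_ACTIONS:
        return "escalate"    # high-impact actions route to human approval
    return "allow"

# A read within scope is allowed; a write to the same resource would escalate.
req = AccessRequest(
    agent_id="loan-inquiry-agent-v2",
    resource="customer.financial_records",
    action="read",
    credentials_valid=True,
    customer_consented=True,
    approved_scope={"customer.financial_records", "customer.contact_details"},
)
print(evaluate(req))  # -> allow
```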

Third, agent registries are becoming essential infrastructure. Just as enterprises maintain identity and access management systems for human users, they now need equivalent systems for AI agents—tracking ownership, versioning, permissions, and audit trails. MuleSoft’s Agent Fabric and Teradata’s Enterprise AgentStack are early examples of platforms that manage the full agent lifecycle: discover, orchestrate, govern, and observe.
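
Neither of those platforms publishes its internals in a form I can reproduce here, so treat the following as a hypothetical registry entry of my own design — just enough to show that ownership, versioning, permissions, and an audit trail are ordinary data structures, not exotic infrastructure:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentRecord:
    """One entry in an illustrative agent registry (not any vendor's schema)."""
    agent_id: str
    owner: str              # the accountable human or team
    version: str
    permissions: set        # resources and actions this agent may use
    created_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    audit_log: list = field(default_factory=list)

    def record_action(self, action: str, resource: str, outcome: str) -> None:
        """Append an auditable entry for every action the agent takes."""
        self.audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "resource": resource,
            "outcome": outcome,
        })

registry = {}
registry["loan-inquiry-agent-v2"] = AgentRecord(
    agent_id="loan-inquiry-agent-v2",
    owner="retail-banking-platform-team",
    version="2.3.1",
    permissions={"customer.financial_records:read"},
)
registry["loan-inquiry-agent-v2"].record_action(
    action="read", resource="customer.financial_records", outcome="allowed"
)
```

If you can pull a record like this for every agent in production, the "who owns this agent?" question answers itself.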

Singapore Shows the Way

On 22 January 2026, Singapore became the first country to release a governance framework specifically designed for agentic AI. The Model AI Governance Framework for Agentic AI, developed by the Infocomm Media Development Authority (IMDA), establishes four core dimensions that every deploying organisation should address.

The first dimension is risk assessment—evaluating potential harms before deployment, not after. What could go wrong if this agent takes an incorrect action? What’s the blast radius of a failure? The framework acknowledges that different use cases carry different risk profiles and requires proportionate controls.

The second dimension is human accountability. This sounds straightforward, but gets complicated quickly. When an agent autonomously approves a transaction that later proves fraudulent, who’s responsible? The developer who built it? The business owner who deployed it? The manager who approved the use case? Singapore’s framework requires organisations to define these accountabilities explicitly.

The third dimension is technical controls—and this is where the data constitution comes to life. The framework calls for mechanisms including access permissions, approval workflows for high-impact actions, and auditable logs and observability so teams can monitor behaviour, investigate incidents, and prove compliance. Kill switches and behaviour monitoring are specifically mentioned as requirements.
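
A kill switch sounds dramatic, but mechanically it can be very simple. Here is a hedged sketch — the names and behaviour are my own invention, not anything the IMDA framework prescribes — showing one way every agent action could consult a halt signal before executing:

```python
import threading

class KillSwitch:
    """Minimal kill switch: once tripped, every subsequent agent action is refused."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def trip(self, reason: str) -> None:
        print(f"KILL SWITCH TRIPPED: {reason}")
        self._halted.set()

    def check(self) -> None:
        if self._halted.is_set():
            raise RuntimeError("Agent halted; human intervention required before resuming.")

switch = KillSwitch()

def execute_agent_action(action: str) -> None:
    switch.check()                   # every action consults the switch before running
    print(f"executing: {action}")

execute_agent_action("prepare account summary")      # runs normally
switch.trip("anomalous access pattern detected")
# execute_agent_action("prepare account summary")    # would now raise RuntimeError
```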

The fourth dimension is end-user responsibility. If an employee instructs an agent to do something that violates policy, who bears the consequence? The framework recognises that as agents become more powerful, the humans directing them must also be more accountable for their instructions.

Compliance with Singapore’s framework is voluntary—but legal accountability for agent behaviour is not. That’s a distinction worth understanding.

The Tiered Autonomy Model

I’ve found it useful to think about agent governance in tiers, a concept that several leading analysts are now advocating. One influential CIO article proposes what it calls a “hierarchy of autonomy” that maps agent actions to appropriate controls.

At Tier 1, agents operate in what the article calls a “sandbox of trust.” They can gather data, identify issues, and prepare recommendations—but execution requires a “human nod” before anything changes. This is appropriate for routine tasks where the cost of human review is low and the consequences of errors are manageable.

At Tier 2, agents can execute autonomously, but must present a “reasoning trace” to administrators explaining why the action was taken. This creates an audit trail and allows for post-hoc review without requiring real-time intervention. It’s appropriate for higher-frequency decisions where human review would create bottlenecks but transparency remains important.

At Tier 3 are what I call “existential actions”—things no agent should ever do autonomously. Deleting production databases. Transferring large sums. Communicating with regulators. These require multi-factor authentication or dual-key approvals, regardless of how confident the agent is in its decision.

The mistake many organisations make is treating all agent actions as Tier 1, requiring human approval for everything. This defeats the purpose of automation. The art is in correctly classifying actions by their true risk profile—and having the governance infrastructure to enforce different controls at different tiers.
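
One way to make that classification executable is a simple lookup that maps actions to tiers and defaults anything unclassified to the most restrictive tier. The action names and tier assignments below are illustrative assumptions, not a recommended mapping:

```python
from enum import Enum

class Tier(Enum):
    HUMAN_APPROVAL = 1    # sandbox of trust: a human must approve before execution
    REASONING_TRACE = 2   # autonomous execution, with a logged trace for post-hoc review
    DUAL_KEY = 3          # existential actions: dual-key or MFA approval, never autonomous

# Illustrative classification only; every organisation's mapping will differ.
ACTION_TIERS = {
    "summarise_account_activity": Tier.REASONING_TRACE,
    "draft_customer_reply": Tier.HUMAN_APPROVAL,
    "adjust_credit_limit": Tier.DUAL_KEY,
    "delete_production_table": Tier.DUAL_KEY,
}

def controls_for(action: str) -> Tier:
    # Unclassified actions default to the most restrictive tier, not the least.
    return ACTION_TIERS.get(action, Tier.DUAL_KEY)

print(controls_for("summarise_account_activity"))   # Tier.REASONING_TRACE
print(controls_for("transfer_funds"))               # Tier.DUAL_KEY (unclassified)
```

The defaulting rule matters as much as the mapping: a new action should have to earn its autonomy, not inherit it.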

The Non-Human Identity Problem

Here’s a challenge that caught many organisations off guard: AI agents aren’t just software—they’re identities. They have credentials. They access systems. They take actions on behalf of users. And the traditional identity and access management (IAM) systems that enterprises have spent decades building were never designed for non-human actors.

Non-human identities (NHIs)—which include bots, API keys, service accounts, and now AI agents—are proliferating faster than security teams can track them. A SailPoint survey found that 80% of IT professionals have witnessed AI agents acting unexpectedly or performing unauthorised actions. That’s a staggering number, and it reflects the gap between how fast agents are being deployed and how firmly they are being governed.

The solution requires treating agents as first-class identities with the same rigour, controls, and auditability as human users—but adapted for their unique attributes. Agents may have ephemeral lifespans, delegated authority, and cross-domain execution patterns that don’t fit neatly into traditional IAM frameworks. Zero-trust principles become essential: every agent action should be authenticated and authorised as if it were a new request, regardless of what the agent did five seconds ago.
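
As a sketch of what zero trust looks like for agent identities — assuming nothing beyond the Python standard library, with token lifetimes and scope strings invented for illustration — every request re-validates a short-lived, narrowly scoped credential rather than trusting an earlier check:

```python
import secrets
import time

TOKEN_TTL_SECONDS = 300      # short-lived: credentials expire within minutes
_issued = {}                 # token -> {"agent_id", "scope", "expires_at"}

def issue_token(agent_id: str, scope: set) -> str:
    """Grant an agent a narrowly scoped, short-lived credential."""
    token = secrets.token_urlsafe(32)
    _issued[token] = {
        "agent_id": agent_id,
        "scope": scope,
        "expires_at": time.time() + TOKEN_TTL_SECONDS,
    }
    return token

def authorise(token: str, resource: str) -> bool:
    """Re-check identity, expiry, and scope on every single request."""
    grant = _issued.get(token)
    if grant is None or time.time() > grant["expires_at"]:
        return False                        # unknown or expired credential
    return resource in grant["scope"]       # scope is re-evaluated each time

tok = issue_token("loan-inquiry-agent-v2", {"customer.financial_records:read"})
print(authorise(tok, "customer.financial_records:read"))   # True
print(authorise(tok, "loan.decisioning:write"))            # False
```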

The Observability Imperative

You cannot govern what you cannot see. This principle becomes critical when autonomous agents make decisions at machine speed across your technology estate.

The emerging best practice is “governance observability”—real-time dashboards and monitoring systems that track agent behaviours and flag anomalies. If an agent suddenly accesses a database it has never touched, that’s worth investigating. If decision patterns shift unexpectedly, that could indicate drift or compromise.
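
Even a crude baseline goes a long way here. The sketch below — again illustrative, with invented agent and resource names — simply flags the first time an agent touches a resource it has never accessed before, so a human can decide whether that is a legitimate expansion of scope or something worse:

```python
from collections import defaultdict

# Baseline of resources each agent has previously touched.
_seen = defaultdict(set)

def observe_access(agent_id: str, resource: str) -> None:
    """Flag the first time an agent touches a resource it has never accessed before."""
    if resource not in _seen[agent_id] and _seen[agent_id]:
        print(f"ANOMALY: {agent_id} accessed {resource} for the first time")
    _seen[agent_id].add(resource)

observe_access("loan-inquiry-agent-v2", "customer.financial_records")   # builds baseline
observe_access("loan-inquiry-agent-v2", "customer.financial_records")   # no flag
observe_access("loan-inquiry-agent-v2", "loan.decisioning")             # flagged for review
```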

By 2026, most enterprises will rely on an AI gateway layer to centralise routing, policy enforcement, cost controls, and observability across agents and models. As AI stacks sprawl, gateways become the only practical place to impose consistency. Observability warehouses are replacing all-in-one black boxes, providing a central data layer to audit any decision, any agent, at any time.

The Business Case for Governance

I’ve spent enough years in this industry to know that governance is often seen as a cost centre—something that slows innovation and adds overhead. But the data tells a different story.

Research indicates that companies using AI governance tools get over twelve times more AI projects into production compared to those without structured governance. That’s not a marginal improvement; that’s the difference between pilot projects and enterprise-scale deployment.

Why? Because governance creates confidence. When business leaders know that an agent cannot access data outside its approved scope, that every action is logged and auditable, that human oversight exists for high-stakes decisions—they’re willing to approve broader deployment. Without that confidence, agents remain stuck in controlled experiments, never reaching the scale where they deliver real value.

The organisations winning at agentic AI aren’t the ones deploying the most agents. They’re the ones deploying agents with the clearest constitutions—the most thoughtfully designed boundaries, the most robust observability, the most explicit accountability frameworks. Governance isn’t the brake on innovation. It’s the accelerator.

What to Do Now

If you’re a technology leader navigating this transition, here’s where I’d focus.

Audit your agent inventory. What agents are running today—officially sanctioned and otherwise? What data do they access? The answers may surprise you.

Design your tiered autonomy model. Which actions can agents perform autonomously? Which require reasoning traces? Which demand human approval? Document these as executable rules, not policy prose.

Build your agent registry. Every agent needs an owner, version history, permission set, and audit trail. If you can’t answer “who’s responsible for this agent?” in thirty seconds, you have a governance gap.

Invest in observability before capability. Resist the temptation to build more powerful agents until you can see what existing ones are doing.

Treat governance as a board-level concern. When agents access customer data and trigger financial transactions, “who approved this?” becomes a question of enterprise risk.

The era of agentic AI demands more than better prompts. It demands a data constitution—a codified, enforceable, auditable set of rules that defines the boundaries within which autonomous systems can safely operate.

The organisations that build these constitutions will scale their AI investments confidently. Those that don’t will find themselves explaining to regulators, customers, and boards why their agents did things they were never supposed to do.

The choice is straightforward. The work is not. But it starts now.

