There is a dangerous sentence I hear in AI steering meetings: “Let’s wait until the regulation is clearer.” It sounds prudent. It is a way to fall behind.
Regulators are moving, but not in one clean line. The EU AI Act is rolling into force in phases. Singapore’s financial sector is moving from principles to operational guidance. US state and federal debates remain unsettled. Meanwhile, vendors are embedding AI assistants, copilots, and agents into platforms faster than governance teams can map them.
For CIOs, the problem is not the absence of rules; it is fragmented regulation arriving after AI has already entered the workflow.
The hard truth is that boards will not accept “we were waiting” as an answer after an AI incident, data leak, biased decision, hallucinated compliance report, or vendor failure. The work now is to build a defensible baseline: controls that make sense regardless of which final rulebook wins.
The Regulatory Fog Is Real
The European Commission says the AI Act entered into force on 1 August 2024, with obligations applying in stages. Prohibited AI practices and AI literacy obligations started from 2 February 2025. Governance rules and obligations for general-purpose AI models became applicable on 2 August 2025. The broader high-risk and transparency rules are staged for August 2026 and, for some embedded regulated-product systems, August 2027.
That is enough to affect global companies serving EU customers or using global platforms. But the EU is only one part of the map.
In the United States, CIO reported in January 2026 that enterprises face continuing ambiguity as federal and state AI policy pulls in different directions. Existing state rules remain relevant unless displaced, federal guidance takes time to stabilise, and enterprise leaders still own the responsibility to govern AI responsibly.
That same uncertainty is visible across APAC. Financial regulators, privacy authorities, cyber agencies, and sector supervisors are using existing tools: technology risk rules, outsourcing expectations, data protection laws, fairness principles, operational resilience, and board accountability.
Frankly, this is how regulation usually works. The rulebook does not arrive before the technology. Supervisors interpret old duties through new risks.
Governance First, Compliance Second
The mistake is treating AI governance as a compliance project. Compliance asks, “What rule must we satisfy?” Governance asks, “Can we prove this AI system is appropriate, controlled, monitored, and owned?”
That distinction matters because the second question survives every regulatory change.
NIST’s AI Risk Management Framework for generative AI, published in July 2024, is voluntary, but its structure is useful because it focuses on lifecycle risk management. It pushes organisations to incorporate trustworthiness considerations into AI design, development, use, and evaluation.
I once advised a CIO who wanted a legal memo before approving an internal generative AI rollout. The memo was useful, but it did not answer the operational questions: permitted teams, allowed data, output review, incident reporting, vendor settings, and blocked use cases.
Start With the AI Inventory
Every serious AI governance programme starts with one unglamorous question: where are we using AI?
Most enterprises cannot answer it cleanly. Some AI is built by data science teams. Some is embedded in SaaS tools. Some arrives through vendor upgrades. Some appears as a browser plug-in. Some lives in shadow workflows because the official process is too slow.
The inventory should include internally built models, generative AI platforms, embedded AI inside enterprise applications, vendor AI features, agentic workflows, pilots, proofs-of-concept, and third-party dependencies.
This inventory underpins risk tiering, data protection, vendor management, testing, monitoring, audit, and board reporting.
Without an inventory, the enterprise is governing a rumour.
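To make the inventory concrete, here is a minimal sketch of what a single entry could capture. The field names are illustrative assumptions, not a prescribed schema; the point is that every entry ties a use case to a named owner, a risk tier, and its data and vendor dependencies.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """One entry in the enterprise AI inventory (illustrative fields only)."""
    name: str                      # e.g. "Claims summarisation assistant"
    owner: str                     # named accountable business owner
    source: str                    # "built" | "embedded-saas" | "vendor-feature" | "shadow"
    purpose: str                   # plain-language description of what it does
    risk_tier: str                 # assigned later by the tiering process
    data_categories: list[str] = field(default_factory=list)      # e.g. ["customer-pii"]
    vendor_dependencies: list[str] = field(default_factory=list)
    registered_on: date = field(default_factory=date.today)

# Registration is deliberately lightweight: the goal is coverage first, depth later,
# so shadow workflows have no reason to stay hidden.
record = AIUseCaseRecord(
    name="Meeting summariser",
    owner="Head of Sales Operations",
    source="embedded-saas",
    purpose="Summarise internal sales calls for CRM notes",
    risk_tier="unassessed",
    data_categories=["internal-business"],
    vendor_dependencies=["CRM platform AI add-on"],
)
```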
Use Risk Tiers, Not Blanket Approval
The lazy version of AI governance creates one approval process for everything. That fails in both directions. Low-risk use cases get stuck in bureaucracy, while high-risk use cases are hidden or misclassified to avoid delay.
CIOs need a risk-tiering model that separates a meeting summariser from a credit decisioning model, a marketing copy tool from an HR screening system, and a customer-facing chatbot from an internal research assistant.
The criteria do not need to be exotic. Start with business impact, customer harm, regulated decision-making, data sensitivity, autonomy, explainability, operational dependency, third-party reliance, and reversibility. An internal writing assistant and a system influencing hiring, lending, pricing, claims, medical triage, or fraud action do not deserve the same controls.
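A tiering rule does not need to be sophisticated to be useful. The sketch below uses made-up criteria names and thresholds; it simply shows that a handful of explainable yes/no questions can separate a summariser from a decisioning system.

```python
def assign_risk_tier(
    affects_individuals: bool,   # influences hiring, lending, pricing, claims, triage, fraud action
    regulated_decision: bool,    # falls under an existing regulatory duty
    uses_sensitive_data: bool,   # customer PII, health data, financial records, source code
    acts_autonomously: bool,     # can trigger actions without a human approving each one
    customer_facing: bool,
) -> str:
    """Illustrative tiering logic; real criteria and cut-offs are an organisational choice."""
    if regulated_decision or (affects_individuals and acts_autonomously):
        return "high"      # full testing, named owner, documented acceptance decision
    if affects_individuals or uses_sensitive_data or customer_facing:
        return "medium"    # proportionate review, data-handling rules, monitoring
    return "low"           # lightweight registration and standard acceptable-use policy

# A meeting summariser and a credit decisioning model land in different tiers:
assert assign_risk_tier(False, False, False, False, False) == "low"
assert assign_risk_tier(True, True, True, False, True) == "high"
```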
Gartner’s 2025 forecast that task-specific AI agents could appear in 40% of enterprise applications by the end of 2026 makes this more urgent. When AI is embedded in normal software, governance cannot rely on a separate “AI project” label. The risk model must follow the capability, not the procurement category.
Data Controls Are the Real Governance Layer
AI governance collapses quickly when data governance is weak.
The CIO.com piece makes the point plainly: without clean data, strong controls, and clear ownership, powerful AI tools create more risk than value. That is the practical reality of systems that summarise, infer, recommend, classify, and act based on the data they can access.
The baseline should define what data can be used with which AI tools. Public tools, enterprise tools, internal models, and regulated workloads should not sit under the same policy. Sensitive customer data, employee data, source code, contracts, health data, financial records, and strategy documents need explicit handling rules.
Data controls should answer five questions; a minimal policy check built around them is sketched after the list:
- Can this AI tool access the data?
- Can the vendor use prompts or outputs for model training?
- Where is the data processed and retained?
- Who can see the output?
- What logs prove the rule was followed?
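The sketch below turns those five questions into a pre-use gate. The tool registry, data classifications, and region rule are invented for illustration; the point is that each question maps to a checkable attribute, and a denied request explains which rule it failed.

```python
# Hypothetical attributes a governance team might record for each approved AI tool.
TOOL_REGISTRY = {
    "enterprise-assistant": {
        "allowed_data": {"public", "internal-business"},
        "vendor_trains_on_inputs": False,
        "processing_regions": {"EU"},
        "prompt_logging": True,
    },
    "public-chatbot": {
        "allowed_data": {"public"},
        "vendor_trains_on_inputs": True,
        "processing_regions": {"US"},
        "prompt_logging": False,
    },
}

def check_data_use(tool: str, data_class: str, required_region: str) -> list[str]:
    """Return the policy violations for a proposed AI use; an empty list means allowed."""
    profile = TOOL_REGISTRY[tool]
    violations = []
    if data_class not in profile["allowed_data"]:
        violations.append(f"{data_class} data is not approved for {tool}")
    if profile["vendor_trains_on_inputs"] and data_class != "public":
        violations.append("vendor may train on non-public prompts or outputs")
    if required_region not in profile["processing_regions"]:
        violations.append(f"processing must stay in {required_region}")
    if not profile["prompt_logging"]:
        violations.append("no logs would exist to prove the rule was followed")
    return violations

# Confidential data pushed to a public tool fails on several questions at once:
print(check_data_use("public-chatbot", "customer-pii", "EU"))
```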
I have seen AI pilots fail not because the model was poor, but because nobody could explain the data path. Once legal, privacy, and security teams asked where prompts were stored, how outputs were logged, and whether confidential data could leave the region, the pilot froze. The problem was missing evidence.
Human Accountability Cannot Be Decorative
“Human in the loop” is one of the most overused phrases in enterprise AI.
In many organisations it means a person is somewhere near the system. That is not accountability. A meaningful human-control model defines who can approve, override, reject, investigate, and stop an AI-assisted decision.
For low-risk productivity tools, human review may mean the employee owns the final output. For higher-risk systems, review should be explicit: named owner, decision threshold, escalation path, exception queue, and evidence that the reviewer had enough information.
This matters even more for agentic AI. An assistant drafts. An agent may act. Once AI can trigger a workflow, update a system, contact a customer, approve an exception, or call another tool, human accountability must move from vague supervision to explicit control rights.
The bottom line: if nobody can stop the system, nobody truly governs it.
Testing Must Cover Behaviour, Not Just Accuracy
Traditional software testing asks whether the system does what the specification says. AI testing has to go further because the behaviour can vary with prompts, data, context, model versions, and vendor changes.
For CIOs, the minimum standard should include pre-deployment testing, bias and fairness checks where people-impacting decisions are involved, security review, prompt and output testing for generative systems, adversarial misuse testing for exposed tools, and post-deployment monitoring.
Do not turn this into a laboratory exercise for every use case. Risk tiers should drive depth. High-risk AI needs stronger testing and a documented acceptance decision.
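One way to make "risk tiers drive depth" concrete is a simple mapping from tier to required checks before go-live, along these lines. The tier names and check names are illustrative, not a standard.

```python
# Illustrative mapping from risk tier to the minimum checks required before release.
REQUIRED_CHECKS = {
    "low": [
        "pre-deployment functional testing",
        "acceptable-use confirmation",
    ],
    "medium": [
        "pre-deployment functional testing",
        "prompt and output testing",
        "security review",
        "post-deployment monitoring plan",
    ],
    "high": [
        "pre-deployment functional testing",
        "bias and fairness checks",
        "security review",
        "adversarial misuse testing",
        "post-deployment monitoring plan",
        "documented acceptance decision by the named owner",
    ],
}

def release_gate(tier: str, completed: set[str]) -> list[str]:
    """Return the checks still missing for this tier; empty means the gate is passed."""
    return [check for check in REQUIRED_CHECKS[tier] if check not in completed]
```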
Every material AI system should have a plain-language record of what it is not reliable for. That tells users when to trust, when to verify, and when to escalate.
Vendor AI Is Still Your AI Risk
One of the biggest governance blind spots is vendor AI. Many CIOs are more disciplined with internally built models than with AI embedded in tools they already license.
That is backwards. A vendor feature can still expose enterprise data, influence a regulated decision, change a workflow, create retention issues, or introduce fourth-party dependencies. CIO’s January 2026 article specifically calls out vendor exposure, fourth-party risk, and contractual flexibility as areas leaders should press on now.
Standardise the vendor review. Ask what AI features are active, what data they use, whether customer inputs train models, where processing occurs, how logs are retained, what subcontractors are involved, how model changes are communicated, and what exit options exist.
Contract language should not be frozen around today’s terminology. It should cover data use, auditability, incident notification, change management, explainability support, regulator cooperation, and termination rights as AI capability evolves.
Build the Audit Trail Before the Audit
The best governance programmes produce evidence as a by-product of normal work.
For each material AI use case, the organisation should be able to show its purpose, owner, risk tier, data sources, vendor dependencies, approval record, testing summary, human control model, monitoring results, incidents, changes, and retirement path.
That sounds heavy until an incident happens. Then it is the difference between a controlled response and a scavenger hunt.
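Here is a minimal sketch of what "evidence as a by-product" can look like: the approval step itself writes the record, so nothing has to be reassembled after the fact. The event names, fields, and file path are assumptions for illustration only.

```python
import json
from datetime import datetime, timezone

def record_governance_event(use_case: str, event: str, details: dict,
                            log_path: str = "ai_evidence.jsonl") -> None:
    """Append a timestamped evidence line as part of the normal approval workflow."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "event": event,          # e.g. "approval", "test-summary", "incident", "model-change"
        "details": details,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# The approval decision and the audit evidence are the same action:
record_governance_event(
    "Claims summarisation assistant",
    "approval",
    {"risk_tier": "medium", "approver": "Head of Claims", "testing_summary": "passed release gate"},
)
```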
Lenovo’s 2026 CIO research, based on IDC insights, reported that only 27% of surveyed organisations had a comprehensive AI governance framework in place, even as many were pushing AI proofs-of-concept into production. The direction is familiar: adoption is ahead of control maturity.
The answer is not to slow all AI. It is to make governed AI faster than shadow AI. Give teams a clear path: register the use case, classify the risk, apply the right controls, document the decision, and move.
What CIOs Should Standardise Now
The practical baseline is not complicated. It is disciplined.
Standardise the AI inventory. Standardise risk tiers. Standardise data-use rules. Standardise human accountability. Standardise testing and monitoring. Standardise vendor AI reviews. Standardise audit evidence.
Then connect those standards to governance rhythm: monthly AI risk review for material use cases, quarterly board reporting, annual policy refresh, incident review, training updates, and procurement gates when vendors add AI capabilities.
This is where CIOs earn credibility. They do not need to predict every regulation. They need to show that the organisation has a coherent, repeatable way to manage AI risk while still delivering value.
The Real Competitive Advantage
There is a temptation to see governance as a drag on AI adoption. That is a narrow view.
Good governance lets the enterprise scale AI with confidence. It lets business teams move quickly because the boundaries are clear. It gives legal and risk teams evidence instead of anxiety. It gives boards a view of value and exposure. It gives auditors a trail. It gives customers a reason to trust the outcome.
Firms that wait for regulatory certainty will spend the next few years reacting. Firms that standardise now will adapt as the rules mature.
AI governance is not about guessing the future law. It is about proving that today’s AI is owned, controlled, tested, monitored, and worthy of trust. That is the standard CIOs should build before anyone forces them to.