AI is everywhere, but governance is lagging. While 96% of large enterprises already use AI and nearly all plan to boost AI investment in 2025, only 5% have implemented formal AI governance frameworks. At the same time, 82% of executives say governance is a top priority, recognising it as essential for managing risk, maintaining trust, and complying with fast-evolving regulations.
This blog is produced in collaboration with Parker and Lawrence, leaders in AI, risk, and compliance research. Combining our respective research and client engagements, we explain the crucial role of AI governance in 2025, including the principles that underpin effective governance and the business case for developing your own world-class framework.
Defining AI Governance
AI governance is the set of policies, processes, and controls that guide the responsible development, deployment, use, and oversight of artificial intelligence systems. It ensures AI is ethical, safe, transparent, and aligned with organisational and social values, helping enterprises manage risk, build trust, and unlock value with confidence.
AI governance intersects technology, strategy, risk, compliance, and operations. It connects leadership intent with day-to-day AI practices, embedding accountability, oversight, and ethical considerations across the entire AI lifecycle. From data governance to model validation, deployment monitoring to post-launch audits, AI governance ensures that AI systems remain robust, explainable, and aligned with evolving business goals.
The Hourglass Model
There are many ways to implement and visualise AI governance. The hourglass model is one such approach.

The Hourglass Model of AI Governance provides a layered framework to operationalise ethical and regulatory requirements for AI systems within organisations. It consists of three interconnected levels: the environmental layer, which includes external forces such as laws, ethical guidelines, and stakeholder pressure; the organisational layer, where strategic and value alignment decisions are made to translate those external demands into internal practices; and the AI system layer, where governance is implemented through concrete controls in data, models, risk management, and accountability mechanisms.
The model visualises governance as a dynamic, bidirectional flow, from regulatory intent to system implementation, and from technical feedback to strategic refinement, helping organisations embed responsible AI practices across the full lifecycle of their systems while aligning with frameworks like the EU AI Act and ISO/IEC 42001.
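To make that bidirectional flow concrete, here is a minimal Python sketch of the three layers. The layer names follow the hourglass model, but every requirement, policy, and control named below is an illustrative placeholder rather than a prescribed mapping.

```python
from dataclasses import dataclass, field


@dataclass
class EnvironmentalLayer:
    """External forces: laws, ethical guidelines, stakeholder pressure."""
    requirements: list[str] = field(default_factory=list)


@dataclass
class OrganisationalLayer:
    """Strategy and value alignment: maps external requirements to internal policies."""
    policies: dict[str, str] = field(default_factory=dict)


@dataclass
class AISystemLayer:
    """Concrete controls on data, models, risk management, and accountability."""
    controls: dict[str, list[str]] = field(default_factory=dict)


# Top-down flow: regulatory intent -> internal policy -> system-level controls.
# (All values below are hypothetical examples.)
env = EnvironmentalLayer(requirements=["Disclose AI use to end users"])
org = OrganisationalLayer(policies={"Disclose AI use to end users": "Chatbot disclosure policy"})
ai = AISystemLayer(controls={"Chatbot disclosure policy": ["UI banner", "per-session disclosure log"]})

# Bottom-up flow: technical feedback returned to refine strategy and policy.
feedback = {"per-session disclosure log": "users dismiss the banner early; policy wording under review"}
```

The point of the sketch is the shape of the flow, not the specific entries: requirements narrow into policies at the organisational waist of the hourglass, widen again into system-level controls, and monitoring results travel back up.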
Why is AI Governance Dominating the Enterprise Agenda?
AI governance is not just a matter of ethics or compliance; it’s rapidly becoming a core business requirement in 2025. As enterprises scale AI, the risks grow with it. A McKinsey study underscores this reality: even as of March 2024, 83% of generative AI adopters reported negative consequences, detailed in the chart below.

Question was asked only of respondents whose organisations have adopted generative AI in at least 1 function, n = 876. The 17 percent of respondents who said “don’t know/not applicable” are not shown. Source: McKinsey Global Survey on AI, March 2024.
Increasingly, AI assurance is a market expectation. For enterprise buyers, governance is shifting from a nice-to-have to a must-have. This shift is driven by corporate leaders who recognise that demonstrating control over AI is essential to securing partnerships, entering new markets, and aligning with emerging regulations like the EU AI Act.
How AI Governance Differs From Existing Frameworks
AI governance isn’t just an extension of existing risk or compliance frameworks; it requires fundamentally different thinking.
Probabilistic Outputs
Unlike traditional systems, the latest wave of AI models is non-deterministic, opaque, and dynamic. For decades, institutions relied on deterministic models like logistic regression for credit scoring: submit the same application twice, and you’d get the same score every time. These models produced consistent, transparent, and stable outputs.
However, with today’s cutting-edge models, outputs aren’t always predictable, and their performance can degrade or shift over time due to model drift, data changes, or feedback loops. This means governance can’t be a one-off design decision; more so than ever, it demands continuous, multidisciplinary oversight.
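To show what that continuous oversight can look like in code, here is a small, self-contained sketch that flags drift in a model's score distribution using the population stability index (PSI). The data is synthetic and the 0.2 threshold is a common rule of thumb, not a mandated value.

```python
import numpy as np


def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; larger values indicate more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions, flooring at a tiny value to avoid log(0).
    exp_pct = np.clip(exp_counts / exp_counts.sum(), 1e-6, None)
    act_pct = np.clip(act_counts / act_counts.sum(), 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))


rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=10_000)        # scores at validation time
production_scores = rng.beta(2.5, 4.5, size=10_000)  # scores observed in production

psi = population_stability_index(baseline_scores, production_scores)
if psi > 0.2:  # common rule-of-thumb threshold for significant drift
    print(f"PSI={psi:.3f}: drift detected, trigger model review")
else:
    print(f"PSI={psi:.3f}: distribution stable")
```

A check like this only covers one failure mode, of course; the broader point is that governance of probabilistic systems needs monitoring that runs continuously, not a sign-off that happens once.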
Cross-Functional Coordination
Governing AI systems is more complex than governing traditional software because AI development and use involve a broader and more fragmented set of stakeholders, and because its impact extends beyond internal operations, influencing real people in legal, social, and ethical contexts.
Internally, it draws in HR teams who apply AI to hiring or performance decisions, managers who act on AI-driven recommendations, and legal and compliance teams who must ensure fairness, non-discrimination, and data protection. IT, security, and data science teams are responsible for system reliability, integration, and transparency, while senior leadership remains ultimately accountable for reputational and ethical risks.
Externally, the governance environment is shaped by vendors who develop or maintain AI systems, regulators and standard setters who define compliance expectations, and civil society groups and researchers who highlight emerging risks. Even individuals affected by AI decisions contribute indirectly, as their experiences can surface issues that drive policy or oversight responses.
In this context, AI governance evolves from a contained compliance function into an enterprise-wide coordination effort, requiring ongoing collaboration across legal, risk, product, ethics, and external stakeholders. For many organisations, this shift represents a steep but necessary learning curve.
Accountability for Autonomy
Effective accountability hinges on two essential conditions:
- Clear responsibility: who is accountable, to whom, for what, and under which standards; and
- The capacity to explain and justify actions and to face consequences.
With AI, especially autonomous or generative models, both conditions frequently break down. Complex supply chains, third‑party components, tuning cycles, and multiple human/non‑human actors blur responsibility (the “many hands” problem). Even when responsibility can be assigned, technical opacity prevents actors from truly explaining outcomes. And in the absence of agreed standards—ethical, legal, or regulatory—it’s unclear what they should be answerable for.
As a result, AI governance must confront an accountability gap unseen in traditional systems, requiring not only formal structures but also ongoing transparency, traceability, and enforceable norms.
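One practical response to the "many hands" problem is to record, for every AI-influenced decision, who is accountable, under which standard, and on what justification. The sketch below shows a hypothetical audit-trail entry; the field names and example values are illustrative, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class DecisionRecord:
    """Minimal audit-trail entry for a single AI-influenced decision."""
    decision_id: str
    system: str               # which model or system produced the output
    model_version: str        # exact version, so the outcome can be reproduced later
    accountable_owner: str    # named role answerable for this decision
    applicable_standard: str  # policy or standard the decision is assessed against
    explanation: str          # human-readable justification captured at decision time
    timestamp: datetime


# Hypothetical example entry.
record = DecisionRecord(
    decision_id="loan-2025-00431",
    system="credit-scoring-assist",
    model_version="2.3.1",
    accountable_owner="Head of Retail Credit",
    applicable_standard="Internal fair-lending policy v4",
    explanation="Application declined: debt-to-income above policy limit.",
    timestamp=datetime.now(timezone.utc),
)
```

Even a record this simple forces the two accountability conditions to be answered explicitly at the moment a decision is made, rather than reconstructed after something has gone wrong.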
Which Principles Underpin AI Governance?
At the heart of AI governance is a set of foundational principles that guide responsible AI use. They reflect growing regulatory expectations, stakeholder concerns, and enterprise risk priorities. The most widely recognised include transparency, reliability, fairness, privacy, and accountability, as the results below confirm:

For many organisations, these principles are already embedded, at least nominally, in AI policy documents. But that’s often where the challenge begins. Translating principles into practice is a complex task: explainability means different things in HR than it does in financial services; fairness may demand distinct metrics depending on context; robustness can require deep technical testing and ongoing monitoring.
This is why implementation maturity varies widely. While standard policies may be templated across the business, the actual controls and processes, such as how fairness is measured or how oversight is enforced, must be tailored to the organisation’s structure, systems, and risk profile. Bridging this gap is the next frontier in enterprise AI governance, and it’s where most current efforts are focused.
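As a small illustration of how one principle becomes a context-specific control, the sketch below computes the demographic parity difference, one common fairness metric, for a hypothetical screening model. Other settings may call for different metrics (such as equalised odds) and different tolerances, and the data here is synthetic.

```python
import numpy as np


def demographic_parity_difference(predictions, groups):
    """Difference in positive-outcome rates between groups (0 means parity)."""
    predictions = np.asarray(predictions)
    groups = np.asarray(groups)
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))


# Synthetic screening decisions (1 = advance to interview) for two applicant groups.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
groups = np.array(["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"])

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # flag if above an agreed tolerance
```

Choosing the metric, the tolerance, and the review process when the tolerance is breached is exactly the kind of tailoring that a templated policy cannot do on its own.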
The Business Case for AI Governance
High-quality AI governance, such as that defined by standards like ISO 42001, offers more than just regulatory cover. It creates the conditions for AI to scale safely, credibly, and competitively. Here’s how:
- Reduces compliance risk: AI governance helps organisations meet regulatory obligations such as the EU AI Act, which imposes different requirements depending on the risk level of the AI system. High-risk systems are subject to stringent mitigation measures, while lower-risk categories focus more on transparency and disclosure (see the sketch after this list). Frameworks like ISO 42001 provide a structured way to assess and manage these risks, making legal alignment more achievable and reducing the likelihood of non-compliance and penalties.
- Builds public and stakeholder trust: Demonstrating that AI systems are explainable, safe, and fair helps reinforce trust with users, regulators, and the broader market. Certification against a standard like ISO 42001 sends a clear signal of commitment to responsible AI.
- Improves operational efficiency: Clear governance structures reduce ambiguity and bottlenecks in the AI lifecycle. By defining roles, responsibilities, and standard processes, governance enables faster, more confident development and deployment.
- Enables faster, safer market entry: Enterprises with mature governance can launch AI products into regulated markets more quickly. Delays and disruptions, common when governance is lacking, can be avoided through structured controls and readiness documentation.
- Supports smarter, controlled risk-taking: Governance doesn’t mean zero risk; it means informed risk. With oversight mechanisms in place, organisations can make bolder AI bets, knowing that risks are assessed, monitored, and mitigated appropriately.
- Enhances credibility with clients and partners: Many large enterprises and governments now require assurance before procuring AI solutions. Certification frameworks like ISO 42001 increasingly serve as commercial prerequisites, not just best practice.
- Provides a blueprint for scalable AI: ISO 42001 mandates the creation of an AI Management System (AIMS): a formal structure for managing AI across its lifecycle. This not only supports initial implementation but also sets the foundation for continuous improvement and future innovation.
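Picking up the forward reference in the compliance bullet above, here is a minimal sketch of how risk-tiered obligations can be encoded as a lookup from tier to controls. The tier names follow the EU AI Act's broad categories, but the control lists are simplified illustrations rather than the Act's full legal requirements.

```python
# Illustrative mapping from EU AI Act risk tier to example internal controls.
# The tier names reflect the Act's broad categories; the control lists are
# simplified examples, not a complete statement of legal obligations.
CONTROLS_BY_RISK_TIER = {
    "unacceptable": ["prohibit deployment"],
    "high": [
        "risk management system",
        "data governance and quality checks",
        "technical documentation and logging",
        "human oversight procedures",
        "conformity assessment before market entry",
    ],
    "limited": ["transparency / disclosure to users"],
    "minimal": ["voluntary code of conduct"],
}


def required_controls(risk_tier: str) -> list[str]:
    """Return the example controls an AI system in this tier would need."""
    try:
        return CONTROLS_BY_RISK_TIER[risk_tier]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {risk_tier!r}")


print(required_controls("high"))
```

In practice the classification step itself, deciding which tier a given system falls into, is where legal, risk, and product teams spend most of their effort; the mapping above only captures what follows from that decision.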
What Next?
In our next blog, A Practical Path to Trustworthy AI, we move from principles to practice. We’ll explore how to operationalise AI governance through practical steps that support trustworthy, accountable AI systems that organisations can scale with confidence. Stay tuned!