
Be EU AI Act Ready with Our Regulatory Conformity Assessments


What is the EU AI Act?

The EU AI Act is a regulatory framework governing the development, commercialisation, and use of artificial intelligence systems in the European Union. Its primary goal is to ensure that AI operates safely and ethically, balancing the protection of fundamental rights with the promotion of innovation.

The Act classifies AI systems by risk level, determining obligations accordingly. For example, high-risk applications, such as medical diagnostics or workplace performance monitoring, require providers to establish a risk management system. Enforcement is handled by national regulators, so supervisory practice may vary across member states.

What Are the Four Risk Levels in the EU AI Act?

The EU AI Act categorises AI systems into four different risk levels, each with specific regulatory requirements.

1. Unacceptable Risk

These systems are completely banned due to the high risks they present to public safety and citizens’ rights. They include applications such as subliminal manipulation intended to alter behaviour, social scoring by public authorities, and remote biometric identification in real-time in public spaces.

2. High Risk

These systems may be components within safety-regulated products under EU harmonisation legislation, meaning they are already subject to stringent EU safety and quality standards. Examples include systems embedded in medical devices and automobiles.

They can also be critical products in themselves, such as an AI system used to determine credit scores.

High-risk AI applications include those in areas such as critical infrastructure (water, gas, electricity), education, employment and worker management, judicial administration, public services, and credit-scoring assessments.

3. Limited Risk

Providers and deployers of limited-risk AI must implement transparency measures, such as notifying users when they are interacting with an AI system rather than a human (e.g., chatbots).

4. Minimal or No Risk

All other AI systems that do not fall into the categories above are not subject to specific requirements under the AI Act, although providers may choose to adopt voluntary codes of conduct to enhance safety and transparency. Examples include AI-driven video games and spam filters. However, as AI technology evolves, these minimal-risk applications may be reviewed and could require additional transparency measures.
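The four tiers above amount to a lookup from use case to obligations. The sketch below is purely illustrative: the tier names and example use cases come from the text above, but classifying any real system requires case-by-case legal analysis, and the mapping here is an assumption for demonstration only.

```python
from enum import Enum

class RiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations (risk management system, conformity assessment)"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations (voluntary codes of conduct)"

# Illustrative mapping of the example use cases mentioned above.
# Real classification requires legal analysis of the specific system.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "real-time remote biometric identification": RiskLevel.UNACCEPTABLE,
    "credit scoring": RiskLevel.HIGH,
    "AI embedded in medical devices": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Describe the regulatory consequence of a tier assignment."""
    level = EXAMPLE_USE_CASES[use_case]
    return f"{use_case}: {level.name} risk -> {level.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```
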

What Are the Penalties for Non-Compliance?

The severity of the penalty depends on the type of infringement:

  • Prohibited AI practices – up to €35 million or 7% of global annual turnover, whichever is higher
  • Non-compliance with high-risk AI obligations – up to €15 million or 3% of global annual turnover
  • Supplying false or misleading information to authorities – up to €7.5 million or 1% of global annual turnover
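Each ceiling above is the higher of a fixed amount and a share of worldwide annual turnover. A minimal arithmetic sketch, using only the figures from the list above (this illustrates the calculation, not legal advice; the turnover figure is hypothetical):

```python
def fine_ceiling(fixed_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed amount
    and the percentage of global annual turnover."""
    return max(fixed_eur, pct_of_turnover * turnover_eur)

# Ceilings per infringement type: (fixed amount in EUR, turnover share).
CEILINGS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}

# Hypothetical provider with EUR 1 billion in global annual turnover:
turnover = 1_000_000_000
fixed, pct = CEILINGS["prohibited_practices"]
print(fine_ceiling(fixed, pct, turnover))  # 7% of 1bn = 70,000,000 > 35,000,000
```

For a smaller firm the fixed amount dominates: at €100 million turnover, 1% is €1 million, so the €7.5 million ceiling applies for misleading information.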

What Is the Applicability of the EU AI Act?

The AI Act applies to AI systems with a connection to the EU, either through development, use, or market presence.

The Presence of an AI System

The Act defines an “AI System” as a machine-based system that can analyse inputs and generate outputs like predictions or decisions, with some adaptability or learning capability. Examples include chatbots, autonomous robots, and predictive models in sectors like finance and healthcare.

Geographic Connection with the EU

The AI Act is a regulation of the EU, so its scope is inherently tied to the region. However, the Act also has extraterritorial reach (similar to the GDPR), meaning it can apply to entities and activities outside the EU if there is a significant connection to the EU.

How Can Zertia Help?

Conformity Assessment

A conformity assessment is like an AI compliance audit—ensuring your system meets the EU AI Act’s requirements. Zertia provides an independent, objective review, identifying any gaps so you’re fully prepared for compliance with confidence.

Future Notified Body Status

Zertia is committed to becoming a Notified Body under the EU AI Act and will apply as soon as the process opens. This will allow us to provide official certification for compliance with the EU AI Act, helping businesses demonstrate regulatory approval for their high-risk AI systems.

Why Zertia?

Technical and regulatory expertise to evaluate conformities and non-conformities with a nuanced understanding of the EU AI Act.

Independent oversight and conflict-of-interest safeguards ensure all conformity assessments are impartial and objective.

Industry and use-case expertise to ensure assessments reflect real-world AI scenarios and risks.

Resources

Operationalising AI Governance with a Strategic Mindset

Key steps to operationalise AI governance: choose the right framework; design your living AI policy; define your governance roles and structure; map your model inventory; control third-party AI risks; manage risks and impact.

In a 2024 report, the Australian Securities & Investments Commission (ASIC) analysed 624 AI use cases across 23 regulated financial institutions, and the findings are deeply concerning. Only 52% of the entities had policies referencing fairness, inclusivity, or accessibility in the use of AI. None had implemented specific mechanisms to ensure contestability: clear processes for consumers to understand, challenge, or appeal automated decisions. Even more troubling, in 30% of the cases reviewed, the AI models had been developed by third parties, and some organisations could not even identify the algorithmic techniques behind them.

These are just some examples of what falls under the scope of AI governance: implementing policies, processes, practices, and frameworks that guide the ethical and responsible development and use of AI. As AI becomes more deeply integrated into organisations and governments, the risks of misuse grow more evident. This reinforces the need for a well-defined and strategic approach to governance.

That's why, in collaboration with Parker & Lawrence Research, a leading market research firm specialising in AI, risk, and compliance, we're exploring how to operationalise AI governance, drawing on our combined research and client engagements to make AI governance practical. Missed our last post? Start here: What is AI Governance? And Why is it Dominating the Enterprise AI Agenda?

Key Steps to Operationalise AI Governance

1. Identify Your Obligations

The first step in operationalising AI governance is to map your legal and market obligations by asking a critical question: is my organisation required to comply with specific AI standards or regulations?

AI compliance generally falls into two categories: regulations, which are must-haves, and voluntary standards, which are nice-to-haves. Must-have regulations are legally binding, and non-compliance can lead to serious legal and financial consequences. These regulations set the foundation for responsible AI use and require organisations to implement formal governance structures and documentation. Key examples include:

  • EU AI Act – establishes a risk-based approach to regulating AI systems, with strict requirements for high-risk applications.
  • UK Data Protection Act – enforces data privacy standards, including transparency, accountability, and fairness in the use of AI technologies.

On the other hand, nice-to-have standards are voluntary guidelines and frameworks that serve as industry best practices. While not legally enforceable, they play a critical role in promoting ethical AI development and strengthening stakeholder trust. In many cases, they become market-driven requirements, as large clients may demand adherence as a condition for collaboration, or organisations may adopt them to enhance efficiency and resource management. Examples include:

  • OECD AI Principles – encourage AI that is innovative, trustworthy, and respects human rights and democratic values.
  • ISO/IEC 42001:2023 – a globally recognised standard for AI management systems, offering structured guidance on implementing responsible AI governance.

When mapping your obligations and choosing an AI governance framework, it ultimately comes down to asking: what is the purpose of integrating AI governance in our organisation? Is it to meet legal obligations, gain competitive advantage, strengthen brand reputation, or improve operational control? Clarity here will determine which combination of regulations and voluntary standards you should adopt.

2. Design Your Living AI Policy

Once your organisation has identified which frameworks it must comply with, the next step is to operationalise them through a clear and actionable AI policy. An AI policy is a formalised set of principles, rules, and procedures that guide the responsible development, deployment, and use of AI within an organisation. It is important not to adopt a static document, but rather a dynamic and editable framework. One example is the AI Policy Template developed by the Responsible AI Institute (RAI Institute). With 14 detailed sections, it covers essential domains like governance, procurement, workforce management, compliance, documentation, and AI lifecycle oversight.

3. Define Your Governance Roles and Structure

One of the first practical steps in bringing such a policy to life is the creation of a dedicated governance structure, one that clearly defines responsibilities and ensures both accountability and oversight. The Three Lines of Defence (3LoD) model provides a solid and widely adopted foundation for doing just that.

At the top sits the governing body, typically the Board of Directors, ultimately accountable to stakeholders. Its role is to uphold organisational integrity, align strategy, and ensure compliance with ethical and regulatory standards. To exercise effective oversight, the board must rely on independent information sources, not only from executives but also from internal audit or ethics committees that offer an unfiltered view of risk.

Beneath the board, management executes strategy and oversees risk across two lines. The first line includes operational teams, such as AI research and product development, who manage risk directly by building systems responsibly and ensuring compliance. The second line (risk, legal, compliance, and ethics) supports them with expertise, policy development, performance monitoring, and by operationalising governance principles. The third line, internal audit, offers independent assurance to the board, assessing how well risks are identified and controlled. In AI organisations, this requires audit functions capable of addressing AI-specific risks, like model misuse, fairness violations, or systemic impact, beyond traditional financial or compliance concerns.

The chart below shows a sample organisational structure of an AI company, with equivalent responsibilities mapped across the three lines of defence.

4. Map Your Model Inventory

Effective AI governance requires full visibility into all AI systems used across your organisation. This starts with building and maintaining a centralised model inventory that lists all AI systems, whether developed, acquired, or resold. The inventory should include key metadata: types of data used (especially if it involves personal data), model type, algorithm, and deployment context. This visibility is essential for managing risks, supporting audits, and ensuring continuous improvement. The inventory must be actively maintained, with regular attestations from model owners and integration into broader risk and compliance workflows. Organisations can start with simple tools like spreadsheets, but enterprise-scale or high-risk environments benefit from dedicated platforms that automate discovery, metadata capture, and monitoring.


What is AI Governance? And why is it Dominating The Enterprise AI Agenda?

AI is everywhere, but governance is lagging. While 96% of large enterprises already use AI and nearly all plan to boost AI investment in 2025, only 5% have implemented formal AI governance frameworks. At the same time, 82% of executives say governance is a top priority, recognising it as essential for managing risk, maintaining trust, and complying with fast-evolving regulations.

This blog is produced in collaboration with Parker and Lawrence, leaders in AI, risk, and compliance research. Combining our respective research and client engagements, we explain the crucial role of AI governance in 2025, including the principles that underpin effective governance and the business case for developing your own world-class framework.

Defining AI Governance

AI governance is the set of policies, processes, and controls that guide the responsible development, deployment, use, and oversight of artificial intelligence systems. It ensures AI is ethical, safe, transparent, and aligned with organisational and social values, helping enterprises manage risk, build trust, and unlock value with confidence.

AI governance intersects technology, strategy, risk, compliance, and operations. It connects leadership intent with day-to-day AI practices, embedding accountability, oversight, and ethical considerations across the entire AI lifecycle. From data governance to model validation, deployment monitoring to post-launch audits, AI governance ensures that AI systems remain robust, explainable, and aligned with evolving business goals.

The Hourglass Model

There are many ways to implement and visualise AI governance; the hourglass model is one such approach. The Hourglass Model of AI Governance provides a layered framework to operationalise ethical and regulatory requirements for AI systems within organisations. It consists of three interconnected levels: the environmental layer, which includes external forces such as laws, ethical guidelines, and stakeholder pressure; the organisational layer, where strategic and value-alignment decisions are made to translate those external demands into internal practices; and the AI system layer, where governance is implemented through concrete controls in data, models, risk management, and accountability mechanisms.

The model visualises governance as a dynamic, bidirectional flow, from regulatory intent to system implementation and from technical feedback to strategic refinement, helping organisations embed responsible AI practices across the full lifecycle of their systems while aligning with frameworks like the EU AI Act and ISO/IEC 42001.

Why is AI Governance Dominating the Enterprise Agenda?

AI governance is not just a matter of ethics or compliance; it is rapidly becoming a core business requirement in 2025. As enterprises scale AI, the risks grow with it. A McKinsey study underscores this reality: even as of March 2024, 83% of generative AI adopters reported negative consequences, detailed in the chart below. (The question was asked only of respondents whose organisations have adopted generative AI in at least one function, n = 876; the 17% of respondents who said "don't know/not applicable" are not shown. Source: McKinsey Global Survey on AI, March 2024.)

For many organisations, AI governance is a gateway to growth. World-class governance practices signal credibility to customers, partners, and regulators. They show that AI systems are functional, safe, fair, and auditable. This is especially important in high-impact use cases like HR, healthcare, or financial services, where risk tolerance is low and trust is everything.

Increasingly, AI assurance is a market expectation: for enterprise buyers, governance is shifting from a nice-to-have to a must-have. This shift is driven by corporate leaders who recognise that demonstrating control over AI is essential to securing partnerships, entering new markets, and aligning with emerging regulations like the EU AI Act.

How AI Governance Differs From Existing Frameworks

AI governance is not just an extension of existing risk or compliance frameworks; it requires fundamentally different thinking.

Probabilistic Outputs

Unlike traditional systems, the latest wave of AI models is non-deterministic, opaque, and dynamic. For decades, institutions relied on deterministic models like logistic regression for credit scoring: submit the same application twice, and you would get the same score every time. These models produced consistent, transparent, and stable outputs. With today's cutting-edge models, however, outputs are not always predictable, and their performance can degrade or shift over time due to model drift, data changes, or feedback loops. This means governance cannot be a one-off design decision; more than ever, it demands continuous, multidisciplinary oversight.

Cross-Functional Coordination

Governing AI systems is more complex than governing traditional software because AI development and use involve a broader and more fragmented set of stakeholders, and their impact extends beyond internal operations, influencing real people in legal, social, and ethical contexts.

Internally, it draws in HR teams who apply AI to hiring or performance decisions, managers who act on AI-driven recommendations, and legal and compliance teams who must ensure fairness, non-discrimination, and data protection. IT, security, and data science teams are responsible for system reliability, integration, and transparency, while senior leadership remains ultimately accountable for reputational and ethical risks. Externally, the governance environment is shaped by vendors who develop or maintain AI systems, regulators and standard setters who define compliance expectations, and civil society groups and researchers who highlight emerging risks. Even individuals affected by AI decisions contribute indirectly, as their experiences can surface issues that drive policy or oversight responses.

In this context, AI governance evolves from a contained compliance function into an enterprise-wide coordination effort, requiring ongoing collaboration across legal, risk, product, ethics, and external stakeholders. For many organisations, this shift represents a steep but necessary learning curve.

Accountability for Autonomy

Effective accountability hinges on two essential conditions: clear responsibility (who is accountable, to whom, for what, under which standards), and the capacity to explain and justify actions and face consequences.

With AI, especially autonomous or generative models, both conditions frequently break down. Complex supply chains, third-party components, tuning cycles, and multiple human/non-human actors blur responsibility (the "many hands" problem). Even when responsibility can be assigned, technical opacity prevents actors from truly explaining outcomes. And in the absence of agreed standards, whether ethical, legal, or regulatory, it is unclear what they should be answerable for. As a result, AI governance must confront an accountability


Why Trump Reversed Biden’s AI Chip Export Ban

Chips are the fuel driving the global AI race, essential for both training models and running real-time applications. While the US leads in chip design, manufacturing is concentrated in Taiwan's TSMC, which produces over 90% of the world's most advanced chips. This dependence has become a geopolitical risk, especially amid tensions with China. TSMC's dominance stems not only from technology but from decades of investment, scale, and expertise that are difficult to replicate. Some have even proposed destroying its facilities in the event of a Chinese invasion.

To limit China's access to this strategic technology, the US introduced the AI Diffusion Rule, aiming to block not just hardware exports but also the proprietary model parameters behind advanced AI systems. The policy sparked immediate backlash and was eventually repealed by the Trump administration.

The Biden AI Diffusion Rule Explained

In its final days, the Biden administration introduced the AI Diffusion Rule, a national security measure aimed at preventing adversarial states, primarily China, from accessing high-performance AI chips and proprietary model parameters developed by US firms such as Nvidia and AMD. Proprietary model parameters are the learned numerical values, like the DNA of an AI model, that determine how it processes inputs and makes decisions. If someone gains access to a model's parameters, they can recreate that model exactly.

The rule established a three-tier system to govern the export of AI chips and model parameters:

  • Tier 1 – Strategic allies (G7, Taiwan, Japan, South Korea): full access with no restrictions.
  • Tier 2 – Friendly but controlled (over 100 countries including India, Switzerland, Singapore, Israel): subject to export caps and licensing requirements.
  • Tier 3 – Adversarial states (China, Russia, Iran, North Korea): total export ban.

For example, Tier 2 countries could receive up to 100,000 Nvidia H100-equivalent chips in 2025, but US firms were required to ensure that no more than 7% of their global compute capacity was deployed in any single Tier 2 nation.

Why It Faced Heavy Backlash

The aim was to prevent indirect access by Chinese firms via intermediary countries such as India, Switzerland, and Singapore, but critics argued the policy was deeply flawed.

Unworkable complexity: the licensing regime was overly bureaucratic and difficult to enforce. Exporters had to manage thresholds, government notifications, and fragmented approvals, creating a logistical nightmare.

Damage to US industry: the rule restricted legitimate exports to friendly and strategic partners, putting billions in sales at risk. Industry leaders warned it would hand market share to Chinese competitors like Huawei, who could step in to fill the void.

Industry Reaction

The Trump administration's decision to abandon the rule was warmly welcomed by the tech sector. Nvidia called it a "once-in-a-generation opportunity" for the US to lead a new industrial era, highlighting potential gains in jobs, infrastructure, and the trade balance. The market reacted positively: Nvidia +3%, AMD +1.8%, Broadcom +1.5%.

Security vs. Innovation

Trump officials made it clear that repealing the rule does not equate to full deregulation. A new framework is in development to better balance national security with US tech competitiveness, potentially through bilateral licensing and more targeted restrictions. Meanwhile, the administration is reassessing semiconductor tariffs and tightening rules on China-specific chips, reinforcing a more nuanced and strategic approach to export policy.

Broader Geopolitical Implications

In recent Senate hearings, leaders from OpenAI, Microsoft, and AMD emphasised the urgent need for regulatory consistency and sustained investment to maintain US leadership in AI.
With Nvidia’s Q1 results due on 28 May, investors are now watching closely to see how ongoing trade dynamics may impact the company’s outlook.  
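The Tier 2 limits described in the excerpt above reduce to two threshold checks. A hedged sketch under that reading: the 100,000-chip annual cap and the 7% single-country compute limit are the only figures taken from the text, and the function name and example numbers are hypothetical.

```python
def tier2_export_ok(chips_to_country: int, country_compute_share: float,
                    annual_chip_cap: int = 100_000,
                    max_country_share: float = 0.07) -> bool:
    """Check a planned Tier 2 export against the rule's two limits:
    the per-country H100-equivalent chip cap, and the maximum share of
    the exporter's global compute capacity in any single Tier 2 nation."""
    return (chips_to_country <= annual_chip_cap
            and country_compute_share <= max_country_share)

# Hypothetical examples:
print(tier2_export_ok(80_000, 0.065))   # within both limits
print(tier2_export_ok(120_000, 0.05))   # exceeds the chip cap
print(tier2_export_ok(50_000, 0.08))    # exceeds the 7% compute-share limit
```
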


Europe

P.º de la Castellana 93b
Suite 114
28046 Madrid
Spain

USA

1101 Brickell Ave
Suite N1400
33131 Miami
Florida, USA
