October 29, 2024

The EU AI Act: A Comprehensive Overview

Discover the EU AI Act's framework governing AI development, risk categories, stakeholder obligations, and compliance deadlines for responsible AI use.

Introduction

The European Union’s Artificial Intelligence (AI) Act, adopted in 2024, establishes a comprehensive legal framework to regulate AI development, deployment, and use across the EU. As the first comprehensive AI regulation worldwide, the EU AI Act aims to manage AI risks while encouraging innovation and protecting fundamental rights. The Act specifies obligations based on risk levels, detailing requirements for various stakeholders, including AI developers, providers, and deployers.


Stakeholders: Who is Affected?

The EU AI Act defines several key roles affected by the regulation:

  • Providers – Entities or individuals who develop an AI system or model, or place one on the EU market. Providers may be located inside or outside the EU but are subject to the Act if their AI systems are used within the EU.
  • Deployers – Organizations or individuals that use AI systems in a professional capacity. Deployers must meet specific operational requirements and oversight measures to ensure safety and regulatory compliance.
  • Importers and Distributors – Stakeholders who introduce AI systems from outside the EU or act as intermediaries within the EU. They are responsible for verifying compliance and may face liabilities similar to providers’ if the AI systems they handle are non-compliant.
  • Product Manufacturers – Companies that integrate high-risk AI systems as essential parts of other products must fulfill provider obligations if they market those products under their own name.

Risk Levels in the AI Act

The Act categorizes AI systems into four risk levels, each with specific regulatory requirements; general-purpose AI (GPAI) models are subject to a separate, additional set of obligations:

  • Unacceptable Risk – AI systems that pose an unacceptable risk are banned outright. These include AI applications that threaten EU values or fundamental rights, such as social scoring systems (e.g., those that assess individuals’ trustworthiness based on behavior) and emotion recognition AI in sensitive settings like workplaces and educational institutions.
  • High-Risk AI Systems – The Act imposes strict regulations on high-risk AI systems due to their impact on safety and rights. High-risk AI includes systems used as safety components or those integrated into products governed by the EU laws listed in Annex I, which require third-party conformity assessments. Additionally, AI applications that process personal data for profiling or for assessing personal characteristics (e.g., job performance, health, interests) are always considered high-risk. Examples include biometric identification in public spaces and AI systems for recruitment or credit scoring.
  • Limited Risk AI Systems – While lower-risk, these systems are subject to transparency requirements. Providers and deployers of limited-risk AI must inform users that they are interacting with an AI system, not a human. This applies to applications like chatbots and deepfake technology, where transparency is crucial.
  • Minimal Risk – Minimal-risk AI includes most AI applications on the EU market, which currently face no additional obligations under the Act. This category covers AI-driven video games, spam filters, and other non-critical AI systems. However, as AI evolves, especially with generative AI advancements, these applications may be reviewed for additional transparency requirements.
  • General-Purpose AI (GPAI) – General-purpose AI models, particularly those with wide applicability or potential systemic impact, also fall under the regulation. Providers of GPAI models, such as large language models, must ensure transparency and compliance if these models could lead to high-risk applications, and are accountable for documentation, transparency, and risk management for high-risk uses within the EU.
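The tiered structure above can be sketched as a simple lookup. The mapping below is purely illustrative: the example use cases and their tier assignments paraphrase the categories described in this article and are not an official classification tool.

```python
from enum import Enum


class RiskTier(Enum):
    """The EU AI Act's four risk tiers, as summarized in this article."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations and conformity assessment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"


# Hypothetical mapping of example use cases to tiers, following the
# examples given above (illustrative only, not legal advice).
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "workplace emotion recognition": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations_for(use_case: str) -> str:
    """Return a one-line summary of the tier and its consequence."""
    tier = EXAMPLE_USE_CASES[use_case]
    return f"{use_case}: {tier.name} risk ({tier.value})"
```

In practice, classification depends on the system's concrete context of use, so a real assessment would consult the Act's annexes rather than a static table like this one.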

Requirements for Providers of High-Risk AI Systems

Providers of high-risk AI systems must meet several extensive requirements, including:

  • Risk Management and Data Governance – Establishing a risk management system and ensuring high standards for data used in AI model training and testing.
  • Technical Documentation and Conformity Assessments – Developing detailed documentation that explains how AI systems function and demonstrates compliance with the Act’s standards. Providers may need to undergo third-party conformity assessments depending on the application’s risk level.
  • Human Oversight and Robustness – Implementing mechanisms to facilitate human intervention and ensuring the system’s robustness against tampering or adversarial attacks.
  • Post-Market Monitoring – Setting up continuous monitoring procedures after deployment to address unforeseen risks.
  • Reporting Obligations – Reporting serious incidents or malfunctions that could impact fundamental rights or safety to the relevant authorities within established timeframes.

Obligations for Deployers of High-Risk AI Systems

Deployers have a set of responsibilities to ensure the safe and compliant use of high-risk AI systems:

  • Risk Mitigation – Deployers must identify and mitigate risks associated with AI systems, ensuring human oversight within operational procedures.
  • Transparency and Communication – Deployers must provide clear information on AI system operations to users and affected individuals, maintaining accessible channels for inquiries or grievances.
  • Documentation and Record-Keeping – Deployers must maintain up-to-date records of the AI system’s operational history, including logs and documentation, for at least six months to ensure accountability.
  • Impact Assessments – Deployers must conduct regular impact assessments to evaluate risks to fundamental rights and notify the relevant market surveillance authority of any identified risks.

Timeline of Key EU AI Act Deadlines

The EU AI Act’s phased implementation establishes compliance milestones for stakeholders, with specific requirements for each stage:

  • August 1, 2024 – EU AI Act Comes into Effect
    The Act officially entered into force, requiring businesses to begin aligning their AI systems with its requirements and preparing for the phased compliance deadlines.
  • February 2, 2025 – Prohibition of Unacceptable Risk AI Systems
    AI systems categorized as “unacceptable risk,” such as social scoring and certain biometric applications in public spaces, are banned within the EU. Non-compliance can lead to significant penalties.
  • August 2, 2025 – General-Purpose AI Requirements and Commission Review Initiated
    Compliance obligations for general-purpose AI (GPAI) models take effect, covering transparency, technical documentation, and copyright compliance. Member states will appoint national authorities for enforcement, and the European Commission will conduct its first annual review of the list of prohibited AI practices.
  • February 2, 2026 – Post-Market Monitoring Obligations for High-Risk AI
    Post-market monitoring requirements apply to high-risk AI systems, obliging providers to track and report on system safety, performance, and compliance.
  • August 2, 2026 – Key Compliance Obligations for High-Risk AI Systems
    Obligations begin for high-risk AI systems, including documentation, risk management, data governance, and cybersecurity. Member states will implement penalties for non-compliance and establish regulatory sandboxes. The Commission will also review and may adjust the high-risk AI list based on emerging technologies or risks.
  • August 2, 2027 – Compliance Deadline for Safety-Critical AI in Products
    High-risk AI systems integrated into safety-critical products like medical devices and toys must meet additional requirements, including third-party conformity assessments under EU product safety laws.
  • End of 2030 – Compliance for Large-Scale IT Systems under EU Law
    AI systems used in large-scale EU IT systems, such as the Schengen Information System, must comply with EU AI Act obligations. These systems, critical for areas like freedom, security, and justice, are subject to transparency, risk management, and data protection standards.
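The phased timeline above lends itself to a small compliance-tracking helper. The sketch below encodes the milestones as dates (the "end of 2030" entry is approximated as 31 December 2030) and returns the ones that still lie ahead of a given reference date:

```python
from datetime import date

# Key milestones from the phased timeline above (illustrative).
MILESTONES = [
    (date(2024, 8, 1), "Act enters into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI apply"),
    (date(2025, 8, 2), "GPAI obligations apply"),
    (date(2026, 8, 2), "Core high-risk obligations apply"),
    (date(2027, 8, 2), "High-risk AI in Annex I products must comply"),
    (date(2030, 12, 31), "Large-scale EU IT systems must comply"),
]


def upcoming_milestones(today: date) -> list[str]:
    """Return descriptions of milestones that have not yet passed."""
    return [label for when, label in MILESTONES if when > today]
```

For example, `upcoming_milestones(date(2026, 1, 1))` would omit the 2024 and 2025 entries while keeping the 2026–2030 deadlines.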

Penalties for Non-Compliance

The Act imposes stringent penalties for non-compliance, modeled after the General Data Protection Regulation (GDPR):

  • Fines for Prohibited Practices – Penalties can reach up to €35 million or 7% of the offender’s global annual turnover, whichever is higher, for violations of the Act’s banned-practices provisions.
  • Lower Penalties for Other Compliance Failures – Breaches of most other obligations, including those governing high-risk systems, may incur fines of up to €15 million or 3% of global turnover, while supplying incorrect or misleading information to authorities can draw fines of up to €7.5 million or 1%.
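The GDPR-style cap mechanism is simple arithmetic: the applicable maximum is the higher of a fixed amount and a percentage of worldwide annual turnover. A minimal sketch, using the top tier of €35 million or 7% set by the final text of the Regulation (the example turnover figure is hypothetical):

```python
def max_fine(fixed_cap_eur: int, turnover_pct: float,
             worldwide_turnover_eur: int) -> float:
    """GDPR-style penalty cap: the higher of a fixed amount and a
    percentage of worldwide annual turnover."""
    return max(fixed_cap_eur, worldwide_turnover_eur * turnover_pct / 100)


# Prohibited-practices tier (EUR 35M or 7%) for a hypothetical firm
# with EUR 1 billion in worldwide annual turnover:
fine = max_fine(35_000_000, 7, 1_000_000_000)  # 7% of 1bn = EUR 70M
```

For a smaller firm with, say, €100 million in turnover, 7% is only €7 million, so the €35 million fixed amount would be the operative cap instead.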

Conclusion

The EU AI Act establishes a robust regulatory framework that sets a global precedent for AI governance. By enforcing standards proportional to the associated risks, the Act seeks to balance innovation with ethical use and safety. Organizations that adopt proactive compliance strategies can build trust and align with the EU’s digital ambitions, while avoiding potential penalties and reputational risks. As AI continues to evolve, the Act is likely to become a cornerstone for ensuring responsible AI use in the EU.
