EU's AI Act Mandates CE Marking and Conformity Assessments for High-Risk AI Systems

The European Union's AI Act, endorsed by all 27 member states, will become law this summer, establishing a comprehensive legal framework for AI applications. The Act categorizes AI systems into four risk levels, with high-risk systems facing strict compliance requirements and potential fines of up to 30 million euros.

The European Union's groundbreaking AI Act, endorsed by all 27 member states on February 2, 2024, is set to become law this summer. The Act establishes a comprehensive legal framework that categorizes AI applications into different risk tiers, with high-risk AI systems facing the most stringent compliance requirements.

Why this matters: The EU's AI Act sets a precedent for regulating artificial intelligence globally, and its impact will be felt beyond Europe as companies adapt to the new standards. As AI becomes increasingly integrated into daily life, this legislation will play a crucial role in shaping the responsible development and deployment of AI technologies.

Under the EU AI Act, high-risk AI systems will be required to bear the CE marking, indicating conformity with EU legislation. These systems must also undergo a conformity assessment, which may involve a third-party notified body, before being placed on the internal market. The Act aims to ensure that high-risk AI applications meet rigorous standards and safeguard fundamental human rights and societal values.

Thierry Breton, the European Union's Commissioner for Internal Market, described the AI Act as a "historic world first" that strikes a "perfect balance" between innovation and safety. However, the Computer & Communications Industry Association (CCIA), a prominent tech lobbying group, expressed concerns that many of the new AI rules remain unclear and could slow the development and rollout of innovative AI applications in Europe.

The EU AI Act categorizes AI systems into four risk levels: Unacceptable, High, Limited, and Minimal. Technologies in the Unacceptable category, such as real-time remote biometric identification systems in publicly accessible spaces, are prohibited. High-risk AI applications include those used in critical infrastructure, employment and management of workers, law enforcement, and democratic processes.

Creators of high-risk AI systems must comply with strict obligations outlined in the AI Act, including conducting thorough risk assessments, implementing robust data governance protocols, and adhering to specific technical standards and documentation requirements. Non-compliance may result in fines of up to 30 million euros or 6% of total worldwide annual turnover, whichever is higher, depending on the severity of the infringement.

The AI Act is expected to enter into force 20 days after publication in the EU Official Journal, likely in May or June 2024. Most provisions will become applicable two years after entry into force, around 2026. However, the prohibitions on unacceptable-risk AI systems will apply after six months, while the rules on general-purpose AI will take effect after 12 months.

The European Commission has also launched the AI Pact, a voluntary initiative that seeks to support the future implementation of the AI Act by inviting AI developers from Europe and beyond to comply with the key obligations of the Act ahead of time. This proactive approach underscores the EU's commitment to fostering a safe and responsible AI ecosystem.

The EU AI Act represents a significant milestone in the global effort to regulate artificial intelligence. By imposing strict requirements on high-risk AI systems, including the CE marking and conformity assessments, the Act aims to strike a balance between encouraging innovation and protecting against potential harms. As the Act enters into force and its provisions take effect, it will undoubtedly shape the future of AI development and deployment not only in Europe but also around the world.

Key Takeaways

  • EU's AI Act sets a global precedent for regulating AI, with impact beyond Europe.
  • High-risk AI systems must bear CE marking and undergo conformity assessment.
  • AI Act categorizes systems into 4 risk levels: Unacceptable, High, Limited, and Minimal.
  • Non-compliance may result in fines of up to 30 million euros or 6% of global turnover.
  • AI Act expected to enter into force in May/June 2024, with most provisions applicable by 2026.