What is the Artificial Intelligence Act (AI Act)?

With the Artificial Intelligence Act (AI Act), the EU is creating a legal framework for the use of AI systems in business, administration and research.


The regulation classifies AI applications into different risk classes, each with its own requirements and obligations. Find out who is affected by the AI Act, what requirements it entails and what its current status is.

Who is affected by the AI Act?

The regulations of the AI Act apply to:

  • Providers (including those from third countries) who place AI systems they have developed on the market or put them into service in the EU
  • Importers
  • Operators
  • Distributors
  • Users of AI systems located within the EU
  • Providers & users located in a third country if the output generated by an AI system is used in the EU

Which risk classes does the AI Act contain?

The AI Act divides AI systems into four risk classes, each associated with its own legal requirements:

  1. Unacceptable risk
  2. High risk (high-risk AI systems)
  3. Limited risk
  4. Low risk

The higher the potential risk associated with the use of an AI system, the more extensive are the associated requirements, such as risk assessments, documentation obligations, EU declarations of conformity or monitoring on the part of the operator.
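
This tiering can be pictured as a simple lookup from risk class to the obligations it triggers. The sketch below is purely illustrative: the class names and obligation lists are simplified assumptions made for this example, not the wording of the regulation.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    LOW = "low"

# Illustrative, non-exhaustive mapping of risk class to example obligations.
# The entries are simplified assumptions for this sketch, not the legal text.
OBLIGATIONS = {
    RiskClass.UNACCEPTABLE: ["prohibited: may not be placed on the market or put into service"],
    RiskClass.HIGH: [
        "risk management system",
        "technical documentation",
        "logging of events",
        "human oversight",
        "EU declaration of conformity",
    ],
    RiskClass.LIMITED: ["transparency: inform users that they are interacting with AI"],
    RiskClass.LOW: ["no specific obligations under the AI Act"],
}

def obligations_for(risk: RiskClass) -> list[str]:
    """Return the illustrative obligations associated with a risk class."""
    return OBLIGATIONS[risk]

for item in obligations_for(RiskClass.HIGH):
    print("-", item)
```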
 

The AI Act & other regulations

In addition to the general rules of the AI Act, there are already – and will continue to be – sector-specific regulations that require additional or different risk-based measures, such as the Machinery Directive for industrial plants, the UNECE cybersecurity regulations for motor vehicles or the Medical Device Regulation (MDR) for AI-supported medical devices in the healthcare sector.

AI systems with unacceptable risk

AI systems that fall into this risk class will be prohibited under the AI Act, because they carry a considerable potential to violate human rights or fundamental rights.
 

These include applications that

  • manipulate people through subliminal techniques or harm them physically or mentally,
  • exploit weaknesses of certain groups of people due to their age or physical or mental impairments in order to deliberately influence them,
  • assess or classify the trustworthiness of people over a certain period of time based on their social behavior or personality traits, where this can lead to a negative social evaluation (social scoring), or
  • allow real-time remote biometric identification of people in publicly accessible spaces (with exceptions for law enforcement).

AI systems with high risk

An application is considered a high-risk AI system if it poses a potentially high risk to the health, safety or fundamental rights of individuals. Systems that fall under this category are, for example:

  • AI systems that are used for the biometric identification of individuals,
  • AI systems that draw conclusions about personal characteristics of individuals, including emotion recognition systems
  • Systems used for the management & operation of critical infrastructure,
  • AI systems for education or training related to assessment & evaluation of exams & educational attainment, and
  • AI systems relating to the screening or filtering of job applications or changes in employment or task assignment
     
Obligations for high-risk AI systems

Obligations associated with high-risk AI systems include:

  • Establishment, implementation & documentation of a risk management system
  • Compliance with data governance & data management requirements, particularly with regard to training, validation and test data sets
  • Creation & regular updating of the technical documentation
  • Obligation to record events ("logs") while the system is operating (see the sketch after this list)
  • Transparency & provision of information for users
  • Human oversight of the system
  • Fulfillment of an appropriate level of accuracy, robustness & cybersecurity. Where personal data are processed or the system is used in critical infrastructure environments covered by NIS2, state-of-the-art organizational and technical cybersecurity measures must be implemented.
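
As a purely hypothetical illustration of the logging obligation, the following sketch appends structured, timestamped event records for a high-risk system to a simple log file. The file name, field names and event types are invented for the example and are not prescribed by the regulation.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only event log for a high-risk AI system.
# The file name and the record fields are assumptions made for this sketch.
LOG_FILE = Path("ai_system_events.jsonl")

def log_event(event_type: str, details: dict) -> None:
    """Append one timestamped event record as a JSON line."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "event_type": event_type,
        "details": details,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example usage: record an automated decision and a subsequent human override.
log_event("inference", {"model_version": "1.4.2", "input_id": "req-001", "score": 0.87})
log_event("human_override", {"input_id": "req-001", "reason": "manual review"})
```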

AI systems with limited risk

Applications with a limited risk include AI systems that interact with humans, for example emotion recognition systems or biometric categorization systems. For these, providers must ensure that natural persons are informed that they are interacting with an AI system, unless this is already apparent from the context.
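
As a trivial illustration of this transparency obligation, a chat front end could prepend a disclosure to its first reply. The wording and function names below are invented for the example and are not prescribed by the AI Act.

```python
# Hypothetical chat wrapper that discloses the use of AI before the first reply.
# The disclosure wording and the stand-in model call are invented for this sketch.
DISCLOSURE = "Please note: you are chatting with an AI system, not a human."

def generate_answer(user_message: str) -> str:
    # Stand-in for the actual model call; simply echoes the question here.
    return f"(model answer to: {user_message!r})"

def reply(user_message: str, is_first_message: bool) -> str:
    answer = generate_answer(user_message)
    return f"{DISCLOSURE}\n{answer}" if is_first_message else answer

print(reply("What are your opening hours?", is_first_message=True))
```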

AI systems with low risk

There are currently no legal requirements for applications that fall into the "low" risk category. These include, for example, spam filters or predictive maintenance systems.

However, a risk assessment and technical documentation are required in order for an application to be classified as low-risk in the first place.

Penalties for violations

Non-compliance with the prohibition of certain AI practices (Article 5) or with the requirements on data and data governance (Article 10) is punishable by fines of up to EUR 35 million or up to 7% of total annual worldwide turnover.

Violation of the requirements and obligations set out in the AI Act - with the exception of Articles 5 & 10 - is punishable by fines of up to EUR 15 million or up to 3% of the total annual worldwide turnover.

The provision of false, incomplete or misleading information to notified bodies and competent national authorities is punishable by fines of up to EUR 7.5 million or up to 1.5% of total annual worldwide turnover.
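
For a rough feel of how such turnover-based caps play out in practice, the sketch below computes the maximum possible fine per tier for a hypothetical company, assuming the common reading that the higher of the two amounts applies; the turnover figure is invented purely for this example.

```python
# Maximum fine per tier, assuming the higher of the fixed cap and the
# turnover-based cap applies; the turnover figure is invented for this example.
TIERS = {
    "prohibited practices / Article 5 & 10 violations": (35_000_000, 0.07),
    "other requirements & obligations": (15_000_000, 0.03),
    "false, incomplete or misleading information": (7_500_000, 0.015),
}

annual_worldwide_turnover_eur = 2_000_000_000  # hypothetical company turnover

for tier, (fixed_cap_eur, turnover_share) in TIERS.items():
    cap = max(fixed_cap_eur, turnover_share * annual_worldwide_turnover_eur)
    print(f"{tier}: up to EUR {cap:,.0f}")
```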

What is the current status of the AI Act?

The EU Parliament adopted its negotiating position on the AI Act on 14.06.2023. The EU member states approved the final law on 21.05.2024.

After publication in the EU Official Journal, a transition period of up to 2 years is to be expected before the requirements become applicable.

How can companies prepare for the AI Act today?

Designate responsibilities

Even though the requirements of the AI Act do not yet fully apply, it makes sense for companies to address them at an early stage. It is advisable to assign the corresponding responsibilities from the outset. In this way, arrangements can already be made for how new developments, the purchase of AI applications and their use will be handled in the future.
 

Gap analysis & action plan

In order to determine the current status with regard to the implementation of the AI Act, it is advisable to carry out a gap analysis. Based on this, an action plan can be developed in the next step. In this context, it is also important for companies to find out which AI applications and solutions are actually in use and which of the risk categories they fall into. This categorization then determines the basic requirements that need to be observed.
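
One simple way to start is a structured inventory of the AI applications in use, each tagged with its assumed risk class. The application names and classifications in the sketch below are invented purely for illustration and are not a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AIApplication:
    name: str
    purpose: str
    risk_class: str  # "unacceptable", "high", "limited" or "low"

# Hypothetical inventory entries; names and classifications are invented.
inventory = [
    AIApplication("CV screening assistant", "pre-filter job applications", "high"),
    AIApplication("Support chatbot", "answer customer questions", "limited"),
    AIApplication("Spam filter", "sort incoming e-mail", "low"),
]

# Group the inventory by risk class as a starting point for the action plan.
by_class: dict[str, list[str]] = {}
for app in inventory:
    by_class.setdefault(app.risk_class, []).append(app.name)

for risk_class, names in sorted(by_class.items()):
    print(f"{risk_class}: {', '.join(names)}")
```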
 

Thinking about AI security right from the start

TÜVIT's AI experts support companies during the development of AI systems so that security, robustness and transparency are taken into account right from the start. They also carry out gap analyses and other in-depth analyses (e.g. of the model design) and offer security tests using special test software. The best way for companies to prepare for the upcoming requirements is to have the security properties of their AI solutions tested and/or certified by an independent third party. This is currently possible, for example, in accordance with TÜVIT's own Trusted Product Scheme.
 

Continuous monitoring

It is also advisable to continuously monitor whether new AI applications are introduced or whether the legal requirements change.
  

Final draft (2024) of the AI Act

The information on this page is based on the final draft of the AI Act. You can find it on the official website of the European Parliament.
 

 

Do you have questions? I am happy to help!

  

Eric Behrendt

+49 160 8880296
e.behrendt@tuvit.de