The AI Act in the EU
The AI Act in the EU regulates the development and use of AI models within the Union. The objective of the Act is to create an ethically sustainable and safe environment for innovation in AI while protecting the rights and freedoms of citizens.
Timetable for the entry into force of the Act
- 1 August 2024: The AI Act enters into force.
- 2 February 2025: The development or use of AI models posing an unacceptable risk is prohibited.
- May 2025: Codes of practice should be ready.
- 2 August 2025: The requirements for general-purpose AI models start to apply, including the obligation to comply with transparency requirements.
- 2 August 2026: The AI Act becomes applicable in full.
Information on the AI Act in the EU
AI models have become part of everyday life for many, especially businesses, given the many benefits they offer. The EU has therefore chosen to regulate the area in order to create a good environment for both innovation and citizens.
Are there guidelines on the AI Act?
Yes. The European Commission publishes guidelines on an ongoing basis on its website, although there is no single collection page where they can all be found easily.
The relationship between the GDPR and the AI Act
The AI Act applies in parallel with the GDPR. If an AI model processes personal data, whether during use or development, the GDPR applies. If an AI model does not process personal data, only the AI Act applies. In addition, there may be more laws and regulations to consider, depending on the type of AI model being developed or used.
The different levels of risk under the AI Act in the EU
The AI Act takes a risk-based approach, distinguishing between four levels of risk. Below you can read more about these risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
Unacceptable risk
The development or use of AI models presenting an unacceptable risk is not allowed under the AI Act. However, there are exceptions; for example, certain uses may be permitted for law enforcement purposes. The following eight practices are considered to pose an unacceptable risk:
- Social scoring.
- Harmful AI-based manipulation and deception.
- Harmful AI-based exploitation of vulnerabilities.
- Assessing or predicting the risk of an individual committing a criminal offence.
- Emotion recognition in workplaces or educational institutions.
- Creating or expanding facial recognition databases through untargeted scraping of the internet or CCTV footage.
- Biometric categorisation to infer certain protected characteristics.
- Real-time remote biometric identification for law enforcement purposes in publicly accessible spaces.
High risk
There are two categories of AI models that are considered high-risk under the AI Act. High-risk AI models may be allowed, but this requires, among other things, that the company carries out a risk assessment.
1. The first category concerns AI models regulated by EU product safety legislation, for example in:
- Toys.
- Cars.
- Medical devices.
- Aviation.
2. The second category refers to AI models that need to be registered in an EU database and fall into one of these seven areas:
- Education and vocational training.
- Management and operation of critical infrastructure.
- Law enforcement.
- Asylum, migration and border control management.
- Employment, management of workers and access to self-employment.
- Assistance in legal interpretation and application of the law.
- Access to and enjoyment of essential private and public services.
Limited risk
There are also rules for AI models that present a limited risk, for example certain transparency obligations. The requirements include:
- It must be disclosed that content has been generated by AI.
- The AI model shall be designed in such a way that it does not generate illegal content.
- If copyrighted data has been used to train the AI model, summaries of that data must be made public.
Minimal or no risk
In addition, the EU AI Act lays down rules for AI models that are considered to pose minimal or no risk. Many AI models fall into this category. Examples of AI models with minimal risk:
- Spam filters.
- AI-enabled video games.
Who assesses the risk of the AI model?
It is for the provider of the AI model to assess which of the four risk levels in the AI Act applies to the model in question. The risks can relate both to how the system is designed and to how it is used.
AI regulatory sandbox
A provider of an AI model shall be able to test and evaluate the system before it is deployed or placed on the market in the EU. It is up to EU countries to ensure that regulatory sandboxes are in place.
Responsible authority for supervising the AI Act
Several authorities in a Member State may be responsible for market surveillance under the AI Act, i.e. supervisory tasks, but one authority must have overall responsibility. Citizens will be able to lodge complaints about AI models with the national authorities.
Personal data responsibility in the development and use of AI models
It is important to know which company is the data controller when developing and using AI models. The company that determines the purpose of the processing of personal data, and has the ultimate responsibility for it, is the data controller. In some cases, two or more companies may be joint controllers, for example two companies that develop an AI model together to share development costs. In addition, it is possible to outsource the processing of personal data to a data processor, but it is not possible to transfer the responsibility itself.