
AI Act: how the EU wants to put artificial intelligence in its place
The AI Act is due to come into force in the EU in June. It regulates various aspects of the development and use of artificial intelligence – primarily by companies, authorities and research institutions. But what does that mean exactly?
Artificial intelligence (AI) is to be regulated in the European Union. And so, the AI Act – a law recently passed by the Council and Parliament – is set to come into force in June. The rules are intended to keep the dangers of AI in check.
It’ll take up to two years for the law to be implemented in its entirety. However, some exceptional cases have shorter or longer transitional periods.
The AI Act in a nutshell
The law takes a risk-based approach. In other words, the higher the risk that a given AI system could harm people, the more stringent the rules. However, there are certain exceptions, such as for law enforcement. Data protection and human rights groups aren’t at all happy about this.
How does the risk approach work?
There are three risk groups (actually four, with the two lowest being combined):
Unacceptable risk
AI practices are considered unacceptably risky under article 5, point 1 if they:
- manipulate human behaviour
- assign a value to a person’s behaviour or characteristics (social credit/social scoring)
- exploit human weakness or vulnerability
- monitor publicly accessible spaces in real time and can thus biometrically identify people.
Opinions on the last point differ for several reasons. Firstly, both counterterrorism and the prosecution of serious crime are exempt from the ban. Secondly, the definition of the term «real time» isn’t clear. Critics of the new act fear that even fractions of a second after the event could no longer count as «real time» – and that the ban on biometric identification would thus be undermined.
High risk
High-risk AI practices pose a significant threat to health, safety or fundamental rights. Examples include systems used in road traffic, hospitals, law enforcement or banking, to name just a few.

If a system is classified as high-risk AI, special rules apply. Providers of such systems are responsible for ensuring compliance and are subject to checks. For example, they need to set up what’s called a risk management system and a quality system (article 9). They also need to explain the impact of AI practices on users and provide training beforehand, amongst other things. This is regulated in articles 8 ff.
Limited and minimal risk
You can find out how AI practices with limited and minimal risk are defined in the AI Act under article 52. Generally speaking, these are systems that interact with people or generate content, including chatbots, in-game AI, spam filters, search algorithms and deepfakes. The latter must be labelled as such. And if such a system interacts with people, those people need to be made aware of it. For example, if an online shop uses ChatGPT for customer support, the chat window must clearly state that you’re talking to an AI rather than a human.
Incidentally, if you’re working on AI for a company, you can find out which category it’d come under using this questionnaire. You’ll then see what obligations are involved and how soon after the introduction of the AI Act you’ll have to comply with them.
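Just to illustrate the logic behind such a categorisation, here’s a minimal sketch in Python. The criteria loosely paraphrase the tiers described above – they’re not the legal text, and all the parameter names are made up:

```python
# A rough sketch of the risk-based approach: tiers are checked from the
# strictest downwards. Not legal advice – the criteria merely paraphrase
# the categories described above.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned (article 5)"
    HIGH = "strict obligations (articles 8 ff.)"
    LIMITED = "transparency duties (article 52)"
    MINIMAL = "no special obligations"

def classify(manipulates_behaviour: bool,
             social_scoring: bool,
             exploits_vulnerability: bool,
             realtime_biometric_id: bool,
             critical_domain: bool,        # e.g. road traffic, hospitals, policing, banking
             interacts_or_generates: bool) -> RiskTier:
    if (manipulates_behaviour or social_scoring
            or exploits_vulnerability or realtime_biometric_id):
        return RiskTier.UNACCEPTABLE
    if critical_domain:
        return RiskTier.HIGH
    if interacts_or_generates:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer support chatbot in an online shop
print(classify(False, False, False, False, False, True))  # RiskTier.LIMITED
```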
How exactly does the EU define artificial intelligence?
The EU’s definition of AI is quite broad. It doesn’t just apply to large language models, such as ChatGPT and Gemini. An 18-page document explains in detail what it encompasses.
To save you reading through, here are the defining characteristics – condensed into a short checklist sketch after the list below. Software is considered artificial intelligence if it:
- uses concepts of machine learning or deep learning OR
- uses logic and knowledge-based concepts OR
- uses statistical approaches or estimation, search and optimisation methods
in order to
- pursue goals set by people OR
- predict or recommend results or content OR
- influence its environment.
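In code terms, the definition boils down to two OR-groups joined by an AND. Here’s a minimal sketch in Python; the field names are my own shorthand for the criteria above, not terms from the act:

```python
# A checklist version of the definition above – a sketch, not legal advice.
from dataclasses import dataclass

@dataclass
class SoftwareProfile:
    # Techniques (at least one must apply)
    machine_learning: bool = False       # includes deep learning
    logic_knowledge_based: bool = False
    statistical_or_search: bool = False  # statistics, estimation, search, optimisation
    # Purposes (at least one must apply)
    pursues_human_goals: bool = False
    predicts_or_recommends: bool = False
    influences_environment: bool = False

def counts_as_ai(s: SoftwareProfile) -> bool:
    technique = s.machine_learning or s.logic_knowledge_based or s.statistical_or_search
    purpose = s.pursues_human_goals or s.predicts_or_recommends or s.influences_environment
    return technique and purpose

# Example: a spam filter based on statistical classification
print(counts_as_ai(SoftwareProfile(statistical_or_search=True,
                                   predicts_or_recommends=True)))  # True
```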
Who needs to comply with this AI Act?
Basically, anyone who develops, provides or uses an AI system – primarily companies, research institutes and authorities. But there are exceptions here too. For example, the AI Act doesn’t apply at all if the AI system is used exclusively for military purposes. Bear in mind that the act also affects companies in Switzerland and elsewhere in the world as soon as an AI model is available and can be used in the EU.
Who checks for compliance?
In the first instance, individual member states are responsible for ensuring compliance with the AI Act. Countries are obliged to set up supervisory authorities for monitoring purposes. At EU level, there’s also the Office for Artificial Intelligence. This works together with the national authorities and coordinates joint action.

Anyone not complying with the AI Act can be penalised. As with the Digital Markets Act and other EU laws, the penalties are substantial. Fines can be as high as 35 million euros for prohibited AI practices – or up to 7% of the offending company’s annual global turnover, whichever is higher. A fine for Meta could therefore reach around 9.4 billion US dollars – roughly 7% of its annual turnover of 135 billion.
For violations at the high-risk level, the EU caps fines at 15 million euros or 3% of annual global turnover. Supplying false information to the authorities also comes at a cost: up to 7.5 million euros or 1% of annual turnover.
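As a back-of-the-envelope check of that arithmetic, here’s a minimal sketch in Python. It assumes the «whichever is higher» reading for companies, the tier labels are my own shorthand, and – like the article – it mixes the euro ceilings with Meta’s dollar turnover:

```python
# Fine ceilings per tier: (fixed cap in euros, share of annual global turnover).
# Shorthand labels of my own, not terms from the act.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "false_information": (7_500_000, 0.01),
}

def max_fine(tier: str, annual_global_turnover: float) -> float:
    """Upper limit of the fine for a company: fixed cap or turnover share,
    whichever is higher."""
    fixed_cap, turnover_share = TIERS[tier]
    return max(fixed_cap, turnover_share * annual_global_turnover)

# The article's Meta example: 7% of roughly 135 billion in annual turnover
print(f"{max_fine('prohibited_practice', 135e9):,.0f}")  # 9,450,000,000
```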
What happens now?
The AI Act comes into force in the next few days. Within two years, most transition periods will have expired and the law will apply in full. But the grace periods vary (see the sketch after this list):
- Any AI classed as unacceptable risk will be banned within six months and can then no longer be used. In other words, by the end of 2024.
- For general-purpose AI systems, including ChatGPT and Google Gemini, the rules apply after 12 months.
- It takes longer for AI that’s classified high risk and already exists today. For these systems, the AI Act only comes into effect after 36 months because, according to the EU, adjustments take longer and are trickier.
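Here’s a minimal sketch of that timeline in Python, assuming entry into force in June 2024 as the article expects – the real deadlines hinge on the exact date:

```python
from datetime import date

# Small helper: add whole months to a date (day clamped to the 1st for simplicity).
def add_months(start: date, months: int) -> date:
    total = start.month - 1 + months
    return date(start.year + total // 12, total % 12 + 1, 1)

# Assumption: entry into force in June 2024, as the article expects.
entry_into_force = date(2024, 6, 1)

milestones = {
    "Ban on unacceptable-risk AI": 6,
    "Rules for general-purpose AI (e.g. ChatGPT, Gemini)": 12,
    "Law applies in full": 24,
    "Existing high-risk systems must comply": 36,
}

for label, months in milestones.items():
    print(f"{add_months(entry_into_force, months).isoformat()}  {label}")
```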
The EU member states must have established the aforementioned AI supervisory authorities and the corresponding structure 12 months after commencement.
Every year – even after the transitional periods have expired – the EU Commission checks whether it needs to make any changes to the categorisation or the law. After all, AI is developing rapidly.


I've been tinkering with digital networks ever since I found out how to activate both telephone channels on the ISDN card for greater bandwidth. As for the analogue variety, I've been doing that since I learned to talk. Though Winterthur is my adoptive home city, my heart still bleeds red and blue.