The Artificial Intelligence (AI) Act, a landmark legislative framework in the European Union, is now approaching key implementation milestones. The AI Act adopts a risk-based approach, assigning different levels of risk to various technologies, encompassing both AI systems and AI components embedded in other types of systems. Each risk tier corresponds to a different set of obligations for different stakeholders (providers, deployers, importers, distributors).
First, we will explore the four risk tiers. Then, we will identify the key players under the AI Act and, finally, we will go over the timeline for the full enactment of the AI Act provisions.
The AI Act: Risk assignment, roles, key dates and obligations
A. Risk assignment
1. Unacceptable risk / prohibited AI systems
Unacceptable risk refers to AI systems that contradict the European Union's values of respect for human dignity, freedom, equality, democracy and the rule of law, as well as Union fundamental rights. AI systems that fall under this category are therefore prohibited.
The following AI systems are prohibited under the AI Act:
- AI systems that manipulate or persuade persons to engage in unwanted behaviours, or to make decisions they would not otherwise have made.
- AI systems that exploit vulnerabilities of a person or a specific group of persons due to their age, disability, or a specific social or economic situation.
- Biometric categorisation systems that use an individual’s biometric data, e.g. a person’s face or fingerprint, to deduce or infer characteristics such as political opinions, religious beliefs, race, or sexual orientation.
- ‘Real-time’ remote biometric identification systems used in publicly accessible spaces for the purpose of law enforcement (some exceptions apply).
- Systems that evaluate or classify natural persons or groups over a certain period based on their social behaviour or personality characteristics, commonly known as social scoring.
- AI systems that detect or infer the emotional state of individuals in the workplace and in education settings (subject to narrow exceptions, e.g. for medical or safety reasons).
2. High risk
AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety, and fundamental rights of persons in the Union. These systems must comply with certain mandatory requirements to mitigate risks.
Providers and deployers of high-risk AI systems have a number of obligations, including risk management, human oversight, robustness and accuracy, ensuring the quality and relevance of the data sets used, cybersecurity, technical documentation and record-keeping, and transparency and the provision of information to deployers.
The following are deemed high-risk AI systems:
- Biometrics, including remote biometric identification systems, biometric categorisation, and emotion recognition systems.
- Safety components in critical infrastructure, e.g. road traffic, water, gas, and electricity.
- Access to education and vocational training, and the evaluation of performance.
- Access to employment, recruitment and promotion, and the evaluation of employees in the workplace, where outputs may result in decisions that affect working conditions.
- Essential private and public services, including credit scoring, and risk assessment and pricing in health and life insurance.
- AI systems relevant to law enforcement, migration, and the judicial system.
It is also worth noting that the European Commission is empowered to amend Annex III of the AI Act and add new systems to this list; the criteria for identifying a high-risk AI system are designed to be flexible and adaptable to new circumstances.
Annex I also lists Union harmonisation legislation under which AI systems can be classified as high-risk, including product safety legislation such as the Medical Devices Regulation.
3. Limited / transparency risk
Limited risk refers to the risks associated with a lack of transparency in the use of AI. It is therefore important that AI systems that interact directly with people, e.g. chatbots and deepfakes, are developed to ensure that the person is informed that they are interacting with an AI system or with AI-generated content.
Note that, by way of derogation, an AI system listed in Annex III is not considered high-risk, and instead falls within this lower-risk tier, if one or more of the following criteria are fulfilled, i.e. where the AI system is intended (see the sketch after this list):
- To perform a narrow procedural task.
- To improve the result of a previously completed human activity.
- To detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review.
- To perform a preparatory task to an assessment relevant for the use cases listed in Annex III.
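For illustration, the derogation works as an "any of these conditions" test. The sketch below, in Python, uses hypothetical flag names that merely paraphrase the criteria above; a real classification is a documented legal assessment, not a boolean check.

```python
# Illustrative Annex III derogation check: a listed AI system escapes the
# high-risk tier if at least one of the conditions above is fulfilled.
# The flag values are hypothetical placeholders for a real assessment.
conditions = {
    "narrow_procedural_task": False,
    "improves_completed_human_activity": True,
    "detects_patterns_without_replacing_human_review": False,
    "preparatory_task_for_annex_iii_assessment": False,
}

not_high_risk = any(conditions.values())
print(f"Derogation from high-risk classification applies: {not_high_risk}")
```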
General-purpose AI has also been added to this limited-risk tier and now carries a number of additional obligations under the Act. The development of such models will most likely remain limited to the largest AI developers, and, in the short term at least, these obligations will not be the focus of the majority of organisations.
4. General-purpose AI
The AI Act regulates such models, i.e. AI models that can be used for many different purposes, including models trained on large amounts of data using self-supervision at scale, that display significant generality, are capable of competently performing a wide range of distinct tasks, and can be integrated into a variety of downstream systems or applications. All such models will have to comply with specific requirements.
A subset of such models, the so-called general-purpose AI models with systemic risk (determined, among other criteria, based on the total computing power used for training), will be subject to an additional set of requirements.
Of course, AI systems that do not fall into the categories above are deemed minimal or no-risk AI systems, and in those cases there are no mandatory obligations to meet. However, proper governance is essential regardless of the AI system used, because the classification of an AI system is not set in stone and may change as the system is used and developed.
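For organisations building an internal AI inventory against these tiers, a minimal sketch in Python is shown below. All names and entries are hypothetical illustrations of the tiered structure; this is not a legal classification tool.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (illustrative labels only)."""
    UNACCEPTABLE = "prohibited"           # banned practices, e.g. social scoring
    HIGH = "high-risk"                    # e.g. Annex III use cases
    LIMITED = "limited / transparency"    # e.g. chatbots, deepfakes
    MINIMAL = "minimal or no risk"        # no mandatory obligations

# Hypothetical inventory entries: system name -> tentatively assigned tier.
# A real assignment requires legal analysis, and tiers can change as a
# system is used and developed, so entries should be reviewed regularly.
inventory = {
    "cv-screening-tool": RiskTier.HIGH,      # recruitment falls under Annex III
    "customer-chatbot": RiskTier.LIMITED,    # must disclose the AI interaction
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in inventory.items():
    print(f"{system}: {tier.value}")
```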
B. Roles
The AI Act identifies and defines the following key players, all of which can be natural or legal persons.
Deployer means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.
For example, a private company in the EU buys a license for an AI system to assist it with recruitment.
Provider means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Private companies will be deemed a “provider” under the AI Act if they:
- develop an AI system or a general-purpose AI model, or have one developed, and
- place it on the market in the EU/EEA (defined as the first making available of an AI system or a general-purpose AI model on the EU or EEA market) or put the AI system into service in the EU/EEA (defined as the supply of an AI system for first use directly to the deployer or for own use in the EU or EEA), in either case under their own name or trademark, whether for payment or free of charge.
Importer means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
For example, the EU-based subsidiary of a US corporate group placing the AI system developed by the US holding company on the EU market will be deemed an importer.
Distributor means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
For example, if a US-based company develops an AI system, which is imported into the EU by a subsidiary based in Germany (the importer), and this German subsidiary in turn uses its own subsidiary located in Greece to market the AI system in Greece, the Greek company will be deemed a distributor.
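As a rough aid to reasoning about these definitions, the Python sketch below encodes the worked example above and derives a tentative role for each actor. The field names and the heuristic are assumptions made for illustration; the AI Act’s actual tests are fact-specific, and a single actor can hold more than one role.

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """A hypothetical supply-chain actor (illustrative fields only)."""
    name: str
    established_in_eu: bool
    develops_or_commissions: bool       # develops the system or has it developed
    first_places_on_eu_market: bool     # 'placing on the market' = first making available
    makes_available_on_eu_market: bool  # any subsequent supply in the EU
    third_country_brand: bool           # bears the name/trademark of a non-EU entity

def tentative_role(a: Actor) -> str:
    # Simplified heuristic mirroring the definitions above, not a legal test.
    if a.develops_or_commissions:
        # Assumes the system reaches the EU/EEA market under the developer's
        # own name or trademark, as in the example above.
        return "provider"
    if a.established_in_eu and a.first_places_on_eu_market and a.third_country_brand:
        return "importer"
    if a.makes_available_on_eu_market:
        return "distributor"
    return "deployer (if it uses the system under its authority)"

# The worked example: US developer, German importer, Greek distributor.
chain = [
    Actor("US parent", False, True, False, False, False),
    Actor("German subsidiary", True, False, True, False, True),
    Actor("Greek subsidiary", True, False, False, True, True),
]
for actor in chain:
    print(f"{actor.name}: {tentative_role(actor)}")
```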
C. The AI Act timeline: Key dates and obligations
2024
- 12 July 2024: The AI Act was published in the Official Journal of the European Union, marking its formal adoption.
- 1 August 2024: The AI Act officially entered into force. While requirements do not immediately apply, this date signifies the start of a phased implementation.
- 2 November 2024: Member States designated and publicly listed the authorities responsible for protecting fundamental rights under the AI Act. In Greece, the following authorities were designated:
- The Hellenic Data Protection Authority
- The Greek Ombudsman
- The Hellenic Authority for Communication Security and Privacy
- The National Commission for Human Rights
2025
- 2 February 2025: Prohibitions on certain AI practices, such as manipulative systems, come into effect. Providers and deployers must also ensure an adequate level of AI literacy among relevant personnel.
- 2 May 2025: Codes of practice for general-purpose AI models must be ready by this date.
- 2 August 2025: This critical date introduces several key provisions:
- Requirements for notified bodies, governance structures, confidentiality, and penalties take effect.
- Obligations for providers of General-Purpose AI (GPAI) models begin to apply; models placed on the market before this date benefit from a transition period running until 2 August 2027.
- Member States must designate competent authorities and establish national penalties.
2026-2027
- 2 February 2026: The European Commission will release guidelines for the practical implementation of post-market monitoring plans.
- 2 August 2026: Remaining provisions of the AI Act, including obligations for high-risk AI systems and the establishment of AI sandboxes, come into force.
- 2 August 2027: Final compliance deadline for GPAI providers to meet all regulatory requirements.
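For tracking purposes, the key dates above can also be kept in machine-readable form. The Python sketch below is illustrative only, with paraphrased milestone labels, and reports which milestones are already in effect on a given date.

```python
from datetime import date

# Key AI Act milestones from the timeline above (labels paraphrased).
MILESTONES = [
    (date(2024, 8, 1), "AI Act enters into force"),
    (date(2025, 2, 2), "Prohibitions and AI literacy obligations apply"),
    (date(2025, 8, 2), "GPAI, governance and penalty provisions apply"),
    (date(2026, 8, 2), "Remaining provisions, incl. high-risk obligations, apply"),
    (date(2027, 8, 2), "Final compliance deadline for pre-existing GPAI models"),
]

def milestones_in_effect(today: date) -> list[str]:
    """Return the milestone labels already in effect on the given date."""
    return [label for when, label in MILESTONES if when <= today]

for label in milestones_in_effect(date.today()):
    print(label)
```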
Steps to ensure compliance
- Evaluate impact: Assess how the AI Act applies to your operations, focusing on high-risk systems and prohibited practices.
- Implement governance measures: Establish or update internal processes to ensure compliance with AI governance, monitoring, and documentation requirements.
- Engage with authorities: Liaise with designated national authorities to align with local enforcement measures and participate in regulatory sandboxes where applicable.
- Prepare for reporting: Develop processes to meet reporting obligations, including transparency measures and resource adequacy declarations.
Our commitment to your compliance
Our firm offers tailored support to help you navigate the complexities of the AI Act, including:
- Compliance Audits: Assessing your readiness and identifying areas for improvement.
- Training Programs: Ensuring your teams are informed about AI literacy and operational obligations.
- Regulatory Advocacy: Representing your interests in discussions with EU bodies and Member State authorities.
We encourage you to act promptly to integrate these requirements into your operational strategy. Please do not hesitate to contact us at [email protected] for further guidance or to schedule a consultation.