Artificial Intelligence Act

The AI Act regulates the development, deployment, and use of artificial intelligence systems in the EU and aims to ensure that they respect the EU's fundamental rights and values. It applies to a wide range of AI systems, such as facial recognition, biometric identification, social scoring, and predictive policing.

Main points of the AI Act 

The AI Act introduces several new and significant changes to the regulation of AI in the EU, such as:
  • The AI Act is an EU regulation that is directly applicable in all EU member states, without the need for national legislation to transpose it. However, member states must designate national competent authorities, notify them to the European Commission, and lay down effective, proportionate, and dissuasive penalties for infringements of the regulation.
  • The regulation also has extraterritorial effect, as it applies to providers and deployers of AI systems that are located outside the EU but whose systems affect people or markets in the EU.
  • A risk-based approach that categorizes AI systems into four levels:

Unacceptable risk. Unacceptable-risk AI systems are those that violate fundamental rights or pose a clear threat to safety or security, such as social scoring or mass surveillance. These are prohibited in the EU.

High risk. High-risk AI systems are those that have a significant impact on people’s lives or the environment, such as systems used in healthcare, education, transport, or law enforcement. These are subject to strict requirements concerning data quality, transparency, human oversight, and accuracy.

Limited risk. Limited-risk AI systems are those that pose some risk to users or third parties, such as chatbots or deepfakes. These are subject to transparency obligations, such as informing users that they are interacting with an AI system.

Minimal risk. Minimal-risk AI systems are those that pose no or negligible risk, such as video games or spam filters. These are subject to no or minimal obligations but are encouraged to follow voluntary codes of conduct.

  • A conformity assessment system that requires providers of high-risk AI systems to conduct a self-assessment or a third-party assessment of their compliance with the AI Act before placing their products or services on the market. The providers must also register their high-risk AI systems in a public database maintained by the European Commission.
  • A horizontal governance framework that involves various actors and bodies at the EU and national levels, such as the European Artificial Intelligence Board, the national supervisory authorities, the notified bodies, and the market surveillance authorities. The governance framework aims to ensure a consistent and effective implementation, monitoring, and enforcement of the AI Act across the EU.
  • A set of penalties for non-compliance with the AI Act, consisting primarily of administrative fines, complemented by other enforcement measures laid down by the member states. The fines can reach up to EUR 35 million or 7% of the annual worldwide turnover of the provider or deployer of the AI system, whichever is higher, depending on the severity and duration of the infringement.

Timeline of the implementation

Here’s a concise summary of the EU AI Act’s timeline:
  • Entry into Force: August 1, 2024
– The EU AI Act officially comes into force.
  • 6 Months After Entry: February 2, 2025
– Applicability of Chapter I (General Provisions) and Chapter II (Prohibited AI Practices), including bans on certain AI practices such as manipulative AI, biometric categorization, and social scoring systems.
  • 9 Months After Entry: May 2, 2025
– Codes of practice must be ready by this date at the latest.
  • 12 Months After Entry: August 2, 2025
– Applicability of Chapter III, Section 4 (Notifying authorities and notified bodies), Chapter V (General-purpose AI models), Chapter VII (Governance), Chapter XII (Penalties), and Article 78 (Confidentiality),
– Except for Article 101 (Fines for general-purpose AI providers).
  • 24 Months After Entry: August 2, 2026
– The entire Act becomes applicable across the EU, except for Article 6(1) and the corresponding obligations, which concern one category of high-risk AI systems.
– High-risk AI systems placed on the market or put into service before this date must comply only if they subsequently undergo significant changes in their design.
  • 36 Months After Entry: August 2, 2027
– General-purpose AI models, such as GPT-3 and GPT-4, that were on the market before August 2, 2025, must comply with the Act by this date.
– Article 6(1) and its corresponding obligations apply from this date.
  • August 2, 2028, and Every 4 Years After
– The Commission will report on the progress of developing standards for energy-efficient AI models and assess the need for additional measures.
  • August 2, 2028, and Every 3 Years After
– Evaluation of the effectiveness of voluntary codes of conduct for non-high-risk AI systems, including environmental sustainability considerations.
  • 6 Years After Entry: August 2, 2030
– Providers and deployers of high-risk AI systems intended to be used by public authorities, and placed on the market or put into service before August 2, 2026, must comply with the Act by this date.
  • December 31, 2030
– AI systems that are components of the large-scale IT systems established by the legal acts listed in Annex X of the AI Act, such as the Schengen Information System and the Visa Information System, and that were placed on the market or put into service before August 2, 2027, must be brought into compliance by this date.

 

Services provided by ECOVIS ProventusLaw

  • Assessing your currently used artificial intelligence systems and their level of compliance with the AI Act
  • Identifying gaps and risks in the artificial intelligence systems you use and providing recommendations for improvement
  • Designing and implementing artificial intelligence policies, procedures, and controls that meet the AI Act requirements
  • Providing training and awareness programs for your staff and stakeholders on the AI Act obligations and best practices
  • Assisting you in reporting on your artificial intelligence systems to the relevant authorities and stakeholders

Loreta Andziulytė

Attorney at law, Partner of the Law Firm, Certified Data Protection Expert, Lawyer

Contact person


