Artificial Intelligence Act
Main points of the AI Act
- The AI Act is an EU regulation, directly applicable in all EU member states without the need for national legislation to transpose it. However, member states must designate national competent authorities, notify them to the European Commission, and lay down effective, proportionate, and dissuasive penalties for infringements of the regulation.
- The regulation will also have extraterritorial effect, as it will apply to providers and users of AI systems established outside the EU whose systems are placed on the EU market or whose output is used in the EU.
- A risk-based approach that categorizes AI systems into four levels (see the first illustrative sketch after this list):
– Unacceptable risk. Unacceptable-risk AI systems are those that violate fundamental rights or pose a clear threat to safety or security, such as social scoring or mass surveillance. These are prohibited in the EU.
– High risk. High-risk AI systems are those that have a significant impact on people’s lives or the environment, for example in health, education, transport, or law enforcement. These are subject to strict requirements, such as data quality, transparency, human oversight, and accuracy.
– Limited risk. Limited-risk AI systems are those that pose some risks to users or third parties, such as chatbots or deepfakes. These are subject to transparency obligations, such as informing users that they are interacting with an AI system.
– Minimal risk. Minimal-risk AI systems are those that pose no or negligible risks, such as video games or spam filters. These are subject to no or minimal obligations, but their providers are encouraged to follow voluntary codes of conduct.
- A conformity assessment system that requires providers of high-risk AI systems to conduct a self-assessment or a third-party assessment of their compliance with the AI Act before placing their products or services on the market. The providers must also register their high-risk AI systems in a public database maintained by the European Commission.
- A horizontal governance framework that involves various actors and bodies at the EU and national levels, such as the European Artificial Intelligence Board, the national supervisory authorities, the notified bodies, and the market surveillance authorities. The governance framework aims to ensure a consistent and effective implementation, monitoring, and enforcement of the AI Act across the EU.
- A set of penalties for non-compliance with the AI Act, consisting primarily of administrative fines, with member states able to lay down additional enforcement measures. The fines can reach up to EUR 35 million or 7% of the annual worldwide turnover of the provider or user of the AI system, whichever is higher, depending on the severity and duration of the infringement (a short calculation sketch also follows this list).
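The four risk tiers above can be pictured as a simple decision cascade. The Python sketch below is purely illustrative: the questionnaire-style inputs and the screen_risk_tier helper are our own simplification for a first-pass internal screening, not a legal classification under the Regulation.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices, e.g. social scoring
    HIGH = "high"                  # strict requirements, e.g. high-risk use cases
    LIMITED = "limited"            # transparency duties, e.g. chatbots, deepfakes
    MINIMAL = "minimal"            # voluntary codes of conduct, e.g. spam filters

def screen_risk_tier(uses_prohibited_practice: bool,
                     is_high_risk_use_case: bool,
                     interacts_with_people_or_generates_content: bool) -> RiskTier:
    """First-pass screening of an AI system against the AI Act's four risk tiers.

    Checks run from the strictest tier downwards; a real assessment must follow
    the legal definitions and annexes of the Regulation.
    """
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_high_risk_use_case:
        return RiskTier.HIGH
    if interacts_with_people_or_generates_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-facing chatbot that is neither prohibited nor high-risk
print(screen_risk_tier(False, False, True))  # RiskTier.LIMITED
```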
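To make the financial exposure concrete, the sketch below computes the theoretical ceiling for the most serious infringements, assuming the Regulation's headline maximum of EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. The function is illustrative only: lower caps apply to other infringements, and actual fines are set case by case by the competent authorities.

```python
def max_fine_most_serious(worldwide_annual_turnover_eur: float) -> float:
    """Theoretical ceiling for the most serious AI Act infringements:
    EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
    Actual fines depend on the severity, duration, and circumstances of the case."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Example: an undertaking with EUR 2 billion in worldwide annual turnover
print(f"EUR {max_fine_most_serious(2_000_000_000):,.0f}")  # EUR 140,000,000
```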
Timeline of the implementation
- Entry into Force: August 1, 2024
- 6 Months After Entry (February 2, 2025): prohibitions on unacceptable-risk AI practices and the AI literacy obligations apply
- 9 Months After Entry (May 2, 2025): codes of practice for general-purpose AI must be ready
- 12 Months After Entry (August 2, 2025): rules on general-purpose AI models, governance, and penalties apply
- 24 Months After Entry (August 2, 2026): the majority of the AI Act's provisions apply, including the obligations for high-risk AI systems listed in Annex III
- 36 Months After Entry (August 2, 2027): obligations apply to high-risk AI systems that are safety components of products covered by the EU harmonisation legislation listed in Annex I
- August 2, 2028, and Every 4 Years After: Commission evaluation of and reporting on the Regulation, including the need to amend the high-risk categories
- August 2, 2028, and Every 3 Years After: Commission evaluation of the impact and effectiveness of voluntary codes of conduct
- 6 Years After Entry (August 2, 2030): high-risk AI systems intended for use by public authorities and placed on the market before August 2, 2026 must be brought into compliance
- December 31, 2030: deadline for AI systems that are components of the large-scale IT systems listed in Annex X and placed on the market before August 2, 2027 to be brought into compliance
Services provided by ECOVIS ProventusLaw
- Assessing your currently used artificial intelligence systems and their level of compliance with the AI Act
- Identifying gaps and risks in the artificial intelligence systems you use, and providing recommendations for improvement
- Designing and implementing artificial intelligence policies, procedures, and controls that meet the AI Act requirements
- Providing training and awareness programs for your staff and stakeholders on the AI Act obligations and best practices
- Assisting you in reporting on your artificial intelligence systems to the relevant authorities and stakeholders