On 19 February 2020, the European Commission unveiled its plans for the EU’s vision on digital policy, focusing on data and artificial intelligence governance. It published a White Paper aiming to foster a European ecosystem of excellence and trust in AI, along with a Report on the safety and liability aspects of AI.
The Commission wants to establish new binding requirements for the development and use of “high-risk” AI applications – those that might pose risks to citizens or that operate in specific sectors such as healthcare, policing or transport. Additionally, the Commission wants to create a common single market for data.
“AI is developing fast, which is why Europe needs to maintain and increase its level of investment,” said the European Commission in a press briefing.
The White Paper proposes:
- Measures that will streamline research, foster collaboration between Member States and increase investment into AI development and deployment;
- Policy options for a future EU regulatory framework that would determine the types of legal requirements that would apply to relevant actors, with a particular focus on high-risk applications.
The White Paper will undergo an open public consultation in which European citizens, Member States and relevant stakeholders (including civil society, industry and academia) can give their views and contribute to a European approach to AI.
Some AI applications may raise new ethical and legal questions, related to liability or the fairness of decision-making. The General Data Protection Regulation (GDPR) was a major step towards building trust, and the Commission wants to go a step further in ensuring legal clarity for AI-based applications.
Artificial intelligence enables machines to display capabilities typically associated with human intelligence, such as learning from mistakes and adjusting to new inputs.
Examples include self-driving cars, voice-powered personal assistants such as Amazon’s Alexa, and modern e-mail spam filters.