30/01/2020
Policy-makers across the world are looking at ways to tackle the risks associated with the development of artificial intelligence (AI). The discussion around AI technologies and their impact on society is increasingly focused on the question of whether AI should be regulated.
The EU is considered a front-runner in establishing a framework of ethical rules for AI.
Following the European Parliament’s call to update and complement the EU legal framework with guiding ethical principles for AI, the EU has carved out a ‘human-centric’ approach to AI that is respectful of European values and principles. As part of this approach, the EU published its guidelines on ethics in AI in April 2019.
The Ethics Guidelines for Trustworthy Artificial Intelligence is a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This independent expert group was set up by the European Commission in June 2018, as part of the AI strategy announced earlier that year.
In September 2019 the European Parliamentary Research Service (EPRS) released a paper, European Union (EU) Guidelines on Ethics in Artificial Intelligence (AI): Context and Implementation (“Paper”), to shed light on the ethical rules established under the Ethics Guidelines for Trustworthy Artificial Intelligence (“Guidelines”). The Paper aims to provide guidance on the key ethical requirements recommended in the Guidelines when designing, developing, implementing or using AI products and services, in order to promote trustworthy, ethical and robust AI systems.
The European Commission puts forward a European approach to AI based on three pillars:
- staying ahead of technological developments and encouraging uptake by the public and private sectors;
- preparing for the socio-economic changes brought about by AI;
- ensuring an appropriate ethical and legal framework.
According to the Guidelines, trustworthy AI should be:
- lawful – respecting all applicable laws and regulations
- ethical – respecting ethical principles and values
- robust – both from a technical perspective and with regard to its social environment
The Guidelines put forward a set of 7 key requirements that AI systems should meet in order to be deemed trustworthy:
- Human agency and oversight;
- Technical robustness and safety;
- Privacy and data governance;
- Transparency;
- Diversity, non-discrimination and fairness;
- Societal and environmental well-being;
- Accountability.
Aiming to operationalize these requirements, the Guidelines present an assessment list that offers guidance on each requirement’s practical implementation.
Of note, the Guidelines highlight that all AI stakeholders must comply with the principles of the General Data Protection Regulation (GDPR) and advise the AI community to ensure that privacy and personal data are protected, both when building and when running AI systems, so that citizens retain full control over their data. To that end, AI developers are directed to apply specific design techniques, such as data encryption and data anonymization. Additionally, AI developers should implement proper oversight mechanisms to ensure that socially constructed bias or inaccuracies do not corrupt the quality of data sets or AI systems.
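By way of illustration only – the Guidelines do not prescribe any particular tooling – a minimal Python sketch of the two techniques mentioned above might look as follows. The pseudonymize helper, the placeholder keys and the choice of the third-party cryptography library are our own assumptions, not part of the Guidelines:

```python
import hmac
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# --- Pseudonymization: replace a direct identifier with a keyed hash ---
# A keyed HMAC (rather than a plain hash) resists re-identification by
# dictionary attack, provided the secret key is stored separately.
PSEUDONYM_KEY = b"replace-with-a-secret-key-from-a-vault"  # placeholder value

def pseudonymize(identifier: str) -> str:
    """Return a stable pseudonym for a personal identifier."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# --- Encryption at rest: symmetric encryption of a sensitive field ---
encryption_key = Fernet.generate_key()  # in practice, load from a key manager
fernet = Fernet(encryption_key)

record = {
    "user": pseudonymize("jane.doe@example.com"),           # no raw identifier stored
    "notes": fernet.encrypt(b"sensitive free-text field"),  # ciphertext only
}

# The original value is recoverable only by a holder of the encryption key.
assert fernet.decrypt(record["notes"]) == b"sensitive free-text field"
```

Note that under the GDPR, pseudonymized data of this kind still counts as personal data; only irreversible anonymization takes data outside the Regulation’s scope.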
European Commission President Ursula von der Leyen has already announced that the Commission is planning AI-focused legislation similar to the General Data Protection Regulation, which came into effect in 2018. The legislation is expected to set rules on handling data responsibly, with the protection of a person’s digital identity as an overriding priority.
The Commission is likely to draw on the work of its High-Level Expert Group on AI. The rules, developed by a committee of academics and industry representatives, form part of the EU’s plan to increase public and private investment in AI to €20bn a year.