How Do the European Union’s GDPR Rules Impact Artificial Intelligence?

The European Parliamentary Research Service recently issued the study “The impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence” (the GDPR Study). The study addresses the relationship between the GDPR and artificial intelligence (AI) technologies. More specifically, it analyses how AI is regulated under the GDPR and discusses the tensions and proximities between AI and data protection principles, such as purpose limitation and data minimization. The GDPR Study also examines the legal bases for AI applications that process personal data and considers the duties to provide information about AI systems, especially those involving profiling and automated decision-making.

As noted in its introduction, the GDPR Study concludes that AI can be deployed in a way that is consistent with the GDPR, but also that the GDPR does not provide sufficient guidance for controllers and that its prescriptions need to be expanded and concretized.

Many AI applications process personal data, so there is no doubt that they must comply with the requirements set out in the GDPR, and many of its provisions are relevant to AI. The GDPR Study points out that controllers engaging in AI-based processing must follow the principles of the GDPR and adopt a responsible, risk-oriented approach. Moreover, controllers should be able to do so in a way that is compatible with the available technologies and with economic profitability or the sustainable achievement of public interest.

AI has obviously developed rapidly, and it offers opportunities for economic, social and cultural development, energy sustainability, better health care and the spread of knowledge. These opportunities, however, come with serious risks, including unemployment, inequality, discrimination, social exclusion, surveillance and manipulation. There is no doubt that the use of AI requires a debate to ensure that it complies not only with data protection law but also with consumer protection law, competition law, labour law, administrative law, civil liability rules and other fields.

In view of the above, the GDPR Study says that data protection authorities should promote a broad social debate on AI applications and provide high-level guidance. They need to engage actively in a dialogue with all stakeholders, including controllers, processors and civil society, to develop appropriate responses based on shared values and effective technologies.

The GDPR Study provides recommendations, some of which are highlighted below:

  • The GDPR does not seem to require any major changes in order to address AI;
  • Controllers and data subjects should be provided with guidance on how AI can be applied to personal data consistently with the GDPR, and on the available technologies for doing so. This can prevent costs linked to legal uncertainty while enhancing compliance;
  • A broad debate is needed, involving not only political and administrative authorities but also civil society and academia. This debate needs to address the issues of determining what standards should apply to AI processing of personal data, particularly to ensure the acceptability, fairness and reasonableness of decisions on individuals;
  • Data protection authorities should provide controllers with guidance on the many issues for which no precise answer can be found in the GDPR, which could also take the form of soft law instruments designed with a dual legal and technical competence;
  • The fundamental data protection principles – especially purpose limitation and minimization – should be interpreted in such a way that they do not exclude the use of personal data for machine learning purposes;
  • Guidance is needed on profiling and automated decision-making;
  • The content of the controllers’ obligation to provide information (and the corresponding rights of data subjects) about the ‘logic’ of an AI system needs to be specified, with appropriate examples, with regard to different technologies;
  • It needs to be ensured that the right to opt out of profiling and data transfers can easily be exercised through appropriate user interfaces, possibly in standardized formats;
  • Collective enforcement in the data protection domain should be enabled and facilitated.

The GDPR Study closes with the apt observation that the consistent application of data protection principles, combined with the ability to use AI technology efficiently, can contribute to the success of AI applications by generating trust and preventing risks.

Please find more information about the GDPR Study here.

The article was prepared by Milda Šlekytė, ECOVIS ProventusLaw assistant attorney at law
