The EU’s Way To ThinkDigital

The ThinkDigital Summit addressed the digitalisation efforts of the EU in several industrial sectors, and provided an outlook for the use of Artificial Intelligence in the near future. The event featured three panel discussions and a final presentation of an EU-funded project.

Didier Reynders, the Commissioner-designate for Justice (taking office on 1 December 2019), opened the conference by addressing the European Commission’s future plans for the digital sphere. Upcoming legislation on Artificial Intelligence is one of the key priorities for the newly composed European Commission. Reynders revealed that the legislation would follow, firstly, a horizontal approach to cover all ethical aspects and, secondly, a vertical (i.e. sectoral) approach to oversee AI systems in Europe. When it comes to the actual responsibilities of the EU institutions, AI and liability is a key aspect. The challenge now, as Reynders set out, is how existing legislation and fairly abstract terms such as fairness, accountability and non-discrimination can be put into practice.

As is already widely recognised, AI has profound implications for European society at large. When an algorithm determines whether people are considered for a job or receive a loan, fundamental questions arise around equality and rights, but also around freedom of expression and human dignity.

In that respect, the European Parliament has already adopted resolutions on AI and robotics, which will now be adapted for a more coordinated approach within the EU institutions. Alongside the EP resolutions, binding legislation aimed at ensuring that AI systems are ethically compliant already exists in the form of the General Data Protection Regulation (GDPR) and the European Convention on Human Rights (ECHR). Moreover, several pieces of secondary legislation, such as the Consumer Liability Directive, the Financial Services Directive, the ePrivacy Regulation and other EU and national legislation, are relevant to the use of AI.

Reynders concluded his speech with an optimistic outlook: “Innovation in the digital field and protection of fundamental values and rights are not contradicting. The EU can become a role model for international standards and other authorities.”

Following Reynders’ speech, the introductory interview addressed digital solutions for improving healthcare. The topic is timely because data sharing in the healthcare sector is poorly harmonised even at the national level, let alone across borders. Data sharing and larger compilations of patients’ information nevertheless hold promising opportunities for predictive medicine, the evaluation of diseases and personalised healthcare. At the same time, not every patient is willing to share data, or is willing to share only specific kinds of data for specific purposes. Deep and reinforcement learning systems and applications – as a part of AI – are indeed used in a variety of healthcare domains, such as dynamic treatment regimes in chronic diseases and critical care. This kind of automated medical diagnosis, however, is based on both unstructured and structured clinical data.

Both unstructured and structured clinical data are considered personal data under the GDPR. The EU regulation can be regarded as the most relevant legislation for both health data sharing and Artificial Intelligence systems: it became applicable in May 2018 and was designed to modernise existing legislation protecting individuals’ personal information. The GDPR was explicitly designed to be technologically neutral and therefore does not mention AI, but it is highly applicable and relevant to its use. This became clear during the first panel discussion at ThinkDigital: the “Digital for Better Access to Healthcare” panel frequently referred to the updated rules for companies and rights for consumers. Health data in particular, and large-scale datasets, can be used by the healthcare industry to improve treatments. Facilitated data exchange at EU level also offers substantial opportunities for EU citizens, for instance when travelling or for detecting and treating rare diseases.

In this sense, the GDPR already aims to strike a balance between rigid consumer protection and responsible information sharing by healthcare institutions. Nevertheless, the increasing use of AI demands an update, especially with regard to automated decision-making systems and data inferences. In particular, AI’s tendency to mirror existing biases – depending on which data are available and on the quality of the dataset – may create risks. And since AI systems base decision-making on data gathered in the past, their analyses and trend predictions are unlikely to be future-proof, especially in the case of predictive analytical AI systems. At the core, EU-based industries and consumers will face the challenge of safeguarding European values together.

This issue was also addressed by the last presentation at the ThinkDigital Summit. The question “what are the ethical and human rights implications of Smart Information Systems?” is at the core of the EU-funded SHERPA project, which brings together 11 academic project partners to analyse how AI and big data analytics impact ethics and human rights. The SHERPA project works directly with stakeholders to develop novel ways to understand and address the ethical challenges of deploying and meaningfully using digital technologies. Finally, the partners and stakeholders seek desirable and sustainable solutions to the benefit of both innovators and society, in Europe and beyond.

Rosanna Fanni

Rosanna Fanni is a digital science researcher. She is currently pursuing an Erasmus Mundus MA in Digital Communication Leadership in Brussels. Her main research interests centre on the dissemination of information, with a particular focus on political communication. Rosanna is German-Italian and holds a degree from FU Berlin and UCL London. She was a YEL delegate to the 2019 Digital Solar & Storage event organised by SolarPower Europe.
