Consent and AI: a perspective from the Italian Supreme Court of Cassation

With a judgment symbolically delivered on the day of the third anniversary of the entry into force of the GDPR, the Italian Supreme Court of Cassation sides with the national Data Protection Authority, which had deemed an automated reputational rating system unlawful.

The consent given was, according to the DPA, not informed, and “the system [is] likely to heavily affect the economic and social representation of a wide category of subjects, with rating repercussions on the private life of the individuals listed”.

In its appeal against the decision of the Court of Rome, the DPA, through the Avvocatura dello Stato, challenged “the failure to examine the decisive fact represented by the alleged ignorance of the algorithm used to assign the rating score, with the consequent lack of the necessary requirement of transparency of the automated system to make the consent given by the person concerned informed”.

The facts

The so-called Mevaluate system is a web platform (with a related computer archive) “designed for the creation of reputational profiles of natural and legal persons, with the purpose of countering phenomena based on the creation of false or untrue profiles and of calculating, instead, in an impartial way the so-called ‘reputational rating’ of the subjects listed, to allow any third party to verify their real credibility”.

The case and the decision are based on the regime prior to the GDPR, but the Court in fact confirms the approach of the GDPR itself, relaunching it as the lodestar for all activities now defined as AI under the proposed Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts.

To put it more clearly, the decision is relevant for any “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (art. 3 proposed AI Act). That is, for any software produced using “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimisation methods” (Annex I).
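
By way of illustration only (the data, feature names and purpose below are invented, not taken from the case), even a handful of lines of standard supervised learning fall within point (a) of that list, because the resulting model generates predictions for a human-defined objective:

```python
# Minimal, invented example of an Annex I(a) "machine learning approach":
# a model trained on human-labelled examples that then generates predictions.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: two numeric features per person
# (say, years of documented activity and number of verified references),
# with a human-defined label meaning "considered reliable".
X_train = [[1, 0], [2, 1], [5, 3], [7, 4], [10, 6], [12, 8]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# The output is a prediction that can influence decisions taken about a person,
# which is exactly what brings such software within the proposed definition.
print(model.predict_proba([[6, 2]])[0][1])   # estimated probability of "reliable"
```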

The consent to the use of one’s personal data by the platform’s algorithm had been considered invalid by the Italian Data Protection Authority because it was not informed about the logic used by the algorithm. That decision was quashed by the Court of Rome, but the Supreme Court of Cassation thinks otherwise. After all, on other occasions (see Court of Cassation n. 17278-18, Court of Cassation n. 16358-1) the Supreme Court had already clarified that consent to processing as such was not sufficient; it also had to be valid!

Even though it is based on the notion found in the previous legislation (which, incidentally, the GDPR has made even more explicit and stringent, in the very direction indicated today by the Court), the statement that “consent must be previously informed in relation to a processing well defined in its essential elements, so that it can be said to have been expressed, in that perspective, freely and specifically” remains very topical.

Indeed, based on today’s principle of accountability, “it is the burden of the data controller to provide evidence that the access and processing challenged are attributable to the purposes for which appropriate consent has been validly requested – and validly obtained.”

The conclusion is as sharp as it is enlightening: “The problem, for the lawfulness of the processing, [is] the validity … of the consent that is assumed to have been given at the time of consenting. And it cannot be logically affirmed that the adhesion to a platform by members also includes the acceptance of an automated system, which makes use of an algorithm, for the objective evaluation of personal data, where the executive scheme in which the algorithm is expressed, and the elements considered for that purpose are not made known.”

It should be noted that this decision goes beyond the limits of Article 22 GDPR: it opens an interpretation of Articles 13(2)(f) and 14(2)(g) that looks past the “solely automated” requirement for automated decision-making and places a clear emphasis on the need for transparency of the logic used by the algorithm.

Algorithms are learning from our behaviour: How must we teach them

by Daniel Zingaro

Have you ever wondered why the online suggestions for videos, products, services or special offers you receive fit so perfectly with your preferences and interests? Why your social media feed shows only certain content but filters out the rest? Or why you get certain results from an internet search on your smartphone, but cannot get the same results from another device? Why does a map application suggest one route over another? Or why you are always matched with cat lovers on dating apps?

Did you just click away and think that your phone mysteriously understands you? And although you may have wondered about this, you may never have found out why.

How these systems work to suggest specific content or courses of action is generally invisible. The input, output and processes of their algorithms are never disclosed to users, nor are they made public. Still, such automated systems increasingly inform many aspects of our lives: the online content we interact with, the people we connect with, the places we travel to, the jobs we apply for, the financial investments we make, and the love interests we pursue. As we experience a new realm of digital possibilities, our vulnerability to the influence of inscrutable algorithms increases.

Some of the decisions taken by algorithms may create seriously unfair outcomes that unjustifiably privilege certain groups over others. Because machine-learning algorithms learn from the data we feed them, they inevitably also learn the biases reflected in that data. For example, the algorithm that Amazon employed between 2014 and 2017 to automate the screening of job applicants reportedly penalised words such as ‘women’s’ (e.g., in the names of women’s colleges) on applicants’ resumes. The recruiting tool learned patterns in data composed of the previous 10 years of candidates’ resumes and therefore learned that Amazon preferred men to women, as men had been hired more often as engineers and developers. This means that women were blatantly discriminated against, purely on the basis of their gender, with regard to obtaining employment at Amazon.
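
To make the mechanism concrete, here is a deliberately simplified sketch (synthetic data and scikit-learn, not Amazon’s actual system or data): a model trained on historically biased hiring outcomes assigns a negative weight to a gender-correlated feature and will therefore reproduce the bias on new resumes, even though the feature says nothing about actual skill.

```python
# Deliberately simplified sketch: synthetic data, not Amazon's system or data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Feature 0: a genuine skill score.
# Feature 1: 1 if the resume contains a gender-correlated term (e.g. a women's college).
skill = rng.normal(size=n)
gender_marker = rng.integers(0, 2, size=n)

# Historically biased labels: hiring depended on skill, but candidates with the
# gender marker were systematically hired less often, regardless of skill.
hired = (skill - 1.5 * gender_marker + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, gender_marker])
model = LogisticRegression().fit(X, hired)

# The learned weight on the gender-correlated feature is strongly negative:
# the model has absorbed the historical bias and will reproduce it.
print(dict(zip(["skill", "gender_marker"], model.coef_[0].round(2))))
```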

To avoid a world in which algorithms unconsciously guide us towards unfair or unreasonable choices because they are inherently biased or manipulated, we need to fully understand and appreciate the ways in which we teach these algorithms to function. A growing number of researchers and practitioners already engage in explainable AI, designing processes and methods that allow humans to understand and trust the results of machine learning algorithms. Legally, the European General Data Protection Regulation (GDPR) spells out specific levels of fairness and transparency that must be adhered to when using personal data, especially when such data is used to make automated decisions about individuals. This introduces the principle of accountability for the impact or consequences that automated decisions have on human lives. In a nutshell, this developing domain is called algorithmic transparency.
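
As an illustration of what such explainability work can look like in practice, the sketch below uses one common, generic technique (permutation importance, here via scikit-learn); the choice of technique and the toy data are ours, not something mandated by the GDPR or used by any particular researcher. It estimates how much each input feature actually drives a model’s predictions, which is one way to produce human-readable information about the logic involved.

```python
# Illustrative only: toy data and an off-the-shelf explainability technique.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                    # three hypothetical applicant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # the third feature plays no role

model = RandomForestClassifier(random_state=1).fit(X, y)

# Shuffle each feature in turn and measure how much the model's accuracy drops:
# features whose shuffling hurts accuracy are the ones the model actually relies on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {score:.3f}")   # feature_2 should come out near zero
```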

However, there are many questions, concerns and uncertainties that need in-depth investigation. For example: 1) how can the complex statistical functioning of a machine learning algorithm be explained in a comprehensible way; 2) to what extent does transparency build, or hamper, trust; 3) to what extent is it fair to influence people’s choices through automated decision-making; 4) who is liable for unfair decisions; … and many more.

These questions need answers if we wish to teach algorithms well and make the co-existence between humans and machines productive and ethical.

 

Authors:

Dr Arianna Rossi – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/arianna-rossi-aa321374/, Twitter: @arionair89

Dr Marietjie Botes – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/dr-marietjie-botes-71151b55/, Twitter: @Dr_WM_Botes