
Call Open – SSSA Pisa – Italy – ESRs 1-3-5-14

Application for ESRs 1-3-5-14

hosted by Sant’Anna School of Advanced Studies – Pisa – Italy

CALL FOR APPLICATION: www.legalityattentivedatascientists.eu/wp-content/uploads/2021/06/CAll-for-application-LeADS_SSSA.pdf 

APPLICATION: sssup.esse3.cineca.it/Home.do 

DEADLINE: September 6th, 2021

JOB DESCRIPTION:

We are looking for 4 Early Stage Researchers (ESRs)/PhD researchers. They will work within the framework of the LeADS project. Their main task will be to contribute to the LeADS research and training programme and to prepare their doctoral theses within the same framework. The PhD thesis work will be undertaken at Sant’Anna School of Advanced Studies (Pisa, Italy) in the national PhD programme on Artificial Intelligence administered by the University of Pisa.
As doctoral students, the ESRs will be jointly supervised under the direction of the LeADS consortium and will also spend secondment(s) at consortium members.

POSITIONS:

  • ESR 1 Project Title: Reciprocal interplay between competition law and privacy in the digital revolution
    Objectives: Data are increasingly important resources in the so-called Digital Revolution: their impact on competition law is increasingly relevant, and so are the implications of data protection law for competition law. The researcher will address these implications by analysing some relevant topics: the impact of data portability and of the interoperability requirements in the new GDPR on barriers to entry and market dominance; how customer data can be “assessed” as an index of market dominance for the big information providers (Google, Apple, Facebook, Amazon); and how SMEs can benefit from data protection law and competition law in order to increase their presence in the market.
  • ESR 3 Project Title: Unchaining data portability potentials in a lawful digital economy
    Objectives: To empirically test the potential of the right to data portability. The research in the framework of LeADS will relate data portability not only to data protection law, but also to competition law and unfair business practices (e.g., offer or price discrimination between groups of consumers through profiling operations), setting the scene for their regulatory interplay in line with current and forthcoming technologies. In doing so, specific attention will be paid to possible technical solutions for guaranteeing effective portability. Additionally, the technical, statistical, and privacy implications of the new right will be evaluated, such as the need for standard formats for personal data, and the exception in Article 20(2) of the GDPR, according to which the personal data, upon request by the data subject, should be transmitted from one controller to another “where technically feasible”.
  • ESR 5 Project Title: Differential privacy and differential explainability in the data sphere: the use case of predictive jurisprudence
    Objectives: Human life and the economy are increasingly data-driven. The switch from local to cloud-based data storage is making it increasingly difficult to reap the maximum value from data while minimizing the chances of identifying individuals in datasets. The researcher will explore the interplay between differential privacy technologies and the data protection regulatory framework in search of effective combinations (a minimal sketch of the underlying mechanism follows this list).
  • ESR 14 Project Title: Neuromarketing and mental integrity between market and human rights
    Objectives: The ESR’s research question is whether and how neuromarketing can affect the human rights of individuals, considering in particular recent interpretations of the rights contained in the European Convention on Human Rights and in the EU Charter of Fundamental Rights, notably “mental privacy”, “cognitive freedom”, and “psychological continuity”. Indeed, advanced data analytics provide a very high level of understanding of users’ behaviour, sometimes even beyond the conscious self-understanding of the users themselves, exploiting all of a user’s idiosyncrasies, including vulnerabilities that undermine the exercise of free decision-making.
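
To make the differential privacy mentioned under ESR 5 concrete, here is a minimal Python sketch of the Laplace mechanism, the textbook building block of differentially private data releases. The dataset, query and epsilon value are invented for illustration and are not part of the call:

```python
# Minimal sketch of the Laplace mechanism (illustrative assumptions only).
import numpy as np

def private_count(values, threshold, epsilon=1.0):
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so Laplace noise with scale
    # 1/epsilon makes the released figure epsilon-differentially private.
    true_count = sum(v > threshold for v in values)
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical data: the noisy count stays useful in aggregate while
# masking whether any single individual is in the dataset.
salaries = [31_000, 45_000, 52_000, 28_000, 61_000]
print(private_count(salaries, threshold=40_000))
```

A smaller epsilon means more noise and stronger privacy; the regulatory question ESR 5 targets is how such technical guarantees map onto the GDPR’s legal standards.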

RESPONSIBILITIES:

All recruited ESRs will be expected to carry out the following tasks:

  • To manage and carry out their research project within 36 months
  • To write a PhD dissertation within the theme and objectives proposed above
  • To participate in research and training activities within the LeADS network
  • To participate in meetings of the LeADS project
  • To disseminate their research to the non-scientific community, by outreach and public engagement
  • To liaise with the other research staff and students working in broad areas of relevance to the research project and partner institutions
  • To write progress reports and prepare results and articles for publication and dissemination via journals, presentations, videos and the web
  • To attend progress and management meetings as required and network with the other research groups

ELIGIBILITY CRITERIA:

  • Master of Science (MSc) degree or equivalent
  • Fluent written and spoken English
  • Excellent communication and writing skills
  • In order to fulfill the eligibility criteria of the Marie Curie ITN at the date of recruitment, applicants must not have resided or carried out their main activity (work, studies, etc.) in Italy for more than 12 months in the 3 years immediately prior to their recruitment. Compulsory national service and/or short stays such as holidays are not considered. Italian candidates can apply if they have resided in another country for more than 2 years of the last 3 years.
  • At the time of recruitment, the candidate cannot have already obtained a doctoral degree and must be in the first 4 years (full-time equivalent) of their research career

OFFER:

  • Fixed-term contract: 36 months
  • Work Hours: Full Time
  • Location: Pisa
  • Employee and PhD student status
  • Travel opportunities for training and learning
  • Yearly gross salary: living allowance of €40,966.56; mobility allowance of up to €7,200; family allowance of €3,000

APPLICATION PROCEDURE: Please apply ONLINE and include:

  • a detailed Curriculum Vitae et studiorum (in English)
  • a motivation letter (max 1,000 words, in English)
  • a copy of your official academic degree(s)
  • proof of English proficiency (self-assessment)
  • the names and contacts of two referees
  • scan of a valid identification document (e.g., passport)
  • a non-binding research plan of a maximum of 3,500 words, which must include (in English): 1. the title of the research; 2. the scientific premises and relevant bibliography; 3. the aim and expected results of the research project. The non-binding research plan must be aligned with one of the research descriptions of the LeADS project.

Consent and AI: a perspective from the Italian Supreme Court of Cassation

With a judgment symbolically delivered on the day of the third anniversary of the entry into force of the GDPR, the Italian Supreme Court of Cassation sided with the national Data Protection Authority, which had deemed an automated reputational rating system illegitimate.

The consent given, according to the DPA, was not informed and “the system [is] likely to heavily affect the economic and social representation of a wide category of subjects, with rating repercussions on the private life of the individuals listed”.

In its appeal against the decision of the Court of Rome, the DPA, through the Avvocatura dello Stato, challenged “the failure to examine the decisive fact represented by the alleged ignorance of the algorithm used to assign the rating score, with the consequent lack of the necessary requirement of transparency of the automated system to make the consent given by the person concerned informed”.

The facts

The so-called Mevaluate system is a web platform (with a related computer archive) “preordained to the elaboration of reputational profiles concerning physical and juridical persons, with the purpose to contrast phenomena based on the creation of false or untrue profiles and to calculate, instead, in an impartial way the so-called ‘reputational rating’ of the subjects listed, to allow any third party to verify the real credibility”.

The case and the decision are based on the regime prior to the GDPR, but in fact the Court confirms the dictate of the GDPR itself, relaunching it as the pole star for all activities now defined as AI under the proposed Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts.

To put it more clearly, the decision is relevant for any “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (art. 3 of the proposed AI Act). That is, for any software produced using “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimisation methods” (Annex I).

The consent to the use of one’s personal data by the platform’s algorithm had been considered invalid by the Italian Data Protection Authority because it was not informed as to the logic used by the algorithm. The decision was quashed by the Court of Rome, but the Supreme Court of Cassation thinks otherwise. After all, on other occasions (see Court of Cassation n. 17278-18, Court of Cassation n. 16358-1) the Supreme Court had already clarified that consent to processing as such was not sufficient: it also had to be valid!

Even if based on the notion present in the previous legislation (incidentally, made even more explicit and hardened by the GDPR in the direction indicated today by the Court), the statement that “consent must be previously informed in relation to a processing well defined in its essential elements, so that it can be said to have been expressed, in that perspective, freely and specifically” remains very topical.

Indeed, based on today’s principle of accountability, “it is the burden of the data controller to provide evidence that the access and processing challenged are attributable to the purposes for which appropriate consent has been validly requested – and validly obtained.”

The conclusion is as sharp as it is enlightening: “The problem, for the lawfulness of the processing, [is] the validity … of the consent that is assumed to have been given at the time of consenting. And it cannot be logically affirmed that the adhesion to a platform by members also includes the acceptance of an automated system, which makes use of an algorithm, for the objective evaluation of personal data, where the executive scheme in which the algorithm is expressed, and the elements considered for that purpose are not made known.”

It should be noted that this decision goes beyond the limits of art. 22 GDPR, because it opens an interpretation of Articles 13(2)(f) and 14(2)(g) that goes beyond the “solely automated” requirement for automated decision-making mechanisms by placing a clear emphasis on the need for transparency of the logic employed by the algorithm.

Algorithms are learning from our behaviour: How must we teach them?



Have you ever wondered why the online suggestions for videos, products, services or special offers you receive fit so perfectly with your preferences and interests? Why your social media feed only shows certain content but filters out the rest? Or why you get certain results from an internet search on your smartphone, but you can’t get the same results from another device? And why does a map application suggest a certain route over another? Or why you are always matched with cat lovers on dating apps?

Did you just click away, thinking that your phone mysteriously understands you? And although you may have wondered about this, you may never have found out why.

How these systems work to suggest specific content or courses of action is generally invisible. The input, output and processes of their algorithms are never disclosed to users, nor are they made public. Still, such automated systems increasingly inform many aspects of our lives, such as the online content we interact with, the people we connect with, the places we travel to, the jobs we apply for, the financial investments we make, and the love interests we pursue. As we experience a new realm of digital possibilities, our vulnerability to the influence of inscrutable algorithms increases.

Some of the decisions taken by algorithms may create seriously unfair outcomes that unjustifiably privilege certain groups over others. Because machine-learning algorithms learn from the data we feed them, they inevitably also learn the biases reflected in that data. For example, the algorithm that Amazon employed between 2014 and 2017 to automate the screening of job applicants reportedly penalised words such as “women’s” (e.g., in the names of women’s colleges) on applicants’ resumes. The recruiting tool learned patterns from data composed of the previous 10 years of candidates’ resumes and therefore learned that Amazon preferred men to women, as men were hired more often as engineers and developers. This means that women were blatantly discriminated against, purely on the basis of their gender, with regard to obtaining employment at Amazon.
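
The mechanics of this failure are easy to reproduce. The following minimal sketch (the resumes, labels and model choice are ours, purely hypothetical, and not Amazon’s actual system) shows how a classifier trained on historically skewed hiring decisions scores two otherwise identical applications differently:

```python
# Toy demonstration: a model trained on biased labels reproduces the bias.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: past hiring outcomes (1 = hired) that
# systematically disfavour resumes mentioning "women's".
resumes = [
    "captain of chess club, software engineer",
    "women's chess club captain, software engineer",
    "developer, hackathon winner",
    "women's coding society lead, developer",
]
hired = [1, 0, 1, 0]  # the biased labels the model learns from

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Two candidates who differ only in one word now get different scores.
test = vectorizer.transform([
    "chess club captain, software engineer",
    "women's chess club captain, software engineer",
])
print(model.predict_proba(test)[:, 1])  # lower score for the second resume
```

The model was never instructed to discriminate: it simply infers from the skewed labels that the token “women” correlates with rejection, which is exactly the kind of pattern reported in the Amazon case and exactly why such bias is hard to spot from the outside.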

To avoid a world in which algorithms unconsciously guide us towards unfair or unreasonable choices because they are inherently biased or manipulated, we need to fully understand and appreciate the ways in which we teach these algorithms to function. A growing number of researchers and practitioners already engage in explainable AI, which entails designing processes and methods that allow humans to understand and trust the results of machine learning algorithms. Legally, the European General Data Protection Regulation (GDPR) requires and spells out specific levels of fairness and transparency that must be adhered to when using personal data, especially when such data is used to make automated decisions about individuals. This brings with it the principle of accountability for the impact or consequences that automated decisions have on human lives. In a nutshell, this developing domain is called algorithmic transparency.
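
To illustrate what such an explanation can look like in the simplest case, the sketch below (features, data and model are invented for illustration) reads the learned weights of a linear classifier, one of the most basic algorithmic transparency techniques:

```python
# Minimal explainability sketch: for a linear model, the coefficients
# show how much each feature pushed an automated decision up or down.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["years_experience", "num_publications", "gap_in_cv"]
X = np.array([[5, 2, 0], [1, 0, 1], [7, 4, 0], [2, 1, 1]])
y = np.array([1, 0, 1, 0])  # hypothetical past decisions

model = LogisticRegression().fit(X, y)

# For a linear model the weights themselves are the explanation.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

Real systems use non-linear models whose explanations require dedicated methods (e.g., post-hoc feature attribution), which is precisely where the open questions listed below begin.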

However, there are many questions, concerns and uncertainties that need in-depth investigation. For example: 1) how can the complex statistical functioning of a machine learning algorithm be explained in a comprehensible way; 2) to what extent does transparency build, or hamper, trust; 3) to what extent is it fair to influence people’s choices through automated decision-making; 4) who is liable for unfair decisions; … and many more.

These questions need answers if we wish to teach algorithms well and allow the co-existence between humans and machines to be productive and ethical.

 

Authors:

Dr Arianna Rossi – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/arianna-rossi-aa321374/ , Twitter: @arionair89

Dr Marietjie Botes – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn:  https://www.linkedin.com/in/dr-marietjie-botes-71151b55/ , Twitter: @Dr_WM_Botes

The beginning of the LeADS era

On January 1st, 2021, LeADS (Legality Attentive Data Scientists) started its journey. A consortium of 7 prominent European universities and research centres, along with 6 important industrial partners and 2 supervisory authorities, is exploring ways to create a new generation of LEgality Attentive Data Scientists while investigating the interplay between and across many sciences.

LeADS envisages a research and training programme that will blend ground-breaking applied research with pragmatic problem-solving from the involved industries, regulators, and policy makers. The skills produced by LeADS and tested by the ESRs will make it possible to tackle the confusion created by the blurred borders between personal and commercial information, and between personality and property rights, typical of the big data environment. Both processes constitute a silent revolution (developed by new digital business models, industrial standards, and customs) that is already embedded in soft law instruments (such as stakeholders’ agreements) and emerging in case law and legislation (Regulation EU 2016/679 and the e-Privacy Directive to begin with), while data scientists are mostly unaware of them. They cut across the emergence of the Digital Transformation and call for a more comprehensive and innovative regulatory framework. Against this background, LeADS is animated by the idea that in the digital economy data protection holds the keys both to protecting fundamental rights and to fostering the kind of competition that will sustain the growth and “completion” of the “Digital Single Market” and the competitive ability of European businesses outside the EU. Under LeADS, the General Data Protection Regulation (GDPR) and other EU rules can dictate the transnational standard for the global data economy, while the project trains researchers able to drive the process and set an example.

The data economy, or rather the data society we increasingly live in, is our exploratory target from many angles (from the technological to the legal and ethical ones). A new generation of researchers is needed to better answer the challenges of the data economy and the unfolding of the digital transformation. Our Early Stage Researchers (ESRs) will come from many experiences and backgrounds (law, computer science, economics, statistics, management, engineering, policy studies, mathematics, and more).

ESRs will find an enthusiastic transnational, interdisciplinary team of teams tackling the relevant issues from many angles. Their research will be supported by these research teams in setting out the theoretical framework and the practical implementation template of a common language.

The LeADS research plan, although it already envisages 15 specific topics for interdisciplinary investigation, remains open-ended.

This is natural in the fields we have selected, for which we have identified crossover concepts in need of a common understanding, useful for future researchers, policy makers, software developers, lawyers and market actors.

LeADS research strives to create and share cross-disciplinary languages and to integrate the respective background domain knowledge of its participants into one shared idiolect, which it wants to share with a wider audience.

It is LeADS’ understanding that regulatory issues in data science and AI development and deployment are often perceived as (and sometimes are) hurdles to innovation, markets and, above all, research. Our unwritten goal is to contribute to turning regulatory and ethical constraints, which are needed, into opportunities for better development.

LeADS aims at nurturing a data science capable of maintaining its innovative solutions within the borders of the law – by design and by default – and of helping expand the legal frontiers in line with innovation needs, preventing the enactment of legal rules that are technologically unattainable.

By Giovanni Comandé