Algorithms are learning from our behaviour: How must we teach them?

Have you ever wondered why the online suggestions you receive for videos, products, services or special offers fit so perfectly with your preferences and interests? Why your social media feed shows only certain content and filters out the rest? Or why an internet search on your smartphone returns certain results that you cannot reproduce on another device? Why does a map application suggest one route over another? Or why you are always matched with cat lovers on dating apps?

Did you just click away, thinking that your phone mysteriously understands you? You may have wondered about this and still never found out why.

How these systems work to suggest specific content or courses of action is generally invisible. The input, output and inner processes of their algorithms are neither disclosed to users nor made public. Yet such automated systems increasingly inform many aspects of our lives: the online content we interact with, the people we connect with, the places we travel to, the jobs we apply for, the financial investments we make, and the love interests we pursue. As we experience a new realm of digital possibilities, our vulnerability to the influence of inscrutable algorithms increases.

Some of the decisions taken by algorithms may create seriously unfair outcomes that unjustifiably privilege certain groups over others. Because machine-learning algorithms learn from the data that we feed them, they inevitably also learn the biases reflected in that data. For example, the algorithm that Amazon employed between 2014 and 2017 to automate the screening of job applicants reportedly penalised resumes containing the word ‘women’s’ (e.g., in the names of women’s colleges). The recruiting tool learned patterns from the previous ten years of candidates’ resumes and therefore learned that Amazon preferred men to women, as men were hired more often as engineers and developers. This means that women were blatantly discriminated against purely on the basis of their gender with regard to obtaining employment at Amazon.
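
To see the mechanism at work, consider a minimal Python sketch (entirely synthetic data and hypothetical features, not Amazon’s actual system): a standard classifier trained on historically biased hiring decisions faithfully reproduces that bias in its learned weights.

```python
# A toy illustration (synthetic data, hypothetical features; NOT Amazon's
# actual system): a classifier trained on biased historical hiring
# decisions learns to penalise a gendered feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Synthetic "resumes": years of experience and whether the resume
# mentions a women's college.
experience = rng.uniform(0, 10, n)
womens_college = rng.integers(0, 2, n)
X = np.column_stack([experience, womens_college])

# Biased historical labels: past recruiters hired experienced candidates,
# but systematically rejected resumes mentioning a women's college.
hired = (experience > 5) & (womens_college == 0)

model = LogisticRegression().fit(X, hired)

# The model reproduces the bias it was shown: the learned weight on the
# women's-college feature comes out strongly negative.
print("weights [experience, womens_college]:", model.coef_[0])
```

Nothing in this code asks the model to discriminate; the bias enters purely through the historical labels it is trained on.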

To avoid a world in which algorithms unconsciously guide us towards unfair or unreasonable choices because they are inherently biased or manipulated, we need to fully understand and appreciate the ways in which we teach these algorithms to function. A growing number of researchers and practitioners already works on explainable AI, designing processes and methods that allow humans to understand and trust the results of machine-learning algorithms. Legally, the European General Data Protection Regulation (GDPR) spells out specific levels of fairness and transparency that must be adhered to when using personal data, especially when such data is used to make automated decisions about individuals. This introduces the principle of accountability for the impact or consequences that automated decisions have on human lives. In a nutshell, this developing domain is called algorithmic transparency.
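
To give a flavour of what such an explanation can look like, here is a minimal sketch (hypothetical feature names, synthetic data) of one of the simplest explainability techniques: decomposing a single automated decision of a linear model into per-feature contributions that a human can inspect.

```python
# A toy illustration of one simple explainability technique (hypothetical
# feature names, synthetic data): decompose a linear model's decision for
# a single applicant into per-feature contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
feature_names = ["years_experience", "num_publications", "referral"]

# Synthetic data whose labels mostly depend on the first two features.
X = rng.uniform(0, 1, size=(500, 3))
y = (X @ np.array([2.0, 1.0, 0.5]) + rng.normal(0, 0.2, 500)) > 1.75

model = LogisticRegression().fit(X, y)

# For a linear model, the decision score is w . x + b, so each feature's
# share of the score is simply weight * value.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, share in sorted(zip(feature_names, contributions),
                          key=lambda item: -abs(item[1])):
    print(f"{name}: {share:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

More sophisticated methods exist for non-linear models, but the goal is the same: make the reasons behind an automated decision legible to the person it affects.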

However, there are many questions, concerns and uncertainties that need in-depth investigation. For example:

1) How can the complex statistical functioning of a machine-learning algorithm be explained in a comprehensible way?
2) To what extent does transparency build, or hamper, trust?
3) To what extent is it fair to influence people’s choices through automated decision-making?
4) Who is liable for unfair decisions?

… and many more.

These questions need answers if we wish to teach algorithms well and make the co-existence of humans and machines both productive and ethical.

 

Authors:

Dr Arianna Rossi – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/arianna-rossi-aa321374/, Twitter: @arionair89

Dr Marietjie Botes – Research Associate at the Interdisciplinary Centre for Security, Reliability and Trust (SnT), University of Luxembourg, LinkedIn: https://www.linkedin.com/in/dr-marietjie-botes-71151b55/, Twitter: @Dr_WM_Botes