Implicit biases in human and artificial intelligence

21 April 2021
VUB & ULB

The roots of discrimination lie in automatic processes: this seminar brings together social and cognitive psychology studies with artificial intelligence applications.

The focus of the talk is to investigate the presence of so-called “implicit biases” (Devine et al. 2009) in both human and artificial intelligence, especially as encoded in natural language: how they structure our cognitive system, thereby influencing our choices as well as the technological products we design, and what social impact they have. These aspects are discussed in many disciplines. The aim is therefore to survey the main debates from two complementary perspectives, both highlighting how the roots of discrimination lie in automatic processes: on one side, social and cognitive psychology studies; on the other, artificial intelligence (AI) applications.

From the cognitive psychology perspective, biases are defined as “short-cuts” or “mental helpers” that the mind needs in order to make decisions in response to the stimuli of a complex external world (Greenwald et al. 2002). Biases are also involved in our way of classifying things: our expectations are automated, so that the mere presence of a cue related to a category (a “category-linked cue”) can activate a series of automatic associations without conscious awareness or intention (Devine et al. 2009). This automatic cognitive process plays a role not only in categories of objects but also in social categories, including stereotypes. Automatic associations occur even in people who do not share, or who explicitly repudiate, the content of the representations, as in the case of racial biases. They are therefore called “implicit biases”, and they are not the exception but the rule in information processing. Only at the end of this process can one decide to activate mechanisms that counteract the automatic associations, which, as measured by the Implicit Association Test (IAT, Greenwald et al. 1998), have various kinds of “epistemic costs” for the subject’s mind (Gendler 2011).
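To make the measurement concrete: the IAT compares response latencies between “compatible” blocks (pairings that match the automatic association) and “incompatible” blocks. The following is a minimal sketch of the resulting effect size, using fabricated latencies and omitting the full scoring procedure (error penalties, trial filtering); it is an illustration, not the official scoring algorithm.

    import statistics

    def iat_d_score(compatible_ms, incompatible_ms):
        """Simplified IAT effect size: mean latency difference between the
        incompatible and compatible blocks, divided by the pooled standard
        deviation of all trials."""
        mean_diff = statistics.mean(incompatible_ms) - statistics.mean(compatible_ms)
        pooled_sd = statistics.stdev(compatible_ms + incompatible_ms)
        return mean_diff / pooled_sd

    # Fabricated response latencies in milliseconds, for illustration only
    compatible = [650, 700, 720, 680, 640, 710]    # e.g. flowers + pleasant
    incompatible = [820, 900, 870, 910, 850, 880]  # e.g. flowers + unpleasant
    print(f"D = {iat_d_score(compatible, incompatible):.2f}")

A larger positive score indicates a stronger automatic association with the compatible pairing.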

From the artificial intelligence perspective, word embeddings, a recent and powerful machine learning technique, are vector representations of words based on the idea that contextual information alone constitutes a viable representation of a linguistic item (Firth 1957, Wittgenstein 1953). They have been shown to carry biases mirroring those present in our societies and thus encoded in our languages. The many attempts at reducing bias, either via post-processing (Bolukbasi et al. 2016) or directly in training (Zhao et al. 2018), have nevertheless left two research problems open: (i) biases are still encoded implicitly in language, and the actual effect of these techniques is mostly to hide biases rather than remove them, so existing bias removal techniques are insufficient and should not be trusted (Gonen et al. 2019); and (ii) it is debatable whether we should aim at removal or rather at transparency and awareness (Swinger et al. 2019), carrying out a fair analysis of the human biases present in word embeddings (Nissim et al. 2019). Great attention will be paid to those authors who, by replicating known biases measured by the psychological IAT, demonstrated that word embeddings track not only gender or ethnic stereotypes but the whole spectrum of human biases embedded in language: since “bias is meaning” (Caliskan et al. 2017, p. 12), it would be impossible to use language meaningfully without incorporating them.
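The embedding analogue of the IAT is the Word Embedding Association Test (WEAT) of Caliskan et al. (2017), which replaces response latencies with cosine similarities between word vectors. The sketch below, in which all vector sets are assumed to come from pre-trained embeddings and the function names are illustrative, shows the WEAT effect size together with a projection-based debiasing step in the spirit of Bolukbasi et al. (2016).

    import numpy as np

    def cos(u, v):
        # Cosine similarity between two embedding vectors
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    def association(w, A, B):
        # s(w, A, B): how much more strongly w associates with
        # attribute vectors A (e.g. pleasant) than B (e.g. unpleasant)
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])

    def weat_effect_size(X, Y, A, B):
        # WEAT analogue of the IAT D-score: X, Y are target word vectors
        # (e.g. flowers vs. insects), A, B are attribute word vectors
        sx = [association(x, A, B) for x in X]
        sy = [association(y, A, B) for y in Y]
        return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

    def hard_debias(w, g):
        # Post-processing in the spirit of Bolukbasi et al. (2016):
        # remove the component of w along a bias direction g,
        # e.g. g = vec("he") - vec("she")
        g = g / np.linalg.norm(g)
        return w - (w @ g) * g

Gonen et al.'s (2019) point can be read directly off this sketch: the debiasing step zeroes the component along one chosen direction g, but words can still cluster by, say, gender in the remaining dimensions, so the bias is hidden rather than removed.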

The analysis of these studies and applications has led to a first attempt at a formal model of implicit bias, as a possible symbolic approach to representing and explaining biases and their consequences. At the very least, the aim is greater awareness, guided by the underlying question: what can AI tell us about the way we conceive of ourselves as thinking and acting beings?

The speaker

Ludovica Marinucci is a Post-Doctoral Researcher at the Semantic Technology Laboratory (STLab) of the National Research Council (CNR) in Rome, Italy, working on projects involving the analysis of the social and cognitive aspects of the use of semantic technologies. Since 2014, she has been an adjunct professor in Philosophy of Science at Tor Vergata University of Rome, Faculty of Medicine. In 2017 she received her PhD in Philosophy, Epistemology and History of Culture from the University of Cagliari (Italy), during which she began to address the theoretical possibilities and challenges offered by the computational analysis of historical and philosophical texts and, more generally, by the interaction of computer science and the humanities.

Given the current situation related to the spread of Covid-19, the 2021 seminar will take the form of a webinar, hosted on the Teams platform. Participation is free, but registration is required: please send an email to sebastien.de.valeriola@ulb.be and andrea.penso@vub.be before April 16. A link to access the Teams meeting will then be sent.

Practical

Registration:

  • Registration until 16 April 2021
  • Price: free, but registration is required

Ready to get started?

All practical information can be found at the Brussels platform for digital humanities.