Sense & Sensibility of AI

Seminar series, 2021
Flemish AI Academy

AI systems carry the risk of creating filter bubbles and polarization. While AI is being rolled out into society, the discussion on how AI-based systems may align with and even affect our values is pushed to the forefront. We gave the computer senses, but how can we give it sensibility?

AI has an increasing influence on our daily lives; examples include automated decision-making for high-stakes decisions such as mortgages and loans, automated risk assessments for bail, and recommender systems on the internet. But can AI systems also take human values into account? This debate requires a multi-disciplinary view, in which both technical and non-technical perspectives have a prominent place.

In our lecture series ‘Sense & Sensibility of AI’ we aim for researchers to learn about the different aspects of ethics in AI: not only to become aware of them, but also to learn about the impact of AI on society and about methodologies to identify, assess and, where possible, address ethical issues. These monthly seminars tackle subjects such as bias and fairness, privacy, and trustworthiness, while balancing technical, social, and regulatory perspectives.

The series is targeted at researchers working in the broad field of AI and data science. A background in the technical aspects of AI and machine learning may be required to understand the lectures in full.

Sense & Sensibility of AI is a seminar series developed by the Flemish AI Academy in collaboration with, and with the support of, all our partners: all universities in Flanders and the Knowledge Centre Data & Society.

A collaboration of all the universities and universities of applied sciences in Flanders


  • monthly in 2021
    starting 28 May
  • Location: online
  • Contact: Laura Alonso
  • Language: English
  • Price: free for researchers
    • registration is mandatory
    • register for each seminar separately

Seminars ‘Sense & Sensibility of AI’

When is data ‘personal’? When do you perform a DPIA? Why do some organizations have a Data Protection Officer, and when should you contact them? Ellen Wauters (CiTiP, imec) and Brahim Bénichou (Knowledge Centre Data & Society & KU Leuven) explain the basic principles of the GDPR and illustrate them with some handy tips and tricks. In preparation for the seminar, you can consult the privacy guidelines and policies applicable in your organization.

AI systems in the broad sense of the term increasingly inform everyday life, from smart energy grids to connected cars, and from social security fraud detection to search engines, recommender systems and other types of behavioural micro-targeting. In this lecture I will discuss how the legal framework of fundamental rights in the EU combines with data protection law and the upcoming legal framework for AI systems. At their core, these legal architectures require that legal protection is built into the development and deployment of AI systems, demanding acuity and foresight from those who put these systems on the market. I will explain that this is not about good intentions or laudable ethical inclinations, but about developers, providers and deployers operating under the rule of law, bound by the norms agreed by the democratic legislature. Finally, I will explain why law is not about obstructing innovation but about enabling innovation that addresses and enhances human agency. We don’t want ‘humans in the loop’, we need humans in charge.

Previous Seminars

People often act inconsistently and do not always take account of all the relevant information necessary to arrive at a well-considered moral judgment. To make up for these shortcomings, researchers propose to be assisted by AI in moral decision-making. Unlike humans, AI can collect, analyze and process huge amounts of data in a very short period of time. To what extent is it realistic that AI will assist us in the future? Katleen Gabriels discusses different types of Artificial Moral Agents (AMAs) and the technical, conceptual and ethical challenges of moral judgment made by AI.

We live in a time when information about most of our movements and actions is collected and stored in real-time. The availability of large-scale behavioral data dramatically increases our capacity to understand and potentially affect the behavior of individuals and collectives.

The use of this data, however, raises legitimate privacy concerns. Anonymization is meant to address these concerns: allowing data to be fully used while preserving individuals’ privacy. In this talk, Prof. de Montjoye will first discuss how traditional data protection mechanisms fail to protect people’s privacy in the age of big data. More specifically, he will show how the mere absence of obvious identifiers such as name or phone number, or the addition of noise, is not enough to prevent re-identification. Second, de Montjoye will describe what he sees as a necessary evolution of the notion of data anonymization towards an anonymous use of data. He will then conclude by discussing some of the modern privacy engineering techniques currently being developed to allow large-scale behavioral data to be used while giving individuals strong privacy guarantees.

Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. But predictions made using data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society.

Is it possible to make the data mining process fairness-aware? Are algorithms biased because people are? Or is bias inherent to how machine learning works at its most fundamental level?

People in current Information Retrieval systems are not only the ones issuing search queries; increasingly, they are also the ones being searched. Professor Castillo explains and expands on how this raises several new problems in Information Retrieval that have been addressed in recent research, particularly regarding fairness and non-discrimination, accountability, and transparency.

Building on his paper with Dr. Guerses, ‘Privacy after the Agile Turn’, and on what Julie Cohen has called ‘Turning Privacy Inside Out’, this talk will look at the conditions for protecting fundamental rights in relation to data-intensive services and information systems. Making this turn requires looking at the particular ways and logics of production of services and information systems, which are shaped by larger trends such as the emergence of cloud and mobile computing and the platform economy, as well as at the particular challenges and opportunities of protecting fundamental rights in these environments.