Sense & Sensibility of AI

Seminar series, 2021
Flemish AI Academy

AI systems carry the risk of creating filter bubbles and polarization. While AI is being rolled out into society, the discussion on how AI-based systems may align with and even affect our values is pushed to the forefront. We gave the computer senses, but how can we give it sensibility?

AI has an increasing influence on our daily lives: examples include automated decision-making for high-stakes decisions such as mortgages and loans, automated risk assessments for bail, and recommenders on the internet. But can AI systems also take human values into account? This debate requires a multi-disciplinary view, in which both technical and non-technical perspectives have a prominent place.

In our lecture series ‘Sense & Sensibility of AI’ we aim for researchers to learn about the different aspects of ethics in AI: not only to become aware of them, but also to learn about the impact of AI on society and about methodologies to identify, assess and, where possible, address ethical issues. These monthly seminars tackle subjects such as bias and fairness, privacy, and trustworthiness, balancing technical, social, and regulatory perspectives.

The series is targeted at researchers working in the broad field of AI and data science. Following the lectures in full may require a background in the technical aspects of AI/machine learning.

Sense & Sensibility of AI is a seminar series developed by the Flemish AI Academy in collaboration with, and with the support of, all our partners: all universities in Flanders and the Knowledge Centre Data & Society.

a collaboration between all universities
in Flanders

Practical

  • monthly in 2021
    starting 28 May
  • Location: online
  • Contact: Laura Alonso
    laura.alonso@vlaamse-ai-academie.be
  • Language: English
  • Price: free for researchers
    • registration is mandatory
    • register for each seminar separately

Seminars ‘Sense & Sensibility of AI’

People often act inconsistently and do not always take into account all the information necessary to arrive at a well-considered moral judgment. To make up for these shortcomings, researchers propose using AI to assist us in moral decision-making. Unlike humans, AI can collect, analyze and process huge amounts of data in a very short period of time. To what extent is it realistic that AI will assist us in the future? Katleen Gabriels discusses different types of Artificial Moral Agents (AMAs) and the technical, conceptual and ethical challenges of moral judgments made by AI.

Ranking in Information Retrieval has been traditionally evaluated from the perspective of the relevance of search engine results to people searching for information, i.e., the extent to which the system provides “the right information, to the right people, in the right way, at the right time.” However, people in current Information Retrieval systems are not only the ones issuing search queries, but increasingly they are also the ones being searched. Professor Castillo explains and expands on how this raises several new problems in Information Retrieval that have been addressed in recent research, particularly regarding fairness/non-discrimination, accountability, and transparency.

more information will follow after the summer

When is data ‘personal’? When do you perform a DPIA? Why do some organizations have a Data Protection Officer, and when should you contact them? Ellen Wauters (CiTiP, imec) and Brahim Bénichou (Knowledge Centre Data & Society & KU Leuven) explain the basic principles of the GDPR and illustrate them with some handy tips and tricks. They will also briefly discuss the Ethics Guidelines for Trustworthy AI and its seven key requirements. The development, deployment and use of AI systems should meet these requirements in order to be considered Trustworthy AI, privacy and data governance being one of them. In preparation for the seminar, you can consult the privacy guidelines/policies applicable in your organization.

We live in a time when information about most of our movements and actions is collected and stored in real-time. The availability of large-scale behavioral data dramatically increases our capacity to understand and potentially affect the behavior of individuals and collectives.

The use of this data, however, raises legitimate privacy concerns. Anonymization is meant to address these concerns: allowing data to be fully used while preserving individuals’ privacy. In this talk, Prof. de Montjoye will first discuss how traditional data protection mechanisms fail to protect people’s privacy in the age of big data. More specifically, he will show how the mere absence of obvious identifiers such as name or phone number, or the addition of noise, is not enough to prevent re-identification. Second, de Montjoye will describe what he sees as a necessary evolution of the notion of data anonymization towards an anonymous use of data. He will conclude by discussing some of the modern privacy engineering techniques currently being developed to allow large-scale behavioral data to be used while giving individuals strong privacy guarantees.
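To give a flavour of why removing names is not enough, the following minimal sketch (not taken from the talk; all names, values, and the `reidentify` helper are invented for illustration) shows a classic linkage attack: an “anonymized” dataset is joined with a public register on quasi-identifiers such as postal code, birth date, and sex, re-identifying every record.

```python
# "Anonymized" records: names removed, but quasi-identifiers remain.
medical = [
    {"zip": "1000", "birth": "1985-03-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "2000", "birth": "1990-07-01", "sex": "M", "diagnosis": "flu"},
]

# A public register (e.g. a voter list) that does include names.
register = [
    {"name": "An Peeters", "zip": "1000", "birth": "1985-03-12", "sex": "F"},
    {"name": "Jan Claes",  "zip": "2000", "birth": "1990-07-01", "sex": "M"},
]

QUASI = ("zip", "birth", "sex")  # quasi-identifiers shared by both datasets


def reidentify(anonymized, public):
    """Link the two datasets on their quasi-identifiers."""
    index = {tuple(r[k] for k in QUASI): r["name"] for r in public}
    return {
        index[key]: rec["diagnosis"]
        for rec in anonymized
        if (key := tuple(rec[k] for k in QUASI)) in index
    }


print(reidentify(medical, register))
# {'An Peeters': 'asthma', 'Jan Claes': 'flu'}
```

In this toy example the combination of three innocuous attributes is unique per person, so every “anonymous” diagnosis is recovered, which is precisely the failure mode the talk addresses.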

Artificial intelligence is increasingly responsible for decisions that have a huge impact on our lives. But predictions made using data mining and algorithms can affect population subgroups differently. Academic researchers and journalists have shown that decisions taken by predictive algorithms sometimes lead to biased outcomes, reproducing inequalities already present in society.

Is it possible to make the data mining process fairness-aware? Are algorithms biased because people are? Or is bias inherent to how machine learning works at its most fundamental level?