Explainable & Trustworthy AI

21 March – 20 June 2022, Ghent
training course
VAIA & UGent UGAIN

Artificial Intelligence is built on complex and sophisticated algorithms, which can make it difficult for humans to understand and interpret the decisions or suggestions of an AI system. This course on Explainable AI looks into the different aspects related to (creating) trust in AI.

Artificial Intelligence (AI) has come a long way since its first use and application many decades ago. The use of AI and Machine Learning has seen an immense uptake in the 21st century, and the techniques are successfully applied to a wide variety of problems in academia as well as in the private and public sectors. As the domain became more established in recent years, new challenges arose, because people cannot always follow the rationale behind an AI system's outcomes.

Explainable AI brings the following properties of trust in AI to the foreground:

  • Gaining trust by explaining, for example, the characteristics of AI output.
  • Explaining an AI technique increases understanding, making it possible to investigate whether the technique can be transferred to another domain or problem.
  • Informing users about the workings of an AI model prevents misinterpretation.
  • User confidence can be established by using AI models that are not only explainable but also stable and robust.
  • When explaining AI models, privacy concerns come into play: private data should not be exposed by the models.
  • It is important that actions can be explained: how did we arrive at specific outcomes, and how could we change them?
  • Nowadays, a wide variety of people from different backgrounds come into contact with AI; it is important that they all understand why the system behaves the way it does, and that explanations are tailored to their needs.

Each module within this course focuses on a different topic within the domain of explainable and trustworthy AI, ranging from white-box models and hybrid techniques to aspects of fairness and bias.

Programme

Introduction, 21 March 2022

In this first module, we give a short recap of the basics, followed by an explanation of some general terms used in the domain of explainable and trustworthy Artificial Intelligence. The introduction ends with an overview of the challenges within this domain.

  • Recap the basics: AI, ML and statistics
  • Different types of ML: white-box & black-box
  • Interpretability vs Explainability
  • Human Uncertainty vs Model Uncertainty
  • Challenges

Teacher: prof. dr. ir. Sofie Van Hoecke & prof. dr. Femke Ongenae

White box models, 28 March 2022

In this module, the focus is on white-box models. White-box models are easier to explain and interpret than black-box models, although this typically comes at the cost of predictive capacity. Several approaches will be highlighted:

  • Linear Regression
  • Decision Trees and Rule Sets
  • Generalized Additive Models (GAMs)
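
As a small taste of the white-box idea (an illustrative sketch with made-up data, not course material): in a linear regression, the fitted coefficients themselves are the explanation.

```python
import numpy as np

# Minimal white-box sketch: fit ordinary least squares and read the
# explanation straight from the coefficients.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))
y = 1.5 * X[:, 0] + 10.0 * X[:, 1] + 20.0 + rng.normal(0, 0.1, size=200)

# Solve least squares with an added intercept column.
X1 = np.hstack([X, np.ones((200, 1))])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Each coefficient states how much the prediction changes per unit of
# its feature -- the model is its own explanation.
print(np.round(coefs, 1))   # ≈ [1.5, 10.0, 20.0]
```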

Teacher: dr. Daniel Peralta Cámara

Global vs Local Interpretability, 25 April 2022

Machine learning systems help us connect cause and effect in complex data sets. How we can make those interpretations depends on the type of algorithm used. The level of interpretability we can reach also differs depending on whether we look at how the model predicts in general versus how a specific prediction was computed. When interpretability is good enough, we can ask hypothetical questions like “What if this were the case instead?”, adding to the usefulness of the system. This module explains how to increase and measure both kinds of interpretability.

  • Global Interpretability
  • Local Interpretability
  • Counterfactuals
  • Model distillation
  • Dependency plots
  • Evaluation methods
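
The counterfactual question above can be made concrete. A minimal sketch, assuming a simple linear classifier with hand-picked weights: the smallest input change that flips the decision is an orthogonal projection onto the decision boundary.

```python
import numpy as np

# Illustrative counterfactual for a linear classifier (assumed weights):
# "what is the smallest change to x that flips the decision?"
w = np.array([2.0, -1.0])
b = -0.5
x = np.array([0.1, 0.4])     # score = 2*0.1 - 0.4 - 0.5 = -0.7 -> class 0

score = w @ x + b
# For a linear model, the closest point on the decision boundary is an
# orthogonal projection; step slightly past it to flip the class.
x_cf = x - (score / (w @ w)) * w * 1.01

print(np.sign(w @ x + b), np.sign(w @ x_cf + b))   # -1.0 1.0
```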

Teacher: Arne Gevaert

Saliency mapping, 2 May 2022

To help explain how a neural network reached a certain conclusion, visualizations of its reasoning can be useful. One type of visualization is a heatmap showing which areas of a photo contributed most to the label the system assigned. These visualizations are explained in this module.

  • Convolutional Neural Networks (CNNs) & interpretation
  • Backpropagation
  • Other gradient-based methods
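
The core idea behind such heatmaps can be sketched without a real network (a toy stand-in model, illustrative only): the saliency of a pixel is how sensitive the model's score is to that pixel.

```python
import numpy as np

# Minimal illustration of gradient-based saliency (not a real CNN):
# large |gradient of the score w.r.t. a pixel| = that pixel matters.
def toy_model(img):
    # A hypothetical "detector" that only looks at the 2x2 centre patch.
    return img[1:3, 1:3].sum()

img = np.arange(16, dtype=float).reshape(4, 4)

# Finite-difference gradient of the score w.r.t. each pixel.
saliency = np.zeros_like(img)
eps = 1e-6
for i in range(4):
    for j in range(4):
        bumped = img.copy()
        bumped[i, j] += eps
        saliency[i, j] = (toy_model(bumped) - toy_model(img)) / eps

print(np.abs(saliency).round())   # heatmap: 1s on the centre patch, 0 elsewhere
```

In a real network, the gradient would come from backpropagation rather than finite differences, but the resulting heatmap is read the same way.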

Teacher: Arne Gevaert

Hybrid AI, 9 May 2022

The oldest forms of machine learning entail hand-programmed rule engines. Newer forms entail algorithms that search for connections themselves. The former are good at explaining how they reach their conclusions; the latter sometimes give superior predictions and are a lot less brittle, but lack that explainability. To get the best of both worlds, these approaches are sometimes combined. Moreover, allowing an expert to guide a machine learning system can sometimes lead to yet better predictions. This module explains how.

  • Data-driven vs expert-based approaches
  • Finding synergies in data-driven and expert-based approaches
  • Combining expert knowledge and machine learning
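
One common hybrid pattern can be sketched in a few lines (toy data and a polynomial as a stand-in for the learned component; illustrative only): an expert rule supplies the known baseline, and the data-driven part learns only the residual the rule cannot explain.

```python
import numpy as np

# Toy hybrid sketch: expert baseline + learned residual.
rng = np.random.default_rng(3)
x = rng.uniform(0, 6, size=200)
y = 2.0 * x + np.sin(x)            # the true process

def expert_rule(x):
    return 2.0 * x                 # the part the expert already knows

# Fit only the residual with a small polynomial (stand-in for ML).
residual = y - expert_rule(x)
coeffs = np.polyfit(x, residual, deg=7)

y_hat = expert_rule(x) + np.polyval(coeffs, x)
print(float(np.abs(y - y_hat).max()))   # small residual error
```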

Teacher: prof. dr. ir. Sofie Van Hoecke & prof. dr. Femke Ongenae

Robustness, 16 May 2022

The output of a machine learning system depends on the data used as input. The required amount and structure of that data are often overlooked. However, machine learning systems can be combined to generate additional data or to fine-tune each other. Nevertheless, malicious additions to your training data can corrupt your system, and even a well-trained system can be deceived. This module explains these issues and what you can do about them.

  • Data quality & quantity
  • Adversarial learning
  • Poisoning attacks & defenses
  • Learning theory
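
How easily a model can be deceived is illustrated by the fast gradient sign method; a minimal sketch on a linear scorer with assumed weights (a real attack would use the network's gradient):

```python
import numpy as np

# FGSM-style adversarial example on a linear scorer (assumed weights).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.2, 0.1])
score = w @ x                  # 0.75 -> positive class

# Step each feature a small amount against the predicted class,
# i.e. in the direction -sign(gradient) for a positive score.
eps = 0.4
x_adv = x - eps * np.sign(w)

print(score, w @ x_adv)        # the small perturbation flips the sign
```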

Teacher: Jonathan Peck

Online & Transfer Learning, 23 May 2022

Machine learning systems can be trained before use, e.g. by training on a stack of pictures first and asking the system to make sense of new pictures later. However, training can also happen during use, with the system being updated while it is running. Sometimes this is necessary because training data only becomes (partially) available after the system has been commissioned. Sometimes a system is pretrained on one dataset and the developer wants to retrain it to solve another, related problem, e.g. adapting a machine vision system trained to detect cats so that it detects dogs. The developer thus leverages the effort put into training the earlier system, requiring less training time for the new one. These and other relations between datasets, their use in training models, and the problems we solve with them are explained in this module.

  • Online learning
  • Change detection
  • Transfer learning & domain adaptation (foundation models)
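
Online learning in its simplest form can be sketched as follows (toy streaming data, illustrative only): a linear model updated one sample at a time with stochastic gradient descent, so it keeps adapting after deployment.

```python
import numpy as np

# Online learning sketch: one incremental SGD step per arriving sample.
rng = np.random.default_rng(1)
w = np.zeros(2)
lr = 0.05
for _ in range(2000):
    x = rng.normal(size=2)         # a new sample streams in
    y = 3.0 * x[0] - 1.0 * x[1]    # the target concept
    err = w @ x - y
    w -= lr * err * x              # update the model immediately

print(w.round(2))   # converges towards [3.0, -1.0]
```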

Teacher: Jan Van Looy & Matthias Feys

Bias & Fairness, 30 May 2022

When training machine learning systems, the training data can be biased, leading to unwanted outcomes, e.g. an HR system trained on historical hospital personnel data concluding that women are unlikely to be good candidates for doctor positions. This module explains these issues, how to avoid them, how to measure bias, and what the limitations of avoiding it are.

  • Various notions of fairness & impossibility theorem
  • Different types of bias & methods to debias
  • Ethical guidelines
  • Learning fair models
  • Uncovering model bias
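
One of the simplest fairness measurements can be sketched directly (toy predictions, illustrative only): demographic parity compares the positive-prediction rate across groups.

```python
import numpy as np

# Demographic parity check on toy model decisions.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # model decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute

rate_a = preds[group == 0].mean()   # 3/4 = 0.75
rate_b = preds[group == 1].mean()   # 1/4 = 0.25

print(abs(rate_a - rate_b))         # 0.5 -> large gap, a red flag for bias
```

Whether such a gap constitutes unfairness, and how it trades off against other fairness notions, is exactly what the impossibility theorem in this module addresses.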

Teacher: group Tijl De Bie

Privacy, 13 June 2022

Sometimes the quality of machine learning outputs and privacy are at odds and need to be balanced. However, there are techniques that allow training machine learning systems on privacy-sensitive data without exposing the data itself. Those techniques and the relevant regulation are explained in this module.

  • Pseudonymization
  • K-anonymity
  • Differential privacy
  • Regulation
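
The core of differential privacy fits in a few lines; a minimal sketch of the Laplace mechanism on toy data (illustrative only): answer a counting query with calibrated noise instead of the exact value.

```python
import numpy as np

# Laplace mechanism sketch: noisy answer to "how many people have X?".
rng = np.random.default_rng(42)
data = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])   # sensitive attribute

epsilon = 1.0        # privacy budget: smaller = more private, noisier
sensitivity = 1.0    # one person changes a count by at most 1

true_count = data.sum()
noisy_count = true_count + rng.laplace(scale=sensitivity / epsilon)

print(true_count, round(noisy_count, 2))
```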

Teacher: group Tijl De Bie

Use cases, 20 June 2022

During this module, some specific use cases in the domain of Explainable and Trustworthy AI will be discussed.

Exam, 12 September 2022

in collaboration with:

Practical

  • 21 March – 20 June, exam on 12 September 2022
  • Location: Ghent
  • Language: English
  • Contact: Femke De Backere
  • Target audience: anyone who would like to gain more insight into techniques for achieving explainable & trustworthy AI.

Registration

  • Prerequisites:
    • higher education in computer science or equivalent experience
    • programming experience with Python or a related programming language
  • Certification: a certificate is awarded to students who attend all lessons and pass the final exam (on 12 September 2022).
  • The number of participants is limited to 25.

Ready to get started?

All practical information can be found on the website of UGAIN: