Fairness and Bias in Natural Language Processing

15 & 16 March 2022, Leuven
VAIA – KU Leuven

How can you use Natural Language Processing with fair algorithms, avoiding a repetition of (historical) prejudices?

Natural Language Processing, or NLP, is rapidly gaining popularity. It appears more and more in everyday life, most notably in the chatbots that assist you in online shops. Meanwhile, developers continue to struggle with a core problem: NLP algorithms learn from existing, historical texts and decisions, which include mistakes. So how can they make these systems ‘fair’ if the systems don’t learn from fair input? And how can they ensure that analyses are neutral and unbiased if the source material consists of prejudiced content? Over two half days, Prof. Tim Van de Cruys (Department of Linguistics) and Pieter Delobelle (Department of Computer Science) of KU Leuven will teach you the most important techniques for recognizing and avoiding bias in NLP.


Day 1: Introduction to NLP

Tim Van de Cruys, KU Leuven

Introduction to NLP (14h-16h)

  • Different paradigms for NLP: symbolic, statistical, neural
  • NLP applications
  • Examples of bias in NLP applications

Neural architectures

  • Word embeddings
  • Continuous bag of words
  • Convolutional neural networks
  • Recurrent neural networks
  • Transformer architectures
  • Contextual representations and transfer learning

Practical session: word embeddings (16.30h-17.30h)

  • Training word embeddings
  • Analogy computations
  • Gender bias in word embeddings
  • Mitigating gender bias
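As a taste of what this practical session covers, here is a minimal sketch of analogy computation over word embeddings. All vectors below are tiny illustrative toys (the session itself will use real trained embeddings such as word2vec or GloVe, with hundreds of dimensions); the classic example is that the vector king − man + woman lands closest to queen.

```python
import numpy as np

# Toy embedding table (illustrative values, not trained vectors).
emb = {
    "king":  np.array([0.8, 0.7, 0.1]),
    "queen": np.array([0.8, 0.1, 0.7]),
    "man":   np.array([0.2, 0.8, 0.1]),
    "woman": np.array([0.2, 0.1, 0.8]),
    "apple": np.array([0.9, 0.5, 0.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, vocab):
    """Return the word d maximizing cos(d, b - a + c), excluding a, b, c."""
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("man", "king", "woman", emb))  # -> queen
```

The same mechanism is what exposes gender bias: with real embeddings, analogies like man : computer programmer :: woman : ? have been shown to return stereotyped completions.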

Day 2: Fairness and Bias in NLP   

Pieter Delobelle, KU Leuven

Introduction to bias (14h-16h)

  • A classification of bias
  • Model interpretability
  • Measuring fairness
  • Debiasing methods
  • Transparent machine learning, model cards

Intrinsic and extrinsic measures of fairness

  • History
  • General fairness and definitions: stereotyping, protected groups, etc… 
  • Measuring fairness in NLP
  • Evaluations in language models (the focus, as they are the current state of the art)
    • Case study on evaluating fairness in RobBERT
    • Differences with English
    • WEAT/SEAT and PCA-based measures 
    • Issues with evaluations (“bad seeds”, “nordic salmon”, etc…)
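To make the WEAT measure mentioned above concrete, here is a minimal sketch of its test statistic on toy 2-d vectors. The word labels in the comments and all numeric values are illustrative, not taken from the actual benchmark: WEAT compares how strongly two target sets (e.g. career vs. family words) associate with two attribute sets (e.g. male vs. female terms).

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    """Association s(w, A, B): mean cosine to attribute set A minus set B."""
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat(X, Y, A, B):
    """WEAT test statistic: summed associations of target set X minus Y."""
    return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)

# Toy vectors, chosen so the first target set leans toward attribute set A.
A = [np.array([1.0, 0.1]), np.array([0.9, 0.2])]  # e.g. "he", "man"
B = [np.array([0.1, 1.0]), np.array([0.2, 0.9])]  # e.g. "she", "woman"
X = [np.array([0.8, 0.3])]                        # e.g. "career"
Y = [np.array([0.3, 0.8])]                        # e.g. "family"

print(weat(X, Y, A, B))  # positive: X associates with A, Y with B
```

SEAT applies the same statistic to sentence-level representations, which is why the two are usually discussed together.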

Mitigating stereotypes in language models

  • Overview of different methods
  • Retraining, adapters, projections, …
  • Limitations
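Of the projection-based methods listed above, the core step can be sketched in a few lines: remove the component of each word vector along an estimated bias direction (as in "hard" debiasing). The vectors and the word pair used to estimate the direction are illustrative toys.

```python
import numpy as np

def debias(vec, direction):
    """Remove the component of `vec` along the unit-normalized bias direction,
    the core step of projection-based ('hard') debiasing."""
    d = direction / np.linalg.norm(direction)
    return vec - (vec @ d) * d

# Toy example: bias direction from a gendered word pair, applied to a
# profession word that should be gender-neutral (values are illustrative).
he, she = np.array([1.0, 0.2]), np.array([0.2, 1.0])
doctor = np.array([0.9, 0.4])

gender_dir = he - she
neutral = debias(doctor, gender_dir)

# After projection, 'doctor' has zero component along the bias direction.
print(neutral @ (gender_dir / np.linalg.norm(gender_dir)))  # ~0.0
```

The "Limitations" point above applies here: projecting out one direction hides bias from this particular probe but, as later work has shown, does not necessarily remove it from the representation as a whole.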

Practical session: transformer models (16.30h-17.30h)

  • Masked word prediction with BERT
  • Biased prediction in BERT
  • Finetuning a transformer model for classification
  • Biased classification


  • 15 & 16 March 2022, 14-17.30h
  • Location: KU Leuven – ESAT: Kasteelpark Arenberg 10, 3001 Leuven. Room B00.35
  • Language: English
  • Contact: laura.alonsopadula@vaia.be
  • Target audience: researchers and professionals active in AI/Data Sciences with focus on language processing
  • Bring your own laptop for the practical sessions.


  • Prerequisites: good knowledge of machine learning
  • Coffee, tea and water are available for the participants
  • Price
    • €140 professionals
    • €70 researchers at Flemish universities
  • Certificate: attendance certificate
  • The number of participants is limited to 25

Registration form

Fill in the form and confirm your registration via the e-mail sent to you. Don’t forget to click the link!
We will send you the invoice once your registration has been approved.

If you prefer to pay immediately and don’t need or want to receive an invoice, follow these steps:

  1. Fill in the form (don’t forget!)
  2. Pay by bank transfer:

Account owner: KU Leuven
IBAN-n°: BE 09 4320 0000 1157
Structured Message: 400/0021/79105

Cancellation policy: Cancellation (subject to a 5% administrative cost) is only possible until the 4th of March. After that date, cancellation will only be accepted with a valid reason. If no valid reason is presented, no reimbursement will apply.