Legal Protection by Design for AI Systems

Mireille Hildebrandt, University of Brussels & Radboud University Nijmegen
26 January 2022, online
Seminar Series ‘Sense & Sensibility of AI’

How to build legal protection into the development and deployment of AI systems, constructing them on more than good intentions, based on norms agreed by a democratic legislature.

AI systems, in the broad sense of the term, increasingly inform everyday life: from smart energy grids to connected cars, and from social security fraud detection to search engines, recommender systems and other types of behavioural micro-targeting. In this lecture Prof. Hildebrandt will discuss how the EU’s legal framework of fundamental rights combines with data protection law and the upcoming legal framework for AI systems. At their core, these legal architectures require that legal protection is built into the development and deployment of AI systems, demanding acuity and foresight from those who put these systems on the market. She will explain that this is not about good intentions or laudable ethical inclinations, but about developers, providers and deployers operating under the rule of law, following the norms agreed by the democratic legislature. Finally, she will explain why law is not about obstructing innovation, but about enabling innovation that addresses and enhances human agency. We don’t just want ‘humans in the loop’; we need humans in charge.

Mireille Hildebrandt

Mireille Hildebrandt is a tenured research professor on ‘Interfacing Law and Technology’ at the University of Brussels. Her research focuses on the implications of automated decisions, machine learning and mindless artificial agency, notably for law and the rule of law in constitutional democracies. Working on the cusp of law and computer science is core to her research.

At the University of Brussels (VUB), Professor Hildebrandt works with the research group on Law Science Technology and Society Studies (LSTS) at the Faculty of Law and Criminology. Since 2011 she has also held the part-time Chair of ‘Smart Environments, Data Protection and the Rule of Law’ at the Institute for Computing and Information Sciences (iCIS) of the Science Faculty of Radboud University Nijmegen.

She has published 4 scientific monographs, 22 edited volumes or special issues, and over 100 chapters and articles in scientific journals and volumes. See notably her monograph Smart Environments and the End(s) of Law: Novel Entanglements of Law and Technology.

In 2018 Hildebrandt was awarded an ERC Advanced Grant for research on ‘Counting as a Human Being in Computational Law’ (COHUBICOL), on the cusp of law and computer science, philosophy of law and philosophy of technology. The project runs from 2019 to 2024 at Vrije Universiteit Brussel (law), partnered with Radboud University (computer science).

COHUBICOL is focused on foundational research into two types of computational law or ‘legal tech’:

  • the use of machine learning for argumentation mining and quantified legal prediction, and
  • the use of blockchain to develop self-executing code for regulation and contracts (‘smart regulation’, ‘smart contracts’).

Practical

  • 26 January 2022, 11:00–12:00
  • Location: online
  • Contact: Laura Alonso
    laura.alonsopadula@vaia.be
  • Language: English
  • Target Group: researchers with insight into the technical aspects of AI & machine learning

Registration

  • Register before 20 January
  • Prerequisites: master’s degree
  • Price: free
  • Please check your spam folder if you haven’t received a link two days before the start of the seminar.

You registered, but cannot attend after all? Please let us know at laura.alonsopadula@vaia.be.

Sense & Sensibility of AI

Seminar series on AI Ethics:
Fairness, Privacy, Trustworthiness

AI has an increasing influence on our daily lives; examples include automated decision-making for high-stakes decisions such as mortgages and loans, automated risk assessments for bail, and recommender systems on the internet. These AI systems carry the risk of creating filter bubbles and polarization. While AI is being rolled out into society, the discussion of how AI-based systems may align with and even affect our values is being pushed to the forefront. We gave the computer senses, but how can we give it sensibility? This requires a multidisciplinary view, in which both technical and non-technical perspectives have a prominent place.

In our lecture series ‘Sense & Sensibility of AI’, we aim for PhD students to learn about the different aspects of ethics in AI: not only to become aware of them, but also to learn about the impact of AI on society and about methodologies to identify, assess, and possibly address ethical issues. The monthly seminars tackle subjects such as bias and fairness, privacy, and trustworthiness, balancing technical, social, and regulatory perspectives.

The series is targeted at doctoral students working in the broad field of AI and data science. To understand the lectures in full, a background in the technical aspects of AI and machine learning may be required.

Sense & Sensibility of AI is a seminar series developed by the Flemish AI Academy in collaboration with, and with the support of, all our partners: all universities in Flanders and the Knowledge Center Data & Society.

Register

We use this information only for your registration, as set out in the Privacyverklaring & disclaimer (privacy statement & disclaimer).