We develop AI technology that highlights the inequality of opportunity in society and that actively increases the prospect of equality of opportunity in education, well-being, environment, mobility and health. We also serve as an information point for residents and businesses who have questions about new AI technologies and their ethical and inclusive use.
This project focuses on the development and deployment of new tools for the fair distribution of educational finances. The tools rely on machine learning algorithms that allow professionals who are not experts in machine learning or statistics to define unwanted machine behavior, such as unequal treatment of certain population groups by automated decision systems.
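Before an algorithm can enforce a professional's notion of "unequal treatment", that notion must be made measurable. As an illustration only (the project's actual formalization is not described here), the gap in positive-decision rates between population groups is one such measurable constraint:

```python
def demographic_parity_gap(decisions, groups):
    """Difference between the highest and lowest positive-decision
    rate across groups -- one simple, checkable way a non-expert
    professional could pin down 'unequal treatment'."""
    rates = {}
    for d, g in zip(decisions, groups):
        rates.setdefault(g, []).append(d)
    means = [sum(v) / len(v) for v in rates.values()]
    return max(means) - min(means)
```

A professional could then require, for example, that the gap stay below a chosen tolerance before a model is deployed.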
In this project we develop machine learning models for differentially fair classification and (causal) prediction tasks in healthcare that take into account social factors and fundamental human rights such as non-discrimination, equality, and privacy. Unlike most other fair machine learning models in healthcare, which focus on mathematical notions of fairness, the models developed in this work package build on intersectionality theory.
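One well-known formalization of intersectional fairness is ε-differential fairness, which bounds the ratio of outcome rates between every pair of intersectional groups (e.g. gender × migration background). Whether this project uses exactly this measure is an assumption; a minimal sketch of the metric:

```python
import math

def differential_fairness_epsilon(group_rates):
    """Smallest epsilon such that, for every pair of intersectional
    groups i and j with positive-outcome rates p_i and p_j,
    exp(-eps) <= p_i / p_j <= exp(eps)."""
    eps = 0.0
    for pi in group_rates.values():
        for pj in group_rates.values():
            if pi > 0 and pj > 0:
                eps = max(eps, abs(math.log(pi) - math.log(pj)))
    return eps
```

An ε of 0 means all intersectional groups receive the positive outcome at identical rates; larger ε means larger disparities between some pair of groups.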
This project develops models whose predictive outcomes are explainable and adaptable at the same time. These so-called actionable explanations make it possible for end users, such as well-being professionals, to identify useful variables (such as income) that can be used to steer towards certain desired outcomes (such as weight reduction). We combine the appeal of counterfactual explanations with the properties of actionable features to generate actionable explanations.
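A hypothetical sketch of this idea (not the project's actual method): a greedy counterfactual search that only perturbs features the user can act on, leaving immutable attributes such as age untouched. The model and step sizes below are invented for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def actionable_counterfactual(x, prob, steps, threshold=0.5, max_iter=100):
    """Greedy search for an actionable counterfactual: repeatedly apply
    the single actionable feature change that most increases the model's
    predicted probability, until the decision flips."""
    x = dict(x)
    applied = []
    for _ in range(max_iter):
        if prob(x) >= threshold:
            return x, applied              # desired outcome reached
        best_feat, best_p = None, prob(x)
        for feat, step in steps.items():   # only actionable features
            cand = dict(x)
            cand[feat] += step
            p = prob(cand)
            if p > best_p:
                best_feat, best_p = feat, p
        if best_feat is None:
            return None, applied           # no actionable change helps
        x[best_feat] += steps[best_feat]
        applied.append(best_feat)
    return None, applied

# Hypothetical well-being model: probability of reaching a weight goal.
prob = lambda v: sigmoid(0.01 * v["income"] + 0.5 * v["exercise"] - 5)
cf, applied = actionable_counterfactual(
    {"income": 200, "exercise": 2, "age": 40},  # 'age' is not actionable
    prob,
    {"income": 50, "exercise": 1},              # allowed per-step changes
)
```

The returned `applied` list is the explanation: the concrete sequence of feasible changes that would flip the outcome for this individual.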
This project develops algorithms for uncovering and explaining (the emergence of) inequality in the city based on panoramic street images. Explainable deep vision models will be combined with case-based learning models to predict traditional inequality scores, to detect and explain image elements predictive of inequality, and to prevent the emergence of urban spaces at increased risk of inequality.
The project focuses on machine learning algorithms to 1) extract and combine mobility patterns from various data sources, such as citizen data, household information, and other publicly accessible geodata such as public transport (OV) and taxi data, 2) uncover economic, social, and cultural drivers of spatial divisions, segregation, and inequality in mobility, and 3) recommend equality-enhancing and poverty-reducing mobility flows.
Assessing Errors and Supporting Fair AI in the Public Sector
This research develops practical tools and methods for error assessment to support the fair use of AI in the public sector, and is conducted in collaboration with the Dutch Ministry of the Interior.
Ethics and Law
Inclusive and rights-based norm design for AI systems
Striking the right balance between the development of artificial intelligence (AI) and algorithmic systems and the protection and promotion of human rights and freedoms is a key concern in the current debate. There is a significant gap between envisioning rights-based AI systems and putting them into practice. This research aims to address that gap.
This project plans to develop an opinion dynamics model to infer how link-recommendation algorithms might affect the ability of different groups to spread their opinions widely.
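A minimal opinion dynamics model of the DeGroot type illustrates the setting (the project's own model is not specified here): each agent repeatedly averages its opinion with its neighbors', so a link added by a recommender changes whose opinions reach whom and how fast they spread.

```python
def degroot_step(opinions, neighbors):
    """One synchronous DeGroot update: each agent moves to the average
    of its own opinion and its neighbors' opinions."""
    return {
        i: (op + sum(opinions[j] for j in neighbors[i]))
           / (1 + len(neighbors[i]))
        for i, op in opinions.items()
    }
```

Iterating this map with and without a recommended link, and comparing how far each group's initial opinion propagates, is the kind of comparison such a model enables.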
Many scalable, automated hiring systems are prone to harming and discriminating against certain groups. This project addresses the issue at its core: the synthesis of fair, unbiased training data.
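As a naive stand-in for fair training-data synthesis (the project's actual generative approach is not described here), one baseline is to oversample under-represented (group, label) cells until every cell is as large as the biggest one:

```python
import random

def rebalance(rows, key, seed=0):
    """Oversample with replacement so every cell (as defined by `key`,
    e.g. a (group, label) pair) matches the largest cell's size."""
    rng = random.Random(seed)
    cells = {}
    for r in rows:
        cells.setdefault(key(r), []).append(r)
    target = max(len(c) for c in cells.values())
    out = []
    for c in cells.values():
        out.extend(c)
        out.extend(rng.choices(c, k=target - len(c)))  # pad the cell
    return out
```

Real fair-data synthesis would generate new realistic records rather than duplicate existing ones, but the balancing objective is the same.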
The project seeks to improve fairness in the lottery-based matching system used for secondary school admission in Amsterdam by generating realistic synthetic data about Dutch students' school-choice preferences, which are essential for training the matching system.
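School-choice lotteries of this kind are commonly run with student-proposing deferred acceptance and random tie-breaking; assuming that mechanism (the blurb does not specify it), a compact sketch shows where the synthetic preference lists would plug in:

```python
import random

def deferred_acceptance(prefs, capacity, seed=0):
    """Student-proposing deferred acceptance with single-lottery
    tie-breaking: schools rank applicants by one random lottery
    number drawn per student (lower is better)."""
    rng = random.Random(seed)
    lottery = {s: rng.random() for s in prefs}
    next_choice = {s: 0 for s in prefs}   # index into each pref list
    held = {c: [] for c in capacity}      # tentatively accepted students
    free = list(prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(prefs[s]):
            continue                      # list exhausted: unmatched
        c = prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        held[c].sort(key=lambda t: lottery[t])
        if len(held[c]) > capacity[c]:
            free.append(held[c].pop())    # reject worst-ranked applicant
    return {s: c for c, studs in held.items() for s in studs}
```

The `prefs` argument is exactly the kind of data the project would synthesize: one ranked list of schools per simulated student.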
This project investigated to what extent environmental inequalities in Amsterdam are measurable from panoramic street imagery by replicating the study of Suel et al. (2019) and examining the transferability of their model to Amsterdam data. Transferability was studied both by applying the pre-trained weights from Suel et al. (2019) and by training their model solely on Amsterdam data. Performance was evaluated by comparing predicted values to official statistics.