We develop AI technology that highlights the inequality of opportunity in society and that actively increases the prospect of equality of opportunity in education, well-being, environment, mobility and health. We also serve as an information point for residents and businesses who have questions about new AI technologies and the ethical and inclusive use of them.
This project focuses on the development and deployment of new tools for the fair distribution of educational finances. The tools rely on machine learning algorithms that allow professionals who are not experts in machine learning or statistics to specify unwanted machine behavior, such as unequal treatment of certain population groups by automatic decision systems.
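Such a specification can be kept small enough for a non-expert to state directly. A minimal sketch, assuming a demographic-parity-style criterion; the function names, the group labels, and the 0.1 tolerance are illustrative assumptions, not the project's actual tool:

```python
def selection_rate(decisions, groups, group):
    """Fraction of positive decisions received by one population group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def violates_parity(decisions, groups, max_gap=0.1):
    """Flag unequal treatment: the selection rates of any two groups
    may differ by at most `max_gap` (the professional's stated tolerance)."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates) > max_gap
```

The point of the design is that the professional only chooses the criterion and the tolerance; the learning algorithm is then responsible for staying within them.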
In this project we develop machine learning models for differentially fair classification and (causal) prediction tasks in healthcare that take into account social factors and fundamental human rights such as non-discrimination, equality, and privacy. Unlike most other fair machine learning models in healthcare, which focus on mathematical notions of fairness, the models developed in this work package build on intersectionality theory.
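One way to make the intersectional notion concrete is the differential fairness measure from the fairness literature (Foulds et al.), which bounds the ratio of positive-outcome rates between every pair of intersectional groups (e.g. gender × ethnicity). A minimal sketch; the smoothing constants and the function name are illustrative assumptions:

```python
import math
from collections import defaultdict

def differential_fairness_epsilon(outcomes, groups):
    """Smallest eps such that for every pair of intersectional groups g_i, g_j:
    exp(-eps) <= P(y=1 | g_i) / P(y=1 | g_j) <= exp(eps).
    Lower eps means more (intersectionally) equal treatment."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for y, g in zip(outcomes, groups):
        pos[g] += y
        tot[g] += 1
    # Laplace-smoothed rates avoid division by zero for small groups
    rates = [(pos[g] + 1) / (tot[g] + 2) for g in tot]
    return max(abs(math.log(a / b)) for a in rates for b in rates)
```

Because the bound ranges over all intersectional subgroups rather than one protected attribute at a time, it cannot hide disparities that only appear at the intersections.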
This project develops models whose predictive outcomes are explainable and adaptable at the same time. These so-called actionable explanations make it possible for end users, such as well-being professionals, to identify useful variables (such as income) that can be used to steer towards certain desired outcomes (such as weight reduction). We combine the appeal of counterfactual explanations with the properties of actionable features to generate actionable explanations.
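The idea can be sketched with a toy model: only features marked as actionable may be perturbed, and the search returns the changes needed to flip the prediction. The scoring weights, feature names, and greedy strategy below are all illustrative assumptions, not the project's method:

```python
def predict(x):
    """Toy scoring model standing in for the real well-being predictor:
    a positive score means the desired outcome (weights are illustrative)."""
    return 0.5 * x["exercise_hours"] + 0.3 * x["income_k"] - 0.2 * x["age"]

ACTIONABLE = {"exercise_hours", "income_k"}  # age is immutable, so excluded

def actionable_explanation(x, step=1.0, max_steps=50):
    """Greedy hill climb: repeatedly nudge the actionable feature with the
    strongest positive effect until the prediction flips; return the deltas."""
    weights = {"exercise_hours": 0.5, "income_k": 0.3}
    cf = dict(x)
    for _ in range(max_steps):
        if predict(cf) > 0:
            return {k: round(cf[k] - x[k], 6) for k in ACTIONABLE if cf[k] != x[k]}
        best = max(ACTIONABLE, key=weights.get)
        cf[best] += step
    return None  # no actionable counterfactual found within the budget
```

The returned deltas are the "actionable explanation": concrete changes to mutable variables, rather than a counterfactual that would ask the user to change their age.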
This project develops algorithms for uncovering and explaining (the emergence of) inequality in the city based on panoramic images of the city. Explainable deep vision models will be combined with case-based learning models to predict traditional inequality scores, to detect and explain image elements predictive of inequality, and to prevent the emergence of urban spaces with an increased risk of inequality.
The project focuses on machine learning algorithms that (1) extract and combine mobility patterns from various data sources such as citizen data, household information, and other publicly accessible geodata such as OV (public transport) and taxi data, (2) uncover economic, social, and cultural drivers of spatial divisions, segregation, and inequality in mobility, and (3) recommend equality-enhancing and poverty-reducing mobility flows.
This project investigated to what extent environmental inequalities in Amsterdam are measurable from panoramic street imagery by replicating the study of Suel et al. (2019) and studying the model's transferability to Amsterdam data. Transferability was assessed both by applying the pre-trained weights from Suel et al. (2019) and by training their model solely on Amsterdam data. Performance was evaluated by comparing the predicted values to official statistics.
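The evaluation step can be sketched as a simple agreement check between model outputs and official statistics per neighbourhood. The use of Pearson correlation here is an illustrative assumption, not necessarily the metric used in the study:

```python
import math

def pearson(predicted, official):
    """Pearson correlation between model predictions and the corresponding
    values from official statistics (e.g. per-neighbourhood income scores)."""
    n = len(predicted)
    mx, my = sum(predicted) / n, sum(official) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(predicted, official))
    sx = math.sqrt(sum((x - mx) ** 2 for x in predicted))
    sy = math.sqrt(sum((y - my) ** 2 for y in official))
    return cov / (sx * sy)
```

Running the same comparison for the pre-trained and the Amsterdam-trained variants gives a direct, if coarse, measure of how well the model transfers.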