What do we do at Civic AI Lab: breaking down our research

Have you ever come across our page and been puzzled by technical jargon?

This blog was created especially for you! At Civic AI Lab, we place great emphasis on the explainability of our research. We believe that AI can empower people (and thus you!) by allowing you to be engaged in the processes of creating AI. That is why we asked our Ph.D. students to take a few steps back and explain their projects in this blog. Enjoy reading!

Mayesha Tasnim (AI & Education):

In Amsterdam, some high schools have more applicants than seats. For this reason, the school board uses an algorithm to decide which student goes to which school. They do this through lottery numbers: lucky students with a good lottery number go to a school they prefer, while unlucky students end up at a school they don’t like very much. For my thesis, I want to design an algorithm that lets more students into a school they like better, without having to increase the number of seats at the popular schools. First, I used the Hungarian algorithm, which minimizes a cost function based on the students’ preferences. The results show that around 300-400 more students could have been assigned to one of their top-3 schools. However, it is possible for a student to influence the outcome of this algorithm by designing their preference list strategically. So my current research focuses on designing a smart cost function that can detect common strategies and not give the strategizing student an advantage over others. The goal is for the allocation algorithm to make as many students as possible happier, while also being fair to all students.

For financial resource allocation, the city of Amsterdam uses an algorithm that decides whether or not a student should receive money based on their background. This algorithm carries the risk of amplifying existing biases in the data. For this problem, I plan to first study the historical impact of funding allocated by the existing algorithm. Next, I want to design an optimization approach that maximizes the beneficial impact of the funding allocation, while applying constraints to avoid unwanted biases using the Seldonian framework.
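To give a feel for the kind of problem the Hungarian algorithm solves, here is a minimal sketch using SciPy's implementation (`linear_sum_assignment`). The three students, three seats, and preference ranks are purely illustrative, not real admissions data: each matrix entry is the rank a student gives a seat (1 = first choice), and the algorithm finds the assignment with the lowest total rank.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative cost matrix: cost[i][j] = rank that student i gives seat j
# (1 = first choice). A lower total cost means more students end up
# at schools they prefer.
cost = np.array([
    [1, 2, 3],  # student 0 prefers seat 0
    [1, 3, 2],  # student 1 also prefers seat 0
    [2, 1, 3],  # student 2 prefers seat 1
])

# The Hungarian algorithm finds the assignment minimizing total cost.
rows, cols = linear_sum_assignment(cost)
total = cost[rows, cols].sum()

for student, seat in zip(rows, cols):
    print(f"student {student} -> seat {seat} (rank {cost[student, seat]})")
print("total cost:", total)  # -> 4: two first choices plus one second choice
```

Note how the conflict over seat 0 is resolved: only one of the two students who rank it first can get it, and the algorithm picks the overall arrangement that keeps the total preference cost lowest.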

Sara Altamirano (AI & Health):

Oftentimes, societal bias is reflected in data due to past inequitable decisions. When using these data to create algorithms, there is a risk of embedding bias and even magnifying unwanted outcomes. Moreover, if an algorithm’s prediction is unfairly influenced by characteristics such as age and gender, or if it makes more errors for different groups, then we could say it is unfair. The concept of counterfactuals aims to answer the question: what would have happened? Counterfactual fairness in AI allows us to ask questions about an alternate reality. For example: what would an algorithm have predicted had an individual been younger? Would the prediction be the same for all ages, given that all other characteristics remain unchanged? Our work is concerned with potential bias and discrimination introduced in infant health risk detection algorithms. We particularly focus on counterfactual fairness assessment methods that investigate prediction error discrepancies. In collaboration with domain experts, we will identify factors that explain prediction errors and design counterfactual models that describe correct and incorrect detection of infant health risks. These counterfactual fairness assessments have limitations that we will address with new methods for generating simulated data, i.e., synthetic data. By definition, health data should remain private and confidential, so we will investigate the use of privacy-preserving synthetic data to complement existing data with representative counterfactuals. The insights of this work will be used by the City of Amsterdam to inform end-users and developers of infant health risk detection systems on incorporating fairness assessments into their machine learning pipelines.
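The "what if this individual had been younger?" question can be illustrated with a deliberately naive probe: train a model, then change only the age feature and see whether predictions flip. The data below is synthetic and the setup is an assumption for illustration only; note that a proper counterfactual fairness analysis also needs a causal model of how changing age would change the other features, which this simple probe ignores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic, illustrative data: [age of parent, birth weight] -> risk label.
# By construction, the true risk depends only on birth weight, not age.
n = 1000
age = rng.uniform(20, 40, n)
weight = rng.normal(3300, 500, n)
X = np.column_stack([age, weight])
y = (weight < 3000).astype(int)

model = LogisticRegression().fit(X, y)

# Naive "alternate reality" probe: the same individuals, 10 years younger,
# with everything else unchanged.
X_cf = X.copy()
X_cf[:, 0] -= 10
changed = (model.predict(X) != model.predict(X_cf)).mean()
print(f"fraction of predictions that flip when only age changes: {changed:.2%}")
```

If the model has learned to rely on age even though the true outcome does not depend on it, this fraction will be noticeably above zero, which is exactly the kind of discrepancy a fairness assessment wants to surface.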

Dimitris Michailidis (AI & Mobility):

Mobility is the foundation of modern, fast-paced urban life. Being able to go from one place to another quickly and cheaply is vital for participating in labor, education, and social life. However, not everyone can enjoy these benefits, even in rich countries. For centuries, city planning has focused on utility and fast growth, ignoring accessibility and inclusivity, and leading to the phenomenon of mobility inequality. This disproportionately affects disadvantaged groups of people, who are constantly being pushed further away from central amenities, making their daily commute more expensive and time-consuming. Recently, Artificial Intelligence (AI) solutions have been proposed for several urban design problems, from creating new metro lines to deciding where to build workplaces. But these solutions rely on historical data, and therefore not only fail to address present inequalities but can even reinforce them. In this research, we argue that the power of AI can be used to address mobility inequality, by creating tools that design alternative urban systems by learning to compromise between utility and inclusivity. In doing so, we consider how people currently move around, as well as the changes those alternative systems might induce. Specifically, we use reinforcement learning to design fair transportation systems under different notions of inclusivity, and agent-based modeling to learn behavioral representations and simulate how different communities would adapt to different urban designs.
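One way to make "compromising between utility and inclusivity" concrete is through the reward signal such a system optimizes. The sketch below is an illustrative assumption, not the lab's actual reward function: the neighborhood names, accessibility scores, and the specific trade-off formula are all made up to show the idea of blending average accessibility (utility) with the accessibility of the worst-off group (a Rawlsian notion of inclusivity).

```python
import numpy as np

# Illustrative accessibility scores per neighborhood after a proposed
# transport-network change (e.g., fraction of jobs reachable in 30 minutes).
accessibility = {
    "center":   0.9,
    "suburb_a": 0.6,
    "suburb_b": 0.3,  # disadvantaged area
}

def reward(acc, alpha=0.5):
    """Trade off total utility against inclusivity.

    alpha = 0 -> pure utility (average accessibility)
    alpha = 1 -> pure inclusivity (accessibility of the worst-off group)
    """
    values = np.array(list(acc.values()))
    return (1 - alpha) * values.mean() + alpha * values.min()

for alpha in (0.0, 0.5, 1.0):
    print(f"alpha={alpha}: reward={reward(accessibility, alpha):.2f}")
```

A reinforcement learning agent trained with a higher `alpha` would be pushed toward network designs that improve `suburb_b`, even at some cost to the city-wide average, which is precisely the compromise described above.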

Ilse van der Linden (AI & Well-being):

Healthcare professionals and policymakers are interested in the pattern-finding capabilities of AI models. However, there is a difference between the limited instructions we can give a model and all the factors that might be relevant in a real-world application. For instance, when we instruct a model to predict a health outcome from data, we cannot assume it will only consider relevant and accurate information. Thus, the model will inevitably make some mistakes, which can lead us to misjudge the needs of certain patients. This means we often need a human decision-maker to interpret and weigh the information provided by the AI model. The challenge is that advanced AI models are opaque, meaning a human cannot directly interpret the model's reasoning. AI researchers have developed a range of methods to explain the predictions of an opaque model. But what is the meaning of the information that these methods provide? "To explain" means "to explain to someone." Therefore, many factors that make for a good explanation (e.g., usefulness, relevance, actionability) can only be assessed from a stakeholder perspective. In our research, we explore interdisciplinary collaborations on use cases that will benefit public well-being. Specifically, we examine how explainable AI can provide end-users, such as healthcare professionals or policymakers, with a means to verify their models and steer towards desired outcomes (such as a healthy lifestyle and well-being). Through user studies, we assess the usefulness of explainability methods in the context of the objectives of our public partners.
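As one example of the "range of methods to explain the predictions of an opaque model," the sketch below uses permutation importance from scikit-learn: shuffle one feature at a time and measure how much the model's accuracy drops. The data and feature names are invented for illustration; whether such scores are actually useful to a healthcare professional is exactly the kind of question a stakeholder study has to answer.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Illustrative data: two lifestyle features that drive the outcome,
# plus one pure-noise feature the model should learn to ignore.
n = 500
exercise = rng.normal(size=n)
diet = rng.normal(size=n)
noise = rng.normal(size=n)
X = np.column_stack([exercise, diet, noise])
y = (exercise + diet + 0.1 * rng.normal(size=n) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: shuffle each feature and record the accuracy drop.
# A model-agnostic way to ask "which inputs does the model rely on?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["exercise", "diet", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Here the meaningful features should receive clearly higher scores than the noise feature; when they do not, that discrepancy is itself a signal for the human decision-maker that the model may be relying on the wrong information.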

Mirthe Dankloff (AI & Public Governance):

Algorithms are increasingly being used by municipalities and other government bodies. Examples include predicting which citizens are at risk of poverty or detecting fraud. However, algorithms can make mistakes in their predictions, and this can happen more often for minority groups. The outcomes can be unfair when some citizens have a higher chance of receiving a loan or of being accused of fraud. Therefore, it is necessary that the people who use and design algorithms understand more about the uncertainty of algorithmic predictions. This research, done in collaboration with the Dutch Ministry of the Interior, explores how information about algorithmic predictions is communicated between people working with algorithms in the public sector. More specifically, we research how this information is communicated between partners with varying technical backgrounds, such as engineers and policymakers. In the first phase of this research, interviews are conducted with people who work on fraud detection, risk prevention, or resource allocation algorithms. The goal of the interviews is to map out who is (in)directly involved in the design of the algorithm and which decisions they find important. Questions will also be asked about their roles, responsibilities, perceived challenges, and communication gaps. In the second phase, the goal is to design tools that will help those with little technical knowledge, such as policymakers, program managers, and citizens, to make informed public policy decisions. The answers given in the interviews will serve as a reference. Examples of tools include simplified visualizations with accessible language to interpret uncertainty in algorithmic predictions, model cards for policymakers, or other forms of multi-stakeholder communication support.
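To show what "accessible language to interpret uncertainty" might look like in its simplest form, here is a toy sketch that turns a model's predicted probability and an uncertainty range into a plain-language sentence. The thresholds and wording are assumptions for illustration; real communication tools would be designed and validated with the stakeholders interviewed in the first phase.

```python
def describe_prediction(p, interval):
    """Turn a predicted probability and its uncertainty interval
    into a plain-language statement for non-technical readers.

    p        -- predicted probability (e.g., estimated risk)
    interval -- (low, high) range the model considers plausible
    """
    low, high = interval
    # Illustrative thresholds; a real tool would calibrate these with users.
    if p >= 0.7:
        level = "high"
    elif p >= 0.4:
        level = "moderate"
    else:
        level = "low"
    certainty = "fairly certain" if (high - low) < 0.2 else "quite uncertain"
    return (f"The model estimates a {level} risk ({p:.0%}), "
            f"and is {certainty} about this (range {low:.0%}-{high:.0%}).")

print(describe_prediction(0.75, (0.70, 0.80)))
print(describe_prediction(0.45, (0.20, 0.70)))
```

The second example is the important one: a moderate risk score with a wide uncertainty range reads very differently from a confident one, and surfacing that difference is what helps a policymaker decide how much weight to give the prediction.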

Vanja Skoric (Ethics & Law):

AI is becoming deeply rooted in our lives. Its benefits range from taking over tedious, repetitive tasks to analyzing larger quantities of data in a short period far more efficiently than we humans can. But we already know today that the use of AI has a real-life impact on people and society: for example, deciding where you work, whether you get social benefits or a mortgage, or whether you stay in prison. And what about the many institutions collecting vast amounts of personal data from people without their informed consent? Sometimes AI's impact is positive; other times it is harmful. How can we know in advance, before damage is done? How do we balance getting the best out of new technology with reducing or avoiding its negative effects? And who needs to be involved in these conversations? Without a doubt, there is a lot of catching up to do in protecting existing human rights and designing the most useful AI for all. This research will examine human rights protection in AI, to ensure that the development and use of AI, particularly for public use, is governed in a just and fair manner. It will also look at methods to assess the impact of AI systems on individuals and communities, including processes that encourage broad public participation.
