Original article by Innovation Origins can be found here
Research has shown that diverse teams perform better. This is particularly true in technological innovation. After all, people with different backgrounds and views provide creative insights. Nevertheless, in the business world in 2020 there is still room for improvement in this area, especially when it comes to cultural-ethnic diversity. In a brief series, Innovation Origins searches for answers to the question of why this is a major social problem, and above all: How can we fix it?
Despite all the negative stories that surround AI, Sennay Ghebreab, a neuroinformatician at the University of Amsterdam and founder of the Civic AI Lab, firmly believes in the positive power of algorithms. However, it is important that the general public understands that Artificial Intelligence (AI) also serves their interests, and that algorithms are made representative of all groups within society.
Ghebreab came to the Netherlands as a refugee from Eritrea together with his parents when he was six. He was committed to fighting social injustice from an early age. Combating inequality of opportunity is the common thread running throughout his career, whether that concerns civil rights in general or something more specific such as discrimination and racial oppression. “Some people in society unfairly have more opportunities than others. I want to help increase equal opportunities for all citizens. Since I am in the technical corner and because my field of expertise is in AI, I happen to do this through the use of technology.”
Inequality of opportunity
One example of inequality of opportunity in the application of algorithms that Ghebreab mentions is access to healthcare in the US. “For many Americans, this is determined by an algorithm that calculates health risks. High-risk people are referred to a GP or hospital. But it turned out that black people – who were just as sick as white people – were assigned a lower health risk, and therefore had less access to healthcare. The algorithm happened to calculate health risks on the basis of healthcare costs incurred in the past. These costs were lower for the black population due to segregation and discrimination. As a result, they had less access to healthcare.”
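The mechanism Ghebreab describes can be illustrated with a small, purely synthetic sketch (this is not the actual US system): when past healthcare *cost* is used as a proxy for health *need*, and one group incurs lower costs at the same level of illness, that group is systematically ranked as lower risk. All numbers and group labels below are illustrative assumptions.

```python
# Synthetic illustration of proxy bias: ranking patients by past cost
# instead of true illness disadvantages the group with lower historical costs.
import random

random.seed(0)

patients = []
for _ in range(1000):
    illness = random.uniform(0, 10)        # true health need, same for both groups
    group = random.choice(["A", "B"])
    # Assumption for illustration: group B incurs lower costs for the same
    # illness (e.g. due to reduced access to care in the past).
    cost = illness * (1.0 if group == "A" else 0.6) + random.gauss(0, 0.5)
    patients.append((group, illness, cost))

# Refer the top 20% by "predicted risk", where risk is simply past cost.
threshold = sorted(p[2] for p in patients)[int(0.8 * len(patients))]
referred = [p for p in patients if p[2] >= threshold]

share_b = sum(1 for p in referred if p[0] == "B") / len(referred)
print(f"Share of group B among referrals: {share_b:.2f}")  # well below 0.5
```

Even though both groups are equally ill by construction, group B ends up strongly under-represented among referrals, which is the pattern the study of the US algorithm uncovered.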
Yet similar discriminatory practices also occur closer to home, such as the recent dramatic benefits affair at the Netherlands Tax and Customs Administration (‘Belastingdienst’). According to Ghebreab, these kinds of errors occur across the board, from the financial field to the legal field. “This is not so much because of the algorithms themselves, but because of their application. For example, when it comes to crime recidivism figures linked to certain population groups and recidivism predicted on the basis of historical data. If there is a bias contained in this, it means that the same bias is factored into policy decisions.”
Recognizing and preventing bias
This is also what the research at the Civic AI Lab focuses on, Ghebreab states. “By uncovering such things, you can go a step further and do something about this.” So, how can you prevent these biases? Ghebreab: “You can do that at all decision-making levels on the basis of how you collect new data. You can include specific ‘equity metrics’ that take into account all kinds of different aspects such as gender, age, and ethnicity. These sorts of safeguards are still not built into algorithms often enough. Another problem is that algorithms are still being developed without any oversight of the way in which they are developed or used. Whereas this is precisely what is needed to see whether errors are being made.”
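To make the idea of an ‘equity metric’ concrete, here is a minimal sketch of one common fairness measure: the gap in positive-decision rates between demographic groups (often called the demographic parity gap). The function names, data, and groups are illustrative assumptions, not a reference to any specific tool the Civic AI Lab uses.

```python
# Minimal sketch of an equity metric: the demographic parity gap, i.e. the
# largest difference in acceptance rate between any two groups.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, accepted) pairs -> acceptance rate per group."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest pairwise difference in selection rate across groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: 3 of 4 men accepted vs. 1 of 4 women.
decisions = [("men", True), ("men", True), ("men", False), ("men", True),
             ("women", True), ("women", False), ("women", False), ("women", False)]
print(selection_rates(decisions))  # {'men': 0.75, 'women': 0.25}
print(parity_gap(decisions))       # 0.5
```

Monitoring a metric like this during data collection and model evaluation is one way to make the kind of oversight Ghebreab calls for measurable rather than anecdotal.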
What also seems to be going wrong, according to Ghebreab, is that nowadays some sections of society do have access to the knowledge of how algorithms work while others do not. “This creates a level of digital inequality that, if you do nothing about it, will only increase in the future. You can compare it to reading and writing: In a digital world, you need knowledge of AI in order to be able to participate.”
Consequently, in Ghebreab’s view, it is up to the government to assume its responsibility in this respect, by investing in attention to AI as part of the basic education curriculum. “Consider, for example, the subject of ‘citizenship and digital literacy.’ Within Europe, Finland is leading the way here. A national AI course has been in place there since 2015, which everyone can take part in. Meanwhile, the Netherlands also has similar initiatives. But for the time being, these have been set up by the science community and a few companies such as TechLeap.”
“We are currently at a crossroads when it comes to the utilization of AI,” Ghebreab explains. “On the one hand, there is a top-down approach whereby the government wants something from you but citizens themselves are not contributing to it, and where major technology companies are looking to extract as much data as possible so that they can then use that data to make as much profit as possible. On the other hand, we are dealing with a very positive, bottom-up trend where citizens themselves are benefiting from the opportunities offered by AI. The latter is all about ownership where algorithmic thinking is concerned.”
This is also the idea behind the Civic AI Lab, with which Ghebreab hopes to turn the doom and gloom mentality around AI into a ‘do and dare’ mentality. “I really believe in ‘AI for all’. But in order for minority groups, who are not currently represented in algorithms, to have a place within AI, they must also take on some personal responsibility themselves. Because it is only when you participate that you can also make sure that algorithms are used in your best interests.”