
How Can AI Help Tackle Health Inequality?

Updated: Nov 25, 2021

by Sara Altamirano

photo by Pexels


Artificial Intelligence (AI) provides powerful tools to help tackle societal challenges, particularly when abundant historical data is at hand. The healthcare industry is a unique use case, since electronic health records (EHRs) are typically retained for many years. In the Netherlands, medical records must be kept for 20 years, which represents an exciting opportunity to explore the data, uncover insights, and design tailored solutions that help patients. The possibilities sound promising; however, there are several caveats regarding the validity of such solutions, as there are concerns that AI-assisted systems may behave unfairly across populations.


What is the problem?


Fairness in AI is essentially the concern that algorithms may exhibit unfair or discriminatory behavior, typically because biases inherent in data can lead machine learning algorithms to discriminate against certain populations along the lines of gender, race, disability status, religion, ethnicity, and so on. Unfortunately, societal bias and discrimination are reflected in historical datasets due to inequitable decisions previously made by humans. By training AI models on these datasets to simulate human decision-making, we risk unintentionally embedding human biases and perpetuating existing inequalities in healthcare. Moreover, we risk propagating and magnifying unwanted outcomes in the form of algorithmic discrimination. A clear example is algorithms used to detect skin cancer. One quick Google Images search will show that most, if not all, pictures of melanoma are on light skin. If the datasets used to train melanoma-detection algorithms overwhelmingly represent lighter skin tones, how could we responsibly use them to accurately diagnose skin cancer across all skin tones?


What are the challenges?


Decision-makers in the clinical setting should have a sufficient understanding of the recommendations made by algorithms, including potential sources of bias. Therefore, algorithms that provide a reasonable explanation of their outcomes are an important part of the puzzle.


The societal cost of inaction is incalculable. For instance, by not including representative samples of patients from ethnic minorities, the repercussions could range from misdiagnosis to significantly widening the advantage gap. In another example, wrongly labeling a person as being at low risk of skin cancer has enormous consequences for the individual and the group they represent. The considerable inconsistencies in the accuracy of classification and prediction models in healthcare need urgent, close inspection if we are to develop truly fair, reliable, consistent, transparent, and accountable AI health support systems.


Ultimately, improvements in AI healthcare should positively affect all patients, regardless of their background. We could go even further and offer personalized solutions that account for patients' backgrounds and diversity.


How can AI help overcome these challenges?


The AI community is actively exploring and designing approaches for mitigating persistent sources of unfairness. Most of this work can be summarized as follows: a fairness metric is used to test whether an algorithm is behaving unfairly or making more errors for certain groups; if so, mitigation strategies are embedded into the redesigned algorithm. A minimal sketch of such a check is shown below.
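As an illustration of what such a fairness check could look like in practice, the sketch below compares false negative rates (missed diagnoses) across groups of a single protected attribute. The data, column names, and the 0.05 tolerance are hypothetical assumptions for the example, not taken from any system discussed in this article.

```python
# Minimal sketch: comparing error rates across groups before deciding whether
# mitigation is needed. Data, column names, and the 0.05 tolerance are illustrative.
import pandas as pd

def false_negative_rate(y_true, y_pred):
    """Share of truly positive cases the model missed."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return ((y_pred == 0) & positives).sum() / positives.sum()

def fnr_by_group(df, group_col, label_col="label", pred_col="prediction"):
    """False negative rate per group of a protected attribute."""
    return {
        group: false_negative_rate(sub[label_col], sub[pred_col])
        for group, sub in df.groupby(group_col)
    }

# Hypothetical evaluation set with model predictions attached.
results = pd.DataFrame({
    "skin_tone":  ["light", "light", "dark", "dark", "dark", "light"],
    "label":      [1, 0, 1, 1, 0, 1],   # 1 = melanoma present
    "prediction": [1, 0, 0, 0, 0, 1],   # model output
})

rates = fnr_by_group(results, "skin_tone")
print(rates)

# A large gap in missed diagnoses between groups signals that mitigation
# (e.g., re-weighting or collecting more representative data) is warranted.
gap = max(rates.values()) - min(rates.values())
print("FNR gap:", gap, "-> investigate" if gap > 0.05 else "-> within tolerance")
```

In this toy example the model misses melanoma far more often for one skin-tone group than the other, which is exactly the kind of disparity a fairness audit is meant to surface before deployment.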


One example of a fairness criterion is Differential Fairness. From a wider perspective, there is a need for an interdisciplinary approach to AI fairness, which has become more evident as AI pervades many societal issues, from civil rights to the environment to health and everything in between. With Differential Fairness, we can address the specific challenges of fairness in AI that are motivated by Intersectionality. Intersectionality emphasizes that systems of oppression built into society lead to systematic disadvantages along intersecting dimensions, such as race and gender.


Previous fairness mechanisms protect minorities along a single dimension (e.g., using ‘gender’ as the sole protected attribute), downplaying the concerns of multi-attribute minorities. Intersectionality, by contrast, takes people's overlapping identities into account in order to understand the compounding prejudices they face.


In essence, an intersectional definition of fairness would have multiple protected attributes, like gender and race, and it should consider all of the intersecting values of the attributes we are aiming to protect. The systematic differences that intersectionality attributes to structural oppression should be rectified by the algorithm rather than codified in it. We would especially aim to protect minority groups, because these are often the groups subjected to the most oppression, disadvantage, and marginalization in society. In summary, regardless of the combination of protected attributes, the probabilities of the outcomes should be similar across the entire population, ensuring equal treatment and providing a degree of protection to minority groups, thus counteracting bias. The sketch below illustrates this idea.
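To make this concrete, the sketch below checks a differential-fairness-style criterion over intersectional groups formed by gender and ethnicity: the (smoothed) probability of a positive outcome is computed per group, and the largest log-ratio between any two groups is compared against a tolerance ε. The dataset, the smoothing constant, and ε = 0.2 are illustrative assumptions, not a definitive implementation of the criterion.

```python
# Sketch of an epsilon-differential-fairness-style check over intersectional groups.
# Data, smoothing, and the epsilon threshold are illustrative assumptions.
import math
from itertools import combinations
import pandas as pd

def positive_rate(sub, pred_col="prediction", smoothing=0.5):
    """Smoothed probability of a positive outcome for one intersectional group."""
    return (sub[pred_col].sum() + smoothing) / (len(sub) + 2 * smoothing)

def empirical_epsilon(df, protected_cols, pred_col="prediction"):
    """Smallest epsilon such that, for every pair of intersectional groups,
    exp(-eps) <= P(y=1 | group_i) / P(y=1 | group_j) <= exp(eps)."""
    rates = {
        group: positive_rate(sub, pred_col)
        for group, sub in df.groupby(protected_cols)
    }
    worst = 0.0
    for (g1, p1), (g2, p2) in combinations(rates.items(), 2):
        worst = max(worst, abs(math.log(p1) - math.log(p2)))
    return worst, rates

# Hypothetical predictions for intersecting gender and ethnicity groups.
data = pd.DataFrame({
    "gender":     ["f", "f", "m", "m", "f", "m", "f", "m"],
    "ethnicity":  ["a", "b", "a", "b", "a", "a", "b", "b"],
    "prediction": [1, 0, 1, 1, 1, 1, 0, 1],
})

eps, rates = empirical_epsilon(data, ["gender", "ethnicity"])
print(rates)
print("empirical epsilon:", round(eps, 3), "| fair at eps = 0.2:", eps <= 0.2)
```

The key point is that every intersection (e.g., women of ethnicity ‘b’) is compared against every other, so a group that looks fine along a single dimension cannot hide a disadvantage that only appears at the intersection.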


Conclusion


All things considered, while there is no one-size-fits-all solution to algorithmic unfairness, we believe the aforementioned challenges can be lessened by raising awareness, increasing accountability, promoting regulation, and democratizing AI, since we have come to expect AI systems to mirror current societal values.






