by Katerina Makrogamvraki
Timnit Gebru is a prominent researcher whose work focuses on highlighting biases and ethical risks in AI. She is also a firm advocate of equity, and she has spoken about her experiences of facing sexism and racism in academia and industry. In an ecosystem where AI development is dominated by Big Tech companies focused on their profits, Timnit is a leading advocate of giving ownership of AI developments back to the people.
Changing the narrative of AI research
After being fired from Google for raising concerns about how Big Tech companies, in their race to grow, do not consider the kinds of biases being built into the AI systems they design, she decided to create her own research institute: DAIR. The Distributed Artificial Intelligence Research Institute is an institute where employees do research from wherever they choose and are paid equally. For DAIR, fighting brain drain and ensuring the involvement of communities in the development of AI solutions is vital.
For Timnit and the institute, it is important to clarify that AI is not always the solution. Face surveillance, for example, is something that should simply be removed. So even when they work with communities to solve a problem, they do not assume from the start that the answer is an AI solution. But where AI can serve as a tool that mitigates or fixes a problem, DAIR aims to promote AI development born from a healthy research culture, where people are not forced to work until they reach their limits and burn out. DAIR also has clear principles that help them evaluate their work and its impact on society.
In a world where AI development and research is mostly funded by large tech companies, tech billionaires, and the military, raising funding for an institute like DAIR is not easy. In Civic AI Lab, we see that to make change we need to rethink our perspectives and incentive structures, a value that DAIR shares. DAIR has raised funds from the Ford Foundation, the MacArthur Foundation, the Kapor Center, and the Open Society Foundation, and is looking into further funding sources that will keep it independent.
Giving ownership back
AI is part of our lives: we constantly interact with systems such as digital assistants and social media algorithms. The Big Tech companies developing these algorithms affect the world, but the world does not have the same power or opportunity to affect them. With a distributed team, Timnit tries to give different voices from every part of the world a chance to develop AI that empowers their communities.
DAIR wants to focus on research methods that empower communities rather than exploit them. To illustrate her support for this non-exploitative, non-traditional approach to research, Timnit elaborated on some examples in our conversation. We discussed how the Māori, by collecting speech data themselves, created speech recognition technology that helped them in their quest to keep their language from dying out. She also pointed out that, in her view, the community made the right decision when they rejected an American company's offer to buy the data.
She supported the idea that the only people who should profit from the Māori language are the Māori people themselves. Another example Timnit gave of AI benefiting society is research in Uganda on detecting diseases on cassava leaves. Cassava is an important source of nutrition in Africa as well as a stable income stream for (small) farmers, as it can withstand harsh conditions. Yet viral diseases are the major cause of poor cassava yields. By developing AI, this research was able to turn knowledge of the diseases into an app that farmers can use to diagnose their harvest with their smartphones.
We should not be indifferent to the fact that large tech companies hold the research agenda in their hands. As Timnit highlighted, “we have never been able to create something equitable from a position of privilege, we haven’t even achieved a way to handle equitable basic resources like water”. It is time to give grassroots initiatives a chance. Looking for bottom-up approaches that include communities, and especially marginalized groups, in the process of designing technology can lead to a more equitable society. This is also what Civic AI Lab tries to achieve.
Sharing academic experiences and tips
AI research should benefit everyone, especially those who are hard to reach, at the margins of society. This requires reframing how we design and develop AI technology: looking at the current ways of viewing data, methods, and outcomes from another perspective and putting society at the centre of our research. This change of perspective is also vital for the academic world. The issue is that even when projects hold significant interest and have a positive influence on a community, like the cassava project, the academic world does not recognize them as novel. That means scientists sometimes have to actively choose between research that could potentially lead to a tenured position and projects that hold more value for the community.
In line with Timnit's vision and DAIR's mission, Civic AI Lab develops AI technology to address inequality in various domain-specific and domain-overarching projects: from mobility, health, wellbeing, environment, and education to ethics, law, and public governance. During our conversation we dived into questions and dilemmas that our researchers face.
We converged on the importance for the academic world to re-assign value to different components of research. We discussed specific use cases and technologies, and doing science in general: from the current limited availability of venues where someone can publish their work and the lack of interdisciplinarity, to the way we review publications and value academic work. For example, there are cases where graduate students need to collect data for their project, a time-consuming effort, yet their work is viewed as not academic.
It is hard to feel the pressure to publish, because it can lead you to choose topics that do not reflect your interests but are easier to get accepted. This is a struggle that Timnit, like many researchers, has gone through. As she shared with the students, it is important to feel confident enough to follow your own path, even if it can feel “lonely”. It is important for students to “ease the pressure on themselves” and take one step at a time. Research can be a rollercoaster, and unfortunately a lot of people do not talk about it. Graduate students should be able to maintain their well-being during this period.
Our students had the opportunity to discuss their work with her and get feedback. Timnit took time to discuss each student's case study, offering everything from ideas for articles relevant to their work to tips on how to face the difficult steps of a graduate program.
Looking at the future
In our meet-up we exchanged experiences regarding academia, the development of AI, and our common goal to involve and empower communities in and through technology. The AI field is changing, and people like Timnit are at the forefront of this change.
This change can start within academia, where we need more interdisciplinary conferences and collaborations, and extend to taking ownership back from Big Tech companies by involving communities. To democratise AI we need to work together and give space to different disciplines and people from different walks of life to provide their view on AI design.
As our scientific director Sennay Ghebreab pointed out, “authenticity, integrity to yourself and to the environment, and dignity are the values we need to bring into AI. By doing so we will be able to create AI for people, with people, and in the end by people”. This is our core mission within Civic AI Lab and something that Timnit values and fights for as well. We are looking forward to future conversations and collaborations!