Do you speak my AI language?

By Vanja Skoric



Discussions of AI's impact on our rights and freedoms are often limited to abstract notions of fairness and non-discrimination. The full range of issues and concerns, grounded in the lived experiences of individuals and communities, is rarely mentioned. What we need is to anchor such discussions in coherent and robust processes and meanings. There is growing recognition that engineering alone cannot solve the key concerns surrounding technology's potential impact. We need to incorporate the knowledge of sociologists, philosophers, change makers, lawyers, and communities to arrive at solutions that are fair and equitable while complying with the law and avoiding harm. Emerging practices of AI assessments and audits are fostering an ecosystem for managing potential harms and benefits. But before identifying what these include and whom they involve, we need to understand what they mean and speak the same language.


Currently, a variety of terms and frameworks exist for processes that essentially provide organizational, policy, or technical instruments to prevent harm and enhance the benefits of AI. The difference between them seems to lie not so much in goal and scope as in the terminology and methods used. To date, there is no clear definition or scope for an algorithmic audit or assessment. We need an overarching point of reference to assess the impacts of AI and to guide its development. This cannot be done solely at a general level, or on the basis of principles, but must be practically embedded into all phases of development and deployment of an AI system.


Smaller AI developers and AI users reach out to a growing number of startups offering services to develop, monitor, and repair AI models, from bias-mitigation tools to explainability platforms. The responsibility issue is framed around AI risk-mitigation models or processes, which require an AI company to build internal impact assessment questions about its systems and offer a corresponding set of mitigation actions. Larger AI companies typically develop an in-house organizational setup to address responsibility, impact, and harm mitigation. Approaches vary: operationalizing responsible AI principles through a specialized department and internal policy; establishing responsible innovation teams to help anticipate and mitigate harm in the product development lifecycle; engaging external human rights impact assessments (HRIA) to review company processes and validate human rights risks; or forming teams dedicated to studying services and products for bias. Although they use different approaches and terminology, from risk management to responsible innovation to ethics assessments, companies are increasingly looking to address the potential harmful impact of AI. This goes hand in hand with the understanding that accountability should be attributed to AI-involved actors according to their role in the life cycle of the AI system.


Algorithmic auditing is a framework developed to guide the ethical assessment of an algorithm in a highly context-dependent process. It assesses an algorithm's negative impact on the rights and interests of stakeholders and identifies the situations and features that cause those impacts. Additionally, a practical algorithmic impact assessment can be used in public agencies. It recommends a self-assessment of existing and proposed AI systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities. Agencies should develop meaningful external review processes to discover, measure, or track impacts over time, as well as provide notice to the public. An assessment model for a Human Rights Impact Assessment (HRIA) provides a human rights management tool intended for AI development and design, with specific, measurable, and comparable evidence on potential impacts and their probability, extent, and severity. It can also facilitate comparison between alternative design options, based on risk assessment and mitigation. Similarly, the ECP AI Impact Assessment offers concrete steps to understand the relevant legal and ethical standards and considerations when making decisions on the use of AI. It also offers a framework for dialogue with stakeholders inside and outside the developer organization. The Algorithmic Audit Framework ensures that the people who use algorithmic decisions understand how they work and how to incorporate them in their decision-making process, in order to avoid situations where bias is reincorporated into the process. Finally, designing for human rights in AI, based on the methodologies of Value Sensitive Design and Participatory Design, helps translate human rights into context-dependent design requirements. These include four values - human dignity, freedom, equality, and solidarity - as top-level requirements that should guide the design process.
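To make the idea of a quantitative audit check concrete, here is a minimal sketch of one kind of measurement an algorithmic audit might run: comparing positive-outcome rates across stakeholder groups. The group names, sample decisions, and threshold are illustrative assumptions, not part of any framework named above.

```python
# Illustrative audit check: demographic parity gap between groups.
# All data and the threshold below are hypothetical examples.

def selection_rate(decisions):
    """Share of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in selection rates across any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 0, 1, 0],  # 60% positive outcomes
    "group_b": [0, 1, 0, 0, 0],  # 20% positive outcomes
}
gap = parity_gap(decisions)
print(f"parity gap: {gap:.2f}")  # prints "parity gap: 0.40"
```

A real audit would pair such a metric with the contextual analysis the frameworks above call for; a single number like this flags a disparity but cannot by itself explain or justify it.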


Along with research and business practices, international standard-setting and regulatory efforts are slowly catching up. The OECD developed a systematic risk management approach as a set of principles and recommendations to promote AI-powered development that is trustworthy and respects human-centered and democratic values. These include applying a risk management approach to the entire AI system lifecycle, covering privacy, digital security, safety, and bias. UNESCO is developing a Recommendation on the Ethics of AI that includes a call for conducting ethical impact assessments that embed human rights and fundamental freedoms. The framework should identify and assess the benefits, concerns, and risks of AI systems, as well as appropriate risk prevention, mitigation, and monitoring measures for human rights and fundamental freedoms, in particular the rights of marginalised and vulnerable people. The Council of Europe (CoE) is setting out a uniform model for a human rights, democracy and rule of law impact assessment (HRDRIA), attempting to define a methodology as well as an assessment based on relevant CoE standards. In a recent wide consultation exercise involving countries, the business community, and civil society, a significant majority indicated "Human rights, rule of law and democracy impact assessments" (81%), "Audits and intersectional audits" (70%), and mechanisms of "Certification and quality labelling" (51%) as the most appropriate mechanisms to efficiently protect human rights, democracy and the rule of law in AI applications. The European Union's draft AI Act would require AI providers to undergo a conformity assessment based on a set of essential requirements for high-risk AI systems.


When discussing the inclusion and emancipation of people and communities in the AI context, a key question arises: who exactly needs to be consulted, how, and at what stage of the AI lifecycle, to ensure harm is prevented and benefits are boosted? Apart from general calls for participation, and a few research models indicating pathways to address it, there is little practical guidance and there are few examples of clear methodologies, efforts, or co-creation practices. Recent discussions among AI developers at different conferences reveal a clear lack of guidance, and a disparity of terminology, on practical, meaningful, and comprehensive inclusion in the design and development of AI systems. Challenges include the danger of "participation washing" and co-opting groups or communities for box-ticking exercises; a lack of resources for stakeholders to engage meaningfully in robust processes; stakeholders' lack of motivation and technical knowledge to fully participate in impact assessments; and the unclear meaning of terms. Some research models suggest an integrated assessment, based on broader fieldwork, citizen engagement, and a co-design process, arguing it can evaluate the overall impact of an entire AI-based environment in a way that is closer to traditional HRIA models. The algorithmic auditing framework proposes a relevancy matrix between metrics and stakeholder interests, stating that understanding the context is crucial to enumerate stakeholder interests and to identify particular sub-categories of stakeholders facing particular harms who need to be involved. The "power shifting approach" proposes, as a counter-method to merely allowing participation or input, shifting the allocation of power toward the most marginalized members of society, giving them direct influence on policy outcomes. Designing for human rights in AI offers value scenarios: nuanced imagined scenarios involving the proposed technology and various direct and indirect stakeholders, which help uncover harmful and unusual uses, value tensions, and longer-term societal implications.
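The relevancy matrix mentioned above can be pictured as a simple cross-mapping from audit metrics to the stakeholder interests they speak to. The sketch below is a hypothetical illustration of that idea; the stakeholder groups, metric names, and weights are invented for the example and are not taken from any of the frameworks cited.

```python
# Hypothetical relevancy matrix: rows are audit metrics, columns are
# stakeholder groups, values are illustrative relevance weights (1-3).

relevancy = {
    "false_positive_rate": {"applicants": 3, "regulators": 2, "operators": 1},
    "explanation_quality": {"applicants": 2, "regulators": 3, "operators": 3},
    "data_coverage":       {"applicants": 1, "regulators": 2, "operators": 2},
}

def interests_for(metric, threshold=2):
    """Stakeholder groups for whom a metric is at least moderately relevant."""
    return sorted(g for g, w in relevancy[metric].items() if w >= threshold)

print(interests_for("false_positive_rate"))  # ['applicants', 'regulators']
```

A matrix like this makes explicit which sub-categories of stakeholders each metric concerns, so an audit team can check that every group facing a particular harm has at least one metric, and a seat in the process, attached to it.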


On the other hand, multilateral efforts to promote inclusion have been surprisingly scarce. UNESCO calls for ensuring the protection and promotion of diversity and inclusiveness throughout the life cycle of AI systems, by promoting the active participation of all individuals or groups without discrimination and the involvement of different stakeholders for inclusive AI governance. In the CoE consultation exercise, many respondents recommended the participatory inclusion of all stakeholders, especially groups underrepresented in public institutions and in AI policymaking, by establishing a platform to facilitate the sharing of good practices, the identification of trends in the development of AI, and the anticipation of ethical and legal issues. The European Union's draft AI Act, where many rights and freedoms are at stake, remains unclear on stakeholder representation and on whether it will be meaningful enough to engage affected communities. Much like the issue of assessing impact, the inclusion of stakeholders suffers from the lack of a common methodological and framework understanding that can serve as clear guidance and a reference point.


So what do we need to speak the common language? An overarching, mature approach to fully inclusive, highly consultative impact assessments, where public values, human rights, democracy, and security considerations are properly weighted and fully respected. We need to arrive at a clear definition of what independent and meaningful assessment means in the context of automated decision-making systems, algorithms, and AI. What do we audit for? What will make an audit successful, and what have we learned from existing frameworks in other industries? Can we challenge the assumptions baked into the technology and question whether it should exist in the first place?


The key is to include diverse forms of expertise, lived experience, and lessons learned, and to translate these into concrete descriptions of impacts. Such processes should not be seen as a burden or obligation on the side of developers, but as an integral part of the normal AI lifecycle and a chance to design effective, human-centric, high-quality technology in challenging contexts. They should be used as an opportunity to emancipate people and communities, especially disadvantaged ones, and to celebrate diversity. This is not only a matter of regulation, but a mix of social, institutional, and organizational processes and of technologies, in need of truly interdisciplinary practice.