Coded Bias — All faces matter

Chamberlain Mbah
Nov 4, 2022


Is it because I am black? Being black and living in the West, you must have asked this question or at least thought about it. Well, AI enthusiasts will tell you there will come a time when humans will no longer make important decisions (e.g., who gets a loan, who gets a job, who gets paid more, who gets an organ donation).

They claim AI-powered machines will make these decisions for us. Phew, at least AI will not care about my skin colour. Is this true? Let’s take facial recognition as an example.

ILLUSTRATION: JACKY ALCINÉ AND TWITTER

Black programmer Jacky Alciné reported on Twitter that the then-new Google Photos app had tagged photos of him and a friend as gorillas. This is not the first time we have heard such stories. Have these AI algorithms improved since? Is anyone looking into it?

The US National Institute of Standards and Technology (NIST) found that even the best facial recognition technology (FRT) algorithms still displayed a higher false positive rate for West and East African individuals, while Eastern Europeans had the lowest false positive rate.

Putting it in simple terms, even the best FRT tended to misidentify photos of Africans, while at the other end of the spectrum, Caucasians were misidentified least often.
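To make these error rates concrete: a false positive happens when the system declares two photos of different people to be the same person. Below is a minimal sketch, with made-up scores and group names, of how one might compute the false positive rate per demographic group from a verification log.

```python
import pandas as pd

# Hypothetical verification log: one row per comparison of two photos.
# "same_person" is the ground truth; "predicted_match" is the FRT decision.
results = pd.DataFrame({
    "group": ["West African"] * 4 + ["Eastern European"] * 4,
    "same_person":     [True, False, False, False, True, False, False, False],
    "predicted_match": [True, True,  True,  False, True, False, False, False],
})

# False positive rate per group: among pairs of *different* people,
# how often does the system wrongly declare a match?
impostor_pairs = results[~results["same_person"]]
fpr_by_group = impostor_pairs.groupby("group")["predicted_match"].mean()
print(fpr_by_group)
```

NIST’s findings amount to exactly this kind of table, computed at scale: the false positive rate is not uniform across groups.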

In this blog post, I will discuss some of the potential biases that may be present within current facial recognition algorithms and ways that we can address them in the future.

What Is Bias?

Bias is the inclination or prejudice for or against one person or group, especially in a way considered to be unfair. It can also refer to a belief that a specific group of people is inferior in some way.

Where does the bias in FRT come from?

Many facial recognition algorithms use historical training data to determine how to identify faces in a particular image. This data may come from social media sites such as Facebook and LinkedIn, which automatically tag pictures of people’s friends and associates to help users quickly identify and connect with others.

These images are then used to train the software to recognize new images and individuals. However, these images do not necessarily reflect the demographics of the general population and often contain an overrepresentation of white people and an underrepresentation of people of color.

As a result, some facial recognition systems may be more likely to misidentify black people or people of other races. This data-imbalance phenomenon is already well known, but there are even more serious sources of bias.
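Before training, this kind of imbalance is easy to surface. Here is a minimal sketch, assuming hypothetical demographic metadata for each image in a face dataset:

```python
from collections import Counter

# Hypothetical metadata: one demographic label per training image.
labels = ["white"] * 700 + ["asian"] * 130 + ["black"] * 120 + ["other"] * 50

counts = Counter(labels)
total = sum(counts.values())

# Report each group's share of the dataset; a group far below its share
# of the target population is a warning sign before training even starts.
for group, n in counts.most_common():
    print(f"{group:>6}: {n:4d} images ({n / total:.1%} of the dataset)")
```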

Have you heard about the cross-race effect? It is the tendency for individuals to identify faces of their own race more accurately than faces of other races. In law enforcement, an analysis carried out in 2001 asked crime victims to identify the criminal from a line-up made up of individuals from another race. Only 46% of these cross-race identifications were correct, far below the performance of state-of-the-art FRT.

Here is why this information matters: AI algorithms in general learn from labelled data. That is, for AI to learn from data, a human (the “gold standard”) must first have labelled the data correctly. The AI then learns the patterns inherent in the data.

Face data labelled by Caucasians will be mostly correct for Caucasian faces but will contain significant errors for other races because of the cross-race effect. FRT cannot survive such labelling errors unscathed.
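The effect of such labelling errors is easy to demonstrate on synthetic data. The sketch below is a toy simulation (a generic scikit-learn classifier, not a real FRT pipeline): it flips a fraction of the training labels, mimicking annotator error, and shows how test accuracy degrades as the noise grows.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a binary verification task (not real face data).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for noise_rate in [0.0, 0.1, 0.3]:
    # Flip a fraction of the training labels, mimicking the labelling
    # errors that the cross-race effect introduces.
    flip = rng.random(len(y_train)) < noise_rate
    y_noisy = np.where(flip, 1 - y_train, y_train)

    # Train on noisy labels, evaluate on clean ones.
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    acc = model.score(X_test, y_test)
    print(f"label noise {noise_rate:.0%}: test accuracy {acc:.3f}")
```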

Researchers have also noticed that darker skin tones reflect less light and therefore provide less detail for facial recognition algorithms to analyse: call it the light-bouncing effect.

Now, even if the data is balanced, i.e., contains equal amounts of data from all races, the cross-race and light-bouncing effects will still cause FRT to be biased.

Why Does This Matter?

Apart from an FRT system labelling your face as a gorilla, there are other painful consequences of bias in AI as a whole. We might not notice bias in our everyday lives; however, it can have a significant impact on our social interactions, mental health, and overall well-being.

For example, research has linked experiences of racial discrimination to heightened anxiety and stress, and members of minority groups are more likely to be targeted by police officers during traffic stops.

As more companies use facial recognition software in security applications such as surveillance cameras and alarm systems, we may also be at risk of heightened surveillance and racial profiling by the police and other authorities.

Taken together, these findings suggest that automated facial recognition may have significant negative consequences for minority populations and may perpetuate existing racial disparities in our society.

What are policymakers/governments doing about it?

In 2019, San Francisco became the first US city to ban facial recognition technology (FRT), specifically prohibiting its use by police and other agencies. Since then, several other American cities have implemented similar FRT bans, with Boston’s city councillors explicitly highlighting one particular issue: the technology’s bias.

The EU Parliament’s concerns lean towards privacy rather than bias. In the EU, opponents of live facial recognition tech argue that such tools are favoured by authoritarian governments in places like Russia and China to track dissidents or vulnerable minorities, and are ultimately dangerous for civil liberties.

They also point to risks of racial profiling and invasion of privacy, which led large companies including IBM, Amazon and Microsoft to suspend the sale of facial recognition tools to governments.

What’s happening in the corporate world?

Moves to prohibit FRT have not come only from public officials but from the corporate world as well. In a widely circulated open letter in June 2020, IBM CEO Arvind Krishna outlined several proposals to promote racial justice, announcing that “IBM no longer offers general-purpose IBM facial recognition or analysis software”.

Emphasising his point, Krishna added: “Vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, and that such bias testing is audited and reported.”

How then do we proceed?

As with all of AI, collecting more diverse datasets can help ensure that face recognition algorithms function properly and identify individuals accurately regardless of their race.

Data audits (preferably external) can make sure the data is labelled correctly. And when applying filters, especially to faces, the same filter cannot simply be applied to both white and black faces. Simple measures like these can go a long way towards making FRT easy to adopt, a technology that people can trust.
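One concrete form such an audit could take (my illustration; the post does not prescribe a specific method) is measuring inter-annotator agreement per demographic group. The cross-race effect predicts that agreement will drop for groups the annotators are less familiar with, which flags exactly where the “gold standard” labels are weakest.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical audit: two independent annotators label the same images,
# broken down by the demographic group of the person pictured.
annotations = {
    "group_a": (["match", "match", "no_match", "match"],
                ["match", "match", "no_match", "match"]),
    "group_b": (["match", "no_match", "match", "no_match"],
                ["no_match", "no_match", "match", "match"]),
}

# Low agreement for one group is a red flag that labelling errors
# are concentrated there.
for group, (rater1, rater2) in annotations.items():
    kappa = cohen_kappa_score(rater1, rater2)
    print(f"{group}: Cohen's kappa = {kappa:.2f}")
```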

Researchers, for their part, have devised several strategies to reduce the impact of bias in facial recognition software. One technique involves using so-called “ensemble methods” to combine the results of multiple different algorithms into a single final score.
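As a minimal sketch of that idea, assume three hypothetical models that each emit a similarity score between 0 and 1 for the same pair of images; averaging the scores keeps any single model’s quirks from dominating the decision.

```python
import numpy as np

# Hypothetical similarity scores from three different FRT models
# for the same probe/gallery image pair.
scores = {"model_a": 0.62, "model_b": 0.81, "model_c": 0.74}

# A simple ensemble: average the scores, then apply a single threshold.
ensemble_score = np.mean(list(scores.values()))
THRESHOLD = 0.7

decision = "match" if ensemble_score >= THRESHOLD else "no match"
print(f"ensemble score = {ensemble_score:.2f} -> {decision}")
```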

Another method involves running the algorithm on “pre-computed face databases” instead of live video streams. These databases contain images of faces from diverse racial and ethnic backgrounds that have already been identified using traditional human classification systems and are therefore less likely to be biased.

Finally, some researchers are developing algorithms that explicitly take into consideration factors such as skin tone and gender in an attempt to reduce bias and improve the accuracy of their predictions.
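One concrete instantiation of that idea, offered purely as an illustration (the post does not name a specific method), is calibrating a separate decision threshold per group, each tuned on held-out data so that every group ends up with the same false positive rate.

```python
# Hypothetical per-group thresholds, assumed to have been tuned on a
# held-out set so that false positive rates are equal across groups.
THRESHOLDS = {"group_a": 0.68, "group_b": 0.74}

def verify(similarity: float, group: str) -> bool:
    """Declare a match only if the score clears the group's own threshold."""
    return similarity >= THRESHOLDS[group]

print(verify(0.70, "group_a"))  # True: clears group_a's threshold
print(verify(0.70, "group_b"))  # False: below group_b's threshold
```

Whether explicitly using demographic attributes in the decision rule is itself acceptable remains contested, which is partly why human oversight still matters.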

While these approaches may help reduce bias to some degree, they are unlikely to eliminate the need for human observers in the future. As a result, it is important for policymakers to address the biases inherent in current facial recognition systems by strengthening transparency measures and promoting more inclusive data collection practices.

Many people must come together to decide how to navigate this future: ethicists, technologists, politicians, and sociologists, to name just a few. Regulation is likely to be key, and we have already seen examples of bans being successfully implemented in public settings.

The ethics of facial recognition software is complex. It begins with the collection of datasets often without explicit consent from the people concerned. And it then ranges over many issues, from its use on vulnerable populations like the Uyghurs in China, to the dilemma of a technology with both good and bad uses.

Face recognition may be one of the first uses of AI to trouble us greatly. But it will not be the last. Ultimately, this is about the world we will invent. And all of society must be engaged in this debate.

What are your thoughts?

Written by Chamberlain Mbah

Dr. Chamberlain Mbah is a lead data scientist known for his expertise in big data, machine learning algorithms, and creating advanced AI applications.
