Thought leadership

Addressing biases in AI: An ongoing process

February 16, 2023 · 4 min read
Introduction

In 2015, Jacky Alciné, a web developer in Brooklyn, noticed that Google Photos had introduced a new automatic tagging feature: his photos came up with tags like ‘bikes’ or ‘planes’ when those objects were present in the images. On coming across photographs of himself and a friend, Alciné was shocked to see that Google Photos had tagged them as “Gorillas”. Alciné and his friend are both African-American, and Google’s system had managed to label them with one of the most offensive racial epithets in existence.

This mislabelling of people based on their race is not something only Google is guilty of. Joy Buolamwini, a researcher and coder, repeatedly experienced this kind of discrimination directly from a machine. As an undergraduate at the Georgia Institute of Technology, she found that facial recognition systems would work on her white classmates but failed to recognise her face. She dismissed it as a flaw, and was sure it would be solved soon. However, she encountered the same bias a few years later at MIT’s Media Lab: the facial analysis software Buolamwini was using for her project again failed to detect her face, while it detected the faces of her colleagues, who had a lighter skin colour. Buolamwini had to complete her research wearing a white mask over her face in order for it to be detected by the software. She went on to write her MIT thesis on the topic of ‘Gender Shades’, in which she examined the facial recognition systems used by IBM, Amazon and Microsoft and documented the biases they promote.

These biases are a result of the data-sets that machine learning models are trained on. Many open-source data-sets consist heavily of Caucasian, male faces; algorithms developed on top of these data-sets inherit that imbalance, and their results for underrepresented groups are inconsistent, wrong, and often offensive. The models can accurately detect the faces of white people, but fail when they encounter faces of people from other races. This is an indirect result of workplaces dominated by white, male engineers who fail to see the problem with such data-sets.

Reducing the bias in classification

It is important to point out that we decided to review our data-sets and machine learning models because client conversations and internal team discussions revealed that our face analysis was underperforming for some races.

As a first step towards mitigating these biases, the data-sets that machine learning models are trained on should be more balanced and inclusive in their representation of different races. Compared to other publicly available data-sets, the one used for training models at Mobius Labs has a much more balanced racial representation.


Proportions of different races represented in various data-sets

If machine learning models are trained on data-sets like LFWA+, CelebA, COCO, IMDB-WIKI or VGG2, which consist of a significantly higher percentage of white faces, there is bound to be a heavy bias when these models are deployed on unseen images. The models fail on faces of other races; furthermore, when it comes to estimating the age of people in images, models trained on these data-sets perform fairly well on white faces but fail badly on faces of other races, simply because they were not trained with the appropriate data to begin with.

The training data-set used for our models at Mobius Labs contains roughly 10,000 images for each race, ensuring a more even representation and, consequently, models that are more accurate at detecting faces of varied races*.
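As a rough illustration of what ‘balanced’ means in practice, here is a minimal Python sketch (with made-up label names and counts, not our actual training pipeline) that reports the per-class sample counts and shares of a labelled data-set; a balanced data-set shows roughly equal shares for every race.

```python
from collections import Counter

def class_balance(labels):
    """Return {class: (count, share)} for a list of class labels."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: (n, n / total) for cls, n in counts.items()}

# Hypothetical labels: a balanced data-set has roughly equal shares per class.
labels = ["East Asian"] * 10000 + ["Southeast Asian"] * 10000 + ["White"] * 9500
for cls, (n, share) in class_balance(labels).items():
    print(f"{cls:16s} {n:6d} samples ({share:.1%})")
```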

The result

In order to evaluate the performance of our classification model, we use an unseen test data-set of around 10,000 images, roughly balanced across the seven races defined above (~1,500 samples per race)*. We run our classifier on the faces of the test set and compute what is called a confusion matrix, a simple yet very effective way of visualizing the mistakes (“confusions”) a classification model makes.



The vertical axis (y-axis) of the confusion matrix represents the true labels of the test data, and the horizontal axis (x-axis) shows what the trained classification model predicted. The value in each box is the number of samples that fall into each combination of true label and predicted label. If this is confusing, consider the following example: the first row represents all the samples with the true label ‘Southeast Asian’. Summing up the values of all boxes in the first row, we see that the test set contains 1415 faces with the true label ‘Southeast Asian’. Out of these, our model predicted 965 faces with the (correct) label ‘Southeast Asian’, and confused 60 ‘Southeast Asian’ faces as ‘Latino Hispanic’, 325 as ‘East Asian’, and so on. The ideal confusion matrix is one where there are non-zero numbers only on the diagonal (highlighted in a lighter colour in the figure), and zeros everywhere else.
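For readers who want to reproduce this kind of analysis, the sketch below shows how such a confusion matrix can be computed with scikit-learn; the labels and predictions are placeholders rather than our actual model output, and we follow the same convention as above (rows are true labels, columns are predictions).

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical subset of classes; our model uses the seven races referenced above.
classes = ["Southeast Asian", "Latino Hispanic", "East Asian", "White"]

# Placeholder ground-truth and predicted labels for a tiny test set.
y_true = ["Southeast Asian", "Southeast Asian", "East Asian", "White", "Latino Hispanic"]
y_pred = ["Southeast Asian", "East Asian",      "East Asian", "White", "Latino Hispanic"]

# Rows correspond to true labels, columns to predictions.
cm = confusion_matrix(y_true, y_pred, labels=classes)
print(cm)

# The diagonal divided by the row sums gives the per-class recall.
per_class_recall = np.diag(cm) / cm.sum(axis=1)
for cls, recall in zip(classes, per_class_recall):
    print(f"{cls:16s} recall: {recall:.2f}")
```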

Looking at the off-diagonal boxes thus allows us to see the most common mistakes the model makes. For example, 325 Southeast Asian faces were predicted by our model as East Asian. Similarly, 315 East Asian faces were predicted as Southeast Asian.
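Such confusion pairs can also be extracted programmatically by zeroing out the diagonal and sorting the remaining entries. The sketch below uses a small, partly made-up confusion matrix (only four of the classes, with the off-diagonal counts mentioned above and the rest filled in purely for illustration):

```python
import numpy as np

classes = ["Southeast Asian", "Latino Hispanic", "East Asian", "White"]

# Illustrative confusion matrix: rows are true labels, columns are predictions.
cm = np.array([
    [965,  60, 325,  10],
    [ 40, 900,  20,  55],
    [315,  15, 980,  12],
    [  5,  70,   8, 990],
])

def top_confusions(cm, classes, k=3):
    """Return the k largest off-diagonal entries as (true, predicted, count)."""
    off_diag = cm.copy()
    np.fill_diagonal(off_diag, 0)                      # ignore correct predictions
    order = np.argsort(off_diag, axis=None)[::-1][:k]  # largest counts first
    rows, cols = np.unravel_index(order, off_diag.shape)
    return [(classes[r], classes[c], int(off_diag[r, c])) for r, c in zip(rows, cols)]

for true_cls, pred_cls, count in top_confusions(cm, classes):
    print(f"{count:4d} '{true_cls}' faces predicted as '{pred_cls}'")
```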

While still far from perfect, we can see that many of the mistakes the model makes are understandable, and it is not uncommon for even the human annotators who create the ‘true labels’ to make mistakes. For example, below we visualize the 10 Southeast Asian faces that were predicted as ‘White’.


Out of these, perhaps half are actually mislabelled, and we can see that the model has more difficulty classifying race for faces that are turned sideways. Such insights can then be used either to further improve the model by providing more training samples of sideways faces, or to at least not predict the race for faces that are turned too far to the side.
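A minimal sketch of the second option, assuming a hypothetical `estimate_yaw` helper (any landmark-based head-pose estimator could fill this role) and a yaw threshold that would have to be tuned on validation data:

```python
YAW_THRESHOLD_DEG = 45.0  # hypothetical cut-off, to be tuned on a validation set

def classify_race_if_frontal(face_image, classifier, estimate_yaw):
    """Predict race only for roughly frontal faces; abstain otherwise."""
    yaw = abs(estimate_yaw(face_image))  # head rotation around the vertical axis, in degrees
    if yaw > YAW_THRESHOLD_DEG:
        return None  # face too far sideways: better to abstain than to guess
    return classifier(face_image)
```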


While our models are not yet perfect, we are constantly revisiting and re-evaluating them in order to make them as accurate as possible. This is an ongoing process with input and feedback not only from the scientists on the team, but also from members of the other departments at Mobius Labs. By taking different opinions and perspectives into account, we aim to create technology that is sensitive to any possible mislabelling of visual data. This circles back to one of the core values of our company: inclusivity, both in our team and in the technology we build.

*The reasons for training our models on these specific races are elaborated on in this paper.


References:

https://www.ajl.org/about

https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html

https://arxiv.org/pdf/1908.04913.pdf

Thompson, Clive. (2019) Coders: Who they are, what they think and how they are changing our world. Pan Macmillan.



Written by Dominic Ruefenacht | Edited by Trisha Mandal | Illustrated by Xana Ramos
