
Facial recognition technology falsely identifies women more often than men; find out why.

Artificial Intelligence (AI) has yet to overcome human biases; instead, it often reproduces existing racial and sexist biases on a much larger scale. This article discusses how much more likely women are to be falsely identified by current facial recognition algorithms (FRA), why this keeps happening, and potential solutions.



Understanding Sexism in FR Tech and Why It Is A Problem

WatchGuard conducted research to analyze gender bias in Facial Recognition Technology (FRT) and found that women were misidentified 18% more often than men. The team analyzed two FR software packages: Amazon Rekognition and Dlib.


The research found that while Amazon’s software recognized white men with 99.06% accuracy, it recognized white women with 92.9% accuracy and Women of Colour (WOC) with only 68.9% accuracy.


What does this mean? According to WatchGuard Technologies, it “essentially means a female face not found in the database is more likely to provide a false match. Also, because of the lower similarity in female faces, our team was confident that we’d see more errors in identifying female faces over males if given enough images with faces”. So the more data collected, the more errors would occur in accurately identifying women.
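
To see why lower similarity between faces leads to more false matches, consider the toy sketch below. It uses made-up embedding vectors rather than any real FR software, and the numbers (dimensions, thresholds, "separation" values) are illustrative assumptions, not figures from the WatchGuard study. The point is simply that when a system separates different identities less cleanly, more pairs of different people clear the match threshold.

```python
import numpy as np

# Toy illustration only: random vectors stand in for face embeddings,
# and the dimensions, separation values and threshold are assumptions.
rng = np.random.default_rng(0)
DIM = 128

def identity_embeddings(n_identities, separation):
    """Simulated embeddings for n different people (unit vectors).
    A smaller 'separation' means the system places different identities
    closer together, i.e. it tells them apart less cleanly."""
    center = rng.normal(size=DIM)
    vecs = center + rng.normal(scale=separation, size=(n_identities, DIM))
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def false_match_rate(embeddings, threshold):
    """Fraction of different-identity pairs whose cosine similarity clears
    the match threshold, i.e. would be treated as the same person."""
    sims = embeddings @ embeddings.T          # cosine similarity (unit vectors)
    off_diagonal = ~np.eye(len(embeddings), dtype=bool)
    return float(np.mean(sims[off_diagonal] >= threshold))

THRESHOLD = 0.80
well_separated = identity_embeddings(2_000, separation=1.0)
poorly_separated = identity_embeddings(2_000, separation=0.6)

print("false match rate, well separated:  ", false_match_rate(well_separated, THRESHOLD))
print("false match rate, poorly separated:", false_match_rate(poorly_separated, THRESHOLD))
```

In this simulation, the poorly separated population produces a noticeably higher false match rate at the same threshold, which mirrors the reasoning in the quote above.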


While this may not be a significant problem when matching faces to tag people in Facebook or Instagram photos, it can cause real harm when the technology is used by government agencies and law enforcement.


There are disparities and inaccuracies not only in gender identification but also in racial identification. An article by FR company Facedapter, titled Racial Bias in FR Technology, details the real-life effects of this in modern-day society.


Further evidence of sexism in FRT comes from the University of Washington, whose research “revealed that a facial recognition tool was 68% more likely to predict that an image of a person cooking in the kitchen is that of a woman”.


Some might say that these results do not necessarily mean that FRT is sexist but are simply a reflection of society, and that is exactly correct. AI did not invent sexism or racism, but it collects and uses data that, if not corrected, could perpetuate already dangerous and discriminatory patterns, especially as the technology is rolled out to the public.


FRA today rely on what is called machine learning (ML). ML is “a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention”.


This training process requires significant amounts of data but, unfortunately, can introduce biases and perpetuate them in AI. Biases appear in the results when the training data does not accurately represent the demographics a FR scanner is meant to identify.


For example, deploying in Nigeria a FRA trained on data collected in Poland is bound to run into more than a few pitfalls: the ML model will not be able to accurately identify the local population.
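
As a rough illustration of that pitfall, the sketch below trains a toy classifier on synthetic data standing in for one population and tests it on another. None of this is real face data, and the population labels are only placeholders; it simply shows that a model which learns patterns from one group can drop to roughly chance-level accuracy on a group it never saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy illustration only: synthetic features stand in for face data, and the
# populations are placeholders, not real datasets from any country.
rng = np.random.default_rng(1)

def make_group(n, informative_dim):
    """Synthetic 'population': the signal separating the two classes lives in
    a different feature dimension per group, mimicking appearance cues the
    model never encountered during training."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, 5))
    X[:, informative_dim] += np.where(y == 1, 1.5, -1.5)
    return X, y

# Train only on data from one population (signal in dimension 0)
X_train, y_train = make_group(5_000, informative_dim=0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluate on the training population vs. a population the model never saw
X_same, y_same = make_group(2_000, informative_dim=0)
X_other, y_other = make_group(2_000, informative_dim=1)

print("accuracy on the training population:", model.score(X_same, y_same))
print("accuracy on the unseen population:  ", model.score(X_other, y_other))
```

The model scores well above 90% on the population it was trained on and close to a coin flip on the one it was not, which is the kind of gap a mismatched deployment can produce.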


Bearing that in mind, it is important that the public and FR service providers understand that until AI actively works to eliminate human biases, its results cannot be considered any more trustworthy than a human’s. So what is the solution?

The Solution

Contrary to popular belief, there are several solutions, the first being diversifying data. Data scientists at the MIT Media Lab have found that when they diversify the data used to train AI systems, the resulting models are more accurate and less discriminatory. This makes diversification an important starting point for reversing and preventing the perpetuation of biases rooted in misogyny and racism.
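
As a hypothetical example of what diversifying data can mean in practice, the sketch below rebalances a skewed training set so every demographic group contributes equally. The group labels and counts are invented, and oversampling existing images is only a crude stand-in for actually collecting more diverse data, but it shows the kind of step data scientists can take before training a model.

```python
import numpy as np

# Hypothetical skewed training set: 9,000 records from group "A" and only
# 1,000 from group "B". The labels and counts are invented for illustration.
rng = np.random.default_rng(2)
groups = np.array(["A"] * 9_000 + ["B"] * 1_000)
records = np.arange(len(groups))            # stand-ins for image records

def balance_by_group(records, groups):
    """Oversample under-represented groups so every group contributes the
    same number of training examples - one simple way to make the data a
    model learns from less lopsided."""
    target = max(np.sum(groups == g) for g in np.unique(groups))
    keep = []
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        keep.append(rng.choice(idx, size=target, replace=True))
    keep = np.concatenate(keep)
    rng.shuffle(keep)
    return records[keep], groups[keep]

balanced_records, balanced_groups = balance_by_group(records, groups)
for g in np.unique(balanced_groups):
    print(g, np.sum(balanced_groups == g))  # both groups now contribute 9,000
```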


A second solution is transparency. Companies that place profit over people are common, and such companies are less likely to be transparent with the public, which allows them to take advantage of its ignorance. Transparency, however, can also benefit FR service providers by holding them accountable for the promises they make to their users.


Startup FR company Facedapter is an example of a company looking to build digital trust one face at a time. “Our goal is to be the simplest, fastest and most cost-effective multimodal facial recognition software on the market; Being that is what enables us to, also, be the most demographically sensitive software on the market”.


A company that understands the “-isms” that heavily influence society (sexism, racism, etc) and is looking to eliminate these biases is a company that sees the value in integrity and equal treatment.


By working to become the most demographically sensitive software on the market, Facedapter aims to offer a FR system that eliminates inaccuracies in gender identification as well as racial discrimination.


This will make it the best option for public spaces, as well as government and law enforcement agencies.

