Facial Recognition Crosses a Line with Mask Removal Features

In 2020, masks became a large part of Western culture and, for many people, the only way they felt safe venturing out in public. Major clothing companies started offering them, and people coordinated their masks to match their outfits.

However, face masks also present a problem for facial recognition software, blocking several facial characteristics the software would otherwise use to make an ID. Clearview AI, a company that creates facial recognition software aimed at law enforcement agencies and boasts a database of over 10 billion photos pulled from news media, mugshot websites, and social media profiles, says it has solved this problem.

In reality, mask removal and enhancement features on facial recognition software cross a line, and businesses should think twice before using them. Because so many of Clearview AI’s customers are law enforcement agencies, it’s likely that these new features will be used to make arrests.

Facial recognition’s mask removal

What Does Mask Removal Do?

Facial recognition uses artificial intelligence (AI) to analyze the geometry of a person’s face, including features like the distance between their eyes and the shape of their chin. The new mask removal tools essentially use other photos in the AI’s database to guess what a person might look like under their mask. The model takes data points from the part of the face it can see (the eyes, forehead, and possibly the ears), then attempts to match those to other images, using statistical patterns to infer the facial characteristics the mask is hiding.
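To make that concrete, here is a heavily simplified sketch of matching on visible features only. This is not Clearview's actual algorithm; the toy feature vectors, the split between "visible" and "hidden" dimensions, and the cosine-similarity matching are all illustrative assumptions.

```python
# Illustrative sketch only -- not Clearview's actual algorithm.
# Assumes each face is reduced to a feature vector; a mask hides
# some dimensions, and matching runs on the visible ones alone.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_visible_region(probe, database, visible_idx):
    """Compare only the unmasked feature dimensions (e.g. eyes,
    forehead) against every enrolled vector; return the best match."""
    probe_vis = [probe[i] for i in visible_idx]
    best_name, best_score = None, -1.0
    for name, enrolled in database.items():
        score = cosine_similarity(probe_vis, [enrolled[i] for i in visible_idx])
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score

# Toy 6-dimensional "faces": first 3 dims = eye/forehead region (visible),
# last 3 dims = nose/mouth/chin region (hidden by the mask).
database = {
    "alice": [0.9, 0.1, 0.4, 0.7, 0.2, 0.5],
    "bob":   [0.2, 0.8, 0.3, 0.1, 0.9, 0.6],
}
masked_probe = [0.88, 0.12, 0.41, 0.0, 0.0, 0.0]  # lower half unreadable
name, score = match_visible_region(masked_probe, database, visible_idx=[0, 1, 2])
print(name)  # "alice" -- matched on the upper-face dimensions alone
```

Note that the match is made with half the feature vector thrown away, which is exactly why the accuracy concerns discussed below matter.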

These features could be helpful for emotion recognition and advertising, as long as organizations use them with permission. For example, medical staff could determine a patient’s level of pain or discomfort in a waiting room without being able to see their whole face, allowing them to determine which patients they need to see first. However, many of Clearview’s clients seem to be law enforcement agencies.

What’s the Problem with Using Facial Recognition AI to “Remove” Masks?

A mask blocks roughly two-thirds of a person’s face, which means most of a facial recognition match made with these features would be strictly guesswork. We already know that facial recognition has some major issues with accuracy, especially when it comes to identifying women of color, so adding guesswork on top of that is just asking for trouble.

“I would expect accuracy to be quite bad, and even beyond accuracy, without careful control over the data set and training process, I would expect a plethora of unintended bias to creep in,” said MIT professor Aleksander Madry in an interview with Wired. Facial recognition models already don’t get enough training with people of color, so the likelihood of the model accurately identifying a non-white person with a mask on is extremely low.

Carlos Anchia, CEO of Plainsight, explains how this technology would work. “Attempting to apply the technology to facial feature prediction is fraught with complexity and potential for inaccuracy,” he says. “In one approach to automating a prediction of features hidden by masks, the model would first remove the mask in the image and then create a void. This void would need to have that portion of the face replaced with predicted facial features resulting from the matching images. In cases like this, confidence in the predictive (altered) image would likely be low and would require an enormous amount of data for each image/person.”
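The pipeline Anchia describes can be sketched in a few lines: cut the masked region out of the image, find the reference image whose visible region matches best, and fill the void with that reference's pixels as the "prediction." The function names, the row-based void, and the simple pixel-difference matching are all illustrative assumptions, not a real inpainting model.

```python
# Hedged sketch of the remove-mask / fill-void pipeline Anchia
# describes. Toy grayscale images are nested lists of ints; a real
# system would use a learned generative model, not pixel copying.

def visible_distance(img_a, img_b, void_rows):
    """Sum of absolute pixel differences outside the void region."""
    return sum(
        abs(a - b)
        for r, (row_a, row_b) in enumerate(zip(img_a, img_b))
        if r not in void_rows
        for a, b in zip(row_a, row_b)
    )

def inpaint_from_reference(masked_img, references, void_rows):
    """1) 'Remove the mask' by treating void_rows as unknown.
       2) Find the reference whose visible rows match best.
       3) Fill the void with that reference's pixels (the 'prediction')."""
    best_ref = min(references, key=lambda ref: visible_distance(masked_img, ref, void_rows))
    return [
        list(best_ref[r]) if r in void_rows else list(row)
        for r, row in enumerate(masked_img)
    ]

# Toy 4x4 "images": rows 2-3 are hidden by the mask (zeroed out).
masked = [[10, 20, 30, 40], [15, 25, 35, 45], [0, 0, 0, 0], [0, 0, 0, 0]]
refs = [
    [[10, 21, 29, 40], [15, 24, 36, 45], [50, 60, 70, 80], [55, 65, 75, 85]],
    [[90, 80, 70, 60], [85, 75, 65, 55], [40, 30, 20, 10], [35, 25, 15, 5]],
]
restored = inpaint_from_reference(masked, refs, void_rows={2, 3})
print(restored[2])  # lower half copied from the best-matching reference
```

The sketch makes Anchia's point visible: everything below the mask line comes from a *different* image, so confidence in the altered result is only as good as the match on the visible region.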


The Dangers of Increasing Facial Recognition Use

One of the issues with increasing facial recognition use is that many users, especially those in law enforcement, don’t seem to reckon with how inaccurate the technology is. And as the recent Facebook hearings showed, AI algorithms require human oversight for the best results, but understaffed organizations may not provide it, especially if it won’t help their bottom line.

“My intention with this technology is always to have it under human control. When AI gets it wrong it is checked by a person,” Clearview AI co-founder and CEO Hoan Ton-That told Wired. As great as that sounds, we know that organizations don’t always use technology exactly the way it was originally intended. After all, facial recognition isn’t the only problematic “science” law enforcement agencies use to catch criminals, so there’s no guarantee that they won’t use this incorrectly as well. 

Businesses Must Be Cautious About Using this Technology

While there’s an obvious demand for accurate facial recognition technology, businesses have to be careful about using it, especially in its current iteration. Anchia says, “With the new Clearview AI technology, the only data points that are common from image-to-image of individuals would be the exposed (unmasked) images. To perform at operational accuracy with a high degree of robustness, machine models often require additional data points to bolster the confidence in the predictions. In these cases, the large number of data points required to achieve high-accuracy prediction quality is not present.”

Facial recognition AI is, unfortunately, not accurate enough to base life-changing decisions on. Instead, businesses can use it for lower-stakes tasks, like improving their product lines or giving employees passwordless access to devices. Using facial recognition in these ways helps avoid some of the bias issues the technology brings with it, while still giving it a chance to improve its accuracy.
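A passwordless-unlock use case is lower-stakes partly because it is 1:1 verification ("is this the enrolled owner?") rather than a 1:N database search. A minimal sketch of that distinction, assuming hypothetical face embeddings and an arbitrary distance threshold:

```python
# Sketch of 1:1 face verification for device unlock -- the system
# only confirms a match against a single enrolled template instead
# of searching a large database. Embeddings and the 0.6 threshold
# are illustrative assumptions.
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def unlock_device(enrolled_embedding, live_embedding, threshold=0.6):
    """Grant access only if the live face is close enough to the
    enrolled template; otherwise the device falls back to a passcode."""
    return euclidean(enrolled_embedding, live_embedding) < threshold

owner = [0.12, 0.87, 0.45, 0.33]
same_person = [0.15, 0.84, 0.47, 0.30]   # small sensor/lighting variation
stranger = [0.90, 0.10, 0.05, 0.75]

print(unlock_device(owner, same_person))  # True
print(unlock_device(owner, stranger))     # False
```

A wrong answer here costs a retry or a passcode prompt, not an arrest, which is the difference the article is drawing.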

