Have you ever watched Terminator 2: Judgment Day? In the 1991 James Cameron classic, a killer robot is sent back in time to kill John Connor, the future leader of the human resistance against world-dominating machines.

I always thought that one of the coolest (and scariest) traits of the Terminator was facial recognition. The robot was able to identify its target from miles away, with 100% accuracy.

Today we don’t have killer robots, but we do have face recognition that is more accurate than humans. What we don’t have is an idea of what to do with it.

In this article, I want to explain to you how face recognition works, why Facebook banned it, and how we should look at these technologies going forward.

The tech

If I showed you a picture of your mother’s face, you’d probably recognize her. How do you do that, though? You can probably describe the color of her eyes and hair, but there are a few million people with the same colors. So what is it that allows you to recognize her? Is it the shape of her nose? What is that shape, exactly? How is it different from the shape of my nose?

Recognizing people is one of the most important skills for a human, yet we can’t really explain how we do it.

AI has a pretty clever way of doing it though. Face recognition algorithms are trained to transform faces into lists of numbers, so that two pictures of the same person are turned into similar numbers.

This list of numbers is like a fingerprint of my face. It represents some abstract concepts like the shape of my nose or the width of my forehead, and is unique to me. The next image is from my course AI expert, and it shows a simplified version of this tech where my face is turned into just 3 numbers [2, 8, 5] (in real-world applications, these would be anything from hundreds to thousands of numbers).

[Image: my face turned into the three numbers [2, 8, 5]]

When a face recognition algorithm sees any other face, it can also turn that face into a list of numbers. If these numbers are very similar to the ones generated from my face, it means that that person is likely to be me. If they’re different, it’s someone else.
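To make this concrete, here is a minimal sketch of that comparison in plain Python. The 3-number “fingerprints” and the distance threshold are made up for illustration; a real system would use embeddings with hundreds of dimensions and a threshold tuned on data:

```python
import math

def euclidean_distance(a, b):
    """Distance between two face 'fingerprints' (lists of numbers)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Toy 3-number fingerprints, like the [2, 8, 5] example above.
me_photo_1 = [2, 8, 5]
me_photo_2 = [2, 7, 5]    # another picture of me: the numbers are close
someone_else = [9, 1, 3]  # a different person: the numbers are far apart

THRESHOLD = 2.0  # arbitrary cutoff, chosen just for this toy example

def same_person(a, b, threshold=THRESHOLD):
    """Two faces 'match' if their fingerprints are close enough."""
    return euclidean_distance(a, b) < threshold

print(same_person(me_photo_1, me_photo_2))    # small distance -> True
print(same_person(me_photo_1, someone_else))  # large distance -> False
```

The whole trick of face recognition is in producing good fingerprints; once you have them, the matching itself is just a distance check like this one.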

In this toy example from AI expert, you can see how an algorithm would turn Brad Pitt’s face into a list of numbers that is vastly different from mine, suggesting that we (unfortunately) look very different.

[Image: Brad Pitt’s face turned into a very different list of numbers]

This is easier than you think…

Let’s assume you want to build a face recognition algorithm from scratch. This is what you’d have to do:

  1. Get the data. Specifically, you need pictures of people’s faces, including at least two different pictures of the same person.
  2. Develop an algorithm.
  3. Train the algorithm (this requires lots of computing power).
  4. Run the algorithm on a database that matches each person’s face to their name. This allows you to save the “face fingerprint” of each person. It doesn’t take much computing power (training algorithms is expensive; using them isn’t).
  5. From now on, you can recognize all the people in the database.
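Steps 4 and 5 can be sketched in a few lines of Python. This is a toy illustration, not a real implementation: the short vectors stand in for embeddings that a trained model would produce, and the names and threshold are invented:

```python
import math

def distance(a, b):
    """Euclidean distance between two face fingerprints."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Step 4: a database mapping each person's name to their "face fingerprint".
# In a real system, each vector would come from running a trained model
# on a labeled photo of that person.
database = {
    "Alice": [2.0, 8.0, 5.0],
    "Bob":   [9.0, 1.0, 3.0],
}

# Step 5: recognize a new face by finding the closest stored fingerprint.
def recognize(fingerprint, db, threshold=2.0):
    name, best = min(db.items(), key=lambda item: distance(fingerprint, item[1]))
    return name if distance(fingerprint, best) < threshold else "unknown"

print(recognize([2.1, 7.8, 5.0], database))  # close to Alice's fingerprint
print(recognize([5.0, 5.0, 5.0], database))  # far from everyone
```

Note that the expensive part (training, step 3) happens once; adding a new person to the database or recognizing a new photo is just one cheap model pass plus a distance lookup.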

It sounds complicated, right? Today, it’s not.

  1. There are tons of open-source datasets. An example is “Labeled Faces in the Wild” (13,000 faces).
  2. Most algorithms are open source. An example is FaceNet, which reaches 99.63% accuracy on the dataset above (humans reach 97.53%).
  3. You don’t really need to train algorithms yourself. There are pre-trained models you can use for free (example).
  4. You can build a database of name-picture pairings pretty easily. Want to recognize your friends? Get their picture from Facebook.

As you can see, face recognition isn’t really a special asset that a few tech companies have. Anyone with basic AI knowledge can use it. The real question is, what should we do with it?

The news of the week: Facebook’s ban

Two days ago (November 2, 2021), Facebook (Meta) announced:

We’re shutting down the Face Recognition system on Facebook. People who’ve opted in will no longer be automatically recognized in photos and videos and we will delete more than a billion people’s individual facial recognition templates.

Going back to the steps above, Facebook basically deleted #4 (the database of people’s numerical face “fingerprints”, which they call “templates”) and promised not to do #5 (recognizing people).

I’ve read people complaining that Facebook (Meta, I still have to get used to that…) didn’t pledge to delete the models used to create these templates (step #3).

If the previous section made one thing clear, it’s that this would have been completely pointless.

All the elements you need to build a face recognition system are open source. The genie is out of the bottle: if Facebook deleted its models, it could just download another one from the internet. And you can’t really delete stuff from the internet…

The only things that stop Facebook (or anyone else) are regulation and ethics. Let’s talk about those, then.

What should we do with Face Recognition?

I think there are two questions that everyone in tech should ask:

  • Why are we doing what we do?
  • How are we doing it?

Let’s talk about the first. Some face recognition applications are simply unacceptable: examples are border control or discriminating against groups of people. These are applications I call “unethical by design”: there’s simply no way of getting something like this right, because its very reason to exist is unethical. Just don’t build them.

The problem is that there will always be evil people around. This is why we need regulation. The EU has made a first attempt with the AI Act, which isn’t perfect, but it’s a good place to start.

The second question is more subtle. There are applications that may be ethical in principle, but how you build and implement them matters.

I want to give two examples. The first is a company called Clearview AI, which sold face recognition systems to the police. We can all agree that being able to identify a rapist or a child kidnapper is a good use of this technology, but Clearview had very limited control over how its tool was used: police officers could just run these algorithms to identify anyone, for any reason. To make things worse, the company collected the data it used to create its “face fingerprints” without consent, by scraping Facebook and other social media apps.

Another (lighter, and this time positive) example is Apple’s use of face recognition. Apple invested resources in making all of its AI algorithms run on the device rather than in the cloud. It sounds like a small difference, but it’s huge.

If you have an iPhone, you may use Face ID to unlock it, or use the Photos app’s search feature to look for pictures of your friends. All the “fingerprints” of these faces (the lists of numbers representing what they look like) exist only on your iPhone. Even though your iPhone can recognize you and your kids, neither Apple nor any other Apple device knows your face’s “fingerprint”.

Even Facebook recognizes the superiority of this approach. From the blog post announcing the ban: “Facial recognition can be particularly valuable when the technology operates privately on a person’s own devices.”


Tech has made enormous progress, and open source has democratized a lot of it. This means that the answer to the question “can we build this?” is very often “yes”. However, the answer to “should we build this?” is very often “no”.

The problem is that tech companies aren’t used to asking “should we build this?”. Facebook’s motto was “move fast and break things”, and that has been Silicon Valley’s mantra for decades. This is the mindset that enabled Netflix, Google, Airbnb, and all the other giants to change our lives.

I believe that very often they changed our lives for the better. But more technology doesn’t make the world better by default. Tech is good when you have good answers to why you do what you do, and to how you do it.

Face recognition is another piece of tech in search of a why, and a how. Except for unlocking phones, I haven’t seen a good answer to both questions, yet.

Want to go deeper?