Machine learning and face recognition

PXL Vision June 07, 2022
Reading-Time: 4 min Tags: PXL Ident

Machine learning is a rapidly growing field and is used for a wide variety of tasks, including facial recognition. Facial recognition technology is everywhere these days, even if many people are barely aware of it: millions use it to unlock their smartphones effortlessly, and advanced face detection software lets surveillance operators pick known criminals out of crowds.
What is less well known is the technology and the processes behind face recognition. This article takes a look at the field of machine learning and explains how it has made facial recognition technology, as used in our product PXL Ident, possible.

What is machine learning?

Machine learning is a branch of artificial intelligence that deals with the design and development of algorithms. It is a process of teaching computers to learn from data and involves developing algorithms that can automatically detect patterns in data and then make predictions based on those patterns. This contrasts with traditional programming, where the programmer writes code that explicitly tells the machine what to do.

Machine learning algorithms learn from data to solve problems that are too complex to solve with conventional programming.
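
To make the contrast with traditional programming concrete, here is a minimal sketch in Python. It is not taken from any PXL Vision code: the tiny spam-filtering example, its data, and the function names are invented purely for illustration, with a hand-written rule next to a scikit-learn model that learns a similar pattern from labeled examples.

    # Traditional programming: the programmer writes the rule explicitly.
    def is_spam_rule_based(subject: str) -> bool:
        return "free money" in subject.lower()

    # Machine learning: the pattern is inferred from labeled example data.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    subjects = ["Free money now", "Meeting at 10am", "Win free money today", "Project status update"]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(subjects)   # turn the text into feature vectors
    model = MultinomialNB().fit(X, labels)   # learn the pattern from the data

    print(model.predict(vectorizer.transform(["free money offer"])))  # -> [1]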

Deep learning is a subset of machine learning in which many layers of learning algorithms, typically artificial neural networks, are stacked and trained together. Note: The terms machine learning and deep learning are often used interchangeably, and most machine learning today happens at the deep learning level.
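
As a rough illustration of what "multiple layers" means, the sketch below stacks a few fully connected layers with PyTorch. The layer sizes are arbitrary and chosen only to show the idea of depth; this is not a model from PXL Ident or any production system.

    import torch
    import torch.nn as nn

    # A small "deep" model: several layers stacked on top of each other.
    model = nn.Sequential(
        nn.Linear(128, 64),  # first layer: learns simple combinations of the inputs
        nn.ReLU(),
        nn.Linear(64, 32),   # second layer: combines those into higher-level features
        nn.ReLU(),
        nn.Linear(32, 2),    # output layer: e.g. "match" vs "no match"
    )

    scores = model(torch.randn(1, 128))  # one 128-dimensional input -> two output scores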

How machine learning is used today

Machine learning is a growing field with many applications in areas such as computer vision, natural language processing, and predictive analytics. You probably already use several products or services in your everyday life that employ machine learning, as a growing number of companies across a wide variety of industry verticals are adopting the technology.

Netflix, for instance, uses machine learning in several ways. One is its recommendation system, which provides those tailored suggestions of what to watch next; it relies on algorithms that take into account your viewing history, your ratings, and what is currently popular on Netflix. Another is that, from the beginning, the company has invested in several seasons of new shows it was confident would be a success based on the predictions of its algorithms.
Other streaming and social media platforms also rely heavily on machine learning algorithms to deliver content that matches users' tastes, and online shopping portals such as Amazon leverage machine learning to recommend products you might want to buy based on your past searches.

Less visible in everyday life than these examples are use cases such as machine learning based face recognition, so let's focus on that now.

What is face recognition?

Face recognition is a biometric identification technique that uses the unique characteristics of an individual's face to identify them. Most facial recognition systems work by comparing a face print, a digital representation of those characteristics, to a database of known faces. If there is a match, the system can identify the individual; if the face print isn't in the database, it can't.
Facial recognition technology is often used for security purposes, such as identifying criminals or preventing identity theft. It can also be used for more mundane tasks, such as finding a lost child in a crowded place or identifying VIPs at an event. 
Some facial recognition systems are equipped with artificial intelligence that can learn to identify individuals even if their appearance has changed, such as if they've grown a beard or gained weight.

How machine learning is used in facial recognition technology

The most common type of machine learning algorithm used for facial recognition is a deep learning Convolutional Neural Network (CNN). CNNs are a type of artificial neural network that are well-suited for image classification tasks. 

CNNs learn to extract features from images and use those features to classify the images into different categories. The depth of a CNN is important for facial recognition because it allows the CNN to learn more complex facial features. 

For example, a shallow CNN might only be able to learn to identify simple facial features, such as the shape of the nose or the position of the eyes. A deep CNN, on the other hand, can learn to identify more complex facial features, such as the texture of the skin or the shape of the chin. Once a CNN has been trained on a dataset of facial images, it can be used to identify faces in new images. This process is called facial recognition. 
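
To make this concrete, here is a toy convolutional network sketched in PyTorch. It is illustrative only, not a real face recognition model: the two convolutional blocks stand in for the shallow and deep layers described above, and the final linear layer turns the extracted features into a fixed-size face embedding. The 112x112 input size and the 128-dimensional embedding are assumptions made for the example.

    import torch
    import torch.nn as nn

    class TinyFaceCNN(nn.Module):
        """Toy CNN that maps a face crop to a feature vector (embedding)."""

        def __init__(self, embedding_dim: int = 128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),   # shallow layer: edges, simple shapes
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # deeper layer: more complex facial features
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.embed = nn.Linear(32 * 28 * 28, embedding_dim)  # assumes 112x112 input images

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)
            return self.embed(x.flatten(start_dim=1))

    # One 112x112 RGB face crop -> one 128-dimensional face embedding.
    embedding = TinyFaceCNN()(torch.randn(1, 3, 112, 112))
    print(embedding.shape)  # torch.Size([1, 128])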

The 3 steps of facial recognition

Face recognition is divided into three steps:

  1. Face Alignment and Detection – The first step is to locate faces in the input image or video. This can be done with a Haar Cascade classifier, a type of machine learning algorithm trained on positive and negative example images. Most cameras today have a built-in face detection function, and face detection is also what Snapchat, Facebook and other social media platforms use to let users add effects to the photos and videos they take with their apps.

    A challenge in face detection is that the face is often not pointed directly at the camera. Faces that are turned away from the camera look very different to a computer, so an algorithm is needed to normalize each face so that it is consistent with the faces in the database. One way to accomplish this is to use a set of generic facial landmarks, for example the bottom of the chin, the top of the nose, the outer corners of the eyes, and various points around the eyes and mouth. A machine learning algorithm is trained to find these points on any face and then transform the image so that the face is centered and facing forward.

  2. Feature Measurement and Extraction – Once faces have been aligned and detected, the next step is to extract features from them. This is where the Convolutional Neural Network (CNN) comes in. A CNN is able to extract high-level features from an image, which are then used to identify faces in a database.

  3. Face Recognition – The last step is to match the extracted features against the faces in a database. This is usually done with a Euclidean distance metric, which measures how far apart two feature vectors are: the smaller the distance, the more similar the faces. A minimal sketch covering all three steps follows this list.
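
A minimal end-to-end sketch of the three steps, using OpenCV's bundled Haar cascade for detection, might look as follows. The alignment step is omitted for brevity, extract_embedding is a hypothetical placeholder for a trained CNN such as the one sketched earlier, and the file name input.jpg, the dummy database and the distance threshold are assumptions; this is not PXL Vision's actual implementation.

    import cv2
    import numpy as np

    # Step 1: face detection with a Haar Cascade classifier shipped with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    image = cv2.imread("input.jpg")  # assumed input file
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    def extract_embedding(face_crop: np.ndarray) -> np.ndarray:
        """Placeholder for a trained CNN that turns an aligned face crop into a feature vector."""
        resized = cv2.resize(face_crop, (112, 112))
        return resized.astype(np.float32).flatten() / 255.0  # stand-in for real learned features

    # Step 2: feature extraction for every detected face.
    embeddings = [extract_embedding(image[y:y + h, x:x + w]) for (x, y, w, h) in faces]

    # Step 3: match against a database of known embeddings via Euclidean distance.
    database = {"alice": np.random.rand(112 * 112 * 3).astype(np.float32)}  # dummy entry
    THRESHOLD = 10.0  # illustrative value; real systems tune this on validation data
    for emb in embeddings:
        name, dist = min(
            ((n, float(np.linalg.norm(emb - known))) for n, known in database.items()),
            key=lambda t: t[1],
        )
        print(name if dist < THRESHOLD else "unknown", dist)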

How PXL Vision uses facial recognition

We use machine learning technology for facial recognition in our identity verification (IDV) solutions. Our high-performing machine learning systems are constantly improved and further trained, which allows them to perform a full identity verification in just 30 seconds; the facial recognition step itself takes only a few seconds. Here you can find out more about our technology.
