Machine Learning and How It Applies to Facial Recognition Technology

21 January 2021

Tags: Telecommunications, Facial Authentication, Machine Learning, Face Verification

What is machine learning?

Machine learning (ML) is a subset of artificial intelligence (AI). While the field of AI itself covers a lot of territory, it essentially boils down to the simulation of human intelligence in machines (computers).

Figure: AI, machine learning and deep learning explained

ML involves programming algorithms that can learn on their own and even make their own predictions.

ML allows machines to learn from past experiences – much as humans do – by analysing their output and using it as an input for the next operation.

ML algorithms learn from data to solve problems that are too complex to solve with conventional programming.
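To make this concrete, here is a minimal sketch of what "learning from data" looks like in practice, using the open-source scikit-learn library; the flat sizes and prices below are invented purely for illustration.

```python
# A minimal "learn from data" sketch, assuming scikit-learn is installed.
# The training data is made up for illustration only.
from sklearn.linear_model import LinearRegression

# Past experience: flat sizes in square metres and the prices they sold for.
sizes = [[30], [45], [60], [75], [90]]
prices = [150_000, 210_000, 275_000, 340_000, 400_000]

model = LinearRegression()
model.fit(sizes, prices)          # the algorithm infers the size-price relationship itself
print(model.predict([[55]]))      # estimate a price for a size it has never seen
```

No rules were written by hand: the relationship between size and price is inferred from the examples, which is the essence of machine learning.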

Deep learning is a subset of machine learning in which the algorithm is an artificial neural network built from many layers stacked on top of one another.

Note: The terms machine learning and deep learning are often used interchangeably; much of the machine learning done today is, in fact, deep learning.
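As a rough illustration of the "stacked layers" idea behind deep learning, here is a minimal sketch of a small deep network, assuming the open-source PyTorch library; the layer sizes are arbitrary and purely illustrative.

```python
# A minimal deep network sketch, assuming PyTorch is installed.
import torch
from torch import nn

deep_model = nn.Sequential(          # several learned layers stacked on top of each other
    nn.Linear(128, 64), nn.ReLU(),   # first layer: picks up simple patterns in the raw input
    nn.Linear(64, 32), nn.ReLU(),    # second layer: combines them into more abstract features
    nn.Linear(32, 2),                # output layer: the final prediction (e.g. two classes)
)

x = torch.randn(1, 128)              # one example with 128 input features
print(deep_model(x))                 # untrained output; training would adjust every layer
```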

A short history of digital technology: from mainframes to machine learning

In order to better understand how artificial intelligence and machine learning fit into modern digital technology, it is useful to consider the technologies in a historical context.

The technological trajectory that brought us AI and by extension machine learning is best summed up in a diagram published in the “Digital Transformation Initiative” report by the World Economic Forum and Accenture. The diagram (Figure 1) outlines the combinatorial effects of technology: “where the capability of technologies working in tandem far exceed their capabilities when deployed separately”.

Figure 1: The combinatorial waves of digital technology (World Economic Forum / Accenture)

Notice how each new technology looks like a wave building off of the technology that came before it – this is the combinatorial effect of technology.

The birth of mainframe computers in the 1950s, led by IBM* and a handful of other companies, paved the way for the personal computer (PC) of the 1980s. Later, the Apple and Microsoft operating systems (OSs) helped establish the home PC market, which in turn drove the rapid scaling of the internet. The early eCommerce internet (web 1.0) preceded the mobile and cloud-computing internet of today (web 2.0), which has ushered in big data and the internet of things (IoT). This abundance of data now feeds the algorithms used in AI and machine learning.

The curve representing AI and ML took off sometime around 2010. The question mark indicates that it is anybody’s guess when this curve will level off, but if the prior technological leaps are any indication, the cumulative capability of AI and ML technology will be immense.

* IBM is still a major player in the digital transformation and is especially active in machine learning (link to IBM’s machine learning landing page, which offers a relatively accessible, technical explanation of machine learning).

Expert systems: early forerunners to AI and ML

Expert systems are considered direct forerunners of AI and machine learning. While most accounts date the beginning of AI research to a 1956 workshop at the Ivy League Dartmouth College, research into AI began in earnest in the 1980s, when so-called “expert systems” proliferated.

Expert systems were designed to solve complex problems by reasoning through large bodies of knowledge. There were, however, a number of issues with these systems which prevented them from catching on at the time.

First, these systems required a human expert to provide the knowledge base. In many cases, this was too costly for organizations, as it would divert their employees from their regular work. Additionally, some of these human experts felt threatened by the encroaching AI, believing that it would negatively impact the value of their expertise.

Second, these systems were based on the notion that expert knowledge consists of a collection of rules (if-then statements or conditional computing). When these systems were faced with a problem that their rules did not cover, they were unable to solve it.
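A toy example makes this brittleness easy to see; the "rules" below are invented for illustration only.

```python
# A toy rule-based "expert system": knowledge encoded as explicit if-then rules.
def diagnose(symptoms):
    if "fever" in symptoms and "cough" in symptoms:
        return "flu"
    if "sneezing" in symptoms and "itchy eyes" in symptoms:
        return "hay fever"
    # Anything not covered by an explicit rule simply cannot be answered.
    return "unknown"

print(diagnose({"fever", "cough"}))      # covered by a rule -> "flu"
print(diagnose({"fever", "headache"}))   # outside the rule base -> "unknown"
```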

Third, knowledge is only part of the equation of “intelligence”; the other part is knowing when and how to use that knowledge, and how to adapt it to constantly changing situations.

Things are clearly different now: the expert systems of yesteryear have essentially morphed into machine learning systems that can harness data from the internet and be programmed to learn from their own output.

Machine learning in action

Machine learning has already led to immense changes in our society. However, if you do not work directly in the technology sector or engage with the topic, the extent to which this technology has changed, and continues to change, society might be unclear.

The chances are actually quite high that you currently use multiple products or services that employ machine learning technologies, as a growing number of companies are leveraging ML across a wide variety of industries.


Netflix, for starters, uses customer data to predict what audiences want. In fact, Netflix employs ML technology so effectively that they have all but eliminated the industry standard of pilot episodes. Instead, the company will invest from the beginning in multiple seasons of new shows which they are confident will be hits because their algorithms tell them so. Other streamed media, from Spotify to YouTube, also rely heavily on machine learning algorithms in order to deliver content that matches users’ tastes.

Likewise, all of the major social media platforms, from Facebook and Twitter to Instagram and TikTok, employ ML algorithms to deliver more of the content that their users want.

Online shopping portals such as Amazon leverage ML algorithms to recommend other things that you might want to buy based on your past searches. Furthermore, the constantly changing prices of goods on Amazon and other online stores are also decided by an ML algorithm. Savvy shoppers will save items in their baskets and wait until the price lowers. Extra savvy shoppers will use services, such as camelcamelcamel, that show the price of goods over time on Amazon et al., and use this to their advantage.
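For the curious, the core idea behind such recommendations can be sketched in a few lines; the tiny purchase matrix below is invented purely for illustration.

```python
# A minimal "shoppers like you also bought..." sketch using NumPy.
import numpy as np

# Rows = other shoppers, columns = products; 1 means the shopper bought the product.
purchases = np.array([
    [1, 1, 1, 0],   # shopper A
    [0, 0, 1, 1],   # shopper B
    [1, 0, 0, 1],   # shopper C
])
you = np.array([1, 1, 0, 0])                      # your own purchase history

similarity = purchases @ you                      # overlap with every other shopper
most_similar = purchases[np.argmax(similarity)]   # shopper A matches you best here
recommend = np.where((most_similar == 1) & (you == 0))[0]
print(recommend)                                  # -> product index 2, which you don't own yet
```

Real recommendation engines are vastly more sophisticated, but the principle of comparing your behaviour with that of similar users is the same.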

Most email filtering programs employ ML in order to stop spam. Chatbots use a combination of pattern recognition and natural language processing to interpret a user’s query and provide suitable responses. Even Hello Barbie used an ML algorithm that could choose from 8,000 different responses when replying to its users. However, due to privacy concerns, the doll and the service were discontinued.
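A minimal spam-filter sketch along these lines, assuming scikit-learn, might look as follows; the handful of training messages is invented for illustration.

```python
# A minimal ML spam filter: learn word patterns from labelled e-mails.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap pills online",            # spam examples
    "meeting moved to friday", "please review the report",   # legitimate examples
]
labels = ["spam", "spam", "ham", "ham"]

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)                   # learn from the labelled examples
print(spam_filter.predict(["free prize online"]))   # -> most likely "spam"
```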

IBM’s Watson has long been famous among fans of Jeopardy! for defeating the show’s previous highest scorers. Watson is powered by ML algorithms that enable it to process text and voice data and to understand human language much the way people do. Watson was introduced back in 2010, and yet most people are probably still unaware that ML technology was, and is, at work in the background. Nowadays, Watson has many more applications besides playing Jeopardy.

Another major ML project is self-driving cars which, when road-worthy, will most likely be better at driving than humans as AI does not get distracted or drunk. Self-driving cars use ML to continuously identify objects in their environment, predict how the objects will move and guide the car around the objects as well as towards the driver’s destination. Now, if we can only figure out a way to keep the hackers at bay.

The myriad digital assistants on the market, such as Apple’s Siri, Amazon’s Alexa and Google’s Assistant, also make use of ML-based natural language processing.

The list of uses for AI and machine learning goes on and on, and it grows every day as more and more use cases are dreamed up and developed.

How machine learning is used in facial recognition technology

The industry around facial recognition technology is rapidly maturing due to advances in AI, ML and deep learning technologies. Facial recognition is a technology that is capable of recognizing a person based on their face. It employs machine learning algorithms which find, capture, store and analyse facial features in order to match them with images of individuals in a pre-existing database. There are many strong use cases for the technology which you can read about in our blog here.

How facial recognition technology works is fairly difficult to grasp, and a thorough explanation would go far beyond the scope of this article. For our purposes, we will consider the five overarching problems that a machine needs to solve in order to recognize a face: face detection, face alignment, feature extraction, face recognition and face verification.

Face Detection – The machine must first locate the face in the image or video. By now, most cameras have an in-built face detection function. Face detection is also what Snapchat, Facebook and other social media platforms use to allow users to add effects to the photos and videos that they take with their apps.
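As a rough illustration, face detection can be sketched with the open-source OpenCV library and one of its bundled Haar-cascade detectors; the image file name below is a placeholder.

```python
# A minimal face detection sketch, assuming opencv-python is installed
# and that "photo.jpg" (a placeholder name) exists on disk.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is a bounding box (x, y, width, height) around a face.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Found {len(faces)} face(s):", faces)
```

Modern systems typically use deep-learning detectors rather than Haar cascades, but the output, a box around each face, is the same idea.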

Face Alignment – Faces that are turned away from the focal point look totally different to a computer. An algorithm is required to normalize the face so that it is consistent with the faces in the database. One way to accomplish this is to use a set of generic facial landmarks – for example, the bottom of the chin, the top of the nose, the outside corners of the eyes and various points around the eyes and mouth. An ML algorithm is then trained to find these points on any face and turn the face towards the centre.
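A simplified alignment sketch, assuming OpenCV, might look like this; the hard-coded eye coordinates stand in for what a real landmark detector (e.g. dlib or MediaPipe) would find automatically.

```python
# Rotate a face crop so that the eyes sit on a horizontal line.
import cv2
import numpy as np

image = cv2.imread("face_crop.jpg")            # placeholder: a detected face crop
left_eye, right_eye = (60, 110), (140, 118)    # hypothetical landmark positions

dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
angle = np.degrees(np.arctan2(dy, dx))         # how tilted the face currently is

center = (image.shape[1] // 2, image.shape[0] // 2)
rotation = cv2.getRotationMatrix2D(center, angle, 1.0)
aligned = cv2.warpAffine(image, rotation, (image.shape[1], image.shape[0]))
cv2.imwrite("face_aligned.jpg", aligned)       # normalized input for the next steps
```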

Feature Measurement and Extraction – This step requires the measurement and extraction of various features from the face that will permit the algorithm to match the face to other faces in its database. At first it was unclear which features should be measured and extracted, until researchers discovered that the best approach was to let the ML algorithm figure out the measurements for itself. This process is known as embedding: a deep convolutional neural network is trained to generate a set of measurements (an embedding) for each face that allows it to be distinguished from other faces.
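As a rough illustration, the open-source face_recognition library exposes such learned embeddings directly; the file name below is a placeholder.

```python
# Compute a learned 128-number embedding for a face, assuming the
# face_recognition library is installed and the image contains one face.
import face_recognition

image = face_recognition.load_image_file("face_aligned.jpg")   # placeholder file name
encodings = face_recognition.face_encodings(image)

# Each encoding is a 128-dimensional "measurement" of the face,
# learned by a deep convolutional neural network.
print(len(encodings[0]))   # -> 128
```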

Face Recognition – Using the unique measurements of each face, a final ML algorithm will match the measurements of the face against known faces in a database. Whichever face in your database comes closest to the measurements of the face in question will be returned as the match.
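A minimal sketch of this 1:N matching step, using made-up embeddings in place of a real database, might look like this.

```python
# Find whose stored embedding lies closest to the probe face.
import numpy as np

database = {                       # name -> 128-dim embedding (random here, for illustration)
    "alice": np.random.rand(128),
    "bob": np.random.rand(128),
}
probe = np.random.rand(128)        # embedding of the face we want to identify

distances = {name: np.linalg.norm(vec - probe) for name, vec in database.items()}
best_match = min(distances, key=distances.get)
print(best_match, distances[best_match])   # the closest face in the database
```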

Face Verification – Face verification compares the unique properties of a given face to another face. The ML algorithm will return a confidence value to assess whether the faces match or not.
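A corresponding 1:1 verification sketch might look like this; the 0.6 threshold and the confidence formula are illustrative values, not those of any particular product.

```python
# Compare two embeddings and decide match / no match with a confidence value.
import numpy as np

def verify(embedding_a, embedding_b, threshold=0.6):
    distance = np.linalg.norm(embedding_a - embedding_b)
    confidence = 1.0 - min(distance / threshold, 1.0)   # crude confidence score
    return distance < threshold, confidence

same_person, confidence = verify(np.random.rand(128), np.random.rand(128))
print(same_person, round(confidence, 2))
```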

PXL Vision’s Facial Recognition / Verification Solution

PXL Vision provides leading solutions for the automation and enhancement of digital identity verification and customer onboarding, through tailored software powered by the latest developments in artificial intelligence and machine learning. The team has extensive experience and expertise in building highly complex machine learning technologies and the passion and know-how to bring them to market.
