Deepfakes and identity theft

PXL Vision February 29, 2024
Reading-Time: 5 min Tags: ID Verification, Fraud & Deepfakes

Deepfake technologies make it easier than ever for criminals to steal other people's identities and gain access to accounts and other financial assets. For organizations and their customers that rely on the security of digital identification and KYC processes, deepfakes are therefore a real problem. At the same time, the public debate about the dangers of the technology is often characterized by uncertainty.

Deepfakes - a definition

The fact that people manipulate images, sound, or video is nothing new. It is probably as old as the technologies themselves. Nevertheless, it is worth taking a closer look at deepfakes as a phenomenon in their own right.

There is no generally accepted definition of which kinds of manipulation count as deepfakes and which do not. The Cambridge Dictionary, however, defines a deepfake as “[...] a video or sound recording that replaces someone's face or voice with that of someone else, in a way that appears real.” Deepfakes are created using machine learning methods, in particular deep learning.

The use of techniques from the field of artificial intelligence (AI) gives deepfakes two features that distinguish them from earlier manipulation methods. First, feeding an AI model large amounts of data - such as hundreds of photos and videos of a person - makes it possible to create fakes of very high quality. Second, deepfake programs make creating such fakes much easier: applications such as Faceswap allow people without extensive programming skills to produce deepfakes in a short amount of time.
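To make the idea more concrete: face-swap tools like Faceswap commonly train one shared encoder on faces of both people and a separate decoder per identity; the swap then consists of encoding a face of person A and decoding it with person B's decoder. The following is a purely structural sketch of that wiring with untrained, random weights - all dimensions and names are illustrative assumptions, not the internals of any real tool.

```python
import numpy as np

# Structural sketch of the shared-encoder / per-identity-decoder
# architecture used by face-swap tools (illustrative, untrained weights).

rng = np.random.default_rng(0)

DIM_IMG = 64 * 64   # flattened grayscale face crop (toy size)
DIM_LATENT = 128    # shared latent representation

# One encoder is trained on faces of BOTH identities...
W_enc = rng.normal(scale=0.01, size=(DIM_LATENT, DIM_IMG))

# ...while each identity gets its own decoder.
W_dec_a = rng.normal(scale=0.01, size=(DIM_IMG, DIM_LATENT))
W_dec_b = rng.normal(scale=0.01, size=(DIM_IMG, DIM_LATENT))

def encode(face):
    # Shared encoder: maps any face to the common latent space.
    return np.tanh(W_enc @ face)

def decode(latent, W_dec):
    # Identity-specific decoder: maps latents back to a face image.
    return W_dec @ latent

def swap_a_to_b(face_a):
    """Encode a face of person A, then decode with B's decoder.

    After training, this reconstructs A's pose and expression with
    B's facial identity - the core of the face-swap deepfake effect.
    """
    return decode(encode(face_a), W_dec_b)

face_a = rng.random(DIM_IMG)
fake_b = swap_a_to_b(face_a)
print(fake_b.shape)  # (4096,)
```

The shared encoder is what forces the latent space to represent pose and expression independently of identity; that is why decoding with the "wrong" decoder transfers one person's face onto another's movements.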

How deepfakes are used for identity theft and fraud

In the public eye, deepfakes are primarily associated with the politically motivated spread of misinformation and the production of unethical forms of pornography, such as 'revenge' and child pornography. However, criminals can also use deepfakes for direct financial gain.

The Identity Fraud Report 2023 from verification platform provider sumsub shows just how widespread deepfake identity theft has become. For this report, the company examined two million fraud attempts across multiple industries worldwide. According to the report, deepfakes and AI-assisted fraud attempts were among the five most common forms of fraud in 2023, alongside ID forgery, account fraud, money laundering-related acts such as money muling, and forced verification.

According to sumsub's internal statistics, the number of deepfakes identified worldwide increased tenfold compared to 2022. The document most often targeted by forgery attempts is the ID card, which was involved in almost 73% of identity fraud cases. Increasingly, it is not only countries whose ID documents have historically been less secure against counterfeiting that are affected; leading industrial nations are also being targeted. In 2023, for example, around 43% of all deepfake cases identified by sumsub in Europe occurred in just three countries: Germany, the UK, and Spain.


Deepfakes as a threat to KYC and identification procedures

There are many ways that criminals can use deepfakes to simulate false identities and harm people and organizations:

For example, individuals can suffer devastating financial losses if criminals steal their identity to gain unauthorized access to their assets, such as a bank account. Organizations' financial resources are also at risk if the identities of the people who manage them are compromised. In addition, criminals can use false identities to apply for loans or overdraw accounts, causing banks substantial losses.

In the long run, deepfakes can permanently damage trust in electronic identification and KYC processes. If such processes are no longer considered secure and are increasingly rejected by citizens, businesses, and politicians, we also lose the opportunity to make the identification processes necessary for a functioning society more efficient and, above all, more inclusive. Turning away from them would not only impose unwelcome costs on businesses; consumers would also have to endure time-consuming analog procedures once again.

Curbing fraud in connection with deepfakes is therefore of great importance both socially and economically.

The regulation of deepfakes in the EU and Switzerland

The regulation of deepfake technologies is still in its infancy and is currently handled very differently from country to country. This is partly due to the technology's relative newness; in addition, a general ban does not seem appropriate to many legislators. Specific fraud offenses involving the use of deepfakes are already illegal under current law in many jurisdictions, but the dual-use nature of the technology itself argues against a ban: alongside their problematic uses, deepfakes can also serve positive purposes, for example for filmmakers, artists, and other creative professionals.

In the EU, for example, the prevention of fraudulent applications of deepfakes is currently being discussed primarily as part of the planned AI Act. The draft focuses on special transparency requirements and a labeling obligation for content manipulated with the help of AI. Article 52 (3) of the draft specifically states that

"Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated."

The adoption of the EU AI Act is still planned for 2024, which means it would likely come into force in 2026.

A similar, though probably not identical, solution can be expected in Switzerland. In November 2023, the Swiss Federal Council officially commissioned the Federal Department of the Environment, Transport, Energy and Communications (DETEC) to give an overview of possible regulatory approaches to artificial intelligence. This will build on existing Swiss law and identify possible regulatory approaches that are compatible with the EU AI Act and the Council of Europe's AI Convention. It is therefore possible that Switzerland will follow a similar path in its regulation of deepfakes.

Automated detection of deepfakes - the status quo

Companies and researchers are already working on solutions to detect deepfakes automatically and reliably. Like regulation, however, detection is still in its infancy: most of the methods under discussion remain experimental. The most promising approaches appear to be those that, like deepfake software itself, rely on deep learning.

Vendors such as Intel have already launched the first commercial software solutions, such as FakeCatcher, which is designed to detect deepfakes with very high accuracy. However, these high detection rates relate almost exclusively to controlled experimental scenarios. It remains to be seen how well first-generation detectors will work in practice.
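One family of detection heuristics explored in research looks for statistical traces that generative models leave behind, for example periodic high-frequency artifacts from upsampling layers that show up in an image's frequency spectrum. The toy sketch below illustrates only that general idea on synthetic data; the box size, signals, and the decision rule are arbitrary assumptions, and real detectors such as FakeCatcher use entirely different, far more sophisticated signals.

```python
import numpy as np

def high_freq_ratio(img):
    """Fraction of spectral energy outside a low-frequency centre box."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    ch, cw = h // 2, w // 2
    r = h // 8  # half-width of the "low-frequency" box (arbitrary choice)
    low = spec[ch - r:ch + r, cw - r:cw + r].sum()
    return 1.0 - low / spec.sum()

# A smooth gradient stands in for a natural image...
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
natural = xx + yy

# ...and an added periodic pattern mimics GAN upsampling artifacts.
suspect = natural + 0.2 * np.sin(2 * np.pi * 16 * xx)

# The artifact-laden image carries more high-frequency energy.
print(high_freq_ratio(natural) < high_freq_ratio(suspect))  # True
```

Heuristics like this tend to work only under controlled conditions; compression, resizing, and newer generators wash out such artifacts, which is one reason robust detection remains an open research problem.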

Recommendations for action: What companies with digital identification and KYC processes should do

For companies that rely on digital identification and KYC processes due to their business model, there are a number of recommendations for action relating to deepfakes. They should:

  • Keep an eye on the regulation surrounding AI and deepfakes;
  • study common methods of identity fraud associated with deepfakes and invest in the necessary skills to deal with them appropriately; and
  • follow the development of detector solutions and actively cooperate with the researchers and companies working on them, helping to accelerate the market maturity of such solutions.

How PXL Vision handles deepfakes

As a provider of identity verification solutions, PXL Vision is deeply involved in the issue of deepfakes and is constantly working to make its solutions even more robust. For example, PXL Ident's Screen Detection already detects many cases of deepfakes by recognizing ID documents that are not physically present as such. In addition, PXL Vision is working intensively with the Idiap research institute on the development of an innovative deepfake detection solution, which is supported by the Swiss innovation agency Innosuisse. The goal of the project is to bring the world's first robust AI-based facial and travel document recognition solution to market by the end of 2025, significantly improving the security of digital identity verification solutions. 

Would you like more information about deepfakes in the context of identity verification? Then please contact us.

Would you like to stay up to date with our research work together with Idiap? Then sign up for our newsletter.
