Deepfakes and Identity Theft

PXL Vision February 29, 2024
Reading-Time: 5 min Tags: ID Verification, Fraud & Deepfakes, AI

Deepfake technologies make it easier than ever for criminals to steal other people's identities and gain access to accounts and other financial assets. For organizations and their customers who rely on the security of digital identification and KYC processes, deepfakes are a real problem. At the same time, public debate about the dangers of this technology is often characterized by uncertainty. But what exactly are deepfakes, and how can they be detected or prevented?

What is Deepfake Technology?

The fact that people manipulate images, sound, or video is nothing new; it is probably as old as the technologies themselves. Nevertheless, it is worth taking a closer look at deepfake technology as a phenomenon in its own right, as well as at some deepfake examples.

There is no generally accepted definition of which kinds of manipulation count as deepfakes. According to the Cambridge Dictionary, however, a deepfake is "[...] a video or sound recording that replaces someone's face or voice with that of someone else, in a way that appears real." Machine-learning methods, especially deep learning, are used to create them.

The use of techniques from the field of artificial intelligence (AI) gives deepfakes features that distinguish them from earlier manipulation methods. First, feeding an AI large amounts of data, such as hundreds of photos and videos of a person, makes it possible to create fakes of very high quality. Second, deepfake programs make creating such fakes much easier: applications such as Faceswap allow people without extensive programming skills to produce deepfakes in a short amount of time.
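The architecture behind most face-swap tools can be sketched in a few lines. The following is a minimal, untrained stand-in, with random matrices in place of learned neural networks; `DIM`, `LATENT`, and `swap_face` are illustrative names, not Faceswap's actual API. The key idea is that one encoder is shared across both identities, while each person gets their own decoder, so a face encoded from person A can be rendered with person B's appearance:

```python
import numpy as np

# Minimal sketch of the shared-encoder / two-decoder architecture used by
# face-swap tools. All weights here are random stand-ins; a real tool
# learns them from hundreds of images of each person.
rng = np.random.default_rng(42)
DIM, LATENT = 256, 32          # illustrative sizes, not any tool's real ones

encoder   = rng.standard_normal((LATENT, DIM)) * 0.1   # shared by both faces
decoder_a = rng.standard_normal((DIM, LATENT)) * 0.1   # reconstructs person A
decoder_b = rng.standard_normal((DIM, LATENT)) * 0.1   # reconstructs person B

def swap_face(face_a: np.ndarray) -> np.ndarray:
    """Encode person A's face, then decode it with person B's decoder.

    Because the encoder is trained on both identities, the latent code
    captures pose and expression; decoder_b renders them with B's face.
    """
    latent = encoder @ face_a          # compress to identity-agnostic code
    return decoder_b @ latent          # render the code as person B

fake = swap_face(rng.standard_normal(DIM))
print(fake.shape)  # (256,)
```

This shared-encoder design is why the technique needs many images of both people: the encoder must learn features common to both, and each decoder must learn one person's appearance well enough to reconstruct it from those features.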

How Deepfakes Are Used for Identity Theft and Fraud

In the public eye, deepfakes are primarily associated with the politically motivated spread of misinformation and the production of unethical forms of pornography, such as 'revenge' and child pornography. However, criminals can also use deepfake AI technology for direct financial gain.

The Identity Fraud Report 2023 from verification platform provider Sumsub shows just how widespread deepfake identity theft has become. For the report, the company examined two million fraud attempts across multiple industries worldwide. According to the report, deepfakes and AI-assisted fraud attempts were among the five most common forms of fraud in 2023, along with ID forgery, account fraud, money laundering-related acts such as money muling, and forced verification.

According to Sumsub's internal statistics, the number of deepfakes identified worldwide increased tenfold compared with 2022. The document most often targeted by forgery attempts is the ID card, which was involved in almost 73% of identity fraud cases. Nor are these incidents limited to countries whose ID documents have historically been less secure against counterfeiting: leading industrial nations are also increasingly being targeted. In 2023, for example, around 43% of all deepfake cases that Sumsub identified in Europe occurred in just three countries: Germany, the UK, and Spain.


Deepfakes as a Threat to KYC and Identification Procedures

There are many ways that criminals can use deepfake technology to simulate false identities and harm people or organizations:

For example, individuals can suffer devastating financial losses if criminals steal their identity to gain unauthorized access to their assets, such as a bank account. Organizations' financial resources are also at risk if the identities of the people who manage them are compromised. In addition, criminals can use stolen identities to apply for loans and overdraw accounts, leaving banks with substantial costs.

In the long run, deepfakes may permanently damage trust in electronic identification and KYC processes. If such processes are no longer considered secure and are rejected by more and more citizens, businesses, and politicians, we will also lose the opportunity to make the identification processes necessary for a functioning society more efficient and, above all, more inclusive. Turning away from them would not only impose unwelcome costs on businesses; customers would also have to endure time-consuming analog procedures once again.

Curbing fraud with deepfake detection technology is therefore of great importance both socially and economically.

The Regulation of Deepfakes in the EU and Switzerland

The regulation of deepfake technologies is still in its infancy and is currently handled very differently from country to country. This is partly due to the technology's relative newness, and partly because a general ban does not seem appropriate to many legislators. Specific fraud offenses involving deepfakes are already illegal under current law in many jurisdictions, but the dual-use nature of the technology itself argues against a ban: besides their problematic uses, deepfakes can also serve positive purposes, for example for filmmakers, artists, and other creative people.

In the EU, for example, measures against fraudulent applications of the technology are currently being discussed as part of the planned AI Act. The draft focuses on special transparency requirements and a labeling obligation for content manipulated with the help of AI. Article 52(3) of the draft specifically states that

"Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated."

The adoption of the EU AI Act is still planned for 2024, meaning that it may come into effect by 2026.

A similar, though probably not identical, solution can be expected in Switzerland. In November 2023, the Swiss Federal Council officially commissioned the Federal Department of the Environment, Transport, Energy and Communications (DETEC) to give an overview of possible regulatory approaches to artificial intelligence. This will build on existing Swiss law and identify possible regulatory approaches that are compatible with the EU AI Act and the Council of Europe's AI Convention. It is therefore possible that Switzerland will follow a similar path in its regulation of deepfakes.

Automated Deepfake Detection - The Status Quo

Companies and researchers are already working on automated, reliable deepfake detection solutions. Like deepfake regulation, deepfake detection is still in its infancy, and most of the methods under discussion remain experimental. The most promising approaches appear to be those that use the same machine-learning techniques as the deepfake software itself, relying on machine learning for detection.
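As one illustration of the machine-learning direction, a family of experimental detectors looks for frequency-domain artifacts that generator networks tend to leave behind in synthetic images. The sketch below is a hypothetical helper, not any vendor's product: it computes a single crude spectral feature of the kind such a classifier might use as one input among many, and would never suffice as a detector on its own:

```python
import numpy as np

def high_freq_ratio(img: np.ndarray) -> float:
    """Share of spectral energy outside a low-frequency band.

    Some GAN-generated faces show unusual high-frequency spectra, so a
    ratio like this can serve as one crude input feature for a deepfake
    classifier (illustrative only, not a detector by itself).
    """
    # Power spectrum with the zero frequency shifted to the center.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    r = min(h, w) // 8                     # low-frequency radius (arbitrary choice)
    low = spectrum[cy - r:cy + r, cx - r:cx + r].sum()
    return float(1.0 - low / spectrum.sum())

# A smooth gradient (camera-like) vs. pure noise (artifact-heavy):
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = rng.standard_normal((64, 64))
assert high_freq_ratio(smooth) < high_freq_ratio(noisy)
```

In practice, experimental detectors feed many such features, or the raw spectrum itself, into a trained classifier rather than applying a single threshold, which is why large labeled datasets of real and fake faces matter so much for this line of research.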

Vendors such as Intel have already launched the first commercial software solutions, such as FakeCatcher, which is claimed to detect deepfakes with nearly 100% accuracy. However, such high detection rates relate almost exclusively to controlled experimental scenarios; it remains to be seen how well these first-generation detectors work in practice.

Recommendations for Action: What Companies with Digital Identification and KYC Processes Should Do

For companies that rely on digital identification and KYC processes due to their business model, there are a number of recommendations for action relating to deepfake prevention. They should:

  • Keep an eye on the regulations surrounding AI and deepfakes, including rules on manipulated voice and image content;
  • study the common methods of identity fraud associated with deepfakes and invest in the skills needed to deal with them appropriately;
  • and follow the development of deepfake detection solutions, such as image and audio detection, and actively cooperate with the researchers and companies working on them, helping to accelerate these tools' market maturity.

How PXL Vision handles Deepfakes

As a provider of identity verification solutions, PXL Vision is deeply involved in the issue of deepfakes and is constantly working to make its solutions even more robust. For example, PXL Ident's Screen Detection already catches many deepfakes by recognizing when an ID document is not physically present. In addition, PXL Vision is working intensively with the Idiap research institute on an innovative deepfake detection solution, supported by the Swiss innovation agency Innosuisse. The goal of the project is to bring the world's first robust AI-based facial and travel document recognition solution to market by the end of 2025, significantly improving the security of digital identity verification.

Would you like more information about deepfakes in the context of identity verification? Then please contact us.

Would you like to stay up to date with our joint research with Idiap? Then sign up for our newsletter.
