Deepfakes – Fraud risk for digital processes
The quality of deepfakes is improving rapidly thanks to ever-more-capable AI models. This opens up new opportunities for identity theft, financial fraud and the manipulation of digital identification processes. For companies that rely on secure online identification, KYC and automated identity verification, deepfakes are increasingly becoming a real operational and regulatory risk. The number of documented cases is also rising noticeably: according to Entrust's 2025 Identity Fraud Report, there was a deepfake attack every five minutes on average in 2024, and digital document forgeries increased by 244% compared with the previous year, underscoring the growing significance of this type of fraud.
At the same time, public debate is often characterised by uncertainty: what exactly are deepfakes and how can they be recognised? And why do they pose such a serious threat to digital ecosystems?
This article highlights the most important aspects from a business perspective – from the technical definition and use in fraud cases to regulation and countermeasures.
What are Deepfakes?
The term deepfake describes content that has been created or manipulated using artificial intelligence and appears authentic, even though it is not real. This often involves:
- AI-generated videos that imitate real people – even in real time during video calls
- Manipulated or fake images, such as ID documents or faces
- Synthetic voices (voice deepfakes) used specifically to deceive in phone calls or voice messages
- AI-generated texts or messages used in combination with image or audio material to build trust and trigger certain actions
Thanks to modern tools and apps, such fakes can now be created without in-depth technical knowledge. What sets them apart from traditional manipulations is the combination of high quality, automation and ease of imitation – a decisive factor when companies need to verify digital documents or check biometric features. While generative AI tools have many legitimate uses, for example in marketing, they are also increasingly used for identity theft, social engineering and fraudulent manipulation, often through the deliberate combination of deepfake elements such as faces, voices and text.
How deepfakes enable identity theft and fraud
Deepfake fraud attacks digital identity processes at various levels: visual, auditory, or document-based. An increasing number of companies are reporting attempts to use deepfakes as a means of circumventing legitimate verification processes.
Typical attack methods include:
- Deception of automated onboarding processes, for example by using manipulated selfies or videos during selfie ID or auto ID checks.
- Circumvention of document-based checks by using AI-generated or manipulated ID data to imitate visual security features.
- Abuse of voice-based authentication, for example through synthetic voices in telephone or voice-controlled processes.
- Combined attacks, in which several manipulated elements – such as faces, documents and personal data – work together to deceive identity checks in a targeted way.
For companies, this means deepfakes can cause significant financial, legal and reputational damage, including compliance violations if regulated audit processes are not followed.
Deepfake fraud becomes particularly complex when attackers combine different techniques – such as fake ID photos, manipulated selfies and synthetic videos. In such cases, the boundaries between deepfakes and synthetic identity fraud, in which completely artificial identities are created, become blurred.
The regulation of deepfakes in the EU and Switzerland
Although discussions on the regulation of deepfakes are still in their infancy, political pressure is growing. The intention is not usually to prohibit the technology completely, but rather to introduce transparency requirements and labelling of manipulated content to prevent misuse.
The EU AI Act, which entered into force in 2024, is often cited as the key instrument in this regard. Its transparency obligations will require AI-generated or manipulated image, audio and video content to be clearly labelled as such to prevent deception.
It remains to be seen how national regulations and international requirements will be implemented in combination for companies based in Switzerland. However, it is foreseeable that there will be clear legal requirements for transparency and authenticity, especially with regard to online identification.
Automatically detecting deepfakes – the status quo
The technical detection of deepfake fraud remains a challenge. Although research often demonstrates high detection rates in laboratory settings, implementation in operational systems is significantly more complex. At the same time, professionally produced AI deepfakes can scarcely be identified by the human eye any more; detecting and stopping them reliably requires technological support.
Modern approaches combine different levels of verification:
- Biometric methods that analyse typical movement or reflection patterns
- Liveness detection, which checks whether a real person is actually standing in front of the camera
- NFC-based ID checks that access cryptographically secured authenticity data (see the sketch after this list)
- Document forensic methods that identify image artefacts, microstructures or manipulation patterns
- Hybrid models that combine biometrics, document analysis and anomaly detection
- Device fingerprinting, which identifies unusual access patterns based on device-specific characteristics and can thus provide additional clues about fraudulent or AI-assisted attacks
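The NFC-based check referenced above relies on passive authentication as specified in ICAO Doc 9303: the document's chip carries a Document Security Object (SOD) with signed hashes of each data group, which the verifier recomputes and compares. The minimal sketch below shows only that hash-comparison step; the data values are hypothetical, and a production implementation must also validate the SOD's signature chain up to the issuing country's certificate.

```python
# Simplified sketch of the hash-comparison step in ICAO 9303 passive
# authentication. A real implementation must also validate the SOD's
# CMS signature against the issuing country's certificate chain (CSCA);
# that step is omitted here. All values below are hypothetical.
import hashlib

def verify_data_groups(data_groups: dict[int, bytes],
                       sod_hashes: dict[int, bytes]) -> bool:
    """Recompute each data group's hash and compare it with the signed
    hash from the Document Security Object (SOD). Any mismatch means
    the chip contents were altered or inconsistently cloned."""
    for dg_number, raw_bytes in data_groups.items():
        recomputed = hashlib.sha256(raw_bytes).digest()
        if sod_hashes.get(dg_number) != recomputed:
            return False
    return True

# Hypothetical usage: DG1 carries the machine-readable zone (MRZ),
# DG2 would carry the facial image.
dg1 = b"P<UTOERIKSSON<<ANNA<MARIA<<<<<<<<<<<<<<<<<<<"
sod = {1: hashlib.sha256(dg1).digest()}
print(verify_data_groups({1: dg1}, sod))  # True: hashes match
```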
Despite these advances, detection remains challenging. Deepfake models that have been specifically trained to circumvent certain control mechanisms can still slip through individual verification processes. That is why modern security concepts rely on multi-layered approaches that combine different technologies and identify anomalies on multiple levels.
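As a minimal sketch of what such a multi-layered approach can look like in code, the example below fuses several independent signals into a single decision. The signal names, weights and thresholds are invented purely for illustration; real systems calibrate them against observed fraud patterns.

```python
# Illustrative sketch of a multi-layered verification decision.
# All weights and thresholds are hypothetical and would need to be
# calibrated on real traffic; this only shows the combining logic.
from dataclasses import dataclass

@dataclass
class VerificationSignals:
    liveness_score: float   # 0..1, from liveness detection
    document_score: float   # 0..1, from document forensics / NFC check
    deepfake_score: float   # 0..1, probability the face is manipulated
    device_anomaly: float   # 0..1, from device fingerprinting

def decide(s: VerificationSignals) -> str:
    # A single strong red flag rejects outright, regardless of other layers.
    if s.deepfake_score > 0.9 or s.liveness_score < 0.1:
        return "reject"
    # Otherwise, a weighted combination of all layers drives the decision.
    risk = (0.35 * s.deepfake_score
            + 0.25 * (1 - s.liveness_score)
            + 0.25 * (1 - s.document_score)
            + 0.15 * s.device_anomaly)
    if risk < 0.2:
        return "accept"
    if risk < 0.5:
        return "manual_review"  # ambiguous cases go to a human check
    return "reject"

print(decide(VerificationSignals(0.95, 0.9, 0.05, 0.1)))  # accept
```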
Recommendations for companies
Companies that use KYC or digital identification processes should act proactively and combine multiple layers of protection. The following are particularly important:
- Continuous monitoring of regulatory developments
- Raising awareness among operational staff through fraud, risk and compliance teams
- Integration of modern biometric methods, including liveness detection
- Use of NFC-based methods to reliably verify documents
- Evaluation of new deepfake detector technologies and regular effectiveness tests by experts
- Cooperation with research institutions and technology providers
How PXL Vision addresses deepfakes
PXL Vision is committed to securing digital identity checks against deepfake-based attacks and is constantly developing its technologies. PXL Ident already uses existing mechanisms to detect manipulation attempts, such as when identity documents are not physically present or have been altered.
PXL Vision has also successfully completed a joint research project with the Swiss Idiap Research Institute. This collaboration focused on detecting synthetic faces in real-life identity verification scenarios, which is an area becoming increasingly important due to the use of generative AI.
The outcome of this project was a deepfake detector, which has been incorporated into PXL Ident as a key technology and has been accessible to all customers since the start of the year. This solution is designed to identify various forms of facial manipulation, including face swapping, facial reenactments, and completely synthetic faces that cannot be attributed to any real person.
The deepfake detector is technically based on advanced deep learning architectures and has been trained using an extensive and diverse dataset of real and manipulated facial images. It complements existing fraud prevention methods, helping to make digital identity checks more robust and resilient in the face of current threat scenarios.
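To make this more concrete, here is a generic, hedged sketch of how a binary face-manipulation classifier can be applied at inference time. It is not PXL Vision's actual detector: the ResNet-18 backbone, the preprocessing constants and the `score_face` helper are illustrative assumptions, and real weights would come from training on a dataset of genuine and manipulated faces like the one described above.

```python
# Illustrative sketch only: a generic binary deepfake classifier at
# inference time. The actual detector's architecture and training data
# are not public; a standard ResNet-18 is used here purely as an example.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=None)          # hypothetical: weights would come from training
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. manipulated
model.eval()

def score_face(path: str) -> float:
    """Return the model's probability that the face image is manipulated."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()                  # index 1 = "manipulated" class
```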
What deepfakes mean for digital business models
Deepfake fraud poses a threat to the security and stability of digital business models at several levels. Companies must prepare for increasingly sophisticated and difficult-to-detect attacks.
The consequences are not only financial damage and additional operational costs, but also non-compliance with regulatory requirements. Inadequate verification mechanisms can quickly lead to compliance violations, such as fake identities slipping through KYC processes unnoticed or electronic signatures being accepted without sufficient authenticity checks.
Therefore, robust security mechanisms that combine biometrics, liveness detection, NFC validation, document-based forensics and targeted deepfake detection technology are becoming a central component of a resilient, future-proof process landscape. Companies should invest in these technologies with foresight: doing so strengthens not only the security of their systems but also the trust of their customers.
FAQ about deepfakes
Which deepfakes are particularly dangerous?
Deepfake videos with manipulated faces and voice deepfakes are particularly critical, as they give the impression that a real person is actually present. Equally dangerous are AI-generated documents (e.g. ID cards) that can fool standard document checks.
Can deepfakes be recognised with the naked eye?
Not necessarily. Many modern deepfakes are produced by advanced AI whose artefacts are barely visible to the human eye. Some of the best detection models identify manipulations reliably, but their practical value depends on image quality, data availability and testing conditions. That is why we also offer manual verification of unclear cases with PXL Check. Hybrid verification is often still necessary at present.
Is complete protection against deepfakes possible?
‘Complete protection’ is currently illusory. However, the risk can be significantly reduced through a combination of technological measures (anti-spoofing, document verification, etc.), organisational risk management, manual case-by-case verification and regular awareness-raising.
How can companies protect themselves?
Companies should combine modern biometric methods and document checks, evaluate detector technologies, raise awareness among their fraud and compliance teams, and keep an eye on regulatory developments.
Why should companies invest in deepfake detection now?
Because deepfake fraud is growing in financial, technological and regulatory terms. Early investment protects against fraud, strengthens customer confidence and ensures compliance. It acts as insurance for the future of digital identity verification.