Synthetic identity fraud: The dark side of generative AI
Generative AI is changing our world, but it also poses dangers. One such challenge is synthetic identity fraud. What lies behind it, why is it a real threat, and how can companies protect themselves?
Generative AI – hype and risk
Generative AI is all the rage right now. Hardly any industry remains untouched by the new technology, whether it's marketing, healthcare or finance. The hope is for more efficient processes, greater creativity and new freedom. However, there are two sides to every coin. While AI offers companies enormous potential for efficiency gains, it also enables new criminal activities, such as synthetic identity fraud. Here, the theft of personal data and the creation of sophisticated fakes go hand in hand.
In this article we explain:
- What synthetic identity fraud is and how it works
- How AI-powered fraud schemes are hurting businesses and how widespread they really are
- What this has to do with current plans of the European Union and Switzerland
- What PXL Vision is doing to protect the security of electronic identification procedures and
- What role modern security procedures such as NFC and liveness checks play in this.
What is synthetic identity fraud?
Identity fraud - the basics from a company perspective
At first glance, classic and synthetic identity theft have a lot in common. Both involve accessing relevant information, such as bank account or credit card details, and using it to carry out unauthorised transactions, such as purchases or transfers. The consequences for victims are also similar: in addition to direct financial losses, these crimes often negatively impact their creditworthiness and reputation.
Synthetic identity fraud – the methods
Unlike traditional identity theft, which involves stealing the identities of real people, synthetic identity fraud involves creating a completely new identity from a mixture of stolen, manipulated and fictitious information. AI-supported technologies make this combination possible.
Cybercriminals create fake ID documents, mixing real and fictitious data (e.g. legitimate ID numbers with false names and addresses). Next, they use artificial intelligence to create synthetic facial images that perfectly match this fake data.
Synthetic identity fraud can be divided into two methods: manufactured and manipulated identities.
Manufactured synthetic identities
Manufactured identities are built from valid, often stolen, data records drawn from several people and merged into one. For example, criminals might combine the names and addresses of three different, real people into a single fake identity. Fraudsters sometimes also add invented details to these identities composed of real components, making them even harder to recognise as fake.
Manipulated synthetic identities
On the other hand, manipulated synthetic identities are created on the basis of a real identity. Perpetrators alter existing identity documents by manipulating the personally identifiable information on them.
These manipulated identities are typically used to conceal the perpetrator's own credit history: someone with a poor credit record can present a doctored identity as creditworthy and use it to obtain mortgages, loans or other credit services.
How synthetic identity fraud threatens know-your-customer systems
Synthetic identity fraud methods make it easier to deceive know-your-customer (KYC) systems. Fraud is becoming faster, easier and more scalable than ever before. This happens in two primary ways:
Declining safety level
Traditional identity fraud protection solutions are often insufficient for detecting and preventing synthetic identity fraud. It's a never-ending cycle: if criminals discover new ways to commit fraud, organisations must implement new protection mechanisms to deal with the new threat.
Declining detection rate
Unlike identity theft, where a specific person's identity is stolen, synthetic fraud often has no single, identifiable victim. As a result, such crimes are discovered and reported later, if at all. This allows perpetrators to remain active for longer and cause greater damage.
AI-supported fraud scams on the rise
Such AI-supported fraud attempts are no longer a theoretical concern; they have long been a reality, as initial surveys on the subject confirm.
According to a recent Signicat survey of more than 1,000 fraud experts from the financial sector, 76 percent of respondents see fraud as a bigger problem today than three years ago. 66 percent consider AI-supported identity fraud to be particularly dangerous.
The forecasts on the subject of AI-supported fraud are also alarming: a study by Deloitte shows that generative AI could cause fraud losses of 40 billion US dollars in the USA alone by 2027.
Together, these insights paint a clear picture: advances in AI will make it easier than ever for criminals to create highly realistic images and videos, contributing to a new wave of digital fraud.
eID: A new standard with risks
This new threat situation poses a problem because identification processes are increasingly moving online. The European Union has paved the way for the European digital identity (eID) through Regulation (EU) 2024/1183. In the future, citizens will be able to use EUDI wallets.
Switzerland is also planning its own wallet solution, SWIYU*. However, the transition to digital identity creates new opportunities for attack, as AI-supported fraud scams are likely to become more sophisticated in future.
* In the coined word SWIYU, the syllable SWI stands for Switzerland, the I for "I", identity and innovation, and the syllable YU for you and unity.
PXL Vision: Innovative solutions against AI fraud
As experts in identity verification, we at PXL Vision take the threat of synthetic identities and deepfakes very seriously. To prevent fraud, we use advanced technologies such as Near Field Communication (NFC) and liveness checks.
But what do these two terms mean? We explain:
NFC
Near Field Communication (NFC) is an international standard for wireless data transfer over short distances. It builds on radio-frequency identification (RFID) technology, which uses electromagnetic induction to transmit data. NFC-based verification therefore makes it possible to read the encrypted data stored on the RFID chip embedded in an ID card or travel document. Users can retrieve the biometric data from the document with their smartphone and validate the document's authenticity.
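To give a flavour of how chip access is protected: before a smartphone may read a passport chip, it must prove knowledge of fields printed in the machine-readable zone (MRZ), each of which is secured by the check-digit algorithm from ICAO Doc 9303. A minimal sketch in Python (function names are our own, not part of any NFC library):

```python
# ICAO Doc 9303 check digit: each MRZ field (document number, date of
# birth, date of expiry) carries a weighted checksum. These fields also
# feed the key derivation for Basic Access Control on the NFC chip.

def mrz_char_value(c: str) -> int:
    """Map an MRZ character to its numeric value ('<' is a filler)."""
    if c.isdigit():
        return int(c)
    if c == "<":
        return 0
    return ord(c) - ord("A") + 10  # A=10 ... Z=35

def mrz_check_digit(field: str) -> int:
    """Weighted sum with repeating weights 7, 3, 1, taken modulo 10."""
    weights = (7, 3, 1)
    total = sum(mrz_char_value(c) * weights[i % 3]
                for i, c in enumerate(field))
    return total % 10

# Fields from the ICAO specimen passport MRZ:
print(mrz_check_digit("L898902C3"))  # document number -> 6
print(mrz_check_digit("740812"))     # date of birth   -> 2
print(mrz_check_digit("120415"))     # date of expiry  -> 9
```

A reader that computes a mismatching check digit knows the MRZ was mistyped or tampered with before it ever talks to the chip.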
Liveness Checks
Liveness checks use liveness detection to verify that a real, live person is present in front of the camera. This prevents someone from holding up a printed photo of another person's face, for example. The check captures the person's movements or a series of selfies and determines in real time whether it is a genuine person or an attempt at deception.
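One intuition behind passive liveness detection can be sketched in a few lines: a printed photo held in front of the camera produces almost no frame-to-frame variation, while a live face shows small natural movements. This is a hypothetical toy heuristic only; production systems combine many stronger signals (texture, depth, reflections, challenge-response):

```python
# Toy liveness signal: measure motion between consecutive grayscale
# frames. Near-zero motion across a whole capture suggests a static
# photo rather than a live person. Illustrative sketch, not a product.

def mean_abs_diff(frame_a, frame_b):
    """Average absolute pixel difference between two grayscale frames
    (each frame is a list of rows of integer pixel values)."""
    total = sum(abs(a - b)
                for row_a, row_b in zip(frame_a, frame_b)
                for a, b in zip(row_a, row_b))
    pixels = sum(len(row) for row in frame_a)
    return total / pixels

def looks_live(frames, motion_threshold=2.0):
    """Flag a capture as live if consecutive frames differ enough."""
    diffs = [mean_abs_diff(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs) >= motion_threshold

static = [[[100] * 4 for _ in range(4)] for _ in range(3)]       # photo
moving = [[[100 + 5 * i] * 4 for _ in range(4)] for i in range(3)]  # live
print(looks_live(static))  # False
print(looks_live(moving))  # True
```

Attackers can of course wiggle a photo, which is exactly why real checks layer several independent signals rather than relying on motion alone.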
Another key component is video injection detection, which identifies videos that have been manipulated or generated artificially for the purpose of deception. This involves analysing metadata, motion patterns and digital artefacts that may indicate tampering. We also verify the authenticity of documents by examining their security features and comparing them with official standards, and we monitor confirmed suspicious activities and profiles. These proactive measures stop fraud attempts from being repeated and scaled up. Together, they ensure a high level of security and strengthen trust in digital identity verification.
Since February 2024, PXL Vision has been collaborating with the Idiap Research Institute on an innovative solution for deepfake detection. The project is supported by the Swiss Innovation Agency, Innosuisse. PXL Vision aims to bring the world's first robust, AI-based solution for detecting manipulated facial images and travel documents to market by the end of 2025. The goal is to significantly enhance the security of digital identity verification processes.
Staying alert in the digital era
Although generative AI offers companies immense opportunities, the risks, such as synthetic identity fraud, should not be underestimated. Given the ongoing digital transformation and the introduction of eID solutions in Europe and Switzerland, robust security solutions are more important than ever.
PXL Vision demonstrates how innovative technologies can combat criminal use of AI and help to make the digital world a safer place.