04.08.2025 | Security

How to Detect Deepfakes: A Guide for Financial Institutions

By The Fourthline Team

Once upon a time, creating a convincing fake document required specialised skills, expensive equipment, and considerable time. Recently, however, advances in AI have largely stripped away those barriers. Today, anyone with a laptop and a bit of savvy can generate realistic-looking identity documents, selfies, and videos in minutes, using freely available AI tools like Midjourney, D-ID, or FaceSwap. 

These artefacts belong to a new class of synthetic media known as "deepfakes." Deepfakes are created using artificial intelligence — typically deep learning — that manipulates or generates audio, video, or images to convincingly mimic real people or create entirely fictional personas. The term combines "deep learning" and "fake," reflecting the use of neural networks to produce content that appears authentic but is entirely fabricated.

For financial institutions, deepfakes pose a serious problem: traditional verification methods weren't designed to catch documents that never existed in the physical world. The question now isn't whether your organisation will encounter deepfake fraud — it's whether you'll be able to detect it when it inevitably arrives.

Luckily, while AI stands to cause real harm, it can also be part of a multi-pronged detection solution. Here, we dive into the very real threat of deepfakes and what tools can detect them to prevent fraud.

Deepfakes have changed document verification forever  

Traditional document verification looked for signs of physical tampering, such as altered text, missing security features, or poor-quality reproductions. This generally worked well when fraudsters had no other choice but to manipulate real documents. But with the rise of AI-generated forgeries, the focus has shifted.   

As Konstantinos Levantis, Data Scientist at Fourthline, explains: “Instead of asking, ‘Can we see the holograms and security features as expected?’, we’re increasingly saying, ‘Sure, the holograms are there and everything looks fine — but does this document even exist in the real world, or has it been generated entirely by AI?’ It marks a real shift in how we detect fraud.”

When entire documents are digitally fabricated, polished attributes that check the traditional boxes are no longer a sure sign of authenticity. The challenge becomes less about spotting physical alterations and more about assessing whether the document has any real-world origin at all.

Why deepfake detection matters now  

The democratisation of AI tools has made sophisticated fraud accessible to almost anyone with an internet connection. Where creating convincing fake documents once required deep technical expertise, the barrier to entry is now decidedly lower.

"A lot of what used to be hard to create from scratch is now becoming easily possible with the emerging technologies around GenAI," says Levantis. "Being able to detect such frauds is quickly becoming essential."  

This creates several new and pressing challenges:  

  • Volume: AI generates fraudulent documents faster than manual review can process them 

  • Quality: AI-generated documents often look more convincing than traditional forgeries 

  • Evolution: Deepfake tools improve rapidly, outpacing static detection methods 

  • Uniqueness: Fraudsters can create synthetic ID documents that don't exist in any reference database 

The sophistication extends beyond documents to voice and visual impersonation. "I have seen, worryingly, an uptick in the quality of computer-generated personas that basically track the motion of a person but then display a different avatar," explains Luigi Pardey, Security Engineer at Fourthline. "These personas are based on multiple different elements and do not need to represent a real person."  

When it comes to detecting these deepfakes, there are several methods that may prove helpful. These methods are not mutually exclusive but are often used together as part of a robust identity verification solution.

Methods for detecting deepfakes 

Analysing facial similarity patterns 

Counter-intuitively, one of the strongest indicators of deepfake fraud is when faces look too similar rather than too different. 

"Deep-fake documents are often created so that the person in the document photo looks very similar to the person in the selfie. Sometimes too similar," explains Levantis. "Along with patterns in other data points, this can lead to the discovery of fraud."  

Traditional identity theft typically shows natural variations between document photos and selfies — different lighting, angles, ages, and so on. But deepfakes often generate document photos specifically to match submitted selfies, creating unnaturally high similarity scores. In technical terms, abnormally high cosine similarity scores between facial embeddings can indicate synthetic pairings designed to match too perfectly. 

An effective similarity analysis examines the following: 

  • Threshold monitoring: Flagging suspiciously high similarity scores 

  • Pattern recognition: Using machine learning to identify face similarity patterns 

  • Contextual evaluation: Comparing similarity scores with other verification factors 

This process requires sophisticated algorithms that distinguish between legitimate high similarity (recent photos, family resemblance) and artificial similarity created by deepfake technology.   
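As a rough illustration of the threshold-monitoring idea, the sketch below scores a document-photo/selfie pair by the cosine similarity of their face embeddings. The embedding vectors, threshold values, and verdict labels are illustrative assumptions for this example, not Fourthline's actual pipeline.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def assess_pair(doc_embedding, selfie_embedding,
                match_threshold=0.6, too_similar_threshold=0.95):
    """Classify a document-photo / selfie pair by embedding similarity.

    Thresholds are illustrative: below match_threshold the faces do not
    match; above too_similar_threshold the match is suspiciously perfect,
    which, combined with other signals, may indicate a synthetic pairing.
    """
    score = cosine_similarity(doc_embedding, selfie_embedding)
    if score < match_threshold:
        return score, "no_match"
    if score > too_similar_threshold:
        return score, "review_possible_deepfake"
    return score, "match"
```

In practice the embeddings would come from a face-recognition model and the thresholds would be calibrated on real verification data; the key point is that a near-perfect score is routed to review rather than treated as a strong pass.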

Fourthline's document and biometric verification, for example, uses proprietary Face Match algorithms to match selfies to ID photos, as well as active liveness detection and photo-of-a-photo detection to ensure that users are who they claim to be.

Multi-modal AI analysis 

The most effective approaches to deepfake detection combine multiple AI techniques to analyse different aspects of a submission simultaneously.  

“One of the most powerful tools we’ve worked with as researchers is an experimental multimodal AI system,” says Levantis. “It combines at least two types of inputs and helps to uncover fraud cases we might have previously missed.”  

By comparing signals across different modes (for example, checking whether the visual appearance of a document aligns with the written content), these systems can detect subtle inconsistencies that wouldn’t be visible when analysing each input separately. 

Multi-modal analysis typically examines:  

  • Visual elements: Computer vision analysis of documents and selfies for signs of AI generation 

  • Text consistency: Natural language processing to verify text authenticity 

  • Cross-reference patterns: Identifying relationships between data points that suggest something artificial 

  • Behavioural anomalies: Detecting unusual interaction patterns with verification systems 
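One simple way such cross-modal signals might be combined is a weighted fusion of per-modality risk scores. The modality names, weights, and thresholds below are illustrative assumptions, not a description of any production system.

```python
def fuse_modality_scores(scores, weights=None, review_threshold=0.5,
                         single_modality_alert=0.9):
    """Fuse per-modality risk scores (0.0 = clean, 1.0 = certain fraud)
    into a single verdict.

    The weighted average catches broad inconsistency across channels,
    while a very high score in any single modality also triggers review,
    since deepfakes often betray themselves in just one channel.
    """
    weights = weights or {name: 1.0 for name in scores}
    total = sum(weights[name] for name in scores)
    fused = sum(scores[name] * weights[name] for name in scores) / total
    needs_review = (fused >= review_threshold
                    or max(scores.values()) >= single_modality_alert)
    return fused, needs_review
```

For example, a submission that looks clean visually but whose text-consistency score spikes would still be routed to review, mirroring the point that inconsistencies invisible in any one channel become detectable when channels are compared.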

While all these tools are useful, it’s important to note that AI-based detectors can degrade over time, especially as new deepfake techniques emerge.   

Many systems also struggle to reliably detect deepfakes created by models they haven’t encountered before. This means that ongoing retraining and performance evaluation are essential.
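One way to operationalise that ongoing evaluation is a rolling accuracy monitor over cases whose true outcome is later confirmed by review. The window size, alert threshold, and minimum sample count below are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Track a detector's rolling accuracy on confirmed outcomes and
    signal degradation, which may indicate novel deepfake techniques
    the model has not encountered before."""

    def __init__(self, window=500, alert_below=0.90, min_samples=50):
        self.outcomes = deque(maxlen=window)  # True = prediction was correct
        self.alert_below = alert_below
        self.min_samples = min_samples

    def record(self, predicted_fraud, actually_fraud):
        self.outcomes.append(predicted_fraud == actually_fraud)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_retraining(self):
        # Only alert once enough confirmed outcomes have accumulated.
        return (len(self.outcomes) >= self.min_samples
                and self.accuracy() < self.alert_below)
```

A sustained dip in rolling accuracy would then trigger model retraining or a manual investigation, rather than letting a silently degrading detector run unchecked.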

Using AI to catch AI  

"Whereas fraudsters are new to AI, fraud prevention is not," observes Levantis. "Indeed, machine learning has proven to be a very powerful tool in the toolbox of fraud prevention, even when the challenge was to catch traditional frauds. Now, it is also being used to fight fire with fire."

AI-powered detection offers several advantages:  

  • Processing speed: Analysing thousands of cases per minute 

  • Pattern recognition: Identifying subtle indicators humans miss 

  • Continuous improvement: Learning from new deepfake techniques automatically 

  • Scalability: Handling verification volumes that wouldn't be possible with manual review 

This approach requires ongoing investment and expertise. Organisations must continuously update their AI models to keep pace with evolving deepfake technology, or work with a partner like Fourthline, who can help them remain resilient to emerging attack methods.

The staying power of traditional techniques 

While AI represents the cutting edge, traditional fraud-prevention methods may still be valuable components of a comprehensive detection strategy.   

"Even traditional computer-vision approaches (i.e. no AI) can be useful," explains Levantis. "AI is not the answer to everything... understanding the problem is the most important piece of the puzzle. Because then, it is sometimes possible to tackle modern problems with old approaches."

Here are a few of the more traditional methods for detecting deepfakes:  

  • Pixel analysis: Examining compression artefacts and manipulation indicators 

  • Geometric verification: Checking facial proportions and spatial relationships 

  • Lighting consistency: Detecting inconsistent lighting patterns 

  • Texture analysis: Identifying unnatural skin or hair textures 

These techniques may work well in a complementary role to AI-powered analysis, providing multiple verification layers that increase overall detection accuracy.
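As a toy version of the texture-analysis idea, the sketch below measures average local variance over small patches of a grayscale image: GAN-generated faces sometimes exhibit unnaturally smooth, low-variance texture. The patch size and threshold are illustrative assumptions, and a real system would calibrate them on labelled data.

```python
def local_variances(gray, patch=8):
    """Variance of pixel intensities in non-overlapping patch x patch
    blocks of a grayscale image (given as a list of rows of numbers)."""
    height = len(gray) - len(gray) % patch
    width = len(gray[0]) - len(gray[0]) % patch
    variances = []
    for top in range(0, height, patch):
        for left in range(0, width, patch):
            block = [gray[y][x]
                     for y in range(top, top + patch)
                     for x in range(left, left + patch)]
            mean = sum(block) / len(block)
            variances.append(sum((p - mean) ** 2 for p in block) / len(block))
    return variances

def looks_too_smooth(gray, threshold=5.0, patch=8):
    """Flag images whose average local texture variance is suspiciously
    low. The threshold here is illustrative, not a production value."""
    variances = local_variances(gray, patch)
    return sum(variances) / len(variances) < threshold
```

On its own such a check is weak, which is exactly why the article positions traditional methods as one layer among several rather than a standalone defence.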

The challenge of ever-improving deepfake quality 

The sophistication of AI-generated content continues to advance rapidly across all media.

When it comes to text-based fraud, Pardey notes that even the quality of writing has improved, presenting a more potent risk for even the wariest among us: “This is a risk: people were still falling for this type of fraud before, when it was badly written. Now, when the writing is more concise or better formatted, it is more likely that somebody may think that it is an actual person behind the keyboard."  

The same principle applies to visual and audio deepfakes. As quality improves, detection becomes more challenging, underscoring the need for more thorough processes.

How Fourthline helps financial institutions fight deepfakes 

With so much on the line, many organisations find that partnering with specialised identity verification providers offers better results than building detection capabilities internally.   

Fourthline's platform integrates deepfake detection with traditional verification methods, providing comprehensive protection without requiring separate systems or specialised expertise. This approach allows organisations to benefit from continuous improvements in detection technology whilst focusing resources on their core business activities.  

Deepfake technology continues evolving rapidly. Organisations that invest in adaptive, multi-layered deepfake detection capabilities will be best positioned to protect against this threat whilst maintaining efficient customer verification processes.  

Want to learn more about how Fourthline can help? Talk to one of our experts today.

Deepfake detection FAQs 

How accurate are current deepfake detection systems? 

Detection accuracy varies based on both the deepfake’s sophistication and the quality of the detection system. Leading platforms achieve high accuracy against current techniques, but this remains an ongoing technological competition. The most effective approach combines multiple detection methods, rather than relying on any single technique. 

Can deepfakes fool all verification systems?  

Sophisticated deepfakes can potentially fool basic verification systems, which is why advanced detection requires multi-modal analysis. Modern systems analyse static features as well as dynamic behaviours and contextual factors that are difficult for AI to replicate convincingly. 

This article incorporates insights from Luigi Pardey, a Security Engineer at Fourthline, and Konstantinos Levantis, a Data Scientist at Fourthline. It is for informational purposes only and does not constitute legal advice.