16.06.2025 · Security

What Type of Social Engineering Attack Attempts to Exploit Biometrics?

By Luigi Pardey - Security Engineer

Biometric authentication systems — from fingerprint readers on smartphones to facial recognition in corporate offices — have become deeply embedded in our digital lives. Because these systems use unique biological traits as the basis for authentication, they’ve grown to be a popular alternative to traditional passwords and PINs. But as these systems become more widespread, they’ve created interesting new challenges for cybersecurity professionals. 

This does not mean biometrics are inherently vulnerable; they are not. The problem is how attackers manipulate human trust around biometric systems, using evolving tools like deepfakes and cloned voices. As a security engineer with nine years of experience, I have noticed growing confusion about where the vulnerabilities really lie. Some social engineering attacks appear to exploit biometric authentication, but they are actually classic cases of impersonation — no biometrics needed.

In this article, we will look at some places where social engineering, biometric authentication systems, and emerging tools like deepfakes intersect. We will break down the common misconceptions, examine how attackers target these systems (or appear to), and look at what organisations can do to keep pace. 

The relationship between social engineering and biometrics  

It is essential to clarify some basic concepts when talking about impersonation attacks. Terms like biometrics, deepfakes, and social engineering are often used imprecisely in popular discussions, despite meaning very different things in security practice. 

A glossary of basic concepts  

First, we need to define biometrics and related concepts in the context of authentication:  

  • Biometrics: These are your unique physical or behavioural characteristics — your face shape, facial expressions, iris pattern, fingerprints, voice patterns, etc. — measured in a reliable, repeatable way. You can use the terms “biometrics” and “biometric data” interchangeably when talking about computer systems, though the latter implies a digital representation (like a stored image, audio file, or data point extracted from a scan). 

  • Biometric authentication: This is the process of verifying someone’s identity by comparing newly presented biometric data (e.g., a live face scan) with data stored previously, during enrolment, in a biometric data set (see the sketch after this list). 

  • Biometric authentication system: This is the system that captures biometric data at the point of authentication and checks it against the stored record. When you log into your phone using facial recognition or a fingerprint, you are using a biometric authentication system (e.g. Apple Face ID). 
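
To make the matching step concrete, here is a minimal Python sketch of how an authentication system might compare a freshly captured scan against enrolled data. The embedding representation, the cosine-similarity measure, and the threshold value are illustrative assumptions, not any particular vendor's implementation.

    import numpy as np

    # Illustrative threshold: real systems tune this against measured
    # false-accept and false-reject rates for their sensor and model.
    MATCH_THRESHOLD = 0.85

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """Similarity between two biometric feature vectors (embeddings)."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def authenticate(live_scan: np.ndarray, enrolled_template: np.ndarray) -> bool:
        """Compare a freshly captured scan against the stored enrolment data.

        Unlike a password check, biometric matching is probabilistic: it
        produces a similarity score that is compared against a threshold.
        """
        return cosine_similarity(live_scan, enrolled_template) >= MATCH_THRESHOLD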

Now, let’s clarify what we mean by “social engineering” and tools that may be used in social engineering attacks:   

  • Social engineering: This is the act of exploiting someone's trust in another person to obtain access to their data, credentials, or systems. Some common examples include phishing and whaling.  

  • Deepfakes: These are computer-generated media that use legitimate data sets (e.g. audio, video, images) to create fake representations of a person. Though deepfakes may look and sound like the real person to a human observer, they do not typically incorporate biometric data. 

Why these distinctions matter  

With that foundation, let’s talk about the actual relationship between social engineering and biometrics. 

Biometric authentication is a technical defence — a way to restrict access based on who someone is, rather than what they know (like a password). When you unlock your phone with your face or fingerprint, you’re using biometrics to authenticate. Biometrics is sometimes known as a phishing-resistant authentication factor, because a scammer would have a difficult time stealing your iris through a phone call, for example. 

Social engineering, on the other hand, is an attack on human trust. It does not directly attack biometric systems; instead, it manipulates people in ways that might enable an attack on those systems — or bypass them entirely. And while these attacks may involve biometric data or even target systems that use biometrics, they do not typically exploit the biometrics themselves. 

As biometric systems have become more common, some attackers have adapted their social engineering techniques to operate around or through them. Sometimes, this means using social engineering to get biometric data from a person, which the attacker can then turn around and use to attack a biometric authentication system.   

Notice the complexity here: a social engineering attack is the first step, and it requires no biometric data to execute. But, when executed successfully, it may harvest the kind of biometric data that can then be used to exploit a biometric authentication system.  

Understanding how these layers fit together is essential to pinning down the actual threat — and building defences that address the right risks.

Impersonation attacks using deepfakes and synthetic media 

One of the more concerning trends in recent years is the use of synthetic media — particularly deepfakes — to carry out impersonation attacks. These don't necessarily exploit biometric authentication systems directly, but they do often exploit trust in someone's visual or auditory identity (which is increasingly mediated through screens). 

While deepfakes are sometimes framed as biometric attacks, this is not the case. Deepfakes are high-quality, computer-generated representations of a person, but they are not biometric data in the traditional sense. They may be designed to resemble a person's biometric characteristics in a way that fools other people, but they’re not precise measurements of biological features that can reliably fool biometric authentication systems. 

Deepfakes could be used in many ways to attack facial recognition systems directly, for example by presenting a synthetic video that mimics a target's face to the camera. This is a complex and challenging form of presentation attack. The common theme of such attacks is that they target the system rather than a human victim, so they form their own category and merit dedicated anti-spoofing protections, such as depth sensing and liveness detection. 

Social engineering attacks, however, are becoming more successful thanks to deepfakes – especially attacks involving video-based communication. Attackers can use synthetic avatars to impersonate trusted individuals, then manipulate targets into taking specific actions. 

Deepfakes and social engineering: A scenario 

Consider a scenario where an attacker wants to impersonate a high-profile manager, such as Fourthline’s CFO.  

The attacker first does background research on the CFO and collects a large amount of media and data. With this data, the attacker creates a deepfake avatar, complete with matching voice and facial features. The attacker then places a video call to an employee in the finance department.  

Claiming to be on vacation in the French Alps, and therefore to have limited access to corporate systems, the fake CFO (with the fake avatar and background to match) pressures the employee into approving an urgent invoice. The employee, seeing and hearing what looks like the actual CFO, and tired from a long day of work, goes ahead and makes the payment.   

Was this social engineering? Yes. Was it a biometrics attack? Well, not really. 

The essential thing to note is that no biometric authentication system is bypassed here. The attacker is not fooling a fingerprint scanner or facial recognition algorithm. They are fooling a real-life person. The deepfake does not even need to be all that convincing — maybe the employee has seen the CFO around the office a few times but can’t determine if a choppy video from the Alps is really him. 

So, while these attacks are sometimes framed as "biometric exploitation," it is more accurate to view them as identity deception using synthetic media — a tactic that lives at the intersection of AI, impersonation, and social engineering, rather than biometric authentication.

Voice cloning and social engineering 

One of the fastest-evolving tools in social engineering today is voice synthesis — the ability to generate convincing audio that mimics a real person's voice patterns. With just a short voice sample, modern tools can produce highly realistic speech that sounds like a specific individual. You might see, for example, a fake voice scam that convincingly impersonates a celebrity or politician.   

Synthetic voice clips may bypass weak voice authentication systems, depending on the countermeasures in place. More notably, however, they can be highly effective when targeting human trust. These deepfakes do draw on biometric data (harvested voice samples), but they are weaponised less against the system than against the people who trust the voice they hear, and that trust is exactly what social engineering exploits.  

A classic example of this type of attack is what security professionals sometimes call "vishing" (voice phishing), where attackers use voice synthesis to impersonate trusted individuals or organisations. The tactic is not so different from the scenario in the previous section: Imagine you get a phone call that sounds exactly like your CEO asking you to urgently authorise a wire transfer. This may be a strange request, but the familiarity of the voice can override your scepticism. Again, this isn't a hack of a biometric system — it is a trust-based social engineering attack that uses synthetic media. 

What once might have been an obviously suspicious call — from a call centre agent reading a script or an unfamiliar, robotic voice — can now sound like someone you know and trust. If a system can generate the voice of a specific person, it makes vishing much more dangerous, especially when combined with other social engineering tactics like urgency, authority, or contextual knowledge about the target.

Why biometric-based social engineering isn’t more common (yet) 

Despite growing concerns about deepfakes and voice cloning, these types of attacks are still relatively rare. Sophisticated biometrics-adjacent attacks still face significant barriers compared to good, old-fashioned tactics like mass phishing and smishing. 

Why? Because they are harder to pull off. 

Vishing, for example, is not as easy to perpetrate against a specific private individual. Put simply: with the technology available today, producing a high-quality synthetic voice requires clean, lengthy voice samples of the target, plus specialised hardware to generate convincing audio in a reasonable amount of time. That is feasible for public figures, but less so for an average employee at a mid-sized company.  

For many attackers, the cost-benefit calculation simply doesn't justify the investment in sophisticated biometric (or biometric-adjacent) exploitation. 

This is why SMS phishing, or "smishing," and other less-sophisticated social engineering attacks may not go away any time soon. It is still a lot easier and cheaper to send an SMS to a few hundred people than to impersonate the voice of one person. And if just one of those several hundred people clicks on the SMS link and enters their credentials, the cost-benefit calculation looks pretty good to the attacker.
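
To see the attacker's reasoning in rough numbers, consider the back-of-envelope comparison below. Every figure in it is a made-up placeholder chosen only to illustrate the expected-value logic, not a real price or success rate.

    # All numbers below are made-up placeholders, chosen only to show the
    # shape of the attacker's cost-benefit reasoning.
    sms_cost_per_message = 0.01     # assumed bulk SMS price
    sms_targets = 500
    sms_success_rate = 0.005        # assumed: 1 in 200 recipients enters credentials
    payoff_per_credential = 200.0   # assumed value of one stolen credential set

    deepfake_setup_cost = 5_000.0   # assumed: research, samples, compute, time
    deepfake_success_rate = 0.3     # assumed: targeted attacks land more often
    deepfake_payoff = 10_000.0      # assumed value of one fraudulent transfer

    smishing_ev = (sms_targets * sms_success_rate * payoff_per_credential
                   - sms_targets * sms_cost_per_message)    # 495.0
    deepfake_ev = (deepfake_success_rate * deepfake_payoff
                   - deepfake_setup_cost)                   # -2000.0

    print(f"Smishing expected value: {smishing_ev:.2f}")   # positive, cheap, scalable
    print(f"Deepfake expected value: {deepfake_ev:.2f}")   # negative under these assumptions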

Digital footprints and biometric vulnerability 

One factor that increases vulnerability to biometric-focused social engineering is an extensive digital footprint. As people share more content online, they inadvertently provide raw material that attackers can use to synthesise convincing biometric impersonations. 

Anyone who shares their likeness on social media or other platforms should be aware of the risks. It is technically possible for attackers to scrape one of these sites and get enough personal information to build a fake persona based on you. Social media sites may offer data privacy and other protections to reduce this risk, and it is always a good idea to configure these protections. But be aware that there is always a risk that the data can be lost, leaked, stolen or sold to others who could use it against you. 

This risk is particularly pronounced for individuals with a substantial public presence, including executives, celebrities, or extremely active social media users. Their widely available images, videos, and audio recordings can provide a sort of biometric data set that attackers may try to use to create deepfakes or synthetic voice models. 

Protecting against biometric social engineering 

Defending against these sophisticated attacks requires a multi-layered approach that acknowledges the evolving nature of the threat. Consider these protective measures: 

1. Establish verification protocols 

Create secondary verification methods for sensitive requests, especially those received through digital channels. For example, if you receive a video or voice call requesting sensitive information or an urgent payment, verify the caller's identity through a separate, pre-agreed channel, such as a callback to a number you already have on file. A minimal sketch of this idea follows. 
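
As a sketch of what such a protocol could look like in code, the following Python outline gates a sensitive action behind confirmation on a second, pre-registered channel. The callback directory, the confirmation helper, and the role names are hypothetical, for illustration only.

    def call_back_and_confirm(number: str, action: str) -> bool:
        """Placeholder for the out-of-band step: a person phones the
        pre-registered number and asks for verbal confirmation."""
        answer = input(f"Called {number} to confirm '{action}'. Confirmed? (y/n) ")
        return answer.strip().lower() == "y"

    # Hypothetical directory of callback numbers, agreed in advance and
    # stored separately from whatever channel a request arrives on.
    KNOWN_CALLBACK_NUMBERS = {
        "cfo": "+31 20 000 0000",  # placeholder, not a real number
    }

    def handle_sensitive_request(claimed_role: str, action: str) -> bool:
        """Never act on the inbound channel alone: hang up, call back on a
        number already on file, and require explicit confirmation."""
        callback = KNOWN_CALLBACK_NUMBERS.get(claimed_role.lower())
        if callback is None:
            return False  # no pre-registered channel: escalate, do not approve
        return call_back_and_confirm(callback, action)

    # Example: a "CFO" on a video call asks to approve an urgent invoice.
    # handle_sensitive_request("CFO", "approve urgent invoice payment")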

2. Limit your digital footprint 

Be mindful of how much biometric data you make publicly available. Consider limiting public videos, voice recordings, and high-resolution images that could be used to create synthetic versions of your biometric identifiers. 

3. Understand the limits of liveness detection 

Liveness detection can be good at detecting things like face masks, pictures of pictures, and replayed videos. One common method is challenge-response: asking the user to perform a specific action, like blinking or turning their head, which helps spot low-effort spoofing techniques (sketched below). But real-time deepfakes can pass some kinds of liveness checks, because they are able to respond to prompts dynamically. Detecting modern presentation attacks requires more advanced techniques, which sit more in the realm of AI engineering.
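
As an illustration of the challenge-response idea, here is a minimal Python sketch. The challenge pool, the detector stub, and the number of rounds are assumptions made for illustration; production liveness detection combines this kind of active check with passive signals and trained models.

    import random

    # Illustrative challenge pool; production systems draw on a larger set
    # and combine this with passive signals (texture, depth, reflections).
    CHALLENGES = ["blink", "turn your head left", "turn your head right", "smile"]

    def detect_action(expected_action: str) -> bool:
        """Placeholder for a computer-vision model that checks whether the
        user performed the requested action within a short time window."""
        raise NotImplementedError("plug in a real detector here")

    def liveness_check(rounds: int = 3) -> bool:
        """Challenge-response liveness: issue random prompts and require a
        correct, timely response to each one. This defeats static photos and
        replayed videos, but a real-time deepfake that can act on the prompt
        may still pass, which is why this check alone is not sufficient."""
        for _ in range(rounds):
            challenge = random.choice(CHALLENGES)
            print(f"Please {challenge} now")
            if not detect_action(challenge):
                return False  # wrong or missing response: treat as a spoof
        return True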

4. Maintain awareness of attackers’ methods  

In social engineering attacks, the attackers aren't using their own biometrics — they’re attempting to use stolen, fabricated, or synthetically generated representations of someone you already trust. Whether through deepfakes, voice cloning, or manipulated images, the goal is to exploit your familiarity with a real, legitimate identity. By staying aware of these techniques, you’ll be more likely to spot when something seems off.

5. Consider physical authentication alternatives 

In high-security contexts, use physical verification methods that can't be remotely spoofed. After all, an impersonator will have a hard time presenting a spoofed finger to an air-gapped fingerprint reader halfway across the world.

The future of biometrics in social engineering 

As biometric authentication becomes more prevalent, the social engineering techniques used to undermine and bypass it are evolving, too. The most concerning attack vectors involve synthetic impersonation — artificially generated faces, voices, and personas designed to exploit both humans and automated systems. 

While these sophisticated attacks remain relatively rare, organisations and individuals should remain vigilant, especially as synthetic media technology becomes more accessible. For now, the costs and technical barriers remain high, but that may not be the case for long.

The fundamental principle of social engineering remains unchanged: attackers exploit trust, not just technology. 

Luigi Pardey is a Security Engineer at Fourthline.

This article is for informational purposes only and does not constitute legal advice.