Kseniia Iankina is a Product Manager at Fourthline, focusing on the company's biometrics products. Since joining in October 2024, she has overseen the development and enhancement of liveness detection technologies that help verify users’ identities while combating sophisticated fraud attempts. In this conversation, she explains how Fourthline’s Random Liveness product represents a significant advancement in fraud prevention and discusses the evolving landscape of deepfake threats.
Can you start by describing your role at Fourthline and what products you work on?
I focus on the biometrics module. That's basically everything related to checking users’ liveness, taking selfie pictures, and helping our AI determine whether we're dealing with an authentic person, rather than a spoof or fraudster.
When we look at Fourthline's biometrics offerings, there are several different components. Can you walk us through what these are and how they work together?
Biometrics is the overarching product that can be embedded as part of our KYC solution or authentication solution. Within that product, we have different features or modules.
First, we have the Selfie Photo module, where we take a photo of the user and validate that this is an authentic person. But business partners often want more than just photos — they want active authentication to really check whether the person is live. That's where Selfie Liveness comes in. We ask users to hold their phone and make simple movements; then, we analyse the video using AI to detect things like masks and determine if it’s really a living person.
But here's the challenge: Selfie Liveness is very straightforward. If you're a fraudster doing this repeatedly, you can potentially simulate it — or inject a pre-recorded video of someone performing those predictable movements.
That's where Random Liveness comes in, right?
Exactly. Random Liveness is a substitute for the default liveness check. Instead of asking users to make the same predictable movements, we present them with an eight-point circle. The system randomly selects different angles and asks users to look at specific points — maybe the top right corner, then bottom left — in completely random sequences.
This makes it much harder for fraudsters to game the system, because they never know which angles will appear or in what order. As a business partner, you can configure this to catch more sophisticated fraud attempts.
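The mechanism described above can be sketched in a few lines. This is an illustrative approximation only, not Fourthline's actual implementation: the point names, the number of prompts, and the no-immediate-repeat rule are all assumptions made for the example.

```python
import secrets

# The eight points of the on-screen circle (names are hypothetical).
POINTS = [
    "top", "top-right", "right", "bottom-right",
    "bottom", "bottom-left", "left", "top-left",
]

def generate_challenge(num_prompts: int = 4) -> list[str]:
    """Pick a random sequence of gaze targets, avoiding immediate repeats.

    Uses a cryptographically secure RNG so the sequence cannot be
    predicted from earlier sessions.
    """
    sequence: list[str] = []
    while len(sequence) < num_prompts:
        point = secrets.choice(POINTS)
        if not sequence or point != sequence[-1]:
            sequence.append(point)
    return sequence

challenge = generate_challenge()
print(challenge)  # e.g. ['top-right', 'bottom-left', 'left', 'top']
```

Because the sequence is drawn fresh per session, a pre-recorded video of someone performing one particular sequence is useless against the next session's challenge.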
When you’re developing a product like Random Liveness, how do you balance the trade-off between stronger fraud detection and maintaining good conversion rates?
The rationale for each partner is all about risk appetite. If you're a bank with strict compliance regulations and you value having a very low False Acceptance Rate (FAR), you're willing to accept some impact on conversion to significantly reduce fraud risk. It's always about finding the sweet spot between FAR and False Rejection Rate (FRR) based on your industry and risk tolerance.
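For readers unfamiliar with the two metrics, they fall out of simple counts over verification attempts. The numbers below are invented for illustration; they are not Fourthline figures.

```python
def far(false_accepts: int, impostor_attempts: int) -> float:
    """False Acceptance Rate: fraction of fraudulent attempts accepted."""
    return false_accepts / impostor_attempts

def frr(false_rejects: int, genuine_attempts: int) -> float:
    """False Rejection Rate: fraction of genuine users rejected."""
    return false_rejects / genuine_attempts

# A stricter decision threshold pushes FAR down and FRR up:
print(far(2, 10_000))    # 0.0002 -> very few fraudsters slip through
print(frr(150, 10_000))  # 0.015  -> but 1.5% of real users get rejected
```

Tightening the system to catch more fraud (lower FAR) inevitably rejects more legitimate users (higher FRR), which is exactly the conversion impact a risk-tolerant partner may refuse and a compliance-heavy bank may accept.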
With deepfakes becoming more sophisticated, we see Random Liveness as one additional layer of prevention that could help us catch more fraud. Of course, we need more real-world data to validate our assumptions, but we believe it's a step forward in fraud detection.
You mentioned the growing threat of deepfakes. How significant is this challenge, and how difficult is it for fraudsters to actually inject fraudulent videos into an onboarding flow?
I was at the Identity Week conference a few weeks ago, and deepfakes were the main focus. Industry leaders, including our biggest competitors, were all discussing this. The consensus was clear: just analysing a photo definitely won't save you from accepting deepfakes. Even basic liveness checks — the kind everyone uses, where you turn your head side to side — are becoming easier to hack.
With that said, to successfully inject a video, you likely need to be a professional fraudster. It's not something a beginner could easily do. But people who do this professionally will invest the effort to figure it out. We're seeing more sophisticated attempts in our production environment — deepfakes of faces and even deepfake documents.
The concerning trend is that account takeover fraud has increased significantly, coinciding with the rise of deepfake technology. When someone uses a deepfake to pretend they're you and gains access to your account, the financial damage can be devastating.
How does Random Liveness specifically address these deepfake threats?
It essentially removes an attack vector for fraudsters and closes a vulnerability that would otherwise exist in simpler systems. The randomisation makes it exponentially harder to create pre-recorded content that would work. Fraudsters would need to somehow anticipate the exact sequence of movements, which is virtually impossible since the system generates these randomly.
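A back-of-envelope calculation shows why the randomisation helps, assuming (for illustration only) eight equally likely points per prompt and independently drawn prompts:

```python
# Odds that a pre-recorded video happens to match a randomly generated
# challenge. Eight points per prompt, independent draws; the real
# system's parameters may differ.

points = 8
for prompts in (2, 3, 4, 5):
    combinations = points ** prompts
    print(f"{prompts} prompts -> 1 in {combinations:,} sequences")
```

Even a short challenge of four prompts yields 4,096 possible sequences, so a fraudster replaying one fixed recording has well under a 0.03% chance of a lucky match, and each failed attempt can be flagged.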
We're also exploring additional layers of protection. For example, we're now getting depth data from devices like iPhones, which helps us determine if a video was injected or if we're actually seeing a three-dimensional person. When you combine depth data with Random Liveness, it becomes quite powerful.
Can you walk us through what the user experience looks like with Random Liveness?
In a typical KYC flow, users first complete the document section by taking photos of the front and back of their ID. Once that's done, they move to the face verification section. They're asked to position their face within a circle on the screen.
With Random Liveness, for example, they might see a light move to a specific part of the circle — maybe the top right corner. Instructions tell them to look exactly where that light stopped, so they turn their head in that direction. Then, the light moves to a different random position, and they do it again. They complete this process a few times, and then the flow is finished.
It's intuitive enough that most users can follow along, but random enough that it's nearly impossible to game.
Do you see this technology primarily for financial services, or are there broader applications?
I wouldn't restrict it to just financial services. Sure, we do have highly compliance-focused industries in mind, but there are other use cases. For example, in gaming, you might want to prevent fraudsters from creating multiple accounts to exploit new user benefits or manipulate systems.
With that said, I do think about the user experience challenges. At a recent business review, one of our partners mentioned that users over 55 often drop off because of the digital complexity — they struggle with knowing where to click or what's expected of them. This makes me think about whether we can eventually develop more passive authentication methods that maintain high accuracy without requiring as much user interaction.
That's an interesting point. How do accessibility considerations factor into biometrics technology?
This is honestly quite challenging, and it was actually one of the main issues I wanted to tackle when I joined Fourthline. If someone can't lift their hand or has mobility restrictions, they won't be able to complete liveness modules or even take photos properly. Right now, people with accessibility issues often have to resort to manual processes, like visiting a bank in person.
We need to think about providing alternative authentication methods. Maybe that's where our Selfie Audio product could play a bigger role, along with other solutions that accommodate different disabilities.
How do you see liveness detection technology evolving over the next five to ten years?
From what I've heard at industry conferences, I think we'll see more complicated instructions and more sophisticated detection methods. But the real challenge is maintaining that balance between keeping the user experience simple and conversion rates high, while also maintaining accuracy and catch rates.
The ideal would be finding ways to detect deepfakes and fraud through passive authentication methods that don't require users to perform complex actions. However, I haven't heard a definitive strategy from anyone in the industry in terms of how to achieve this. I'm actually meeting with our AI team next week to review some new research papers on deepfake detection, so hopefully we'll discover some interesting developments.
Finally, what's your overall assessment of where Random Liveness fits in Fourthline's fraud prevention strategy?
I see it as an important layer in our multi-layered approach to fraud prevention. We're not just relying on one technology — we're combining liveness detection with depth data, document analysis, behavioural analytics, and other signals. Random Liveness makes it significantly harder for fraudsters to succeed with injected videos or deepfakes, especially when combined with our other detection capabilities.
The key is that we're staying ahead of the curve. While it's quite difficult to inject fraudulent videos today, professional fraudsters will eventually figure out ways around simpler liveness checks. Random Liveness gives us that extra security layer that makes our clients' systems much more resilient against sophisticated fraud attempts.
Kseniia Iankina is a Product Manager at Fourthline, where she focuses on biometric technologies. Her work involves balancing security, user experience, and accessibility to create robust identity verification solutions for clients across multiple industries.