AI has arguably been the biggest trend in tech in 2023 and is attracting a range of often
dramatic commentary. On one hand, many industry players believe it will usher in decades of innovation and be as transformational as the internet itself. On the other hand, a number of observers are uneasy about the technology and its potential for negative, or even catastrophic, consequences.
But for many financial institutions today, these headlines miss the point. Rather than looking ahead into what may or may not happen in the future, AI has very practical and beneficial applications in financial services right now.
However, it is also important to note that, for financial institutions, compliance, stability, and security are fundamental; everything else, even functionality, comes later. There is simply too much at stake, including customers' money, reputational risks, and possible regulatory fines, for it to be any other way.
In this post, we are going to take a look at why AI is important today, introduce Fourthline’s approach and philosophy, and outline some of the critical areas you need to consider when partnering with fintechs that leverage AI in their solutions.
Why AI is becoming ever more critical in combating financial crime
Financial crime is evolving quickly. Criminals constantly adopt new technologies to scale their activities and defeat security systems, and, unlike financial institutions, they are unburdened by regulations and ethics. As a result, they have many routes to accessing account details. Checks performed by humans will always be an important part of combating crime. But the proliferation and growing sophistication of malicious approaches, originating from humans, machines, and sometimes AI tools, makes addressing them with humans alone increasingly difficult, not to mention expensive.
That’s where AI can help. AI solutions can spot patterns in financial crime that humans miss and automate time-consuming tasks. For example, Fourthline uses biometric AI checks to detect, recognize, and read identification documents and to match document photos with the people presenting them, resulting in faster processing times and higher conversion rates. Furthermore, we look at patterns at a cross-border, cross-partner level, without sharing client data, to monitor market-wide trends.
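To make this kind of flow concrete, here is a minimal sketch of how a document check and a face-match score might be combined into a single onboarding outcome. The field names, thresholds, and decision labels are illustrative assumptions, not Fourthline’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical check result: the fields and thresholds below are
# illustrative assumptions, not Fourthline's actual implementation.
@dataclass
class CheckResult:
    doc_readable: bool       # could the ID document be detected and read?
    face_match_score: float  # document photo vs. selfie similarity, 0.0-1.0

def verify_identity(result: CheckResult, match_threshold: float = 0.90) -> str:
    """Combine the document check and the face match into one outcome:
    "approve", "reject", or "review" (route to a human analyst)."""
    if not result.doc_readable:
        return "review"  # unreadable documents get a human look, not auto-rejection
    if result.face_match_score >= match_threshold:
        return "approve"
    if result.face_match_score < 0.50:
        return "reject"
    return "review"  # indeterminate matches go to a human analyst

print(verify_identity(CheckResult(True, 0.97)))  # approve
```

Note how the indeterminate middle band is routed to review rather than forced into an automatic yes or no; that design choice comes up again below when we discuss keeping a human in the loop.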
While human intervention will always be needed, AI will become increasingly useful, and even necessary, to manage the growing volume of fraud made possible by new technologies.
How Fourthline approaches AI
1. Consistent, long-term investment
While there will no doubt be some breakthrough startups emerging from the current AI hype cycle, many will fail: they don’t solve a real problem, don’t actually use AI, are simply wrappers around ChatGPT, or fall short for any number of other reasons.
On the other hand, a number of tech companies have been steadily building AI capabilities for years. Fourthline, for example, began investing in AI six years ago. Rather than relying on off-the-shelf software, we train our algorithms from scratch on our own in-domain data, and we constantly test and refine their capabilities before rolling them out to customers.
This is important for financial institutions that partner with any fintech leveraging AI. You need to know that the solution is stable and reliable enough to trust with your organization’s reputation, not to mention customers’ personal details. That means it must have been thoroughly tested in real-life situations, across edge cases, at scale, over time. You also need to be able to explain how a solution reaches a decision, as many regulators require this.
2. Technology built and maintained in-house
Many third-party AML or KYC solutions rely on APIs and core technology built by other companies. This is particularly concerning because it isn’t always clear what happens to the data, where it is stored, and how secure it is.
At Fourthline, we take a different approach. We build all our technology and keep all our training data in-house. This has a number of benefits.
First, if a regulator wants to know how decisions are reached and how the model works, we can explain it easily. This matters because EU regulation requires that providers of identity verification and AML technologies, such as Fourthline, be able to explain how their solutions work. We can also provide full audit trails for auditors and regulators.
Second, no data is shared with third parties, so it cannot be used in ways we are unaware of or wouldn’t expect. And finally, building in-house gives us greater ability to control, test, modify, and extend our solutions for the benefit of our customers.
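To illustrate what an audit trail can look like in practice, here is a hypothetical sketch of a structured audit record for a verification decision. The record layout, field names, and check values are assumptions for illustration only, not Fourthline’s actual format.

```python
import json
from datetime import datetime, timezone

def audit_record(case_id: str, decision: str, checks: list) -> dict:
    """Build a structured audit-trail entry that records not just the
    outcome but every check that contributed to it, so a reviewer or
    regulator can see how the decision was reached."""
    return {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "checks": checks,  # each check's name, score, and threshold
    }

record = audit_record(
    "case-001",
    "review",
    [{"name": "face_match", "score": 0.72, "threshold": 0.90}],
)
print(json.dumps(record, indent=2))
```

Storing the per-check scores alongside the final outcome is what makes a decision explainable after the fact, rather than a black-box yes or no.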
3. A structured approach to data integrity and ethical practices
AI raises a range of ethical questions that frequently make the news. For example, the data that facial recognition AI is trained on comes with its own ethical challenges. There have been cases where companies have developed and exported facial recognition tools that can link a surveillance-camera image of a person to their online identity. Likewise, there have been many well-publicized issues surrounding the facial recognition of people of different ethnic backgrounds or genders. Being connected to any such issue poses a huge regulatory, reputational, and financial risk.
Fourthline ensures integrity around data and an ethical approach to AI in several ways. First, all our training data is strictly governed, meaning no capabilities are trained on unethically sourced data. Second, the training data is representative of the end customers the service is applied to. During Fourthline’s onboarding flow, customers agree to share their biometric data to improve the service, meaning our models are trained on the exact data to which they are applied. And third, because training data is stored on Fourthline servers, it cannot be used for illegitimate or unethical purposes by any third parties.
4. A human in the loop at the right moments
AI can do many tasks faster and better than humans. But there are always risks and gray areas, and being too hands-off with any AI solution can lead to problems. For example, with certain biometric checks, you might take a conservative approach and block any partial or indeterminate match. The problem with this approach is that you will also block legitimate customers, damage your reputation, and lose revenue. Or you could be overly trusting and fail to notice as fraud evolves.
Fourthline introduces human checks at key moments, such as when a decision is indeterminate or when regulations require that a person be involved. With human input, we also back-test models for multiple reasons: to check whether fraudsters have adopted new techniques or technologies, for example, or whether document specifications have changed. This means we can constantly tweak or add checks, ensuring our customers stay ahead of any trends or changes.
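The back-testing idea above can be sketched in a few lines: re-run the current model on historical cases that humans have already labeled and measure agreement. The toy model, scores, and labels below are invented for illustration; a drop in the agreement rate is the signal that something, such as a new fraud technique or a changed document spec, may have shifted.

```python
def backtest(model_fn, labeled_cases):
    """Re-run the current model on historical, human-labeled cases and
    return the agreement rate. A falling agreement rate can signal new
    fraud techniques or changed document specifications."""
    agreed = sum(
        1 for features, human_label in labeled_cases
        if model_fn(features) == human_label
    )
    return agreed / len(labeled_cases)

# Toy model: flag a case as fraud when its risk score exceeds 0.8.
model = lambda score: "fraud" if score > 0.8 else "legitimate"
cases = [(0.95, "fraud"), (0.20, "legitimate"), (0.85, "legitimate")]
print(round(backtest(model, cases), 2))  # 0.67
```

The third case is where the model and the human reviewer disagree; in practice, accumulating disagreements like this one is what triggers a review of the model’s checks and thresholds.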
AI can be leveraged in a compliant, safe, and reliable way today
In spite of all the recent headlines, AI is already being leveraged by fintechs and financial services firms to great effect. It provides a much faster, more accurate, and more efficient way to perform a range of tasks.
However, when considering a partnership with a provider of AI solutions, it is critical to understand key issues such as the provider’s track record, the ‘explainability’ of their solution to regulators, and their approach to data integrity, security, and ethics.
If you’d like to find out more about how AI can be leveraged today through solutions that are fully developed in-house, explainable, and powered by ethical AI, contact us.
This article was written in consultation with Fourthline’s VP of Machine Learning Research, Sebastian Vater.