Traditional financial services’ fraud detection is focused on — surprise, surprise — detecting fraudulent transactions. And there’s no question that generative AI has added a powerful weapon to the fraud detection arsenal.
Financial services organizations have begun leveraging large language models to minutely examine transactional data, with the aim of identifying patterns that indicate fraud.
However, there is another, often overlooked, aspect to fraud: human behavior. It’s become clear that fraud detection that focuses solely on transactional activity is not sufficient to mitigate risk. We need to detect the indications of fraud by meticulously examining human behavior.
Fraud does not happen in a vacuum. People commit fraud, often while using their devices. GenAI-powered behavioral biometrics, for example, are already analyzing how individuals interact with their devices — the angle at which they hold them, how much pressure they apply to the screen, directional motion, surface swipes, typing rhythm and more.
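To make "typing rhythm" concrete, here is a minimal sketch of how such features might be derived from raw key events. The event format, feature names and example values are illustrative assumptions, not any vendor's actual model.

```python
# Illustrative sketch: extracting simple keystroke-dynamics features
# from raw key events. Feature names and the event format are
# assumptions for demonstration, not a real biometrics product.
from statistics import mean, stdev

def typing_rhythm_features(key_events):
    """key_events: list of (key, press_time_ms, release_time_ms) tuples."""
    dwell = [release - press for _, press, release in key_events]   # how long each key is held
    flight = [key_events[i + 1][1] - key_events[i][2]               # gap between consecutive keys
              for i in range(len(key_events) - 1)]
    return {
        "dwell_mean": mean(dwell),
        "dwell_std": stdev(dwell) if len(dwell) > 1 else 0.0,
        "flight_mean": mean(flight) if flight else 0.0,
        "flight_std": stdev(flight) if len(flight) > 1 else 0.0,
    }

# A session whose features drift far from the account holder's
# enrolled baseline can raise the risk score for that session.
events = [("h", 0, 95), ("i", 160, 240), (" ", 310, 380), ("t", 470, 560)]
print(typing_rhythm_features(events))
```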
Now, it’s time to broaden the field of behavioral indicators. It’s time to task GenAI with drilling down into the subtleties of human communications — written and verbal — to identify potentially fraudulent behavior.
Using generative AI to analyze communications
GenAI can be trained using natural language processing to “read between the lines” of communications and understand the nuances of human language. The clues that advanced GenAI platforms uncover can be the starting point of investigations — a compass for focusing efforts within reams of transactional data.
How does this work? There are two sides to the AI coin in communications analysis — the conversation side and the analysis side.
On the conversation side, GenAI can analyze digital communications via any platform — voice or written. Every trader interaction, for example, can be scrutinized and, most importantly, understood in its context.
Today’s GenAI platforms are trained to pick up subtleties of language that might indicate suspicious activity. By way of a simple example, these models are trained to catch purposefully vague references (“Is our mutual friend happy with the results?”) or unusually broad statements. By fusing an understanding of language with an understanding of context, these platforms can calculate potential risk, correlate with relevant transactional data and flag suspicious interactions for human follow-up.
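As a toy illustration of this fusion of language and context, the sketch below scores a message against a handful of invented linguistic cues, then adjusts the score using transactional context. The phrases, weights and threshold are assumptions for demonstration; production platforms use trained language models rather than keyword lists.

```python
# Toy scorer combining linguistic cues with conversational context.
# All patterns, weights and the review threshold are invented.
import re

VAGUE_CUES = [
    (r"\bmutual friend\b", 0.4),                  # deliberately indirect reference
    (r"\bthe usual (arrangement|place)\b", 0.5),
    (r"\b(off|outside) the (books|record)\b", 0.6),
]

def risk_score(message, context):
    score = sum(w for pat, w in VAGUE_CUES if re.search(pat, message.lower()))
    # Context fusion: the same words are riskier near a large trade
    # or on a channel the desk rarely uses.
    if context.get("near_large_trade"):
        score *= 1.5
    if context.get("unusual_channel"):
        score += 0.2
    return min(score, 1.0)

msg = "Is our mutual friend happy with the results?"
ctx = {"near_large_trade": True, "unusual_channel": False}
if risk_score(msg, ctx) >= 0.5:
    print("flag for human review:", msg)
```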
On the analysis side, AI makes life far easier for investigators, analysts and other fraud prevention professionals. These teams are overwhelmed with data and alerts, just like their IT and cybersecurity colleagues. AI platforms dramatically lower alert fatigue by reducing the sheer volume of data humans need to sift through — enabling professionals to focus on high-risk cases only.
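A minimal triage sketch of that filtering step, assuming a simple alert record with a risk score (the field names, threshold and queue size are invented for illustration):

```python
# Minimal sketch of alert triage: surface only the highest-risk
# conversations so analysts review dozens of cases, not thousands.
from dataclasses import dataclass

@dataclass
class Alert:
    conversation_id: str
    risk: float  # 0.0-1.0, e.g. from a scorer like the one above

def triage(alerts, threshold=0.7, top_n=50):
    high_risk = [a for a in alerts if a.risk >= threshold]
    return sorted(high_risk, key=lambda a: a.risk, reverse=True)[:top_n]

alerts = [Alert("c1", 0.92), Alert("c2", 0.31), Alert("c3", 0.75)]
for a in triage(alerts):
    print(a.conversation_id, a.risk)  # only c1 and c3 reach the queue
```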
What’s more, AI platforms empower fraud prevention teams to ask questions in natural language. This helps teams work more efficiently, without the limitations of the one-size-fits-all curated questions used by legacy AI tools. Because AI platforms can understand open-ended questions, investigators can derive value from them out of the box, asking broad questions and then drilling down with follow-up questions, with no need to train algorithms first.
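The sketch below suggests what such open-ended querying might look like under the hood: a question and case context assembled into a prompt for whatever model backend a platform uses. `llm_complete` is a hypothetical callable standing in for that backend, not a real library function.

```python
# Sketch of natural-language querying over case data. `llm_complete`
# is a hypothetical stand-in for a platform's model endpoint.
def ask(question, case_context, llm_complete):
    prompt = (
        "You are assisting a fraud investigator.\n"
        f"Case data:\n{case_context}\n\n"
        f"Question: {question}\n"
        "Answer concisely and cite the messages you relied on."
    )
    return llm_complete(prompt)

# Investigators can start broad, then drill down, with no model
# (re)training step in between:
# ask("Which traders discussed the block trade off-channel?", case, llm)
# ask("Show me their messages from the week before the trade.", case, llm)
```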
Building trust
One major downside of AI solutions in the compliance-sensitive financial services ecosystem is that they are available largely via application programming interfaces (APIs). This means that potentially sensitive data cannot be analyzed on premises, safe behind regulatory-approved cyber safety nets. While some solutions are offered in on-premises versions to mitigate this, many organizations lack the in-house computing resources required to run them.
Yet perhaps the most daunting challenge for GenAI-powered fraud detection and monitoring in the financial services sector is trust.
GenAI is not yet a known quantity. It is perceived, not entirely accurately, as a black box whose conclusions no one, not even its creators, can fully explain. This perception is aggravated by the fact that GenAI platforms are still subject to occasional hallucinations: instances where a model produces output that is plausible-sounding but false or nonsensical.
Trust in GenAI on the part of investigators and analysts, alongside trust on the part of regulators, remains elusive. How can we build this trust?
For financial services regulators, trust in GenAI can be facilitated through increased transparency and explainability, for starters. Platforms need to demystify the decision-making process and clearly document each AI model’s architecture, training data and algorithms. They need to create explainability-enhancing methodologies that include interpretable visualizations and highlights of key features, as well as key limitations and potential biases.
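One way to make "highlights of key features" concrete is an interpretable linear model whose per-token weights can be surfaced to a reviewer. The sketch below uses invented toy training data purely for illustration; it shows one possible explainability technique, not any specific platform's method.

```python
# Sketch of one explainability technique: an interpretable linear
# model whose per-token weights can be shown to a reviewer. The toy
# training data is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["is our mutual friend happy", "lunch at noon works",
         "keep this off the record", "see the attached report"]
labels = [1, 0, 1, 0]  # 1 = flagged in past reviews, 0 = benign

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# Per-token contributions for a new message: the reviewer sees
# why the message scored high, not just the final number.
msg = vec.transform(["our mutual friend wants it off the record"]).toarray()[0]
contrib = msg * clf.coef_[0]
for token, weight in sorted(zip(vec.get_feature_names_out(), contrib),
                            key=lambda t: -abs(t[1]))[:5]:
    if weight:
        print(f"{token}: {weight:+.3f}")
```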
For financial services analysts, building a bridge of trust can start with comprehensive training and education: explaining how GenAI works and taking a deep dive into its limitations. Trust in GenAI can be further facilitated by adopting a collaborative human-AI approach. By helping analysts learn to perceive GenAI systems as partners rather than mere tools, we emphasize the synergy between human judgment and AI capabilities.
The bottom line
GenAI can be a powerful tool in the fraud detection arsenal. Surpassing traditional methods that focus on detecting fraudulent transactions, GenAI can effectively analyze human behavior and language to sniff out fraud that legacy methods can’t recognize. AI can also alleviate the burden on fraud prevention professionals by dramatically reducing alert fatigue.
Yet challenges remain. The onus of building the trust that will enable widespread adoption of GenAI-powered fraud mitigation falls on providers, users and regulators alike.
Dr. Shlomit Labin is the VP of data science at Shield, which enables financial institutions to more effectively manage and mitigate communications compliance risks. She earned her PhD in Cognitive Psychology from Tel Aviv University.