Customer trust in financial institutions has never been higher, according to a recent survey of over 32,000 bank customers in 28 countries. The 2024 Edelman Trust Barometer found trust levels highest among high-income earners and in developing countries, groups that also support banks’ use of LLMs, AI and other advanced technologies to improve security, operations and customer experience.
Despite these positive indicators, the 2024 LexisNexis Risk Solutions True Cost of Fraud Study, EMEA Edition found security a top concern in financial services, due to growing use of stolen credentials and fraudulent identities on banking and ecommerce sites. The survey of 1,845 executives, conducted by Forrester, found 49% of respondents concerned about creating friction by introducing tougher security measures.
Warning that gaps in fraud coverage create vulnerabilities, researchers recommended taking a risk-based and data-driven approach to fraud management, by applying different levels of security based on risks associated with each transaction. “Raise customer awareness about emerging threats – especially in areas like scams, mobile transaction fraud, and QR code fraud – and the potential impact they have on their own security to help them understand the necessity of security measures,” they wrote, adding that these measures will help stakeholders adapt to PSD3’s higher authentication requirements.
Know Your Machine Intelligence
Sully Perella, senior manager at Schellman, agreed with researchers that AI-powered tools are effective at fighting fraud and enhancing the customer experience. He noted, however, that service providers do not always clearly explain AI’s features and benefits.
Perella pointed out that AI is not the first line of defense in a typical transaction flow, which begins with consumer checkout, followed by a bank verifying basic account details, such as IP address, location and time of day. After the bank confirms all checkpoints, a rule-based fraud detection system gathers more granular details, such as browser version, smartphone orientation, last website visited, and how many times a transaction was attempted from a particular location or device.
ML algorithms use this metadata to build a risk profile and determine whether a transaction looks suspicious. For example, the system might flag a request coming in from Thailand from a cardholder who lives in Fort Worth and has never used the card online. An issuer or service provider can set these threat metrics, for example based on data received as part of the 3D Secure protocol, originally established by Visa in 2001.
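The kind of rule-based scoring described above can be sketched in a few lines. This is an illustrative toy, not any vendor's actual logic; the field names, weights and thresholds are all hypothetical.

```python
# Toy rule-based risk scorer over transaction metadata.
# All field names and score weights are hypothetical examples.

def score_transaction(tx: dict, profile: dict) -> int:
    """Return a risk score for a transaction; higher means more suspicious."""
    score = 0
    # Geography mismatch: request origin differs from the cardholder's home country
    if tx["ip_country"] != profile["home_country"]:
        score += 40
    # First-ever online use of this card
    if not profile["has_used_card_online"]:
        score += 25
    # Repeated attempts from the same device in a short window
    if tx["attempts_from_device"] > 3:
        score += 20
    # Transaction at an unusual hour for this cardholder
    if tx["hour"] not in profile["usual_hours"]:
        score += 10
    return score

# Example: a Thailand-originated request on a Fort Worth cardholder's
# card that has never been used online, at 3 a.m.
tx = {"ip_country": "TH", "attempts_from_device": 1, "hour": 3}
profile = {"home_country": "US", "has_used_card_online": False,
           "usual_hours": range(8, 22)}
risk = score_transaction(tx, profile)
print(risk)  # 40 + 25 + 10 = 75
```

A real system would feed many more signals (browser version, device orientation, 3D Secure data elements) into trained models rather than fixed weights, but the deterministic structure is the same: each rule contributes to a score that a threshold then turns into approve, challenge or decline.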
Set Clear Rules on When to Apply Machine Intelligence
Thomas Mueller, co-founder and CEO of Rivero, a European fintech, advocates implementing rule-based machine intelligence for automating customer support and offering customer self-service. These technologies deliver a more consistent and reliable customer experience than large language models (LLMs), which he noted are prone to hallucinating and making things up.
“There’s a lot of excitement about AI but people who see beyond the hype appreciate our approach and recognize the need to map problems to solutions in a secure, predictable way,” he said. “I would urge banks and card issuers not to expose their customers to a large language model and not to work with a fintech partner that uses LLM for customer service.”
Air Canada learned this the hard way, Mueller noted, when a chatbot made up a refund rule that didn’t exist. When the consumer went to claim that refund, the airline was held liable for the chatbot’s actions and had to refund the flight. This highlights why LLMs are not ideal for building virtual agents and apps, he said.
Regulators are also taking a hard look at AI, Mueller noted, citing the European Union’s Artificial Intelligence Act, enacted in March 2024, a regulatory framework designed to be phased into law over a two-year period. This new law will make it more difficult for banks and B2B fintechs to build products on top of machine learning models that are not deterministic or explainable, he said, adding that he agrees with and supports these protections.
Engage with Your Customer
Rule-based virtual assistants are a powerful fraud deterrent for issuing banks, Mueller noted, because they respond rapidly to customer requests in clear, understandable language.
“It’s actually quite difficult to distinguish between real payment card fraud versus cardholders trying to gain a financial benefit by wrongfully claiming non-receipt or a problem with merchandise, or pretending not to recognize a transaction,” he said, noting that first-party fraud has been a growing concern for banks and fintechs.
By sharing transaction details or requesting missing documentation, a rule-based virtual agent engages directly with customers to resolve or deflect a dispute, Mueller said, and it connects customers with merchants to solve problems before they become chargebacks. Throughout these conversations, the virtual agent remains attentive and supportive, he added, while staying within the boundaries of established threat metrics and card brand rules.
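The deterministic dispute handling described above can be sketched as a small rule table. This is a hypothetical illustration in the spirit of Mueller's approach, not Rivero's actual product logic; the claim reasons and action names are invented for the example.

```python
# Hypothetical rule-based dispute-handling agent: each claim reason maps
# deterministically to a next action, with no generative model involved.

def handle_dispute(claim: dict) -> str:
    # Unrecognized transaction: show merchant name and descriptor first,
    # since many "fraud" claims resolve once the charge is explained.
    if claim["reason"] == "unrecognized":
        return "show_transaction_details"
    # Problem with merchandise: connect the cardholder with the merchant
    # to solve the problem before it becomes a chargeback.
    if claim["reason"] == "merchandise_issue":
        return "connect_with_merchant"
    # Non-receipt claim without supporting documents: request them,
    # which also deters wrongful first-party claims.
    if claim["reason"] == "non_receipt" and not claim.get("documents"):
        return "request_documentation"
    # Everything else escalates to a human within card brand rules.
    return "escalate_to_agent"

print(handle_dispute({"reason": "unrecognized"}))  # show_transaction_details
```

Because every branch is an explicit rule, the agent's behavior is reproducible and auditable, which is exactly the property that distinguishes this design from an LLM-driven chatbot.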