Fraud has become, to put it mildly, big business.
Featurespace Chief Operating Officer Tim Vanderham told PYMNTS’ Karen Webster in an interview that “when you think about the billions and billions of dollars that come from scams globally,” those illicit gains overshadow the revenues of some of the largest businesses around the globe.
The conversation came against the backdrop of an article from The Wall Street Journal that detailed the rise of “scam dens,” which operate essentially as business centers with sophisticated setups, complete with separate departments for training fraudsters and “onboarding” unwitting victims, as well as KPIs used to determine whether certain scams are successful.
Along the way, fraudsters are proving adept at using artificial intelligence to cultivate relationships and trust with their victims, preying on human emotions and making off with individuals’ life savings and retirement holdings, draining bank accounts with brazen speed, notably through authorized push payments.
In the United States alone, Vanderham said, the $2.7 billion in fraud reported just a few years ago represents only a fraction of the true tally, mostly because people are embarrassed to report that they’ve fallen prey to scams. Meanwhile, crime syndicates use the stolen funds to bankroll other crimes such as human trafficking and the drug trade.
The banks and service providers tasked with battling fraudsters face a challenge when it comes to using AI to, well, fight AI.
“They’re not bound by the same criteria when it comes to leveraging AI and machine learning,” Vanderham said.
Financial institutions (FIs) are bound by ethical concerns and a burgeoning set of regulations that are still being hammered out.
But the data that crosses the financial services system daily, combined with a collaborative approach to harnessing and analyzing it, can go a long way toward modeling what “genuine human behavior” looks like by building profiles from individuals’ trends and transactions, he said.
Featurespace’s models use behavioral analytics and collaboration to understand, for instance, how the transactional behavior of an individual consumer in London might differ from that of someone living in South Africa, or to uncover whether a new transaction to Hong Kong might be a red flag if it comes from someone who has never transacted there before, Vanderham said.
The data “helps banks and FIs with those warning signs,” Vanderham said, giving end users both education and a reality check, and prompting extra validation steps to ensure that transactions are warranted and headed where they should be.
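In spirit, that kind of warning sign can be as simple as comparing a new transaction against a profile built from the customer’s own history. The sketch below is a hypothetical illustration in Python, not Featurespace’s production logic; the customer name, thresholds and flag rules are all assumptions for the example.

```python
# Minimal sketch of the behavioral red flags described above: compare a new
# transaction against a profile built from the customer's own history.
# Hypothetical rules and thresholds, not Featurespace's actual models.
from collections import defaultdict
from dataclasses import dataclass, field
from statistics import mean, pstdev

@dataclass
class CustomerProfile:
    """Running profile of one customer's transactional behavior."""
    destination_countries: set = field(default_factory=set)
    amounts: list = field(default_factory=list)

    def update(self, country: str, amount: float) -> None:
        self.destination_countries.add(country)
        self.amounts.append(amount)

    def red_flags(self, country: str, amount: float) -> list[str]:
        flags = []
        # A destination the customer has never transacted with before.
        if country not in self.destination_countries:
            flags.append(f"first-ever transaction to {country}")
        # An amount far outside the customer's usual range (simple z-score rule).
        if len(self.amounts) >= 5:
            mu, sigma = mean(self.amounts), pstdev(self.amounts)
            if sigma > 0 and (amount - mu) / sigma > 3:
                flags.append(f"amount {amount:.2f} is unusually large for this customer")
        return flags

profiles: dict[str, CustomerProfile] = defaultdict(CustomerProfile)

# Build a profile from a London-based customer's past transactions.
for country, amount in [("GB", 40.0), ("GB", 55.0), ("FR", 62.0),
                        ("GB", 48.0), ("GB", 51.0)]:
    profiles["alice"].update(country, amount)

# A new, large transfer to Hong Kong trips both warning signs, which is the
# point where the extra validation step Vanderham describes would kick in.
print(profiles["alice"].red_flags("HK", 9_500.0))
# ['first-ever transaction to HK', 'amount 9500.00 is unusually large ...']
```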
Featurespace has been investing in advanced algorithms to underpin fraud prevention efforts. Last year it launched TallierLTM, the world’s first large transaction model, which uses generative AI to improve fraud value detection by up to 71%.
“What OpenAI did around language and words, we’ve created for the payments environment — modeling what genuine behaviors and transactions will look like,” Vanderham said.
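To make that analogy concrete: just as a language model is pretrained to predict the next word, a transaction model can be pretrained to predict the next event in a customer’s payment sequence, so that genuinely anomalous behavior stands out as improbable. The PyTorch sketch below is a generic illustration of that idea; the vocabulary, tokenization and architecture are assumptions for the example, not TallierLTM’s actual design, which Featurespace has not published in this form.

```python
# Generic "next-transaction" pretraining sketch, in the spirit of a language
# model over payments. Illustrative only, not TallierLTM's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000   # hypothetical: each token encodes a bucketed (merchant, amount, country) event
D_MODEL = 64

class TransactionLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Causal mask so each position only sees earlier transactions.
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(self.embed(tokens), mask=mask)
        return self.head(h)  # logits over the next transaction token

model = TransactionLM()
seq = torch.randint(0, VOCAB, (8, 20))   # batch of 8 customers, 20 events each
logits = model(seq[:, :-1])              # predict each customer's next event
loss = F.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
# After pretraining on real histories, sequences whose actual next events are
# persistently improbable under the model are candidates for review.
print(loss.item())
```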
It will be critical for the public and private sectors to work together to help regulations and technologies evolve.
“We have to make sure that we’re using advanced data algorithms and machine learning over this data to combat the fraud and to do everything we can to allow consumers to transact more freely,” Vanderham noted.
As he told Webster, “We’re prepared to fight against these fraudsters — to take them out, and to beat them at their own game” with AI and machine learning as two of the most prominent lines of defense (and offense) against such criminals.