Protecting payments in an era of deepfakes and advanced AI


Amid the unprecedented e-commerce volumes seen since 2020, the value of digital payments made every day around the world has exploded – hitting about $6.6 trillion last year, a 40 percent jump in two years. With all that money flowing through the world’s payment rails, there’s even more reason for cybercriminals to innovate ways to nab it.

Ensuring payments security today requires advanced game-theory skills to outthink and outmaneuver highly sophisticated criminal networks that are on track to inflict up to $10.5 trillion in cybercrime damages, according to a recent Argus Research report. Payment processors around the globe are constantly playing against fraudsters and improving their game to protect customers’ money. The target constantly moves, and scammers grow ever more sophisticated. Staying ahead of fraud means companies must keep shifting their security models and techniques – there is never an endgame.


The truth of the matter remains: There is no foolproof way to bring fraud down to zero, short of halting online business altogether. Nevertheless, the key to reducing fraud lies in maintaining a careful balance among applying intelligent business rules, supplementing them with machine learning, defining and refining the data models, and recruiting an intellectually curious staff that consistently questions the efficacy of current security measures.

The rise of deepfakes

As powerful new computer-based methods evolve and iterate on more advanced tools, such as deep learning and neural networks, so does the range of their uses – both benevolent and malicious. One practice that has made its way into recent mass-media headlines is the deepfake, a portmanteau of “deep learning” and “fake.” Its implications for potential security breaches and losses in both the banking and payments industries have become a hot topic. Deepfakes, which can be hard to detect, now rank as the most dangerous crime of the future, according to researchers at University College London.

Deepfakes are artificially manipulated images, videos and audio in which the subject is convincingly replaced with someone else’s likeness, leading to a high potential to deceive.

These deepfakes terrify some with their near-perfect replication of the subject.

Two stunning deepfakes that have been broadly covered include a deepfake of Tom Cruise, created by Chris Ume (VFX and AI artist) and Miles Fisher (a well-known Tom Cruise impersonator), and a deepfake of a young Luke Skywalker, created by Shamook (deepfake artist and YouTuber) and Graham Hamilton (actor) for a recent episode of “The Book of Boba Fett.”

While these examples mimic the intended subject with alarming accuracy, it’s important to note that with current technology, a skilled impersonator, trained in the subject’s inflections and mannerisms, is still required to pull off a convincing fake.

Without a similar bone structure and the subject’s trademark movements and turns of phrase, even today’s most advanced AI would be hard-pressed to make the deepfake perform credibly.

For example, in the case of Luke Skywalker, the AI used to replicate Luke’s 1980s-era voice, Respeecher, utilized hours of recordings of original actor Mark Hamill’s voice from the era when the original films were made – and fans still found the speech an example of the “Siri-like … hollow recreations” that should inspire fear.

On the other hand, without prior knowledge of these important nuances of the person being replicated, most humans would find it difficult to distinguish these deepfakes from a real person.

Luckily, machine learning and modern AI work on both sides of this game and are powerful tools in the fight against fraud.

Where are the payment-processing security gaps today?

While deepfakes pose a significant threat to authentication technologies, including facial recognition, from a payments-processing standpoint there are fewer opportunities for fraudsters to pull off a scam today. Because payment processors have their own implementations of machine learning, business rules and models to protect customers from fraud, cybercriminals must work hard to find potential gaps in payment rails’ defenses – and these gaps get smaller as each merchant creates more relationship history with customers.

The ability of financial companies and platforms to “know their customers” has become even more critical in the wake of cybercrime’s rise. The more a payments processor knows about past transactions and behaviors, the easier it is for automated systems to validate that the next transaction fits an appropriate pattern and is likely authentic.

Automatically identifying fraud in these cases keys off a large number of variables, including transaction history, transaction value, location and past chargebacks – and it doesn’t examine the person’s identity in a way that would let deepfakes come into play.
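To make that concrete, here is a minimal, illustrative sketch of a rules-based risk score over those variables. All field names, thresholds and weights are hypothetical assumptions for illustration, not any processor’s actual model.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerHistory:
    # Hypothetical per-customer aggregates a processor might keep.
    transaction_count: int
    average_value: float
    usual_countries: set = field(default_factory=set)
    past_chargebacks: int = 0

def risk_score(history: CustomerHistory, value: float, country: str) -> float:
    """Toy rules-based score in [0, 1]; higher means riskier."""
    score = 0.0
    # A thin relationship history leaves less to validate against.
    if history.transaction_count < 5:
        score += 0.3
    # A value far above the customer's norm is suspicious.
    if history.average_value > 0 and value > 5 * history.average_value:
        score += 0.3
    # A location the customer has never transacted from before.
    if country not in history.usual_countries:
        score += 0.2
    # Prior chargebacks raise the baseline risk.
    if history.past_chargebacks > 0:
        score += 0.2
    return min(score, 1.0)

# Example: a large transaction from a new country, sparse history, one chargeback.
h = CustomerHistory(transaction_count=3, average_value=40.0,
                    usual_countries={"BR"}, past_chargebacks=1)
print(risk_score(h, value=900.0, country="US"))  # 1.0 -> route to review
```

In practice, processors blend rules like these with machine-learning models trained on far richer transaction histories.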

For payments processors, the highest risk of fraud from deepfakes lies in the manual review process, particularly in cases where the transaction value is high.

In manual review, fraudsters can seize the chance to use social-engineering techniques, backed by digitally manipulated media, to dupe human reviewers into believing the transactor has the authority to make the transaction.

And, as covered by The Wall Street Journal, these types of attacks can unfortunately be very effective, with fraudsters even using deepfaked audio to impersonate a CEO and scam one U.K.-based company out of nearly a quarter-million dollars.

With stakes this high, there are several ways to limit the openings for fraud in general while staying ahead of fraudsters’ attempts at deepfake hacks.

How to prevent losses from deepfakes

Sophisticated methods for debunking deepfakes exist, using a variety of checks to catch telltale mistakes.

For example, since the average person doesn’t keep photos of themselves with their eyes closed, selection bias in the source imagery used to train the AI behind a deepfake might cause the fabricated subject to not blink, to blink at an abnormal rate or to simply get the composite facial expression of the blink wrong. The same bias can affect other aspects, such as negative expressions, because people tend not to post those emotions on social media – a common source of AI training material.
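As a toy illustration of the blink heuristic, the sketch below counts blinks in a per-frame “eye openness” signal, which in practice would come from a facial-landmark detector; the signal format and threshold are assumptions for illustration only.

```python
from typing import Sequence

def blink_rate(eye_openness: Sequence[float], fps: float,
               closed_threshold: float = 0.2) -> float:
    """Estimate blinks per minute from a per-frame eye-openness signal.

    eye_openness: values near 1.0 = fully open, near 0.0 = closed
    (e.g., an eye-aspect-ratio-style score from a landmark detector).
    """
    blinks = 0
    closed = False
    for value in eye_openness:
        if value < closed_threshold and not closed:
            blinks += 1          # falling edge: the eye just closed
            closed = True
        elif value >= closed_threshold:
            closed = False       # the eye reopened
    minutes = len(eye_openness) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0

# People typically blink roughly 15-20 times per minute at rest, so a
# rate near zero over a long clip is a red flag under this heuristic.
rate = blink_rate([1.0] * 1800, fps=30)   # 60 s of video, no blinks
print(f"{rate:.1f} blinks/min")           # 0.0 -> suspicious
```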

Other ways to identify the deepfakes of today include spotting lighting problems, differences in the weather outside relative to the subject’s supposed location, the timecode of the media in question or even variances in the artifacts created by the filming, recording or encoding of the video or audio when compared to the type of camera, recording equipment or codecs utilized.
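In the same hedged spirit, the metadata angle can be sketched as a consistency check between a clip’s container metadata and what its claimed capture device would normally produce. The device fingerprints below are invented for illustration and are not real device specifications.

```python
# Hypothetical expected fingerprints for claimed capture devices.
EXPECTED = {
    "CameraModelA": {"codec": "hevc", "fps": 30.0},
    "CameraModelB": {"codec": "h264", "fps": 60.0},
}

def metadata_inconsistencies(claimed_device: str, metadata: dict) -> list:
    """List mismatches between a clip's metadata and its claimed device."""
    expected = EXPECTED.get(claimed_device)
    if expected is None:
        return ["unknown device: cannot validate"]
    issues = []
    if metadata.get("codec") != expected["codec"]:
        issues.append(f"codec {metadata.get('codec')!r} unexpected for {claimed_device}")
    if abs(metadata.get("fps", 0.0) - expected["fps"]) > 0.5:
        issues.append(f"frame rate {metadata.get('fps')} unexpected for {claimed_device}")
    return issues

# A clip claiming to come from CameraModelA but encoded as h264.
print(metadata_inconsistencies("CameraModelA", {"codec": "h264", "fps": 29.97}))
```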

While these techniques work now, deepfake technology and techniques are quickly approaching a point where they may even fool these types of validation.

Best processes to fight deepfakes

Until deepfakes can fool other AIs, the best current options to fight them are to:

  • Improve training for manual reviewers or incorporate authentication AI to better spot deepfakes, which is only a short-term technique while the errors are still detectable. For example, look for blinking errors, artifacts, repeated pixels or problems with the subject making negative expressions.
  • Gain as much information as possible about merchants to make better use of know-your-customer (KYC) programs. For example, take advantage of services that scan the deep web for potential data breaches impacting customers and flag the affected accounts to watch for potential fraud.
  • Favor multiple-factor authentication methods. For example, consider combining 3-D Secure (Three-Domain Secure), token-based verification and a password plus a single-use code (a minimal sketch of the single-use-code factor appears after this list).
  • Standardize security methods to reduce the frequency of manual reviews.
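Of these factors, the single-use code is the easiest to sketch in isolation. Below is a minimal, self-contained time-based one-time password (TOTP, RFC 6238) generator and verifier; the shared secret is illustrative, and a production deployment would rely on a vetted library and properly provisioned secrets rather than this sketch.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, step: int = 30, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * step, step), submitted)
               for i in range(-window, window + 1))

secret = b"processor-shared-secret"                  # illustrative only
code = totp(secret)
print(code, verify(secret, code))                    # e.g. "492039 True"
```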

Three security “best practices”

In addition to these methods, several security practices should help immediately:

  • Hire an intellectually curious staff to establish the initial groundwork for building a safe system by creating an environment of rigorous testing, retesting and constant questioning of the efficacy of current models.
  • Establish a control group to help gauge the impact of fraud-fighting measures, give “peace of mind” and provide relative statistical certainty that current practices are effective.
  • Implement constant A/B testing with stepwise introductions, increasing usage of the model in small increments until it proves effective (a sketch of such a comparison appears after this list). This ongoing testing is crucial to maintaining a strong system and beating scammers at their own game.
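One simple way to read a control-group comparison like the one described above is a two-proportion z-test on fraud rates between the holdout and the treated traffic. The counts below are hypothetical, and a real evaluation would also weigh effect size, chargeback value and false-positive costs.

```python
from math import erf, sqrt

def fraud_rate_p_value(control_frauds: int, control_total: int,
                       treatment_frauds: int, treatment_total: int) -> float:
    """Two-proportion z-test: p-value for 'the new model changed the fraud rate'."""
    p1 = control_frauds / control_total
    p2 = treatment_frauds / treatment_total
    pooled = (control_frauds + treatment_frauds) / (control_total + treatment_total)
    se = sqrt(pooled * (1 - pooled) * (1 / control_total + 1 / treatment_total))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical ramp step: new model on a slice of traffic vs. a holdout.
p = fraud_rate_p_value(control_frauds=120, control_total=50_000,
                       treatment_frauds=70, treatment_total=50_000)
print(f"p-value: {p:.4f}")   # small p -> the model's effect is likely real
```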

End game (for now) vs. deepfakes

The key to reducing fraud from deepfakes today is limiting the circumstances under which manipulated media can play a role in validating a transaction. That is accomplished by evolving fraud-fighting tools to curtail manual reviews and by constantly testing and refining toolsets to stay ahead of well-funded, global cybercriminal syndicates, one day at a time.

EBANX’s VP of Operations and Data, Rahm Rajaram

Rahm Rajaram, VP of operations and data at EBANX, is an experienced financial services professional with extensive expertise in security and analytics, following executive roles at companies including American Express, Grab and Klarna.
