BID® Daily Newsletter
Dec 1, 2025


How CFIs Are Countering the Latest AI-Driven Biometric Fraud

Summary: AI-driven biometric fraud is surging, with face and voice ID spoofing, AI deepfakes, and synthetic identities. We delve into the details and how financial institutions are layering security methods to help keep accounts safe.

When it comes to synthetic identities, no movie depicts the concept more vividly than the 1997 film “Face/Off”, in which faces are literally grafted onto the people aiming to impersonate their owners. In the sci-fi thriller, FBI Special Agent Sean Archer (played by John Travolta) undergoes facial transplant surgery, taking the face of comatose terrorist Castor Troy (played by Nicolas Cage) so Archer can pose as Troy and infiltrate his operation. Troy then wakes from his coma, discovers he is faceless, and contacts his gang to force the doctor to give him Archer’s face, assuming Archer’s identity in turn. The ensuing plot between Archer-as-Troy and Troy-as-Archer gets pretty convoluted — and fascinating to watch.
Biometric Fraud Threat Grows
Though criminals aren’t exactly trading literal faces with their victims, they have been using real people’s likenesses to commit biometric fraud. The problem will likely get worse, says Deloitte, as generative artificial intelligence (AI) can create “self-learning” deepfakes of all types that continuously adapt to fool detection software.
Indeed, fraud losses from generative AI could reach $40B in the US by 2027, up from $12.3B in 2023 — a compound annual growth rate of 32%, according to Deloitte.
Across all types of AI-driven biometric fraud — face and voice ID spoofing, AI deepfakes, and synthetic identities — research by Signicat found that 42.5% of all detected fraud attempts in the financial and payments sector are AI-driven, and 29% of those attempts succeed.
Extra Measures Financial Institutions Are Taking
For years now, financial institutions have layered measures on top of measures to increase security, and now, they have to add on even more.
Chase, which lets customers authenticate with Face ID, now offers them the option to also enter their device PIN to log into the banking app. That way, a fraudster can’t spoof Face ID, nor physically hold a phone up to a customer’s face to break into an account — something that actually happens.
“These were cases in New York where bar patrons were incapacitated and the bad actor yanked the phone out of that person's hand, put it in front of their face, and then logged into their bank account and sent money via Zelle to themselves,” said Goran Loncaric, managing director of product in the customer identity and authentication team at JPMorganChase.
Chase is also rolling out passkeys to replace passwords. A passkey is a cryptographic key pair: the private key stays on the customer’s device, while the bank holds only the matching public key, so there is no shared secret for a fraudster to steal. Another tool will let customers name a “trusted contact person” to receive alerts about high-risk wire transfers, without giving that contact access to the account.
Stearns Bank has added real-time risk signals, such as device changes, to its multifactor authentication layers that include biometric identification, said Adam Gill, director of digital banking and product. The key to augmenting biometrics, Gill says, is layering measures across “something you know,” “something you have,” and “something you are.”
Likewise, other financial institutions are also layering behavioral biometrics on top of all of their security measures. Such tools analyze how a customer typically uses their device and whether that’s changed, including typing rhythms, mouse movements, touchscreen pressure, and how they hold their device.
“Risk-based authentication takes this a step further by combining behavioral data with contextual factors,” says Lucid. “These systems evaluate details like location, device type, access times and transaction patterns to calculate a real-time risk score.”
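To make the idea concrete, here is a minimal sketch of how a risk-based authentication layer might combine contextual and behavioral signals into a score that decides whether to allow a login, step up to an extra check, or block it. All signal names, weights, and thresholds below are illustrative assumptions, not any institution’s or vendor’s actual model.

```python
# Illustrative risk-based authentication sketch. Weights and thresholds
# are made-up assumptions for demonstration only.

from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool      # has this device fingerprint been seen before?
    usual_location: bool    # does geolocation match recent history?
    usual_hours: bool       # is the login within the customer's typical window?
    typing_match: float     # 0.0-1.0 similarity to the stored typing rhythm

def risk_score(ctx: LoginContext) -> float:
    """Combine contextual and behavioral signals into a 0-100 risk score."""
    score = 0.0
    if not ctx.known_device:
        score += 35                           # unfamiliar device: strong signal
    if not ctx.usual_location:
        score += 25
    if not ctx.usual_hours:
        score += 10
    score += (1.0 - ctx.typing_match) * 30    # behavioral deviation
    return min(score, 100.0)

def next_step(score: float) -> str:
    """Map the score to an action: allow, step up, or block."""
    if score < 30:
        return "allow"
    if score < 70:
        return "step_up"    # e.g., require a PIN on top of Face ID
    return "block"

# Example: new device, but familiar location, hours, and typing rhythm
ctx = LoginContext(known_device=False, usual_location=True,
                   usual_hours=True, typing_match=0.9)
print(next_step(risk_score(ctx)))   # prints "step_up"
```

The point of the design is the one the article describes: no single signal decides the outcome. A new device alone only triggers a step-up check rather than a block, which is how institutions add protection without adding friction for every login.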
What This Means for CFIs
  • AI is accelerating biometric fraud, so institutions are revisiting how they authenticate customers and monitor risk signals.
  • Layered security — combining device checks, passkeys, behavioral biometrics, and contextual risk analysis — is becoming a standard approach to reduce successful spoofing attempts.
  • Customer experience remains important. Many institutions are adding protection without increasing friction, such as silent behavioral analysis or optional secondary checks.
  • As more fraudsters use deepfake tools, CFIs may see increased customer concern about identity safety, making clear communication about security measures even more valuable.
Biometrics will continue to play an important role, but they are no longer enough on their own. As AI-driven fraud becomes more sophisticated, more institutions are shifting toward layered authentication that blends biometrics with device checks, behavioral patterns, and contextual risk signals. The goal is the same — stronger protection without adding unnecessary friction for customers.
Subscribe to the BID Daily Newsletter to have it delivered by email daily.

Related Articles:

The Cyber Ripple Effect: When SMB Breaches Hit CFIs
SMB cyberattacks are rising as CFIs face growing counterparty risk. How customer-side breaches impact credit, liquidity, and reputation — and what CFIs can do now to prepare.
SMBs Embrace AI — And Expect Their Banks to Do the Same
As small businesses rapidly adopt AI to stay competitive, CFIs must evolve too — integrating AI tools to meet rising expectations for speed, insight, and personalized financial guidance.