BID® Daily Newsletter
May 4, 2026


Deepfakes and Fraud Risk for CFIs

Summary: Deepfake fraud is accelerating, exposing gaps in how CFIs manage identity and transaction risk. Emerging threats and practical steps highlight how to strengthen controls, improve verification processes, and reduce exposure to increasingly sophisticated AI-driven fraud attempts.

There is no better example of the adage that looks can be deceiving than the slow loris. With giant eyes, a cuddly-looking face, a body only 8-15 inches long, and a slow crawl that is the basis for its name, the furry mammal looks like a stuffed animal come to life. In reality, the Southeast Asian primate is extremely dangerous; its venomous bite can be fatal to other animals and humans alike.
The animal kingdom is not the only place where things are not always as they appear. The ever-increasing sophistication of deepfakes — deceptive, AI-generated videos, sound clips, and images based on real people that make them seem to say or do something that they didn’t — has made them hard to detect, creating a major risk for financial institutions.
Deepfakes Ramp Up
Deepfake fraud has surged, increasing by 2,137% in the past three years alone. As of 2024, a new deepfake attack was occurring on average every five minutes. Not surprisingly, 71% of organizations cite deepfake defense among their top security priorities for the next 12-18 months. Yet only 37% of organizations have invested in deepfake defense.
With more than 40% of financial institutions having experienced fraud involving deepfakes, the risk of losses is significant. Losses related to deepfakes totaled more than $410MM in just the first half of 2025, with the average loss at financial services firms exceeding $680K per incident. In addition, deepfakes were involved in 20% of biometric fraud attempts in 2025. Deloitte predicts that AI-enabled fraud will reach $40B annually by 2027. Community financial institutions (CFIs) are particularly susceptible to such fraud, as bad actors are aware that most CFIs have not made adequate investments in defense initiatives.
Deepfake Technology Advancements
Since the best defense is a clear understanding of the threats that organizations face, now is a critical time to review how deepfakes are created and how they are evolving.
Deepfakes are created using generative adversarial networks (GANs), a sophisticated type of artificial intelligence (AI) that incorporates computational models known as neural networks that can identify patterns within data and make predictions. These GANs continuously work to update their accuracy, making them increasingly sophisticated. GANs use two different types of neural networks (a generator and a discriminator) that rely on thousands of images and recordings to replicate a person’s appearance, facial expressions, movements, and even their voice in the most realistic way possible through images, videos, and 3D models.
The technology has advanced to a point that a believable deepfake can be created from data as simple as a photo from an individual’s social media and a sample of their voice as short as three seconds. The generator creates phony images, videos, or voices, while the discriminator evaluates what has been created, including the data used to produce it, to determine what is real and what has been faked. By working together, these two neural networks enable bad actors to create deepfakes that are good enough to bypass most cybersecurity initiatives.
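The generator-versus-discriminator dynamic described above can be illustrated with a toy example. The sketch below, in Python with NumPy, pits a two-parameter generator against a logistic-regression discriminator on one-dimensional data. It is a deliberately minimal illustration of the adversarial training idea, not how real deepfake systems are built (those use deep neural networks trained on thousands of images and recordings), and every value in it is made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to mimic: samples from N(4.0, 1.25).
def sample_real(n):
    return rng.normal(4.0, 1.25, n)

# Generator: a simple affine map of noise, G(z) = a*z + b.
a, b = 1.0, 0.0
def generate(z):
    return a * z + b

# Discriminator: logistic regression, D(x) = sigmoid(w*x + c),
# outputting the probability that x is real rather than generated.
w, c = 0.1, 0.0
def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))
def discriminate(x):
    return sigmoid(w * x + c)

lr = 0.01
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    real, fake = sample_real(64), generate(z)

    # Discriminator update (gradient ascent): push D(real) toward 1
    # and D(fake) toward 0.
    d_real, d_fake = discriminate(real), discriminate(fake)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update (gradient ascent): push D(fake) toward 1,
    # i.e. learn parameters that fool the current discriminator.
    d_fake = discriminate(fake)
    gx = (1 - d_fake) * w          # d/dx of log D(x), via the chain rule
    a += lr * np.mean(gx * z)      # dG/da = z
    b += lr * np.mean(gx)          # dG/db = 1

# After training, generated samples should drift toward the real distribution.
print("generated mean:", generate(rng.normal(0.0, 1.0, 1000)).mean())
```

Each round, the discriminator gets better at spotting fakes, which in turn forces the generator to produce more convincing ones; that escalating contest is what makes mature deepfake output hard to distinguish from genuine footage.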
The results are so believable that there are countless cases where fraudsters have successfully conned major organizations out of millions of dollars. Take global consultancy firm Arup, where an employee in the firm’s Hong Kong office was tricked into sending 15 wire transfers for a combined $25.6MM based on instructions given to them through a video conference with deepfakes representing the company’s CFO and multiple colleagues.
For financial institutions, the most common deepfake threats include voice cloning, phony videos, synthetic identities that create fake personas using a combination of real and phony data, and AI document forgery.
How To Detect Deepfakes
Defending against these threats has become more complicated, requiring organizations to take multi-pronged approaches to detection and security efforts. Security measures now need to incorporate detection methods looking at factors ranging from skin color anomalies and heart rate variations to pattern inconsistencies in eye blinking and jaw movements. The following are some of the most common defense methods that CFIs should consider adopting:
  • AI-backed liveness detection technology that can determine whether an individual represented on a video call is real or a synthetic feed. 
  • Voice detection software that can identify audio anomalies that can indicate a voice has been cloned. 
  • Behavioral biometrics that analyze an individual’s historical behavior with a device by looking at patterns in things such as mouse movement and typing patterns. 
  • Metadata validation and digital watermarking that checks for unique identifiers that indicate whether audio or video has been created by AI or altered by it. 
  • The use of multiple authentication methods, delays on large transactions, and secondary confirmations through alternative channels for transactions that are considered high risk. 
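To make the behavioral biometrics idea above concrete, the sketch below compares a session's keystroke timings against a stored per-user profile using an average absolute z-score. The profile data, threshold, and function names are all illustrative assumptions, not any vendor's actual method.

```python
import statistics

# Hypothetical stored profile: the user's historical inter-keystroke
# intervals in milliseconds, collected over past sessions.
profile_intervals = [112, 108, 120, 115, 109, 118, 111, 114, 117, 110]
mu = statistics.mean(profile_intervals)
sigma = statistics.stdev(profile_intervals)

def session_anomaly_score(observed_intervals):
    """Average absolute z-score of a session's keystroke timings
    against the stored profile; higher means less like this user."""
    return statistics.mean(abs((x - mu) / sigma) for x in observed_intervals)

def flag_session(observed_intervals, threshold=3.0):
    # The threshold is illustrative; real systems tune it per user
    # and combine many signals (mouse movement, swipe pressure, etc.).
    return session_anomaly_score(observed_intervals) > threshold
```

A session whose timings resemble the profile passes quietly, while one typed at a markedly different rhythm (say, by a fraudster who has cloned the customer's voice but not their hands) gets flagged for step-up authentication.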
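The transaction-level controls in the last bullet, delays on large transactions and secondary confirmations through alternative channels, can likewise be sketched in a few lines. The thresholds, channel names, and `route_wire` function below are all hypothetical, standing in for whatever an institution's risk policy actually specifies.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative policy values -- real ones come from the institution's risk rules.
LARGE_AMOUNT = 50_000
HOLD_PERIOD = timedelta(hours=24)
HIGH_RISK_CHANNELS = {"video_call", "phone"}

@dataclass
class WireRequest:
    amount: float
    channel: str           # e.g. "video_call", "branch", "online_banking"
    liveness_passed: bool  # result of any liveness/deepfake screening

def route_wire(req: WireRequest, now: datetime):
    """Return (action, release_time): release immediately, or hold the
    wire pending out-of-band confirmation through an alternative channel."""
    high_risk = (
        req.amount >= LARGE_AMOUNT
        or req.channel in HIGH_RISK_CHANNELS
        or not req.liveness_passed
    )
    if high_risk:
        return "hold_for_callback", now + HOLD_PERIOD
    return "release", now
```

Under this rule, the Arup-style scenario (a large wire instructed over a video call) would have been held for a callback over a separate channel rather than executed on the spot.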
Beyond ensuring that organizations are using multiple authentication methods and remaining current on the latest security measures, training employees about deepfakes and key things to look out for is critical. A 2025 study found that 33% of people were still tricked into sharing sensitive details with synthetic voice bots, even after being warned about what to look for. Employees should be taught to take a defensive approach to identity validation, both inside their organizations and when interacting with customers, and that training should be reinforced with regular reminders. Additionally, it is important for employees to understand the benefits of cybersecurity defense methods and how to work with them. 
As deepfakes become increasingly sophisticated and difficult to detect, CFIs need to take multi-pronged approaches to cybersecurity. Employees should understand how deepfakes work and what to be on the lookout for, and simple measures, such as time delays on high-risk transactions, should be implemented. 
Subscribe to the BID Daily Newsletter to have it delivered by email daily.

Related Articles:
FinCEN Proposal Shifts AML Focus to Real Risk, Not Checklists
FinCEN has issued a proposed new rule that would transform anti-money laundering and countering the financing of terrorism (AML/CFT) programs to focus on each institution’s actual risks.
Insights from Bank Director’s 2026 Risk Survey
We look at some of the key findings from Bank Director’s 2026 Risk Survey and explore what they might mean for CFIs as they seek to manage risk in a fast-evolving, technology-enabled, and increasingly interconnected risk landscape.