BID® Daily Newsletter
Aug 5, 2025

Is AI Handing Cybercriminals the Keys to Financial Accounts?

Summary: OpenAI’s CEO says banks are too vulnerable to AI-powered attacks, and he fears a wave of fraud will sweep through the industry. Here are some defensive moves CFIs can take now.

Back in the 1970s, the brokerage firm E.F. Hutton ran a highly successful ad campaign built around a slogan about the financial advice it dished out: “When E.F. Hutton talks, people listen.” Today, the slogan might be adapted to refer to comments about artificial intelligence (AI) by one of its pioneers: “When Sam Altman talks, people listen.”
Altman, CEO of OpenAI and one of the most influential figures in the development of generative AI, recently spoke at a conference hosted by the Federal Reserve Board, and what he had to say should be a wake-up call to community financial institutions (CFIs). Addressing the audience of bankers and policymakers, Altman warned of “a significant, impending fraud crisis” for financial institutions.
Specifically, he said some banks accept voice biometrics or facial recognition as the sole means of authentication, methods AI can now defeat by imitating people with deepfakes. Relying heavily on those authentication methods is a “crazy thing to still be doing,” Altman said.
What’s at Stake for CFIs
Resource constraints can make CFIs slower to adopt the latest tech solutions, leaving them particularly vulnerable to the newest advances in generative AI fraud. CFIs may also rely on legacy authentication methods like voice prompts, making them easier targets for AI-driven schemes.
Once a bad actor obtains account credentials, they can quickly steal assets or execute unauthorized transactions. On top of the financial losses, CFIs could face declining customer trust and increased reputational risk.
How AI Cracks Authentication Defenses
Generative AI has proven adept at creating videos, images, and audio that convincingly mimic real people. For example, it can clone customer voices realistically enough to deceive voice recognition authentication programs. Likewise, it can duplicate customer faces well enough to fool facial recognition authentication.
How does a crook get your voice or face in the first place? Fraudsters scrape the internet, including social media, for images and voice recordings, and generative AI needs as little as 3 seconds of a person’s voice to create a clone. They also make sham calls that pepper the recipient with questions, solely to capture a voice sample that can be cloned.
The Trusty Password
Ironically, one authentication method that Altman said remains resistant to generative AI is the password. A password can’t be fooled by a cloned voice or face; an attacker has to know the actual string. There are plenty of ways to steal a password, of course, including phishing emails that trick people into divulging theirs, but that’s not really a generative AI gambit.
“AI has defeated most of the ways people authenticate currently — other than passwords,” Altman said. The lesson is that strong passwords remain a powerful tool in account security.
Best Practices for AI-Resistant Authentication
Here’s what CFIs can do right now to protect themselves and their customers from AI scams:
  • Require complex passwords. Make sure customers understand the importance of using strong passwords: ideally at least 16 characters long, containing uppercase and lowercase letters as well as numbers and special characters (see the password-policy sketch after this list). 
  • Avoid standalone voice & face biometrics. Do not use voice prompts or facial recognition as a sole means of authentication.
  • Use multi-factor authentication. Require at least two independent forms of identification to authenticate. A strong password is a good start, and a one-time code is a common second factor (see the TOTP sketch after this list). 
  • Leverage AI. Use AI to fight the destructive AI employed by hackers. CFIs can deploy it to continuously monitor activity on their platforms for suspicious anomalies. If an attack slips past that initial screening, transaction monitoring can flag suspicious transactions for closer scrutiny (see the monitoring sketch after this list). The models used should be explainable, with known, auditable algorithms.
  • Educate customers. Regularly inform customers about the latest scams and how to spot suspicious activity. Warn them about the hazards of social media posts, which can be data sources for hackers using AI. This lesson applies to CFIs as well.
  • Update cybersecurity procedures. Regularly review authentication methods, looking for loopholes and ways to improve.
  • Be aware. Stay abreast of industry updates and analysis of cyber threats.
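
To make the password guidance concrete, here is a minimal sketch of a policy check implementing the rules above (16+ characters, mixed case, digits, special characters). The function name and thresholds are illustrative, not a regulatory standard; production systems should also screen against breached-password lists.

```python
import re

MIN_LENGTH = 16  # minimum length recommended above

def password_meets_policy(password: str) -> bool:
    """Return True only if the password satisfies every rule above."""
    checks = [
        len(password) >= MIN_LENGTH,           # long enough
        re.search(r"[A-Z]", password),         # an uppercase letter
        re.search(r"[a-z]", password),         # a lowercase letter
        re.search(r"\d", password),            # a digit
        re.search(r"[^A-Za-z0-9]", password),  # a special character
    ]
    return all(bool(c) for c in checks)

print(password_meets_policy("Tr0ub4dor!"))                      # False: too short
print(password_meets_policy("Correct-Horse-Battery-9-Staple"))  # True
```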
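
For the multi-factor item, the sketch below shows the time-based one-time password (TOTP) algorithm from RFC 6238, which underlies most authenticator apps, as one possible second factor. It is for illustration only; a CFI would use a vetted library and hardened key storage in production, and the secret shown is a well-known test value.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the current 30-second time step.
    counter = struct.pack(">Q", int(time.time()) // interval)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The server verifies by computing the same code from the shared secret.
print(totp("JBSWY3DPEHPK3PXP"))  # e.g., "492039" (changes every 30 seconds)
```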
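
Finally, a minimal sketch of the transaction-monitoring idea: a per-customer z-score on transaction amounts, an intentionally simple and fully explainable baseline. Real systems combine many more signals (device, location, velocity), and the data below is made up for illustration.

```python
from statistics import mean, stdev

def flag_suspicious(history: list[float], amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is far outside the customer's norm."""
    if len(history) < 2:
        return False  # too little history to estimate a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu  # any deviation from a constant pattern
    z = abs(amount - mu) / sigma  # distance from the norm, in standard deviations
    return z > threshold

past = [85.0, 120.0, 95.0, 110.0, 90.0]  # customer's usual transfers
print(flag_suspicious(past, 9_500.0))  # True: hold for manual review
print(flag_suspicious(past, 105.0))    # False: within normal range
```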
AI is world-changing technology. It poses threats to security, but it also presents CFIs with tools to improve their own security and enhance efficiency. As Altman noted, financial institutions have been among the most avid early adopters of AI. If used effectively and responsibly, AI can be an important innovation for financial institutions large and small and a powerful weapon in fighting back against hackers armed with AI.
With AI growing more powerful by the day, CFIs may want to shore up their security defenses against those who would misuse the technology. CFIs should audit and update authentication protocols, adopt more advanced monitoring, regularly educate and update customers, and train employees to recognize and respond to AI-driven fraud. The future of CFI security hinges on acting now. 
Subscribe to the BID Daily Newsletter to have it delivered by email daily.

Related Articles:

Compliance Lessons for CFIs: The Roles of AI, Funding, & Culture
Recent enforcements highlight the cost of underfunding compliance. CFIs can stay ahead by embracing better compliance and AI solutions to bolster and streamline security.
This Malware Makes You an Offer You Can’t Refuse
The new malware strain is so devious it can steal banking client credentials and drain assets before its presence is detected. CFIs should warn their customers about it before it’s too late.