BID® Daily Newsletter
Jan 9, 2024

BID® Daily Newsletter

Jan 9, 2024

Chatbots May Be Too Nice for Their Own Good

Summary: Bad actors can prey on a chatbot’s desire to be helpful, gaming its algorithms to provide access to customer accounts or financial systems. We discuss the risks of a chatbot that hasn’t been set up properly and what measures are being taken to make your CFI safer, if you deploy a chatbot.

Robby the Robot, the clunky mechanical droid who first appeared in the 1956 movie “Forbidden Planet,” has been called the “hardest-working robot in Hollywood.” With a computer brain that understood and responded to spoken queries with an earnest desire to be helpful, he was a fictional precursor to modern chatbots.
Robots like Robby are not yet wandering among us, but chatbots have become ubiquitous, using artificial intelligence (AI) to respond to all manner of queries. Chatbots have proven to be great tools for FIs, improving response times in dealing with common issues and questions, and being available 24/7.
It’s important to keep in mind, though, that chatbots are not perfect. If not set up expertly, chatbots may have vulnerabilities that can be used by cybercriminals who are using equally smart digital tools and AI.
Chatbot Vulnerabilities
Clever hackers knocking on a financial institution’s (FI’s) digital door can sometimes convince a chatbot to do their dirty work. FIs with chatbots may be vulnerable to a technique called “prompt injection,” in which a cybercriminal provides a chatbot with a text prompt that can cause it to circumvent previous instructions and do whatever the hacker requests, like downloading malware that leads to fraud, theft, or some other insidious gambit.
In addition to prompt injection, there are a number of other tactics that can be used to trick chatbots for nefarious ends. They go by various names like “jailbreak” (a special prompt created to allow the attacker in), “prompt leaking” (sabotaging or sharing prompts used in AI training models), and “SQL injection” (manipulating code to provide access to sensitive data). But they all revolve around the central goal of conning chatbots into doing a crook’s dirty work.
There are several risks associated with a compromised chatbot. It may divulge confidential customer information to a bad actor, including customer account data and how to access it. Or the chatbot may allow a crook inside an FI’s network, potentially allowing a hacker to take control of the system and demand ransom to release it. The same AI that powers chatbots can also be used by crooks to impersonate real customers, then gain access to customer and FI information. Chatbots can even be manipulated into making threatening statements or text that would otherwise be harmful to the FI’s reputation.
Regulatory Efforts

The Federal Trade Commission recently opened an investigation into OpenAI’s ChatGPT, looking into the problem of prompt injections. That is not the only government oversight. The UK has issued a warning about prompt injection. The White House also issued an executive order asking for better tests and standards for chatbots. 
FIs should take all this as a heads-up warning about the potential pitfalls surrounding chatbots and AI. While chatbots can tackle a variety of customer questions and reduce the workload of branch staff, chatbots can be a little bit too friendly sometimes. They respond to anyone and can have trouble telling the difference between a legitimate customer and a crook. Chatbots are programmed to be helpful, but they often lack nuance and sophistication when they try to act like a real person. They are, after all, still robots.
Chatbots can be a tremendous customer service tool, but they are not impervious to cybercrime. It’s important to make sure that any chatbot your institution uses is set up by experts and has protections in place to develop and maintain defenses against misuse by hackers and cybercriminals. 
Subscribe to the BID Daily Newsletter to have it delivered by email daily.

Related Articles:

The Risks Lurking in the AI Shadows
Employees using unapproved, public generative AI poses a major security risk for your business. We review what shadow AI is, how it leaves your business vulnerable, and what you and your employees can do to minimize the risk.
New Virtual Currency Scam Targets CFIs and Their Customers
Federal authorities have issued an alert about a financial scam called “pig butchering", in which victims are lured into investing in phony schemes, often involving crypto currency. The losses can be significant. We provide tips on how to identify these scammers, if they contact you.