BID® Daily Newsletter
Jun 27, 2024


Don’t Overlook the Risks of AI-Backed CRMs

Summary: AI-backed CRM systems offer major benefits, but they also involve risks. We outline the challenges of using AI for customer service and share training and security tips to minimize risks.

Whale watching is a popular vacation activity at oceanfront destinations, so much so that 13MM people across 100 countries take part each year. But what most people view as a wholesome, family-friendly activity doesn’t always pan out that way. In 2015, Jennifer Karren, a Canadian woman vacationing in Cabo San Lucas, Mexico, was killed when a whale breached and landed on the boat she was on.
While deaths due to breaching whales are few and far between, there have been several instances over the years. Community financial institutions (CFIs) may not have to worry about breaching whales harming their businesses, but they should be aware of the threat of another type of breach. At face value, customer relationship management (CRM) systems are beneficial tools for helping organizations enhance the customer service experience. But if your CRM has introduced AI-powered tools and integrations, there could be unexpected risks involved in using it.
Risks Below the Surface
As financial institutions seek to enhance the customer experience with the use of CRMs, many are using systems augmented by artificial intelligence (AI). While AI can help CRM users better categorize customers based on their needs and behavioral patterns and can pull information about historical interactions with customers to help provide them with a better experience, there is a downside to incorporating AI as well.
The data processed by AI-backed CRMs must be stored somewhere, and hackers are well aware of it, making financial institutions a key target for attack. Failing to ensure that customer data is properly secured by your third-party vendors can expose you to reputational damage and regulatory repercussions. Since most employees lack the AI expertise CFIs would need to run such CRMs entirely in-house, these systems typically require outside partnerships, opening institutions to third-party risk if the partners they choose lack adequate security measures.
There have also been multiple instances of AI-backed chatbots being manipulated into leaking confidential information and customer data, a risk that has led major companies such as Samsung, along with banks including Bank of America, Citigroup, Wells Fargo, and JPMorgan, to ban employee use of such chatbots within their organizations.
Securing Your AI-Powered CRM
Before CFIs embrace AI-backed CRMs, they should consider the following security measures: 
  • Thorough vetting. The data analyzed by AI-backed CRMs is typically processed on external servers, so it is imperative to vet AI providers adequately. Ask how and where your organization’s data will be processed and what steps are in place to keep it secure. CFIs also need to be sure that this information and any potential risks are clearly outlined, that clear guidelines and policies are in place, and that customers are aware of them. 
  • Constant vigilance. Given that hackers’ tactics continuously evolve, security measures should include behavioral analysis and anomaly detection (a minimal sketch follows this list), and they should be constantly monitored and updated accordingly. Cybercriminals have multiple avenues of attack, including hard-to-detect malware. 
  • Human touch. The use of AI in CRMs typically involves enhanced levels of automation. While there are benefits to automation, it can also open up organizations to faster, more extensive attacks from hackers. Maintaining a level of human oversight and involvement is critical.  
  • Limited access. Strict limits should be placed on the information AI-backed CRMs can use, restricting them to only the data they need. Likewise, strict controls and multifactor authentication should ensure that customer data is available only to the people who truly need it (see the second sketch after this list). 
  • Ongoing training. Employees should be educated about the risks of using AI in CRMs and should receive continuous training and updates. 
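To make the anomaly detection point concrete, here is a minimal sketch in Python, assuming a hypothetical log of per-user daily CRM record counts (the user names, counts, and threshold are all illustrative); real deployments would lean on SIEM or user-behavior analytics tooling rather than a hand-rolled script:

```python
# Minimal sketch: flag users whose latest CRM access volume spikes far above
# their own baseline. All names and numbers here are hypothetical.
from statistics import mean, stdev

def flag_anomalies(daily_counts: dict[str, list[int]], threshold: float = 3.0) -> list[str]:
    """Flag users whose latest daily access count deviates sharply from their baseline."""
    flagged = []
    for user, counts in daily_counts.items():
        if len(counts) < 3:
            continue  # need at least two baseline days to compute a spread
        baseline, latest = counts[:-1], counts[-1]
        sigma = stdev(baseline) or 1.0  # guard against a perfectly flat baseline
        if (latest - mean(baseline)) / sigma > threshold:
            flagged.append(user)
    return flagged

# Example: a user who normally views ~40 records suddenly pulls 500 in a day.
print(flag_anomalies({"jdoe": [38, 41, 44, 39, 500], "asmith": [20, 22, 19, 21]}))  # ['jdoe']
```

The idea generalizes regardless of tooling: establish a per-user baseline of normal behavior, then alert on sharp deviations.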
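For the access limits above, a second minimal sketch shows a least-privilege filter, assuming hypothetical role names and record fields; in practice this enforcement belongs in the CRM or identity-and-access-management layer, not in application code:

```python
# Minimal sketch: each role sees only the fields it needs. Roles and fields
# below are hypothetical examples, not a real CRM schema.
ROLE_FIELDS = {
    "teller":       {"name", "account_type"},
    "loan_officer": {"name", "account_type", "credit_score", "loan_history"},
    "ai_pipeline":  {"account_type", "loan_history"},  # no direct identifiers
}

def fetch_for(role: str, record: dict) -> dict:
    """Return only the fields this role is entitled to see; unknown roles get nothing."""
    allowed = ROLE_FIELDS.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

record = {"name": "J. Doe", "account_type": "checking",
          "credit_score": 712, "loan_history": "current"}
print(fetch_for("ai_pipeline", record))  # {'account_type': 'checking', 'loan_history': 'current'}
```

Note that the AI pipeline role never receives direct identifiers, which limits what a compromised integration can leak.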
Hallucinations and Inaccuracies
Beyond security measures, applying AI to a CRM without adequate preparation and planning can be problematic and may result in inaccurate data that renders the CRM ineffective. “We found that many are embracing AI-powered CRM without ensuring they have the necessary data infrastructure, making them more vulnerable to undesirable outcomes,” according to recent Forrester research. If organizations do not begin AI-backed CRM initiatives with well-thought-out, intentional planning, the data generated can be skewed. In such cases, the AI can learn patterns that are not factually grounded and generate false data and predictions, known as hallucinations.
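A simple data-quality gate illustrates one piece of that infrastructure. This sketch assumes hypothetical field names; the point is to keep incomplete records away from the AI before they skew what it learns:

```python
# Minimal sketch: screen records before they feed an AI pipeline.
# The required fields here are hypothetical examples.
REQUIRED_FIELDS = {"customer_id", "product", "last_contact"}

def screen(records: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into usable rows and rows sent back for remediation."""
    usable, rejected = [], []
    for record in records:
        (usable if REQUIRED_FIELDS <= record.keys() else rejected).append(record)
    return usable, rejected

usable, rejected = screen([
    {"customer_id": "C-1", "product": "mortgage", "last_contact": "2024-05-01"},
    {"customer_id": "C-2", "product": "checking"},  # missing last_contact
])
print(len(usable), len(rejected))  # 1 1
```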
Implicit bias in AI-backed CRMs is another potential pitfall. If the dataset used to train the AI is incomplete (for example, an AI CRM is asked to evaluate loan applications from a community for which it has had no training data), or if the developers inadvertently added their human biases to the AI CRM, the cycle of bias could continue.
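One basic way to surface this kind of skew is to compare model outcomes across customer segments. The sketch below assumes a hypothetical list of (segment, approved) decisions; genuine fair-lending analysis requires far more rigorous statistical testing:

```python
# Minimal sketch: compute per-segment approval rates from model decisions.
# Segment labels and outcomes are hypothetical.
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the share of approvals within each customer segment."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for segment, approved in decisions:
        totals[segment] += 1
        approvals[segment] += approved
    return {s: approvals[s] / totals[s] for s in totals}

decisions = [("segment_a", True), ("segment_a", True), ("segment_a", False),
             ("segment_b", False), ("segment_b", False), ("segment_b", True)]
print(approval_rates(decisions))  # segment_a ≈ 0.67, segment_b ≈ 0.33; a large gap warrants review
```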
In the same way that CFIs set up governance for other customer data, any information generated by AI-backed CRMs, whether chatbot responses or internal reports, needs to be continuously governed. AI algorithms should be designed to be explainable and should include measures to ensure customer data anonymity from the start. Since most CFIs lack the necessary in-house expertise for such efforts, they should consider partnering with trusted specialists who are well versed in the ins and outs of AI solutions and can demonstrate extensive security and data protection measures.
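On the anonymity point, one common approach is to pseudonymize identifiers before data ever leaves the institution. The sketch below assumes a hypothetical record layout and glosses over key management; a keyed hash keeps tokens stable without exposing raw IDs:

```python
# Minimal sketch: replace a raw customer ID with a stable keyed-hash token
# before sending the record to a third-party AI service. Record layout is hypothetical.
import hashlib
import hmac

SECRET_KEY = b"replace-me"  # placeholder; store and rotate via a secrets manager

def pseudonymize(record: dict) -> dict:
    """Swap the raw customer ID for a keyed-hash token; same input, same token."""
    out = dict(record)
    digest = hmac.new(SECRET_KEY, record["customer_id"].encode(), hashlib.sha256)
    out["customer_id"] = digest.hexdigest()[:16]  # raw ID never leaves the institution
    return out

print(pseudonymize({"customer_id": "C-10042", "balance": 1250.00}))
```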
There are plenty of advantages to using AI-backed CRM systems for CFIs, but before embracing such platforms, it is important to be aware of the downsides and risks they can entail as well. Failing to establish AI algorithms in a methodical and intentional way can result in misinformation. Meanwhile, with hackers heightening their efforts to glean data from such systems, enhanced security measures are critical. Ensuring robust training data and actively working to eliminate implicit bias can help ensure AI CRMs serve all customers fairly.