HR consulting firm Mercer published a study last year on what employees really want from their companies. For the financial services industry in particular, it found: 64% of employees say state-of-the-art digital tools are important to their success; 51% want more flexible work options; 44% worry, though, that flexible work options negatively impact promotions; and only 31% say their company makes it easy for them to innovate.
As you chew on that this morning, consider that banks too are trying to innovate, using artificial intelligence (AI) to expedite decision-making and improve the customer experience.
In a report by Accenture, 800 global bankers, many of them early adopters, shared their experiences with AI. We thought their input could be interesting for community bankers who are following developments in and around AI.
Surveyed bankers found the best uses of AI were: building customer trust and confidence (71%); cost and operations optimization (63%); and improving regulatory compliance (62%). Yet 75% conceded that they weren't fully prepared to handle all of the "societal and liability issues" that could arise if most of the bank's day-to-day decisions about customer accounts were made by AI tools.
While AI is everywhere and influencing everyone these days, even the biggest banks still seem to be grappling with the complex issues of this relatively new technology. For instance, according to the findings, it is likely a waste of time, money and effort to deploy sophisticated AI algorithms to answer simple questions such as, "How do I pay a bill?" These questions are handled more efficiently with links on your website.
Yet, AI tools used for decision-making on loan applications could be more useful with the right data. That can be a challenge, though, as banks often interface with several third-party data sources (such as government databases and vendor systems). Here the survey found 34% say their bank has been the victim of adversarial AI, malicious input fabricated to trick AI programs, at least once. Not surprisingly, 78% believe that automated systems give rise to new risks, including outside data manipulation, false data and bias.
It has also been found that the wrong data or faulty underwriting algorithms can lead to bias. For example, an auto lender asked loan applicants for their state of residency and the vehicle's mileage. When combined in an algorithm, this information inadvertently led to more minority applicants being declined.
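To see how facially neutral inputs can act as proxies for a protected attribute, here is a minimal, hypothetical sketch. Every applicant, penalty, and cutoff below is invented for illustration; this is not any lender's actual model. The key point is that the scoring rule never sees the group label, yet the audit still finds a gap in approval rates.

```python
from collections import defaultdict

HIGH_MILEAGE = 80_000   # invented mileage cutoff
APPROVE_AT = 680        # invented approval threshold

def credit_score(state: str, mileage: int) -> int:
    """Facially neutral rule: penalize certain states and high-mileage cars."""
    score = 700
    if state == "MS":            # invented state-of-residency penalty
        score -= 40
    if mileage > HIGH_MILEAGE:   # invented vehicle-mileage penalty
        score -= 40
    return score

# Synthetic applicants: (state, mileage, group). "group" stands in for a
# protected attribute; it is recorded only for the audit, never scored.
applicants = [
    ("NY", 40_000, "A"), ("CA", 30_000, "A"), ("NY", 45_000, "A"),
    ("CA", 50_000, "A"), ("MS", 95_000, "B"), ("MS", 88_000, "B"),
    ("NY", 90_000, "B"), ("MS", 99_000, "B"),
]

# Disparate-impact audit: approval rate per group.
outcomes = defaultdict(list)
for state, mileage, group in applicants:
    outcomes[group].append(credit_score(state, mileage) >= APPROVE_AT)

approval_rate = {g: sum(d) / len(d) for g, d in outcomes.items()}
print(approval_rate)  # group B is declined far more often than group A
```

In this toy data, every group A applicant is approved and every group B applicant is declined, even though group membership was never an input. That is the proxy problem in miniature, and why auditing outcomes, not just inputs, matters.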
When working out these kinks, it's good to keep the "explainability" of AI tools in mind. Virtually all of the bankers surveyed agreed that it is imperative for regulators and customers to understand how AI reaches its decisions. To that end, about 33% said their AI decisioning tools would have complete transparency within the next 2Ys. Undoubtedly, transparency is key to the growing acceptance of AI in banking and the broader financial industry.
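One hypothetical way to picture explainability in decisioning: have the model return not just a verdict, but the specific factors that drove it, much like the adverse-action reasons lenders already provide. The rules and numbers below are invented for the sketch.

```python
# Hypothetical sketch: an "explainable" decision returns its reasons.
# All rules, weights, and the threshold are invented for illustration.

APPROVE_AT = 680  # invented approval threshold

def decide(income: int, debt: int, delinquencies: int):
    """Score an application and record each factor that reduced the score."""
    score, reasons = 700, []
    if debt * 2 > income:           # invented debt-to-income rule
        score -= 50
        reasons.append("debt-to-income ratio too high")
    if delinquencies > 0:           # invented delinquency rule
        score -= 30 * delinquencies
        reasons.append("recent delinquencies on file")
    approved = score >= APPROVE_AT
    return approved, score, reasons

# A declined applicant can be told exactly which factors hurt them.
approved, score, reasons = decide(income=60_000, debt=35_000, delinquencies=1)
print(approved, score, reasons)
```

The design choice is simple: every penalty the model applies leaves a human-readable trace, so a regulator or customer can follow the decision step by step instead of facing a black box.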
These tools will succeed best if decisions are based on "symbiotic human-machine interactions, not binary human-versus-machine choices," according to Accenture, so take note.
Early adopter banks have shown that a balanced approach works best, and once your bank gets involved, you will probably find the same.