Chatbots continue to handle an increasing amount of customer support and other B2C and B2B interactions. Reports and Data expects the global chatbot market to reach $10.08 billion by 2026, a 30.9% compound annual growth rate from the $1.17 billion market of 2018. Chatbots are being used to take customer orders, speed interactions, increase access, generate leads and perform a variety of other tasks.
Yet, despite these advantages, there are times when chatbots don’t deliver better CX. Below are four examples of how chatbots can fail in CX.
Chatbots Can’t Perform Sophisticated Interactions
Chatbots don’t run a sophisticated underlying domain and language model that would allow them to execute complex tasks on a user’s behalf, said Jen Snell, Verint vice president, Go-to-Market, Conversational AI. “Making travel arrangements is the most common example that people use, but from insurance qualification lookups to financial advice, customer service inquiries and more, bots need help when your business process has more than one destination or more than one acceptable outcome.”
Similarly, most chatbots don’t have the ability to compile information from more than one system of record, Snell added. They run well on a single database or feed to which they are highly attuned. They don’t do well with different data structures and/or different web services, the combination of which inevitably throws them for a loop. For the enterprise, with many systems of record that their AI must call upon to achieve effective resolution and deliver the right information, a more integrated intelligent system will likely be needed.
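To make the integration problem Snell describes concrete, consider a bot that must answer a single question from several systems of record. A minimal sketch of the orchestration layer such a bot would need is below; the backends here are plain dictionaries standing in for real CRM, billing and order APIs, and all names and fields are invented for illustration.

```python
# Hypothetical sketch: merging records from multiple systems of record
# before the bot answers. Dicts stand in for real backend services.

CRM = {"cust-42": {"name": "Ada", "tier": "gold"}}
BILLING = {"cust-42": {"balance": 125.50}}
ORDERS = {"cust-42": [{"id": "o-1", "status": "shipped"}]}

def resolve_customer_view(customer_id):
    """Merge records from each system; missing data is flagged, not guessed."""
    view, missing = {}, []
    for name, source in (("crm", CRM), ("billing", BILLING), ("orders", ORDERS)):
        record = source.get(customer_id)
        if record is None:
            missing.append(name)
        else:
            view[name] = record
    return view, missing

view, missing = resolve_customer_view("cust-42")
```

Even in this toy form, the hard part is visible: each source has its own shape, and the bot needs explicit logic for partial failures rather than assuming one attuned feed.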
Chatbots also tend to fail when asked to cross business lines, Snell said. “It is much easier to model a single line of business in a vertical with a finite number of statistically prevalent commands than it is to teach a computer how to traverse multiple fields of knowledge. Bots are, for this reason, best used for single-serve, single-environment needs.”
Related Article: A Guide to Marrying Bots and Humans for Exceptional Customer Service
Chatbots Can’t Replace Humans
“One of the biggest mistakes in designing a chatbot is to attempt to make it appear as though the bot is a human,” said Baruch Labunski, founder of Rank Secure. “Yes, chatbots, built and employed with a clear strategy, can help manage some customer demands on a website. But when chatbots can’t successfully assist your website customers, you end up with frustrated users who may well seek another website to handle their business.”
The chatbot designer, the website owner and the website customers need to understand what a chatbot can and cannot do, Labunski added. While a well-designed chatbot can manage a simple customer return or a balance query, it might not be able to successfully help every customer.
“To be blunt, a chatbot can’t fully replace a human employee,” said Labunski. “Designing responses to be too cute or clever can make conversations confusing for users, and if you do succeed in deceiving a user, their expectations are likely to be unrealistically high.”
Chatbot Functionality Is Purposely Limited
All too often, chatbots end up being used only to identify the broad category of an issue and then route the customer to one or several self-service options; only after that is the customer given the opportunity to speak to a live agent, said Ubiquity COO Sagar Rajgopal. “When this journey is not designed well, it gives a perception that the company has little desire to engage with a customer and address a potential concern. This is something that gets magnified in the era of efficient social media.”
An associated, and sometimes contributing, factor is chatbot technology that isn’t strong enough to understand the breadth of potential inquiries a customer might feed into the system, Rajgopal added. “If the resulting downstream response trees are not well designed it quickly leads to a frustrating experience. There are many strategies available to avoid or respond to such scenarios, but they require an appropriate investment of time and thought capital to minimize potential negative outcomes.”
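One concrete way to avoid the dead-ends Rajgopal warns about is to give every response tree an explicit escalation branch. The sketch below is hypothetical, not any vendor's design; the node labels and intents are invented for illustration.

```python
# Hypothetical sketch of a response tree with an explicit fallback branch.
# Each node holds a scripted reply and the follow-up intents it can lead to.

RESPONSE_TREE = {
    "billing": {"reply": "Let's look at your bill.", "next": ["dispute", "refund"]},
    "dispute": {"reply": "I can open a dispute for you.", "next": []},
    "refund":  {"reply": "Refunds take 3-5 business days.", "next": []},
}

def respond(intent, escalate):
    """Return the scripted reply, or escalate instead of dead-ending."""
    node = RESPONSE_TREE.get(intent)
    if node is None:
        return escalate(intent)  # hand off rather than frustrate the user
    return node["reply"]

print(respond("billing", lambda i: "Connecting you to an agent."))
print(respond("warranty", lambda i: "Connecting you to an agent."))
```

The design point is that the unknown-intent path is a first-class branch, decided up front, rather than an afterthought discovered by a frustrated customer.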
Related Article: 10 Ways to Measure Chatbot Program Success
Chatbots Don’t Understand Intent
This is what every brand fears — a “rogue” chatbot that engages in offensive conversation, Netomi said in a company blog. “Chatbots may be pre-programmed to unknowingly respond yes or no to a question that it doesn’t understand for the sake of carrying on a conversation. There are numerous other ways trolls can make chatbots say inappropriate things that damage your brand.” Perhaps the most famous example of this was the Microsoft chatbot Tay, which was offering racist and other inappropriate comments within 24 hours of its debut and quickly had to be shut down.
Though that incident was five years ago, similar incidents still occur today. In January, Lee Luda, a South Korean AI chatbot built to emulate a 20-year-old Korean university student, engaged in homophobic slurs on social media within three weeks of its debut and was shut down by creator Scatter Lab to fix its weaknesses, according to a published report.
To prevent such inappropriate responses, Netomi recommends that companies not train chatbots to respond blindly to queries or to attempt to answer queries they don’t understand.
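In practice, this advice usually translates into a confidence threshold: if the intent classifier isn't sure, the bot admits it and hands off instead of guessing. A minimal sketch follows, with a stand-in classifier and an invented threshold value; real systems would use a trained NLU model here.

```python
# Hypothetical sketch: refuse to answer below a confidence threshold,
# rather than responding blindly to a misunderstood query.

def classify(utterance):
    """Stand-in intent classifier returning (intent, confidence)."""
    known = {"where is my order": ("order_status", 0.93)}
    return known.get(utterance.lower(), ("unknown", 0.10))

def handle(utterance, threshold=0.75):
    intent, confidence = classify(utterance)
    if confidence < threshold:
        # Admit uncertainty instead of guessing yes or no.
        return "I'm not sure I understood. Let me connect you with a person."
    return f"Handling intent: {intent}"
```

Tuning the threshold is a trade-off: set it too low and the bot answers queries it doesn't understand; too high and it escalates queries it could have handled.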