Building a Framework of Ethics and Trust in Conversational AI


Photo: a storefront with a neon sign reading "code of ethical behavior" (Nathan Dumlao/Unsplash)

Used ethically, conversational AI can improve customer trust and increase a brand’s ROI. Used unethically, it tends to cost brands revenue through faulty decision-making, unconsciously biased algorithms, non-compliant behavior and, most frequently, bad data. Worse, the resulting damage to a brand’s reputation can be very difficult to repair.

Consumers today have little problem putting their trust in AI applications. Trust and loyalty go hand in hand, particularly between a brand and its customers: when customers lose trust in a brand, their loyalty goes with it, and brands rarely win either back. A Capgemini report entitled AI and the Ethical Conundrum revealed that 54% of customers have daily AI-based interactions with brands and, more importantly, that 49% of those customers found their interactions with AI trustworthy.

Customers aren’t alone in trusting AI; it turns out that employees trust it as well. The AI at Work report from Oracle and Future Workplace indicated that 64% of employees would trust an AI chatbot more than their manager, and 50% have turned to an AI chatbot rather than their manager for advice. Additionally, 65% of employees said they are optimistic, excited, and grateful about having AI “co-workers,” and nearly 25% said they have a gratifying relationship with AI at their workplace.

Related Article: How Conversational AI Works and What It Does

Conversational AI Is on the Rise

Tools such as BotSociety are enabling brands to design custom AI bots that customers can use for customer service inquiries, product information, feedback, and more. Projects such as DialoGPT and Replika provide a foundation for building versatile open-domain chatbots that can give engaging, natural-sounding responses. Additionally, conversational AI frameworks such as the open-source RASA framework, the Microsoft Bot Framework, and Google Dialogflow are enabling brands to delve deeper into conversational AI application development with minimal initial expenditure.
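At their core, these frameworks all do the same thing: map a user utterance to an intent, then map that intent to a response. The following is a minimal, framework-agnostic sketch of that loop in Python; the intents, keywords, and replies are purely illustrative, not the API of any tool named above, which would substitute a trained NLU model for the keyword lookup.

```python
import re

# Hypothetical intents for a retail bot; a real framework learns these
# mappings from labeled training utterances instead of keyword sets.
INTENTS = {
    "order_status": {
        "keywords": {"order", "shipping", "delivery"},
        "reply": "Let me check on that. What is your order number?",
    },
    "product_info": {
        "keywords": {"price", "specs", "features"},
        "reply": "Which product would you like details on?",
    },
}
FALLBACK = "I'm not sure I understood. Would you like to talk to a person?"

def respond(message: str) -> str:
    words = set(re.findall(r"[a-z']+", message.lower()))  # crude tokenizer
    for intent in INTENTS.values():
        if words & intent["keywords"]:  # any keyword overlap wins
            return intent["reply"]
    return FALLBACK

print(respond("Where is my delivery?"))  # order_status reply
print(respond("Tell me a joke"))         # fallback
```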

In spite of AI alarmists such as AI expert Kai-Fu Lee, who recently released a list of the top four dangers of AI, the public has been very accepting of AI applications in general and of conversational AI in particular. In fact, the global conversational AI market is expected to grow from $4.8 billion in 2020 to $13.9 billion by 2025, and Servion Global Solutions has predicted that by 2025, AI will power 95% of all customer interactions, including live telephone and online conversations, giving those businesses a 25% increase in operational efficiency.

Related Article: What’s Next for Conversational AI?

Build Ethics Into Conversational AI Foundations

Dr. Christopher Gilbert, international consultant, co-founder of NobleEdge Consulting, and author of the award-winning book The Noble Edge, shared what he considers the four most important ethical considerations of conversational AI. Gilbert views conversational AI as a tool, albeit a complex one, and like any tool, from matchsticks to kitchen knives, it can be used for good or evil according to the will of the user. “The focus of an ethical rule set must be on not just maintaining but building trust between organization and user,” he said.

Gilbert’s four ethical considerations for conversational AI are well defined:

  1. “Be clear and specific about the goals the organization has for using chatbots. A problem well defined is a problem half-solved. It’s important to state the obvious in this regard — any conversational AI must be user-centric and assist directly in solving the user’s problem in a way the user trusts. Ethics aren’t in the talking, they are in the walking. Walk the straight and narrow with conversational AI!”
  2. “When planning or using a conversational AI process, clearly differentiate between what can be done with that system and what should be done with that system from both the organizations’ and user’s perspectives. The important distinction here is that where laws tell us what we can do, it is ethics that tell us what we should do. The best chance to avoid the unethical on this new horizon of AI is moving beyond the law and concentrating on what should be done to build genuine, long-term trust with the clients and customers.”  
  3. “Building trust through computerization or virtuality is a Corinthian task. Every action of planning and implementation must be permeated with complete transparency. The organization must set reasonable expectations with the customer or client by clearly communicating what the organization’s goals are for the chatbot as well as its capabilities and limitations. This should include how any information garnered will be used and protected. The organization must communicate clearly that it has a firm grasp of privacy in both conscience and technology.”  
  4. “Provide an alternative to the AI process either through a live body-in-waiting or a messaging option that is monitored and utilized within a set and minimum amount of time. It probably goes without saying, but comfort with conversational AI is generational. Many in the older generations view the use of conversational AI as glaringly impersonal and a money-saver for the company employing it.”

Liziana Carter, CEO and founder of GR0W.AI, created an AI chatbot that works across marketing, sales, and operations. She noted that while the idea of talking to machines was science fiction not long ago, today it is changing how we run our daily lives and businesses. “However, conversational AI today is only ‘pattern recognition,’ which is still far from ‘creative thinking’ or AI General Intelligence. It’s machine learning around how to perform specific tasks,” said Carter.

Again, it comes down to the biases, prejudices, and morals of those who create the conversational AI application. “To be ‘taught’ morality, fairness, or ethics, it needs to follow pattern recognition built by its coder/designer, which ultimately comes down to its designer’s understanding of morality and fairness. And this brings us back to having a vetted team of experts designing a solid foundation for the conversational AI framework from the beginning,” she said.

Related Article: Conversational AI Needs Conversation Design

Eliminate Unconscious Bias in AI

In 2018, Amazon.com discovered that its AI-based recruiting engine was unconsciously biased against women, so it scrapped the tool and went back to the drawing board. Amazon obviously did not deliberately design the tool to be biased; rather, its computer models had been trained to vet job applicants by observing patterns in resumes submitted to the company over a 10-year period. Most of those resumes came from male applicants, a reflection of male dominance within the IT sector.

This is just one example of how unconscious biases can creep into AI applications. Such biases must be recognized for what they are and for the damage they can do, and they must be deliberately eliminated: the data used to train an AI has to be free of unconscious bias for the AI itself to be free of it.
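A practical place to start is auditing the training data before any model sees it. The sketch below, with hypothetical field names, flags a dataset in which one group dominates, the same skew that undermined Amazon’s recruiting tool.

```python
from collections import Counter

def audit_representation(records, attribute, threshold=0.7):
    """Warn if any single value of `attribute` dominates the dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    for value, count in counts.items():
        share = count / total
        print(f"{attribute}={value}: {share:.0%} of training records")
        if share > threshold:
            print("  WARNING: dominant group; rebalance before training.")

# Toy data echoing the Amazon case: the historical record skews male.
resumes = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
audit_representation(resumes, "gender")
```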

Although true conversational AI chatbots are not rule-based or scripted, most still rely on some scripted answers for specific queries, and conversational design is one tool for preventing unconscious biases from being built into them. Specific governance structures must be in place both during development and after the conversational AI application is deployed, with humans continually evaluating the data and the app’s behavior to ensure that unconscious biases do not appear.
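One lightweight way to picture such a governance hook, purely as a sketch with invented names: serve vetted scripted answers for known queries, and log every generated reply for human review.

```python
REVIEWED_SCRIPTS = {  # wording vetted in advance by a human team
    "refund_policy": "You can return any item within 30 days for a full refund.",
}
review_queue = []  # sampled periodically by human evaluators

def answer(intent, generate):
    if intent in REVIEWED_SCRIPTS:        # known query: approved wording
        return REVIEWED_SCRIPTS[intent]
    reply = generate(intent)              # open query: model-generated
    review_queue.append((intent, reply))  # flag for human evaluation
    return reply

print(answer("refund_policy", lambda i: ""))          # scripted path
print(answer("small_talk", lambda i: "Lovely day!"))  # generated and queued
```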

Additionally, if machine learning is used to continually improve the AI application, it must be monitored to ensure that the biases of those conversing with it do not seep into the training data. In 2016, Microsoft debuted its Tay Twitter bot, which it described as an experiment in “conversational understanding.” Bombarded by users tweeting racist and misogynistic remarks, Tay took less than 24 hours to begin parroting those prejudiced tweets back to other users, raising serious questions about the use of public data to teach AI applications.
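A minimal sketch of that safeguard, assuming a placeholder blocklist where a production system would use a trained toxicity classifier: gate every live message before it can enter the retraining set.

```python
# The blocklist is a placeholder; a production gate would use a
# trained toxicity classifier rather than fixed terms.
BLOCKLIST = {"slur1", "slur2"}
training_buffer = []  # messages eligible for the next retraining run

def ingest(message: str) -> bool:
    """Admit a live message into the training set only if it passes the gate."""
    if set(message.lower().split()) & BLOCKLIST:
        return False  # drop it (or route to moderators); never learn from it
    training_buffer.append(message)
    return True
```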

Related Article: Designing Effective Conversational AI

Simulated Emotion and Empathy Can Help Build Trust

Since we are obviously not at the point where conversational AI chatbots can express real emotion and empathy (and are unlikely to get there any time soon), simulated emotion and empathy can be incorporated into the AI experience instead. “Emotion and empathy come down to what makes us unique as humans — creative thinking,” said Carter. “Although we seek to automate as many repetitive tasks as possible, empathy makes us relate, connect, and engage in more activities that fuel these emotions.” That makes emotion and empathy serious tools for helping the people who interact with AI apps become more engaged, comfortable, and satisfied with the experience.

“In this regard, we’ve found that simulating emotion as a part of a bot’s personality engages users much better, makes them react back with emotion, and even interact more — even though they know it’s a robot they’re talking to,” Carter explained.
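As a toy illustration of the idea (word lists standing in for a real sentiment model, and not Carter’s actual implementation), a bot can key its tone to the user’s mood while the underlying answer stays the same:

```python
# Word lists stand in for a real sentiment model.
NEGATIVE = {"angry", "frustrated", "annoyed", "upset", "late"}

def empathetic_prefix(message: str) -> str:
    if set(message.lower().split()) & NEGATIVE:
        return "I'm really sorry about the trouble. "
    return "Happy to help! "

def reply(message: str, answer: str) -> str:
    return empathetic_prefix(message) + answer  # tone varies, facts don't

print(reply("my order is late and I'm upset", "Your package arrives tomorrow."))
```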

The more comfortable the user is with conversing with the AI bot, the more they will be inclined to do so. “In the first stages of designing the bot’s personality, voice, and tone, we think about our primary purpose,” she said. “And that is to make the user feel as comfortable as possible, so they can stick around for longer and interact more with the brand.” 

If a human agent is needed to complete a customer service session, the brand voice should remain consistent. “We align that personality with the brand’s voice and ensure that even when the bot passes the conversation over to the human, the human continues in the same voice, this time naturally showing emotion and empathy and delivering a seamless experience from beginning to end,” suggested Carter.
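A rough sketch of what such a handoff could carry, using an invented structure rather than any particular platform’s API: the transcript, the persona the bot has been using, and the unresolved issue all travel to the agent together.

```python
from dataclasses import dataclass

@dataclass
class Handoff:
    user_id: str
    transcript: list    # the full bot conversation so far
    persona_notes: str  # the tone the bot has been using
    open_issue: str     # what still needs resolving

def escalate(session: dict) -> Handoff:
    """Package everything the human agent needs to continue seamlessly."""
    return Handoff(
        user_id=session["user_id"],
        transcript=session["messages"],
        persona_notes="warm, first person, plain language",
        open_issue=session.get("unresolved", "unspecified"),
    )
```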

Final Thoughts

The use of conversational AI applications is on the rise across many industries, and both customer and employee trust in AI is high. Ethics need to be incorporated into AI from the beginning, and unconscious bias must be eliminated from the data that is used to train the AI. Simulated emotions and empathy can be incorporated into conversational AI to build trust, engagement, and emotional satisfaction in conversations.


