
The Top 3 Conversational AI Pitfalls, And How to Avoid Them

If you follow the conversational AI space, you’ve likely read an article or two proclaiming “Chatbots are dead!” If you’ve read my overview of conversational AI, you know that I disagree. Conversational AI technologies are already transforming business processes, and they’re only improving. That said, conversational AI definitely offers some challenges. In this post, I’ll talk about three of the most common conversational AI pitfalls that I’ve seen, and offer suggestions for how you can steer clear of them yourself.

Pitfall #1: Choosing Conversational AI When It Doesn’t Actually Make Sense

By and large, natural language is one of the most flexible communication systems in existence, but that doesn’t make it the best interface for everything. Imagine for a minute that you’re trying to help a less technically oriented relative set up a mobile phone. You might start guiding him verbally through the steps, but unless he catches on quickly, you’re likely to eventually take the phone and do it yourself.

You’ll want to avoid the conversational AI analogue of this experience: users who know exactly what they want to do and just want to push a few buttons, but are instead stuck explaining everything to a chatbot. To avoid this pitfall, make sure your use case is actually a good fit for conversation. Some of the most common reasons to use conversational interactions include:

Complexity

Natural language allows us to talk about almost anything under the sun. If your list of options, possible actions or the inputs you’re trying to collect is too long and unwieldy for typical GUI elements to handle, conversational AI is often a great fit. Similarly, if you’re operating in an area where technical jargon is common but may confuse users, conversational AI might help. Rather than presenting users with confusing terminology, you can let them express themselves in the language that is most natural to them while still ensuring that your application understands what they are saying.
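As one illustration, a small normalization layer can map everyday phrasing onto your internal terminology before intent matching, so users never have to learn your jargon. This is only a minimal sketch; the phrases and canonical terms below are hypothetical examples, not drawn from any particular product.

```python
# A minimal sketch: map plain-language phrasing to canonical internal terms
# before intent matching. The phrases and terms below are hypothetical.

JARGON_SYNONYMS = {
    "money i owe": "outstanding_balance",
    "monthly statement": "billing_cycle_summary",
    "autopay": "recurring_payment_authorization",
}

def normalize_utterance(utterance: str) -> str:
    """Rewrite everyday phrases as the canonical terms the NLU model expects."""
    normalized = utterance.lower()
    for phrase, canonical in JARGON_SYNONYMS.items():
        normalized = normalized.replace(phrase, canonical)
    return normalized

print(normalize_utterance("How much money I owe on my monthly statement?"))
# -> "how much outstanding_balance on my billing_cycle_summary?"
```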

Conversation as a value proposition

Beyond a transactional exchange of information, conversation can have value in and of itself. That’s why we like to meet a friend for a coffee and catch up, spend time messaging family and friends, and even strike up conversations with the stranger seated next to us on a plane. Conversation can be fun, engaging and illuminating. If your use case hinges on users developing a more personal relationship with your application or brand, conversational AI might be a good match. Examples of companies using conversational AI in this way include Woebot (a chatbot therapist) and Vi (a voice-based personal training application).

Convenience

Conversational AI applications are often the most convenient when you’re looking to augment or automate a business process that’s already grounded in a conversational medium. For instance, the Slack app ecosystem is thriving because it allows users to complete a variety of tasks without ever leaving the conversational interface that they use every day. Similarly, voice offers the possibility of allowing users to get information or execute tasks in a hands-free, eyes-free fashion. This can be incredibly convenient in certain situations (especially when you are driving or cooking a meal).

If your conversational AI application doesn’t deliver on at least one of these value propositions, think deeply to make sure conversational AI is really the right approach. Avoid using conversational AI simply because it’s popular, or because others in your space are.

Pitfall #2: Violating the Unspoken Rules of Human Conversation

Natural language works as well as it does between humans because communities of speakers generally follow the same basic rules of conversation, even if we aren’t consciously aware of them. We don’t hold a chatbot to the same expectations we have of another human, but even so, it can be jarring when conversational AI blatantly violates the rules we’re used to taking for granted.

One of the most common formulations of these rules comes from the philosopher of language Paul Grice. He identified a number of conversational “maxims” that speakers generally try to follow. Let’s take a quick look at these and how they specifically apply to conversational AI applications:

  • The Maxim of Quality states that speakers try to avoid giving false or unsupported information. In conversational AI, this means you’ll want to be sure that your chatbot doesn’t represent itself as a human, is clear about its limitations, and is consistent about the information that it shares. You’ll also want to be sure that your training data is of high quality and that you have a regular process in place to keep it up to date.
  • The Maxim of Quantity states that speakers try to give just the right amount of information, i.e., not too little and not too much. In conversational AI, one common issue is that bots often lack memory or context, forcing the human to repeat information that’s already been given. This is not only frustrating, but also detracts from the efficiency that makes conversational AI such a compelling medium to begin with (a minimal sketch of carrying context across turns follows this list).
  • The Maxim of Relevance states that each turn a speaker takes needs to further the common goals that the participants in the conversation share. For conversational AI applications, this means that you’ll need to invest in getting good coverage over user intents so that your bot knows what the user is trying to do and can give an appropriate response. It also means that you’ll want a user to have an engaging experience — where the bot understands and responds to their needs — in the first few seconds of a conversation.
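To make the Maxim of Quantity point concrete, here is a minimal sketch of carrying collected information across turns so the bot only asks for what it is still missing. The slot names and the flight-booking scenario are hypothetical, and a real application would persist this state per user session.

```python
# A minimal sketch of carrying context across turns so the bot never re-asks
# for information it already has. The slot names and scenario are hypothetical.

from dataclasses import dataclass, field

@dataclass
class SessionContext:
    slots: dict = field(default_factory=dict)  # everything the user has told us so far

    def remember(self, slot: str, value: str) -> None:
        self.slots[slot] = value

    def missing(self, required: list) -> list:
        return [slot for slot in required if slot not in self.slots]

def next_prompt(context: SessionContext) -> str:
    """Ask only for what the user hasn't provided yet."""
    required = ["departure_city", "arrival_city", "travel_date"]
    gaps = context.missing(required)
    if not gaps:
        return "Great, let me look up flights for you."
    return f"Could you tell me your {gaps[0].replace('_', ' ')}?"

session = SessionContext()
session.remember("departure_city", "Toronto")
session.remember("arrival_city", "Boston")
print(next_prompt(session))  # asks only for the travel date
```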

Beyond these generally applicable rules of human conversation, you’ll want to be sure that you understand the unspoken rules of the channel that you’re using and respect those as well. For instance, using periods in text messages can indicate that you’re upset or saying something especially serious. This is especially important if you’re designing for channels or user groups that are less familiar to your design team, so you may need to invest in some user research to make sure you’re on the right track.

Pitfall #3: Poor Handling of Failure Modes

Even the most meticulously designed conversational AI application is going to fail at times. That’s just the nature of working with an interface where the user isn’t limited to a finite set of actions, and instead can tell you anything in any way. You’ll of course want to devote effort to reducing your failure rates over time, but it’s equally important to think about how you can provide the best failure experience possible.

There are two common failure modes for conversational AI applications. In the first, the user says something, and your application can’t assign a matching intent. In this case, you’ll want to let the user know that you haven’t understood and offer the opportunity to try again, or ask a clarifying question that helps you gather more context about the user’s intent. But don’t do this too many times. After a couple of attempts, offer an apology and a next step to the user — whether that’s a human in the loop joining the conversation directly, or another support channel where they can get help. If you don’t have human support, apologize to the user, let them know your team analyzes failed conversations to try to do better, and then see if you can switch conversational gears to another topic.
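Here is a minimal sketch of that first flow: re-prompt when no intent matches, cap the retries, then apologize and offer a next step. The intent classifier, confidence threshold, and support link are all hypothetical stand-ins, not the API of any specific framework.

```python
# A minimal sketch of the first fallback flow: re-prompt when no intent matches,
# cap the retries, then apologize and offer a next step instead of looping.
# The classifier, confidence threshold, and support link are all hypothetical.

MAX_RETRIES = 2
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff for "we understood the user"

def classify(utterance: str):
    """Stand-in for a real intent classifier; returns (intent, confidence)."""
    known = {"check my balance": ("check_balance", 0.95)}
    return known.get(utterance.lower().strip(), ("unknown", 0.1))

def handle_turn(utterance: str, retries: int):
    """Return the bot's reply and the updated retry count."""
    intent, confidence = classify(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"Sure, I can help you {intent.replace('_', ' ')}.", 0
    if retries < MAX_RETRIES:
        return "Sorry, I didn't catch that. Could you try rephrasing?", retries + 1
    # Too many misses: apologize and hand off rather than looping forever.
    return ("I'm sorry I couldn't help with that. "
            "You can reach a person on our team at example.com/support."), 0

reply, retries = handle_turn("what's the weather like", retries=0)
print(reply)  # first miss: acknowledge it and ask the user to try again
```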

The second failure type is a little trickier. This is the case where your application thinks it has understood and continues the conversation, but is wrong about what the user wants. To handle this case, you’ll want to recognize when the user is trying to signal a failure so that you can recover and respond. Your chatbot should recognize common ways users try to repair a conversation, such as saying “help.” While these failures can be tricky, don’t overcorrect: unless a failure would be truly catastrophic, avoid asking the user to confirm everything multiple times or incessantly repeating back what you think you heard. While that may reduce failures, it also detracts from the efficiency and naturalness that make conversational AI compelling in the first place.
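As an illustration, here is a minimal sketch of watching for those repair signals so the bot notices a misunderstanding and recovers instead of plowing ahead. The signal phrases and the recovery response are hypothetical examples; in practice you would learn these from your own conversation logs.

```python
# A minimal sketch of watching for repair signals so the bot notices when it
# has misunderstood and recovers instead of plowing ahead. The signal phrases
# and the recovery response are hypothetical examples.

REPAIR_SIGNALS = (
    "help",
    "that's not what i meant",
    "no, go back",
    "start over",
)

def is_repair_signal(utterance: str) -> bool:
    text = utterance.lower().strip()
    return any(signal in text for signal in REPAIR_SIGNALS)

def respond(utterance: str, current_topic: str) -> str:
    if is_repair_signal(utterance):
        # Acknowledge the misunderstanding and reopen the conversation.
        return "Sorry about that, let's back up. What would you like to do instead?"
    return f"Okay, continuing with {current_topic}."

print(respond("That's not what I meant", current_topic="booking a flight"))
```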

The Good News: All of These Conversational AI Pitfalls Are Avoidable

Aside from being aware of these pitfalls and addressing each individually as you build your conversational AI application, there are a few general strategies you can use to make sure that you avoid them:

  1. Build a team with the proper expertise.
  2. Be thoughtful and sparing in how you use conversational AI. Yes, it’s here, and it’s transformative for many aspects of business, but it’s not the answer to everything.
  3. Launch iteratively, and iterate frequently.

I’d love to hear about any other conversational AI fails you’ve encountered and strategies for avoiding them. I’m @jbrenier on Twitter.
