Chatbots: 3 Things for PMs to Watch Out For
Solving problems for customers, as product managers, is our most important job. The tools we use are there to help us find solutions to those problems.
The increasing use of AI in our products has opened up a brand new world for product teams to explore. One of those tools, the chatbot, gives teams an opportunity to talk to customers around the clock and, hopefully, solve their problems even while the team sleeps.
We’ve been using chatbots for several years now, and if there is one conclusion we can draw from that experience, it’s this: chatbots are hard.
Yes, when it comes to business outcomes, these chatbots are most often designed as a cost-saving measure to lower the CSM-to-revenue ratio. For most companies, though, they never fulfill that promise.
For example, according to The Information, Facebook had a 30% success rate with their chatbots. Even with essentially a blank check and likely some of the brightest minds in the field working on their chatbots, they still failed 70% of the time.
The issue with chatbots is that, unless they’re clearly outlined as a closed system, they do all three things we talked about above. The customer assumes the chatbot can solve their problem, no matter how complex; the AI takes whatever it is fed and flattens the input to fit its model of the world; and the customer quits in frustration, often with lowered trust.
I had this happen personally with my cable company. I told the support bot about an internet outage I was having and engaged with it as it tried to solve my problem. The AI wasn’t reporting back to the other lines of support, and it flattened my issue into a binary. When I got fed up and called the company, they had no idea about those interactions, and I found out that the chatbot system isn’t even CONNECTED to their customer support team. Guess where my trust in that cable company is now.
Again, think of Facebook, whose war chest is in the billions, failing here. Now think of your company, whose war chest is in the thousands, not being crystal clear about the problems you are solving with AI.
This is as good a place as any to talk about three things you, as a product manager, should watch for when constructing any AI project:
Data being collected but not connected
Avoiding the magic box
AI evolving but you aren’t
1. Data being collected but not connected
One of the things that frustrated me most during the cable story wasn’t just finding out that my information wasn’t getting to the right people; it was also that the people helping me sounded just as dejected.
Our chatbots are not on an island; they represent the company both internally and externally. Our customers see them as representatives, no different from your customer success team exchanging an email or taking a phone call.
Our customers expect that level of follow-through when using the system.
If they aren’t playing well with the customer and they aren’t working with your customer support staff, it leads to misalignment. When a customer then talks to the support staff, you’ll find instant frustration – the customer will wonder what happened to the conversation they had, and the representative will think of the multiple times this has happened to them with no recourse.
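One way to picture the fix is a shared support record that both the chatbot and human agents write to, so an agent picking up the phone sees the bot conversation that came before. This is a minimal sketch, not any particular vendor's API; the class and field names here are all hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SupportRecord:
    """One customer's support history, shared across every channel."""
    customer_id: str
    events: List[str] = field(default_factory=list)


class SharedSupportLog:
    """Both the chatbot and human agents write to the same record,
    so neither channel is 'collected but not connected'."""

    def __init__(self) -> None:
        self._records: Dict[str, SupportRecord] = {}

    def log(self, customer_id: str, channel: str, message: str) -> None:
        record = self._records.setdefault(customer_id, SupportRecord(customer_id))
        record.events.append(f"[{channel}] {message}")

    def history(self, customer_id: str) -> List[str]:
        # What a human agent sees when the customer calls in.
        record = self._records.get(customer_id)
        return record.events if record else []


log = SharedSupportLog()
log.log("cust-42", "chatbot", "Customer reported internet outage; suggested modem reset")
log.log("cust-42", "phone", "Customer called; agent can see the chatbot exchange above")
print(log.history("cust-42"))
```

In my cable story, the phone agent would have opened the call already knowing about the outage report, instead of starting from zero.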
2. Avoiding the magic box
If you’ve ever looked at some AI process and wondered, “What’s happening in there?” and no one can answer your question, you might find yourself in the middle of what I call the “magic box” fallacy. This happens when someone feeds input into a tool/object/person and simply trusts the output without understanding the process.
The easiest way to see if your company has “magic box” thinking is to check its decision fitness: how often does it verify that the AI’s decisions are aligned with its goals?
If asking the question, “How do we know this AI process is working to serve our customers?” brings nothing but strange looks, I can assure you that the AI is a magic box.
If people aren’t looking confused, ask further whether there is a process that checks that the major decisions the AI is making track with the expected behavior.
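That check can start very simply: log the decisions the AI makes, write down what the team expects for each kind of input, and measure how often the two agree. Here is a minimal sketch of that audit; the categories and decision names are made up for illustration:

```python
from collections import Counter
from typing import Dict, List, Tuple


def audit_decisions(
    decision_log: List[Tuple[str, str]],
    expected: Dict[str, str],
) -> float:
    """Compare logged AI decisions against the team's expected behavior.

    decision_log: (input_category, ai_decision) pairs pulled from logs
    expected: input_category -> the decision the team expects
    Returns the fraction of decisions that matched expectations.
    """
    results: Counter = Counter()
    for category, decision in decision_log:
        if expected.get(category) == decision:
            results["aligned"] += 1
        else:
            results["misaligned"] += 1
    total = sum(results.values())
    return results["aligned"] / total if total else 0.0


decision_log = [
    ("outage", "escalate_to_human"),
    ("billing", "show_invoice"),
    ("outage", "close_ticket"),  # not what we expect for an outage
]
expected = {"outage": "escalate_to_human", "billing": "show_invoice"}
print(audit_decisions(decision_log, expected))  # 2 of 3 decisions aligned
```

If no one can produce even this much, the box is magic. The hard part in practice is agreeing on the `expected` column, which is exactly the conversation the question above is meant to force.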
3. AI evolving but you aren’t
Your chatbot is getting better with every conversation. Can you say the same about the processes inside your team? Customers tend to say things to chatbots that you may not get in any research conversation or see in any screengrab.
Our job is to solve problems. Using our customers to learn is a big part of how we can increase the confidence that we can solve those problems.
The benefit of those 24/7 conversations is that customers are talking directly about the problems they see, in every conversation. That means our chatbots can function as another way into knowing our problem better.
It’s important to have a disciplined research practice, and even more so if you are working with chatbots. Can you find the conversations that happen repeatedly? Do you have a home for edge cases? Are the customer success/support teams plugged in with product management to ensure conversation is monitored and the problems identified? The always-on nature of chatbots means you’ll have a lot of information coming at you, and if it isn’t structured, you’ll miss out on the opportunity to get better.
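Finding the conversations that happen repeatedly can start as a simple tally over tagged transcripts. This sketch assumes transcripts already carry a topic label (in practice, that tagging step is the real work, whether done by hand or by a classifier); the field names are hypothetical:

```python
from collections import Counter
from typing import Dict, List, Tuple


def recurring_topics(
    transcripts: List[Dict[str, str]],
    min_count: int = 2,
) -> List[Tuple[str, int]]:
    """Tally chatbot conversation topics to surface the problems
    customers raise repeatedly, most frequent first. Topics below
    min_count are left out as potential edge cases to triage separately."""
    counts = Counter(t["topic"] for t in transcripts)
    return [(topic, n) for topic, n in counts.most_common() if n >= min_count]


transcripts = [
    {"topic": "outage"},
    {"topic": "billing"},
    {"topic": "outage"},
    {"topic": "outage"},
    {"topic": "password-reset"},
]
print(recurring_topics(transcripts))  # [('outage', 3)]
```

The topics that clear the threshold feed the roadmap conversation with product management; the ones that don’t are your home for edge cases.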
In conclusion
You don’t ever want customers to experience that level of frustration with your chatbot solutions. Every conversation is an opportunity for one of two things – better customer outcomes or more frustration for your customers. For product people, the former means better long-term product success.
If not, your customers will be looking for someone else.
My cable company is a monopoly; I am betting yours is not. Don’t fall into those traps, because trust is much harder to recapture than it is to earn.
PROMPT
If your company uses chatbots, does collected data reach the support team? How is it used to improve customer service?
Reply to this email OR in the comments - let's have a talk!
Want to have a 30-min conversation? Book my office hours here.
More on the topic
How Messenger and “M” Are Shifting Gears - more on Facebook chatbots, by Cory Weinberg
How to Be an Ethical Product Manager - the moral code for PMs, by Ellen Merryweather
When It Comes To AI, Don’t Get Trapped In The Magic Box - “magic box” fallacy explained, by me
A Discussion on Biases in Product Management - from my time on Black Epics, a podcast dedicated to sharing the stories of successful Black Product Managers
If You Do Research Well It Never Feels Like a Waste of Time - my episode of the Product Science Podcast
Reading Now
We’re making choices. Always making choices. And Annie Duke’s How to Decide helps us understand them. If you are wondering about the magic box in AI, it pales in comparison to the one you walk around with every day.
Give this one a read to move away from haphazard “deciding” and start thinking better.
Note: The content you see here was initially posted on Product School. You can find all my content that is on Product School here. Want to see more on Artificial Intelligence? Click here.