
Mark Answers as 'Good' or 'Bad' to help train bot



Posted Feb 22, 2023

I am finding that the chat bot frequently triggers answers with no understandable logic at all. For example, I created a test chat where the phrase I used for my query was

"Bucket of custard" 

That phrase does not appear in any of my training phrases for any of my existing answers. And yet, it triggered an answer for "Forgot Pin" with the following training phrases:

"I have forgotten my pin"
"forgot pin"
"don't know pin"
"reset pin"
"new pin"

Firstly, why on earth would it take my question of "Bucket of custard" and suggest the "Forgot Pin" answer?

Secondly, is there no way an agent can mark that answer as a good or bad example, which would further train the bot? Customers are already getting fed up with erroneous answers being thrown at them since we introduced the chat bot into our platform 3 weeks ago.

You can train a bot on things to include in its training phrases, but can you not teach it to exclude things also?
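To make the mismatch concrete, here is a rough Python sketch of why a closest-match classifier with no confidence floor always fires something. Zendesk hasn't published how its matching actually works, so this is purely illustrative: a toy bag-of-words similarity stands in for whatever representation the real bot uses, and the answer names are just my examples.

```python
import math
from collections import Counter

# Hypothetical answers and training phrases, mirroring the example above.
ANSWERS = {
    "Forgot Pin": [
        "I have forgotten my pin", "forgot pin", "don't know pin",
        "reset pin", "new pin",
    ],
    "Opening Hours": [
        "when are you open", "opening hours", "what time do you close",
    ],
}

def vectorize(text):
    # Toy bag-of-words vector; a real bot presumably uses embeddings,
    # where even unrelated phrases can score a weak nonzero similarity.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_answer(query, min_confidence=0.0):
    scores = {
        name: max(cosine(vectorize(query), vectorize(p)) for p in phrases)
        for name, phrases in ANSWERS.items()
    }
    name, score = max(scores.items(), key=lambda kv: kv[1])
    # With no confidence floor, *something* is always returned: even
    # "Bucket of custard" gets whichever answer scores highest (here a
    # tie at 0.0, so the winner is effectively arbitrary).
    if score < min_confidence:
        return None  # better: ask a clarifying question or hand off
    return name

print(best_answer("Bucket of custard"))                      # an answer fires anyway
print(best_answer("Bucket of custard", min_confidence=0.3))  # None -> clarify instead
```

The point is the last two lines: without a minimum confidence, nonsense input still wins an answer; with one, the bot could fall back to a clarifying question instead of firing "Forgot Pin".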

UPDATE (June 2023)

The ability to mark bot answers as good or bad works really well in other systems. Here is a screenshot of how it is used in Intercom. 

The difference seems to be that when setting up an answer there, the bot searches previous chats for questions customers have actually asked and uses those real-life examples as the training phrases for a bot answer. At the stage where it suggests training phrases, you mark them as good or bad from the outset.

I would really like to see something like this in Zendesk, as it makes me wince when I see the bot respond with completely the wrong answer, and I have no way of stopping it other than to create complete answer flows for all the possible incorrect answers and force customers down the 'right' path.


4 comments

I agree with this. Also, I have learned that you can minimize this issue by creating more answers. For example, if customers commonly type "Bucket of custard", you can create an answer specifically for that phrase.

To do this, I reviewed what customers were commonly typing and then created answers for those inquiries. This prevents the customer from being shown irrelevant articles from the Help Center or being sent down irrelevant flows.
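For anyone wanting to systematise that review, here is a minimal sketch of the tallying step. The transcript data is hypothetical; a real workflow would parse a Zendesk chat export, and exact-match filtering is the crudest possible version.

```python
from collections import Counter

def top_unhandled_queries(transcripts, handled_phrases, n=10):
    """Tally customer lines that match none of the existing training phrases."""
    handled = {p.lower() for p in handled_phrases}
    counts = Counter(
        line.strip().lower()
        for chat in transcripts
        for line in chat
        if line.strip().lower() not in handled
    )
    return counts.most_common(n)

# Hypothetical transcripts standing in for a real chat export.
transcripts = [
    ["where is my order", "bucket of custard"],
    ["where is my order", "forgot pin"],
]
print(top_unhandled_queries(transcripts, handled_phrases=["forgot pin"]))
# [('where is my order', 2), ('bucket of custard', 1)]
```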



I agree with this post.

The bot should ask clarifying questions instead of firing answers that are completely unrelated to the customer's concern.

I am also surprised that there is no feature like this to help train the bot to better understand which intents should be related to which answers. Aside from this great idea by Rachel, there should also be an exclude list or blacklist, similar to how Triggers and Automations let you specify that a ticket comment must not contain certain words or phrases; an answer could likewise be blocked from firing when the query contains certain words.

For example say there are 2 types of verification processes.

1. ID verification
2. Address verification

The moment a customer types something like "add verification", it still fires ID verification. For flows in which the terminology is very close, it would be helpful if we could have blacklists. Also, since there are many cases where the answers fired are unrelated, just the ability to exclude certain words where we see an incorrect pattern would greatly help avoid inconveniencing customers.
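To be clear about what I am asking for, here is a hypothetical sketch of a per-answer exclusion list. Nothing like this exists in Zendesk today; the answer names and fields are illustrations only.

```python
ANSWERS = {
    "ID verification": {
        "train": ["verify my id", "id verification", "identity check"],
        # Hypothetical blocklist: never fire this answer for these phrases.
        "exclude": ["address", "add verification"],
    },
    "Address verification": {
        "train": ["verify my address", "address verification", "add verification"],
        "exclude": [],
    },
}

def eligible_answers(query):
    """Drop any answer whose exclusion phrases appear in the query."""
    q = query.lower()
    return [
        name for name, cfg in ANSWERS.items()
        if not any(bad in q for bad in cfg["exclude"])
    ]

print(eligible_answers("add verification"))  # ['Address verification']
```

The filter would run before matching, so "add verification" simply could never reach the ID verification answer, however the similarity scores fall.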



I noticed that any time a customer provided a model or part number, the bot wasn't replying in a helpful way. To combat this, I created an answer with maybe twenty examples of alphanumeric model/part numbers. This worked wonderfully to redirect the conversation until someone responded, “prop 65.” 

 

While the goal is to answer all questions well, we must respond to compliance questions specifically and carefully. Like others, I needed to create a separate answer specific to Prop 65, which, in fairness, I would have done anyway, but the ability to direct the bot with "this, not that," or exclusionary training, would be a powerful tool in my toolbelt.

 

We aren't quite at a place where we are ready to turn generative AI on, so the bot must suggest articles or follow its training. I've noticed that it tends to shy away from article suggestions, even though this behavior is turned on. In the above example, there are several articles in our Help Center, one even titled Prop 65, and all with labels to help drive the customer in that direction. I am a little surprised that it would bypass the exact wording of an article title and instead reply based on similar wording in a training phrase.

 

To Kyna's point, we also have several similar-sounding paths. What I've done to combat this is start with a qualifying question. For example, there are several issues that could come up with a fireplace, so I start with the question, "Does this request involve a fireplace insert?", yes or no. This step type is called "present options." If they respond yes, I build in more "present options" steps: they can specify whether the fireplace is damaged, whether an operator's manual is needed, whether their request is about operation/usage, etc. You could try the same thing with verification: present options for ID verification or Address verification, as in the sketch below. I hope that helps :)
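If it helps to see the shape of it, here is a rough sketch of that branching as a small decision tree. The questions, options, and actions are illustrative only, not Zendesk's actual flow format.

```python
# Hypothetical flow: each "present options" step narrows the request
# before any answer fires, instead of guessing from one free-text query.
FLOW = {
    "question": "Does this request involve a fireplace insert?",
    "options": {
        "yes": {
            "question": "What do you need help with?",
            "options": {
                "damaged unit": "route to warranty answer",
                "operator's manual": "link the manual article",
                "operation/usage": "link the usage article",
            },
        },
        "no": "continue to general triage",
    },
}

def run_flow(node, choices):
    """Walk the tree with a prepared list of customer selections."""
    for choice in choices:
        if isinstance(node, str):  # already at a leaf action
            break
        node = node["options"][choice]
    return node

print(run_flow(FLOW, ["yes", "operator's manual"]))  # link the manual article
```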



Daniel Aron

Zendesk Product Manager

Hi all, thanks so much for this thoughtful feedback. We will take note of these suggestions. It isn’t currently possible to train the bot in this way, or create an exclusion list of phrases. You can find some best practices for using training phrases here. And if you have an intent model, we recommend assigning a pre-trained intent to an answer flow, which should result in improved matching compared to relying on training phrases. Another option worth considering is Ultimate.ai, which provides some of the training tools suggested in this thread. I will also introduce Magda Pereira, an expert on our machine learning team who may be able to provide further assistance.


