You can use Zendesk QA to evaluate how well your AI agents perform in conversations with your customers. You can then analyze the results of this evaluation and update your AI agents and customer service workflows as necessary.
This article contains the following topics:
- Understanding which AI agents can be evaluated
- Manually evaluating AI agent conversations
- Automatically evaluating AI agent conversations
- Manually identifying AI agents
Understanding which AI agents can be evaluated
Zendesk QA automatically detects the following types of AI agents: conversation bots, Ultimate messaging bots, and bots created with Sunshine Conversations.
To see which bots have been detected
- In Zendesk QA, click your profile icon in the bottom-left corner.
- Select Users, bots, and workspaces.
- Select Bots. The list of bots appears, with the following columns:
- Bot name: The name of the bot.
- Workspaces: The workspaces in which conversations with the bot have occurred.
- Type: The bot type: Workflow, Generative, or Unknown.
Manually evaluating AI agent conversations
You can use Zendesk QA to evaluate the performance of your bots across various categories, just as you can for human agents. To do so, first set up a scorecard with the categories you want to evaluate the bot on.
To manually evaluate a bot’s performance
- Click the Conversations icon in the sidebar.
- Select an existing filter or create a new one (public or private) to identify the bot conversations you want to review. For example, you might use any of the following filter conditions:
- Participant | is | <name of your bot>
- Bot | is | <name of your bot>
- Bot type | is | <workflow or generative>
- Bot reply count | more than | 0
Alternatively, use a Spotlight filter to find bot conversations.
- Select the conversation you want to review.
- In the Review this conversation pane:
- In the Reviewee field, select the bot you want to review.
- In the Scorecard field, select the scorecard you want to use.
- For each category, rate the bot’s performance. See rating scale.
- (Optional) In the free-text field, enter comments about the bot’s performance.
- Click Submit.
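The filter conditions above behave like simple predicates over conversation records. As a rough sketch only (the field names `participants`, `bot_type`, and `bot_reply_count` are illustrative placeholders, not Zendesk QA's actual data model):

```python
# Hypothetical sketch of how the filter conditions combine.
# These records and field names are illustrative, not Zendesk QA's real schema.
conversations = [
    {"id": 1, "participants": ["Answer Bot"], "bot_type": "generative", "bot_reply_count": 3},
    {"id": 2, "participants": ["Jane"], "bot_type": None, "bot_reply_count": 0},
]

def matches_bot_filter(conv, bot_name="Answer Bot"):
    # Mirrors "Participant | is | <bot>" combined with "Bot reply count | more than | 0".
    return bot_name in conv["participants"] and conv["bot_reply_count"] > 0

to_review = [c["id"] for c in conversations if matches_bot_filter(c)]
print(to_review)  # [1]
```

In practice you would build the equivalent conditions in the Zendesk QA filter UI rather than in code; the sketch just shows that each condition narrows the conversation list independently.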
Automatically evaluating AI agent conversations
If you’ve set up AutoScoring, your bots can be automatically evaluated for the following AutoQA categories:
- Greeting
- Empathy
- Spelling and grammar
- Closing
- Solution offered
- Tone
- Readability
- Comprehension
You can view the evaluation results in the Review this conversation panel in a conversation or in the Reviews section of an assignment. Automatically scored ratings are marked with a hologram icon.
Automatic evaluations appear only for conversations created after AutoScoring was turned on for the relevant AutoQA category.
Manually identifying AI agents
In addition to the AI agents Zendesk QA detects automatically (conversation bots, Ultimate messaging bots, and bots created with Sunshine Conversations), you can manually mark other users as AI agents so that they are reviewed using the correct AutoQA resources.
Admins and account managers can mark users as bots.
To mark a user as a bot
- In Zendesk QA, click your profile icon in the bottom-left corner.
- Select Users, bots, and workspaces.
- Select Users. Your list of users is displayed.
- Click the options menu next to the user.
- Select Mark as bot.
The marked user now appears in the Bots section. You can include or exclude a bot from reviews by setting the Reviewable column to Yes or No. Excluding a bot can be useful when you don’t yet have enough context to review it.