Add-on: Quality Assurance (QA) or Workforce Engagement Management (WEM)

Verified AI summary

Evaluate AI agent performance using Quality Assurance to improve customer interactions. Configure which bots to review, and manually assess conversations using scorecards. Set up autoscoring for automatic evaluations in categories like empathy and tone. Use dashboards to analyze results and understand metrics, such as escalation rates to human agents, enhancing your team's support capabilities.

Zendesk QA can help you evaluate how well your AI agents perform in conversations with your customers. You can use this information to update your AI agents and workflows based on the results.

This article contains the following topics:

  • Configuring which AI agents are evaluated
  • Evaluating AI agent conversations

Related articles:

  • Manually identifying AI agents in Zendesk QA
  • Using the BotQA dashboard to understand bot escalations and performance

Configuring which AI agents are evaluated

Zendesk QA automatically detects the following types of bots as AI agents:

  • Conversation bots
  • Ultimate messaging bots
  • Sunshine Conversations bots
Tip: You can also manually identify other users as AI agents so they can be reviewed with the correct resources. See Manually identifying AI agents in Zendesk QA.

To view the bots Zendesk QA has detected

  1. In Quality assurance, click your profile icon in the top-right corner.
  2. Select Users, bots, and workspaces.
  3. Select Bots.

    The list of bots appears, including the following columns:

    • Bot name: The name of the bot.
    • Last chat: When the last conversation with the bot took place.
    • Reviewable: Whether the bot is included in reviews.

  4. Find the bot in the list, then select a value in the Reviewable column:
    • Yes: The bot is configured to be reviewed. If autoscoring is turned on, the bot is reviewed automatically.
    • No: The bot is excluded from reviews, which means it is not included in autoscoring or assignments, does not appear in filters, and no new data for it appears in dashboards. This option is useful if you lack sufficient context to evaluate the bot.
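
Note: The steps above are entirely UI-driven. If you track bot review coverage outside of Zendesk QA, a short script like the one below can flag bots that are still non-reviewable. This is a minimal sketch only: the endpoint URL, the QA_API_TOKEN variable, and the response fields (name, last_chat_at, reviewable) are illustrative assumptions, not a documented Zendesk QA API.

    # Illustrative sketch only: the endpoint and response fields below are
    # assumptions, not a documented Zendesk QA API. Adapt them to your
    # account's actual API or export format.
    import os
    import requests

    API_BASE = "https://api.example-zendesk-qa.com/v1"  # hypothetical base URL
    TOKEN = os.environ["QA_API_TOKEN"]                  # hypothetical token variable

    def list_bots():
        """Fetch detected bots; assumes a JSON payload like
        {"bots": [{"name": ..., "last_chat_at": ..., "reviewable": bool}]}."""
        resp = requests.get(
            f"{API_BASE}/bots",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["bots"]

    # Flag bots that are excluded from reviews, autoscoring, and dashboards.
    for bot in list_bots():
        if not bot["reviewable"]:
            print(f"Not reviewable: {bot['name']} (last chat {bot['last_chat_at']})")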


Evaluating AI agent conversations

You can use Zendesk QA to evaluate the performance of your bots across various categories, just like you can for human agents. To do so, you must set up a scorecard for the categories you want to evaluate the bot on.

If you’ve set up autoscoring, your bots are automatically evaluated.

You can also review your bots manually.

To manually evaluate a bot’s performance

  1. In Quality assurance, click Conversations in the sidebar.
  2. Select an existing filter or create a new filter to identify the bot conversations that you want to review.

    The following filter conditions are often useful in this scenario:

    • Participant | is | <name of your bot>
    • Bot | is | <name of your bot>
    • Bot type | is | <workflow or generative>
    • Bot reply count | more than | 0

    Alternatively, use a Spotlight filter to find bot conversations.

  3. From the filtered list, select the conversation you want to review.
  4. In the Review this conversation panel, set the Reviewee to the bot you want to review, and select the Scorecard to use.
  5. Select a rating for the bot’s performance in each category (see Grading conversations). Optionally, you can enter additional comments about the bot’s performance in the text field at the end of the scorecard.

  6. Click Submit.
Tip: Use the Reviews dashboard to analyze the results of a bot’s performance evaluation, and use the BotQA dashboard to understand other important metrics about your AI agent’s performance, including how often bot conversations were escalated to human agents.
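
The BotQA dashboard calculates escalation metrics for you, but the underlying arithmetic is simple: escalation rate = escalated bot conversations ÷ all bot conversations. The following minimal sketch reproduces it from an exported conversation list; the file layout and the participants and escalated_to_human fields are illustrative assumptions, not Zendesk QA’s actual export schema.

    # Minimal sketch of the escalation-rate arithmetic the BotQA dashboard
    # reports. The input format and field names are assumptions: adapt them
    # to whatever your conversation export actually contains.
    import json

    def escalation_rate(path, bot_name):
        with open(path) as f:
            conversations = json.load(f)  # assumed: a list of conversation dicts

        bot_convos = [
            c for c in conversations
            if bot_name in c.get("participants", [])  # assumed field
        ]
        if not bot_convos:
            return 0.0

        # assumed field marking a handoff to a human agent
        escalated = sum(1 for c in bot_convos if c.get("escalated_to_human"))
        return escalated / len(bot_convos)

    rate = escalation_rate("conversations.json", "Support Bot")
    print(f"Escalation rate: {rate:.1%}")  # e.g. "Escalation rate: 23.5%"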

If you’ve set up autoscoring, your bots can be automatically evaluated for the following AutoQA categories:

  • Greeting
  • Empathy
  • Spelling and grammar
  • Closing
  • Solution offered
  • Tone
  • Readability
  • Comprehension
Note: Automatic evaluations are present only for conversations created after autoscoring was turned on for each relevant AutoQA category.

You can view the evaluation results in the Review this conversation panel in a conversation or in the Reviews section of an assignment.

Automatically scored ratings are marked with a hologram icon.
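
If you export review data for offline analysis, you may want to compare autoscored ratings against manual ones per category. The following sketch assumes a hypothetical export in which each rating carries source, category, and score fields; Zendesk QA’s actual export schema may differ.

    # Hypothetical sketch: split exported ratings by source so autoscored
    # AutoQA results can be compared against manual reviews per category.
    # The "source", "category", and "score" fields are assumptions.
    from collections import defaultdict

    AUTOQA_CATEGORIES = {
        "Greeting", "Empathy", "Spelling and grammar", "Closing",
        "Solution offered", "Tone", "Readability", "Comprehension",
    }

    def average_by_category(ratings, source):
        """Average score per AutoQA category for one source ('auto' or 'manual')."""
        totals = defaultdict(list)
        for r in ratings:
            if r["source"] == source and r["category"] in AUTOQA_CATEGORIES:
                totals[r["category"]].append(r["score"])
        return {cat: sum(s) / len(s) for cat, s in totals.items()}

    ratings = [  # toy data in the assumed shape
        {"category": "Tone", "score": 4, "source": "auto"},
        {"category": "Tone", "score": 3, "source": "manual"},
        {"category": "Empathy", "score": 5, "source": "auto"},
    ]
    print("Autoscored:", average_by_category(ratings, "auto"))
    print("Manual:   ", average_by_category(ratings, "manual"))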
