Add-on Quality Assurance (QA) or Workforce Engagement Management (WEM)

Verified AI summary

Use the QA feature to assess AI agent performance in customer interactions. Configure which bots to evaluate, and use scorecards to review conversations manually or automatically. Analyze results with the Reviews dashboard and monitor key metrics like escalations with the BotQA dashboard. This helps you refine AI workflows and improve customer support quality.

Zendesk QA can help you evaluate how well your AI agents perform in conversations with your customers. You can use this information to update your AI agents and workflows based on the results.

This article contains the following topics:

  • Configuring which AI agents are evaluated
  • Evaluating AI agent conversations

Related articles:

  • Manually identifying AI agents in Zendesk QA
  • Using the BotQA dashboard to understand bot escalations and performance

Configuring which AI agents are evaluated

Zendesk QA automatically detects the following types of bots as AI agents:

  • Conversation bots
  • Ultimate messaging bots
  • Sunshine Conversations bots

Tip: You can also manually report other users as AI agents so they can be reviewed using the correct resources. See Manually identifying AI agents in Zendesk QA.

By default, bots are included in reviews. You can configure the review settings for each bot on the Bots page.

To configure whether a bot is reviewable

  1. In Quality assurance, click your profile icon in the top-right corner.
  2. Select Users, bots, and workspaces.
  3. Click Bots.

    The list of bots appears, including the following columns:

    • Bot name: The name of the bot.
    • Last chat: When the last conversation with the bot took place.
    • Reviewable: If the bot is included in reviews.
  4. Find the bot in the list, then select a value in the Reviewable column:
    • Yes: The bot is included in reviews. If autoscoring is turned on, the bot is reviewed automatically. You can also review it manually.
    • No: The bot is excluded from reviews: it is not included in autoscoring or assignments, it is not displayed in filters, and no new data for it appears in dashboards. This option is useful if you lack sufficient context to evaluate the bot.
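
The effect of the Reviewable setting described above can be modeled as a simple gate. This is an illustrative sketch only; the `Bot` class and function names are hypothetical and are not part of the Zendesk QA API.

```python
from dataclasses import dataclass

# Hypothetical local model of a bot's review settings
# (not the Zendesk QA API).
@dataclass
class Bot:
    name: str
    reviewable: bool

def eligible_for_autoscoring(bot: Bot, autoscoring_enabled: bool) -> bool:
    # Non-reviewable bots are excluded from autoscoring and assignments.
    return bot.reviewable and autoscoring_enabled

def visible_in_filters_and_dashboards(bot: Bot) -> bool:
    # Non-reviewable bots are not displayed in filters,
    # and no new data for them appears in dashboards.
    return bot.reviewable
```

In this model, flipping Reviewable to No simultaneously removes the bot from autoscoring, assignments, filters, and new dashboard data, which matches the behavior described above.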

Evaluating AI agent conversations

You can use Zendesk QA to evaluate the performance of your bots across various categories, just like you can for human agents.

To do so, you must set up a scorecard for the categories you want to evaluate the bot on.

If autoscoring is turned on, bots are reviewed automatically. However, you can also review your bots manually.

To review a bot’s performance manually

  1. In Quality assurance, click Conversations in the sidebar.
  2. Select an existing filter or create a new filter to identify the bot conversations that you want to review.

    The following filter conditions are often useful in this scenario:

    • Participant | is | <name of your bot>
    • Bot | is | <name of your bot>
    • Bot type | is | <workflow or generative>
    • Bot reply count | more than | 0

    Alternatively, use a Spotlight filter to find bot conversations.

  3. From the filtered list, select the conversation you want to review.
  4. In the Review this conversation panel, set the Reviewee to the bot you want to review, and select the Scorecard to use.
  5. Rate the bot’s performance for each category. See Grading conversations.

  6. (Optional) In the free-text field, enter comments about the bot’s performance.
  7. Click Submit.
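
The filter conditions from step 2 can be sketched as a predicate over conversation records. The dictionaries and field names below are hypothetical stand-ins for local data, not the Zendesk QA API.

```python
# Illustrative sketch of the step-2 filter logic:
# Participant is <bot> AND Bot reply count > 0.
conversations = [
    {"id": 1, "participants": ["Answer Bot", "Jane"], "bot_replies": 3},
    {"id": 2, "participants": ["Jane", "John"], "bot_replies": 0},
]

def matches(convo: dict, bot_name: str = "Answer Bot") -> bool:
    # Keep only conversations the bot actually replied in.
    return bot_name in convo["participants"] and convo["bot_replies"] > 0

to_review = [c for c in conversations if matches(c)]
```

Combining conditions this way narrows the list to conversations where the bot genuinely participated, which is what makes the "Bot reply count | more than | 0" condition useful.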

Use the Reviews dashboard to analyze the results of a bot’s performance evaluation. Use the BotQA dashboard to understand other important metrics about your AI agent’s performance, including how often bot conversations were escalated to human agents.
