Verified AI summary
Use the QA feature to assess AI agent performance in customer interactions. Configure which bots to evaluate, and use scorecards to review conversations manually or automatically. Analyze results with the Reviews dashboard and monitor key metrics like escalations with the Bot QA dashboard. This helps you refine AI workflows and improve customer support quality.
Zendesk QA can help you evaluate how well your AI agents perform in conversations with your customers. You can use this information to update your AI agents and workflows based on the results.
Configuring which AI agents are evaluated
Zendesk QA automatically detects the following types of bots as AI agents:
By default, bots are included in reviews. You can configure the review settings for each bot on the Bots page.
To configure whether or not a bot is reviewable
- In Quality assurance, click your profile icon in the top-right corner.
- Select Users, bots, and workspaces.
- Click Bots.
The list of bots appears, including the following columns:
- Bot name: The name of the bot.
- Last chat: When the last conversation with the bot took place.
- Reviewable: Whether the bot is included in reviews.
- Find the bot in the list, then select a value in the Reviewable column:
- Yes: The bot is included in reviews. If autoscoring is turned on, the bot is reviewed automatically; you can also review it manually.
- No: The bot is excluded from reviews: it is not included in autoscoring or assignments, it does not appear in filters, and no new data for it appears in dashboards (the sketch below illustrates this gating). This option is useful if you lack sufficient context to evaluate the bot.
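To make the effect of the Reviewable setting concrete, here is a minimal sketch (plain Python with hypothetical bot records, not Zendesk code) of how non-reviewable bots drop out of autoscoring, assignments, filters, and dashboards:

```python
from dataclasses import dataclass

@dataclass
class Bot:
    name: str
    last_chat: str    # when the last conversation with the bot took place
    reviewable: bool  # mirrors the Yes/No value on the Bots page

# Hypothetical example bots, for illustration only.
bots = [
    Bot("Order status bot", "2024-05-01T10:15:00Z", reviewable=True),
    Bot("Legacy FAQ bot", "2023-11-20T08:00:00Z", reviewable=False),
]

# Only reviewable bots feed autoscoring, assignments, filters, and dashboards.
reviewable_bots = [bot for bot in bots if bot.reviewable]
print([bot.name for bot in reviewable_bots])  # ['Order status bot']
```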
Evaluating AI agent conversations
You can use Zendesk QA to evaluate the performance of your bots across various categories, just like you can for human agents.
To do so, you must set up a scorecard for the categories you want to evaluate the bot on.
If autoscoring is turned on, bots are reviewed automatically. However, you can also review your bots manually.
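For orientation only, here is a minimal sketch of the idea behind a scorecard, using hypothetical category names and plain Python structures rather than anything Zendesk-specific:

```python
# A scorecard groups the categories a bot is rated on; the names below
# are hypothetical examples, not Zendesk defaults.
scorecard = {
    "name": "AI agent scorecard",
    "categories": ["Comprehension", "Tone", "Solution offered"],
}

# A manual review rates the bot on each category of the chosen scorecard.
review = {
    "reviewee": "Order status bot",
    "scorecard": scorecard["name"],
    "ratings": {"Comprehension": 4, "Tone": 5, "Solution offered": 3},
    "comment": "Handled the refund question well, missed the follow-up.",
}

for category in scorecard["categories"]:
    print(f"{category}: {review['ratings'][category]}")
```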
To review a bot’s performance manually
- In Quality assurance, click Conversations in the sidebar.
- Select an existing filter or create a new filter to identify the bot conversations that you want to review.
The following filter conditions are often useful in this scenario; the sketch after these steps applies the same logic in code:
- Participant | is | <name of your bot>
- Bot | is | <name of your bot>
- Bot type | is | <workflow or generative>
- Bot reply count | more than | 0
Alternatively, use a Spotlight filter to find bot conversations.
- From the filtered list, select the conversation you want to review.
- In the Review this conversation panel, set the Reviewee to the bot you want to review, and select the Scorecard to use.
- Rate the bot’s performance for each category. See Grading conversations.

- (Optional) In the free-text field, enter comments about the bot’s performance.
- Click Submit.
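As a rough illustration of the filter conditions listed in the steps above (hypothetical conversation records, not a Zendesk API call), the sketch below keeps only conversations where the bot participated and replied at least once:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    id: int
    participants: list[str]
    bot_replies: int

# Hypothetical conversation metadata; Zendesk QA applies the equivalent
# conditions (Participant, Bot reply count) through its filter UI.
conversations = [
    Conversation(101, ["Order status bot", "Customer"], bot_replies=4),
    Conversation(102, ["Human agent", "Customer"], bot_replies=0),
]

BOT_NAME = "Order status bot"

# Mirrors: Participant | is | <bot>  AND  Bot reply count | more than | 0
to_review = [
    c for c in conversations
    if BOT_NAME in c.participants and c.bot_replies > 0
]
print([c.id for c in to_review])  # [101]
```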