Add-on AI agents - Advanced
AI agent satisfaction (BSAT) ratings are feedback from your users that measure their happiness with how your AI agent performed. Analyzing your BSAT ratings can help you make targeted improvements to use cases or dialogues.

This article contains the following topics:

  • Analyzing BSAT ratings
  • Filtering conversations by BSAT ratings
  • Next steps

Related articles:

  • Collecting AI agent satisfaction (BSAT) ratings for advanced AI agents
  • Analyzing advanced AI agents with the Performance Overview dashboard

Analyzing BSAT ratings

You can analyze the BSAT ratings for your AI agent using the Performance Overview dashboard.

To analyze BSAT ratings
  1. In the main menu on the left, select Analytics > AI agent analytics.
    The Performance Overview dashboard appears.
    Tip: For more information about the Performance Overview dashboard, see Analyzing advanced AI agents with the Performance Overview dashboard.
  2. In the top-right corner, use the AI agent drop-down field to select the AI agent whose BSAT ratings you want to analyze.

    This dashboard includes the following information about BSAT ratings:

    • Analyzing average BSAT score and response rate
    • Analyzing the overall BSAT score trend
    • Analyzing BSAT scores by use case

Analyzing average BSAT score and response rate

The average BSAT score shows you how well your AI agent is performing overall, and the response rate shows you how many of your users are submitting BSAT ratings.

To analyze the average BSAT score and response rate
  1. On the Performance Overview dashboard, find the Performance overview section.
  2. Under Areas to investigate, click the dropdown and select BSAT.

    The BSAT information appears.

    • The BSAT score (the number just under the dropdown) is a percentage that indicates how satisfied users are with your AI agent. This score is calculated by dividing the number of conversations with a rating of 4 or 5 by the total number of conversations with feedback provided, then multiplying that by 100.

      To the right of this score, you can also see the percentage change over the previous seven days.

    • The average BSAT score (shown in bold) is the sum of all collected ratings divided by the total number of conversations in which your users provided feedback. The total number of conversations with ratings is shown in parentheses. The colored bar indicates the proportion of each feedback response from 1 to 5, shown in shades of red, yellow, and green.
    • The response rate (shown in bold) is the percentage of conversations where your customers provided rating feedback. The total number of conversations is shown in parentheses. The blue and gray bar provides a visual representation of the response rate, with the blue representing responses and the full length of the bar representing all conversations.
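
    The three metrics above can be sketched in code. This is an illustrative example only, not Zendesk's implementation or API: it assumes each conversation is represented by its rating (1 to 5), or None when the user gave no feedback, and all names are hypothetical.

    ```python
    # Illustrative sketch of how the dashboard's BSAT metrics are derived.
    # Each conversation is represented by its rating (1-5), or None when
    # no feedback was provided. Names here are hypothetical.

    def bsat_metrics(ratings):
        """Compute BSAT score, average BSAT, and response rate."""
        rated = [r for r in ratings if r is not None]
        if not rated:
            return {"bsat_score": None, "average_bsat": None, "response_rate": 0.0}
        # BSAT score: percentage of rated conversations with a rating of 4 or 5.
        satisfied = sum(1 for r in rated if r >= 4)
        bsat_score = satisfied / len(rated) * 100
        # Average BSAT: sum of all collected ratings divided by rated conversations.
        average_bsat = sum(rated) / len(rated)
        # Response rate: percentage of all conversations that received a rating.
        response_rate = len(rated) / len(ratings) * 100
        return {"bsat_score": bsat_score, "average_bsat": average_bsat,
                "response_rate": response_rate}

    metrics = bsat_metrics([5, 4, 2, None, 3, 5, None, 1])
    # 6 of 8 conversations rated, 3 of those rated 4 or 5:
    print(metrics)  # bsat_score 50.0, average_bsat ~3.33, response_rate 75.0
    ```

    For example, with 8 conversations of which 6 were rated (ratings 5, 4, 2, 3, 5, 1), the BSAT score is 3/6 × 100 = 50%, the average BSAT is 20/6 ≈ 3.33, and the response rate is 6/8 × 100 = 75%.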

Analyzing the overall BSAT score trend

The overall BSAT score trend shows you how your AI agent's satisfaction rate is progressing or regressing.

To analyze the overall BSAT score trend
  1. On the Performance Overview dashboard, find the AI agent performance analysis section.
  2. On the left side, use the dropdown list to select AI agent overall satisfaction score - overall trend.

    This view shows you how the overall BSAT score fluctuates week over week.

  3. On the right side, use the dropdown list to select AI agent overall satisfaction score trend by language.

    This view shows you how the overall BSAT score fluctuates week over week, segmented by language.

Analyzing BSAT scores by use case

By reviewing the BSAT scores for each use case, you can see how well the AI agent performs in conversations about different topics.

To analyze BSAT ratings by use case
  1. On the Performance Overview dashboard, find the AI agent performance analysis section.
  2. On the left side, use the dropdown to select BSAT score by intent.

    This view shows you the BSAT score for each use case. The use cases are listed in descending order by the number of BSAT ratings received.

  3. Now find the Use case performance section.

    This table summarizes how well your AI agent is performing for each use case, and includes the following columns:

    • Use case name: The use case identified during the conversation.
    • Conversations: How many conversations, out of the total, included this use case.
    • Average BSAT: The average BSAT score for the use case.
    • First use case: How often the use case was the first use case identified during a conversation.
    • Custom resolution rate: The custom resolution rate.
    • AI agent-handled rate: The AI agent-handled rate.
    • Escalations: How many escalations to a human agent occurred during conversations with this use case. A high number here might indicate opportunities to improve the dialogue associated with this use case.
    • Failed escalations: How many escalations to a human agent were attempted during conversations with this use case but were unsuccessful because no agents were online or available.
    • Technical errors: How many technical errors occurred during conversations with this use case. For troubleshooting help, see Investigate any technical errors in the dialogue.
      Tip: Use the % / # toggle at the top-right of the table to switch between percentages and raw numbers.

Filtering conversations by BSAT ratings

In the conversation logs, you can filter the list of conversations based on the responses given to the BSAT rating request. Doing so lets you deep-dive into these conversations to look for specific opportunities to improve. For example, you can filter by:
  • Low BSAT ratings to see why the conversation went poorly and what improvements you can make.
  • High BSAT ratings to see why the conversation went well and how to replicate that success in other areas.

For more information, see Reviewing conversation logs for advanced AI agents.

Next steps

After you analyze your AI agent's BSAT ratings, you should take action to improve the AI agent's use cases and dialogues to better serve your customers. Use the following resources to help you:
  • Use cases
    • Creating use cases for advanced AI agents
    • Best practices for creating use cases for advanced AI agents
    • Managing use cases for advanced AI agents
  • Dialogues
    • Using the dialogue builder to create conversation flows for advanced AI agents
    • Managing conversation flows for advanced AI agents