Add-on | AI agents - Advanced
Analyzing BSAT ratings
You can analyze the BSAT ratings for your AI agent using the Performance Overview dashboard.
- In the main menu on the left, select Analytics > AI agent analytics.
The Performance Overview dashboard appears.
Tip: For more information about the Performance Overview dashboard, see Analyzing advanced AI agents with the Performance Overview dashboard.
- In the top-right corner, use the AI agent drop-down field to select the AI agent you want to analyze BSAT ratings for.
This dashboard includes the following information about BSAT ratings:
- Average BSAT score and response rate
- Overall BSAT score trend
- BSAT scores by use case
Analyzing average BSAT score and response rate
The average BSAT score shows you how well your AI agent is performing overall, and the response rate shows you how many of your users are submitting BSAT ratings.
- On the Performance Overview dashboard, find the Performance overview section.
- Under Areas to investigate, click the dropdown and select BSAT.
The BSAT information appears and includes the following details (a calculation sketch follows this list):
- The BSAT score (the number just under the dropdown) is a percentage that indicates how satisfied users are with your AI agent. This score is calculated by dividing the number of conversations with a rating of 4 or 5 by the total number of conversations with feedback provided, then multiplying that by 100. To the right of this score, you can also see the percentage change over the previous seven days.
- The average BSAT score (shown in bold) is the sum of all collected ratings divided by the total number of conversations where feedback was provided by your users. The total number of conversations with ratings is shown in parentheses. The colored bar indicates the proportion of each feedback response from 1 to 5, shown in shades of red, yellow, and green.
- The response rate (shown in bold) is the percentage of conversations where your customers provided rating feedback. The total number of conversations is shown in parentheses. The blue and gray bar provides a visual representation of the response rate, with the blue representing responses and the full length of the bar representing all conversations.
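For illustration, here is a minimal Python sketch of how these three figures are computed. The ratings list is hypothetical sample data; the dashboard derives the real figures from your AI agent's conversation feedback.

```python
# Hypothetical ratings: one entry per conversation, 1-5 if the user
# left feedback, None if they did not.
ratings = [5, 4, 2, None, 5, 3, None, 1, 4, None]

rated = [r for r in ratings if r is not None]

# BSAT score: conversations rated 4 or 5, as a share of rated conversations.
bsat_score = len([r for r in rated if r >= 4]) / len(rated) * 100

# Average BSAT score: sum of all ratings over the number of rated conversations.
average_bsat = sum(rated) / len(rated)

# Response rate: rated conversations as a share of all conversations.
response_rate = len(rated) / len(ratings) * 100

print(f"BSAT score: {bsat_score:.0f}%")                    # 57%
print(f"Average BSAT: {average_bsat:.2f} ({len(rated)})")  # 3.43 (7)
print(f"Response rate: {response_rate:.0f}%")              # 70%
```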
Analyzing the overall BSAT score trend
The overall BSAT score trend shows you how your AI agent's satisfaction rate is progressing or regressing.
- On the Performance Overview dashboard, find the AI agent performance analysis section.
- On the left side, use the dropdown list to select AI agent overall satisfaction score - overall trend.
This view shows you how the overall BSAT score fluctuates week over week.
- On the right side, use the dropdown list to select AI agent overall satisfaction score trend by language.
This view shows you how the overall BSAT score fluctuates week over week, segmented by language. (A sketch of the week-over-week grouping follows these steps.)
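To make the week-over-week views concrete, here is a minimal Python sketch of the underlying grouping, using hypothetical dates and ratings; the dashboard performs this aggregation for you.

```python
from collections import defaultdict
from datetime import date

# Hypothetical (conversation date, BSAT rating) pairs.
rated_conversations = [
    (date(2024, 1, 1), 5), (date(2024, 1, 3), 4), (date(2024, 1, 5), 2),
    (date(2024, 1, 8), 3), (date(2024, 1, 10), 5), (date(2024, 1, 12), 4),
]

# Bucket ratings by ISO (year, week), then average each bucket to get
# the week-over-week trend line.
weeks = defaultdict(list)
for day, rating in rated_conversations:
    year, week, _ = day.isocalendar()
    weeks[(year, week)].append(rating)

for (year, week), week_ratings in sorted(weeks.items()):
    avg = sum(week_ratings) / len(week_ratings)
    print(f"{year}-W{week:02}: average BSAT {avg:.2f}")
# 2024-W01: average BSAT 3.67
# 2024-W02: average BSAT 4.00
```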
Analyzing BSAT scores by use case
By reviewing the BSAT scores for each use case, you can see how well the AI agent performs in conversations about different topics.
- On the Performance Overview dashboard, find the AI agent performance analysis section.
- On the left side, use the dropdown to select BSAT score by intent.
This view shows you the BSAT score for each use case. The use cases are listed in descending order by the number of BSAT ratings provided.
- Now find the Use case performance section.
This table summarizes how well your AI agent is performing for each use case and includes the following columns (a calculation sketch follows this list):
- Use case name: The use case identified during the conversation.
- Conversations: Out of the total number of conversations, how many had this use case.
- Average BSAT: The average BSAT score for the use case.
- First use case: How often the use case was the first use case identified during a conversation.
- Custom resolution rate: The custom resolution rate for conversations with this use case.
- AI agent-handled rate: How often conversations with this use case were handled by the AI agent without escalation to a human agent.
- Escalations: How many escalations to a human agent occurred during conversations with this use case. A high number here might indicate opportunities to improve the dialogue associated with this use case.
- Failed escalations: How many escalations to a human agent were attempted during conversations with this use case but were unsuccessful because no agents were online or available.
- Technical errors: How many technical errors occurred during conversations with this use case. For troubleshooting help, see Investigate any technical errors in the dialogue.
Tip: Use the % / # toggle at the top-right of the table to switch between percentages and raw numbers.
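As an illustration of the % / # toggle, here is a minimal Python sketch of how one table row could be rendered either way from raw counts. The counts, the use case name, and the choice of denominators are all assumptions made for the example, not the dashboard's exact definitions.

```python
# Hypothetical raw counts for one row of the Use case performance table.
total_conversations = 1000   # all conversations in the reporting period
row = {
    "use_case": "Order status",  # hypothetical use case name
    "conversations": 250,        # conversations where this use case was identified
    "escalations": 30,           # handed off to a human agent
    "failed_escalations": 5,     # escalation attempted but no agent available
    "technical_errors": 8,
}

def pct(count, total):
    """Render a raw count (# view) as a percentage (% view)."""
    return f"{count / total * 100:.1f}%"

# Assumed denominators: Conversations relative to all conversations,
# the remaining columns relative to this use case's conversations.
print(row["use_case"])
print("Conversations:", row["conversations"], "or", pct(row["conversations"], total_conversations))
print("Escalations:", row["escalations"], "or", pct(row["escalations"], row["conversations"]))
print("Failed escalations:", row["failed_escalations"], "or", pct(row["failed_escalations"], row["conversations"]))
print("Technical errors:", row["technical_errors"], "or", pct(row["technical_errors"], row["conversations"]))
```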
Filtering conversations by BSAT ratings
You can filter your AI agent's conversations by their BSAT ratings (a minimal filtering sketch appears at the end of this section). For example, you can review conversations with:
- Low BSAT ratings to see why the conversation went poorly and what improvements you can make.
- High BSAT ratings to see why the conversation went well and how to replicate that success in other areas.
For more information, see Reviewing conversation logs for advanced AI agents.
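For example, the filtering described above amounts to something like this minimal Python sketch over hypothetical conversation records (the field names are assumptions, not the product's data model):

```python
# Hypothetical conversation records with their BSAT ratings.
conversations = [
    {"id": "c1", "bsat": 5},
    {"id": "c2", "bsat": 1},
    {"id": "c3", "bsat": 2},
    {"id": "c4", "bsat": None},  # no feedback left
]

# Low ratings: look for what went wrong and what to improve.
low = [c for c in conversations if c["bsat"] is not None and c["bsat"] <= 2]

# High ratings: look for what went well and is worth replicating.
high = [c for c in conversations if c["bsat"] is not None and c["bsat"] >= 4]

print("Review for improvements:", [c["id"] for c in low])  # ['c2', 'c3']
print("Review for successes:", [c["id"] for c in high])    # ['c1']
```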