Add-on | AI agents - Advanced
The Performance summary dashboard lets you dig deep into many different aspects of an advanced AI agent to see how well it’s performing. From here, you can view:
- Key metrics about customer volume, conversation management, and opportunities for improvement
- Visualizations based on volume, time, language, resolution, and intent
- Key metrics about use case performance
Accessing the dashboard
The Performance summary dashboard is available under Analytics in the left sidebar.
To access the Performance summary dashboard
- In the top-right corner, use the AI agent drop-down field to select the AI agent you want to analyze.
- In the left sidebar, click Analytics > AI agent analytics.
- Select the Performance summary tab.
About the reports
The Performance summary dashboard is broken down into the following sections: Performance overview, AI agent performance analysis, and Use case performance.
Performance overview
The Performance overview section gives you quick insights into key metrics about your AI agent. The reports in this section differ depending on whether you're analyzing a zero-training or expression-based AI agent or an agentic AI agent.
Analyzing the performance of a zero-training or expression-based AI agent
When analyzing a zero-training or expression-based AI agent, the Performance overview section shows the following reports:
- Customer volume: Shows the number of conversations or messages handled by the AI agent.
  - When Conversations is selected in the drop-down, the following additional metrics appear:
    - Conversations with meaningful intent: Conversations that involve interactions associated with frequent user query topics.
    - Conversations fully understood: Conversations where all messages were above the confidence threshold to be understood.
  - When Messages is selected in the drop-down, the following additional metrics appear:
    - Unrecognized messages: Conversations that involve interactions where the AI agent did not understand the user query.
    - Recognized messages: Conversations where the AI agent was able to match the user query to an appropriate response.
- Fully managed conversations: Shows the percentage of AI agent-handled conversations or the custom resolution rate.
  - When AI agent-handled conversations is selected in the drop-down, the following additional metrics appear:
    - Deflection rate: Conversations that are not escalated to a human agent.
    - Conversations with no status: Conversations that do not have an automatically assigned conversation status.
  - When Custom resolution rate is selected in the drop-down, the following additional metrics appear:
    - Informed conversations: Conversations where the AI agent provided instructions or guidance to the user. See Using the Informed state for details.
    - Resolved conversations: Conversations where a meaningful resolution was provided to the user and no further questions were asked.
- Areas to investigate: Shows the escalation rate, number of unsuccessful conversations, or AI agent satisfaction (BSAT) score.
  - When Escalated conversations is selected in the drop-down, the following additional metrics appear:
    - Escalation rate: The percentage of conversations that were escalated from an AI agent to a human agent.
    - Failed escalations: The percentage of conversations where an escalation to a human agent was attempted but was unsuccessful because no agents were online or available.
  - When Unsuccessful conversations is selected in the drop-down, the following additional metrics appear:
    - Failed actions: Conversations where an action failed to be performed.
    - Technical errors: Conversations where a technical issue occurred with the AI agent or an integration. For example, an unexpected error occurs when connecting to an external API.
  - When BSAT is selected in the drop-down, the following additional metrics appear:
    - Average BSAT score: The sum of all collected ratings divided by the total number of conversations where your users provided feedback.
    - Response rate: The percentage of conversations where your customers provided rating feedback. For more details, see Analyzing average BSAT score and response rate.
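The deflection, escalation, BSAT, and response-rate metrics above all reduce to simple ratios. The following sketch illustrates the arithmetic with invented counts and ratings (hypothetical data, not taken from the product):

```python
# Hypothetical period data; the dashboard computes these metrics internally.
conversations = 200          # total conversations handled by the AI agent
escalated = 30               # conversations escalated to a human agent
ratings = [5, 4, 3, 5, 4]    # BSAT ratings collected from users

# Deflection rate: conversations NOT escalated to a human agent.
deflection_rate = (conversations - escalated) / conversations * 100

# Escalation rate: conversations escalated to a human agent.
escalation_rate = escalated / conversations * 100

# Average BSAT: sum of all collected ratings divided by the number of
# conversations where feedback was provided.
average_bsat = sum(ratings) / len(ratings)

# Response rate: share of conversations where a rating was provided.
response_rate = len(ratings) / conversations * 100

print(f"Deflection: {deflection_rate:.1f}%, Escalation: {escalation_rate:.1f}%")
print(f"Average BSAT: {average_bsat:.1f}, Response rate: {response_rate:.1f}%")
```

Note that the deflection and escalation rates are complements of each other only when every conversation either escalates or doesn't; failed escalations are reported separately.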
Analyzing the performance of an agentic AI agent
When analyzing an agentic AI agent, the Performance overview section shows the following reports:
- Total conversations volume: Shows the actual number and change-over-time percentage of conversations handled by the AI agent.
- Understood conversations: Shows the percentage, actual number, and change-over-time percentage of conversations where an answer was provided from a knowledge source or conversations that were matched to a use case. This excludes small talk and directly escalated conversations. This metric is further broken out by the following response types:
  - Use case response: The percentage and actual number of responses in understood conversations that triggered a dialogue reply or procedure.
  - Knowledge response: The percentage and actual number of responses in understood conversations that were generated from a knowledge source.
  - Hybrid response: The percentage and actual number of responses in understood conversations that included a use case response and a knowledge response.
- Automated resolutions: Shows the percentage, actual number, and change-over-time percentage of automated resolutions. Automated resolutions are calculated as AI agent-handled (for messaging AI agents) and answered (for email AI agents) conversations that have passed verification by our large language model (LLM) as not requiring human intervention. Automated resolutions data is available only from October 7, 2024 and later. For more information, see About automated resolutions for AI agents. This metric is further broken out by the following response types:
  - Use case response:
    - The percentage of automated resolution conversations that originated from a dialogue and were understood.
    - The actual number of responses in automated resolution conversations that originated from a dialogue.
  - Knowledge response:
    - The percentage of automated resolution conversations that were generated by generative replies and understood.
    - The actual number of automated resolution conversations that were generated by generative replies.
  - Hybrid response:
    - The percentage of automated resolution conversations that elicited a hybrid response and were understood.
    - The actual number of responses in automated resolution conversations that included a hybrid response.
- Other conversations: Shows the percentage, actual number, and change-over-time percentage of conversations that were not automatically resolved by the AI agent. This metric is further broken out by the following response types:
  - Not understood: Shows the percentage and actual number of AI agent responses that did not provide a knowledge answer or a use case, plus direct escalations. Note: This metric differs from the results of the Conversation type > Conversations with messages not understood filter in the conversation logs, which returns conversations in which a default reply was triggered and an error occurred when attempting to perform sanitization, entity detection, or language detection.
  - Escalated conversations: The percentage and actual number of all conversations that were escalated to a human agent, whether or not they were understood.
  - Failed escalations and errors: The percentage and actual number of other conversations where an attempted escalation was unsuccessful or had failed actions.
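The agentic breakdown above is essentially a partition of total conversation volume into understood and other conversations, each shown as both a count and a percentage. A rough sketch with hypothetical counts (the numbers and category totals are invented for illustration):

```python
# Hypothetical conversation counts for one reporting period.
total = 1000
use_case = 320        # triggered a dialogue reply or procedure
knowledge = 410       # answered from a knowledge source
hybrid = 90           # both a use case and a knowledge response
not_understood = 100  # no knowledge answer or use case
escalated = 60        # escalated to a human agent
failed = 20           # failed escalations and errors

# Understood conversations = use case + knowledge + hybrid responses.
understood = use_case + knowledge + hybrid
# Other conversations = everything not automatically resolved.
other = not_understood + escalated + failed

# The dashboard shows the percentage alongside the actual number.
understood_pct = understood / total * 100
other_pct = other / total * 100

print(f"Understood: {understood} ({understood_pct:.0f}%)")
print(f"Other: {other} ({other_pct:.0f}%)")
```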
AI agent performance analysis
In the AI agent performance analysis section, you can visualize your data. On the left, you can view volume-related metrics. On the right, you can view metrics focused on time, language, resolution, and intent.
On the left side of this section, select one of the following reports from the drop-down:
- Automation duration - overall trend: Shows the week-over-week trend of the average handle time in minutes.
- AI agent-handled rate - overall trend: Shows the week-over-week trend in the AI agent-handled rate and displays the percentage of conversations effectively managed by the AI agent over time. Not available for agentic AI agents.
- AI agent overall satisfaction score - overall trend: Shows the AI agent satisfaction score over time in a week-over-week trend and provides a visual representation of how customer satisfaction levels fluctuate and evolve.
- BSAT score by intent: Shows the BSAT score for each use case.
- Conversations: Shows the daily trends in customer conversations handled by the AI agent to identify traffic peaks.
- Conversation status: Shows the AI agent's workload segmented by conversation status.
- Conversations with failed escalations: Shows the number of conversations where escalations were attempted but weren’t successful.
- Deflection rate - overall trend: Shows the week-over-week trend of the deflection rate and demonstrates how the AI agent redirected conversations from human agents across various languages.
- Escalation rate - overall trend: Shows the week-over-week trend in the escalation rate and highlights how often the AI agent successfully escalated conversations to human agents.
- Recognized messages rate vs. unrecognized messages rate: Shows the week-over-week trend of messages the AI agent did and did not recognize, providing insights into its comprehension and areas for improvement in managing user inquiries over time. Not available for agentic AI agents.
- Languages: Shows the AI agent's workload segmented by language. Available only for agentic AI agents.
- Unsupported languages breakdown: Shows the languages your customers speak that your AI agent doesn't support.
- Conversation custom resolutions by hour: Shows the final custom resolution states of users’ inquiries by hour to identify potential improvement areas. This is limited to dialogue replies.
- Conversation custom resolutions: Shows the final custom resolution states of your users’ inquiries to identify areas for improvement. This is limited to dialogue replies.
- Custom resolution rate - overall trend: Shows the week-over-week trend of the custom resolution rate to demonstrate the AI agent's efficiency in providing users with necessary information or resolving inquiries. This is limited to dialogue replies.
- Conversation by day and label: Shows the AI agent's workload segmented by conversation labels to understand the most frequent topics and how they evolve over time.
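Several of these reports are week-over-week trends. Conceptually, such a trend computes a rate per week and compares each week with the one before it, as in this sketch with hypothetical escalation counts:

```python
# Hypothetical weekly totals; a week-over-week trend compares each
# week's rate with the previous week's.
weekly_conversations = [500, 620, 580, 640]
weekly_escalations = [60, 70, 58, 80]

# Escalation rate per week, as a percentage.
rates = [e / c * 100 for e, c in zip(weekly_escalations, weekly_conversations)]

# Change in percentage points relative to the previous week.
changes = [rates[i] - rates[i - 1] for i in range(1, len(rates))]

for week, (rate, change) in enumerate(zip(rates[1:], changes), start=2):
    print(f"Week {week}: {rate:.1f}% ({change:+.1f} pts week over week)")
```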
On the right side of this section, select one of the following reports from the drop-down:
- Automated resolutions - overall trend: Shows the volume of automated resolutions graphed over time.
- Automation duration trend by language: Shows the week-over-week trend of the average handle time in minutes, segmented by language.
- AI agent-handled rate trend by language: Shows the week-over-week trend in the AI agent-handled rate, segmented by language, and displays the percentage of conversations effectively managed solely by the AI agent over time. Not available for agentic AI agents.
- AI agent overall satisfaction score trend by language: Shows the AI agent satisfaction score over time in a trend view, segmented by language, and provides a visual representation of how customer satisfaction levels fluctuate and evolve.
- Conversation status by intent: Shows the 10 most frequently recognized intents based on conversation status.
- Conversation status by label: Shows the 10 most frequent conversation labels based on the conversation status.
- Conversation status by language: Shows the final conversation status by language to categorize and identify potential improvement areas.
- Conversation status by time: Shows the distribution of conversations by status over time to identify trends.
- Deflection rate trend by language: Shows the week-over-week trend of the deflection rate and demonstrates how the AI agent redirected conversations from human agents across various languages. Not available for agentic AI agents.
- Escalation rate breakdown: Shows the breakdown of successfully escalated conversations.
- Escalation rate trend by language: Shows the week-over-week trend in the escalation rate and highlights how often the AI agent successfully escalated conversations to human agents.
- Unrecognized messages rate trend by language: Shows the week-over-week trend of the "Messages not understood" rate, segmented by language, with a recommended threshold line at 20%. Not available for agentic AI agents.
- Unrecognized messages rate breakdown by language: Shows a breakdown of messages the AI agent didn't understand, segmented by language. Not available for agentic AI agents.
- Compare the 10 most frequent conversation labels based on the final custom resolution state: Shows the 10 most frequent conversation labels based on the final custom resolution state.
- Custom resolution by language: Shows the final custom resolution states of your customers' inquiries by language to identify potential improvement areas.
- Custom resolution rate trend by language: Shows the week-over-week trend of the custom resolution rate, segmented by language, to demonstrate the AI agent's efficiency in providing users with necessary information or resolving inquiries.
- Custom resolution by intent: Shows the 10 most frequent conversation intents based on the final custom resolution status.
Use case performance
In the Use case performance section, you can view a table of your use cases and their key performance metrics, including:
- Conversations: Out of the total number of conversations, how many had this use case. Click the conversations icon to open the conversation logs filtered to the specified use case.
- Average BSAT: The average BSAT score for the use case.
- First use case: How often the use case was the first use case identified during a conversation.
- Custom resolution rate: The custom resolution rate.
- AI agent-handled rate: The AI agent-handled rate.
- Escalations: How many escalations to a human agent occurred during conversations with this use case.
- Failed escalations: How many escalations to a human agent were attempted during conversations with this use case but were unsuccessful because no agents were online or available or because of a technical error.
- Technical errors: How many technical errors occurred during conversations with this use case. For troubleshooting help, see Investigate any technical errors in the dialogue.
Use the % / # toggle at the top-right of the table to switch between percentages and raw numbers. Click any of the headers to sort the table by that column.
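The % / # toggle and column sorting behave roughly like the following sketch. The rows and column values here are hypothetical, not from the product:

```python
# Hypothetical use case rows: (name, conversations, escalations).
rows = [
    ("Refund request", 120, 18),
    ("Order status", 300, 12),
    ("Password reset", 80, 4),
]
total_conversations = sum(convs for _, convs, _ in rows)

# Sorting by a column header: here, Conversations, descending.
rows.sort(key=lambda row: row[1], reverse=True)

# The % / # toggle switches between raw counts and shares of the total.
show_percentages = True
for name, convs, escalations in rows:
    value = f"{convs / total_conversations:.0%}" if show_percentages else str(convs)
    print(f"{name}: {value}")
```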
Filtering the dashboard
You can filter the dashboard’s reports by time frame or other filters, including language, conversation status, label, intent, and test conversations.
To filter the dashboard by time frame
- In the Performance summary dashboard, in the upper-right corner, click Time frame.
- Using one of the following methods, select the dates you want to view conversations for:
  - On the right, select one of the predefined time frames: Today, Yesterday, Last 7 days, Last 30 days, This month, or Last month.
  - Select specific beginning and end dates on the calendar.
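The predefined time frames map to date ranges roughly as follows. This sketch assumes "Last 7 days" and "Last 30 days" include today; the product may define the boundaries differently:

```python
from datetime import date, timedelta

# Hypothetical mapping of the predefined time frames to date ranges.
def time_frame(name, today):
    if name == "Today":
        return today, today
    if name == "Yesterday":
        return today - timedelta(days=1), today - timedelta(days=1)
    if name == "Last 7 days":
        return today - timedelta(days=6), today
    if name == "Last 30 days":
        return today - timedelta(days=29), today
    if name == "This month":
        return today.replace(day=1), today
    if name == "Last month":
        # Step back to the last day of the previous month, then to its first.
        last_prev = today.replace(day=1) - timedelta(days=1)
        return last_prev.replace(day=1), last_prev
    raise ValueError(name)

start, end = time_frame("Last 7 days", date(2024, 10, 15))
print(start, end)  # 2024-10-09 2024-10-15
```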
To filter the dashboard by other filters
- In the Performance summary dashboard, click one of the following filters and select its values as needed depending on the data you want to see:
  - Language: Select one or more languages.
  - Conversation status: Select one or more of the following statuses:
    - Agent escalation: The conversation was successfully transferred to a human agent.
    - AI agent handled: The conversation had a meaningful intent, did not attempt escalation, and ended without the AI agent misunderstanding a message.
    - Custom escalation: The conversation had a custom escalation attempt.
    - Email escalation: The conversation was successfully emailed to the support team.
    - Escalation failed: The conversation had an unsuccessful escalation attempt.
    - No status: The conversation does not have an automatically assigned status.
  - Label: Select one or more labels. See Using labels to tag conversation content for advanced AI agents.
  - Intent: Select one or more intents. See About intents in advanced AI agents.
  - Advanced: Select Show test conversations to include conversations created through the test widget.
- Click Apply.
- Repeat the steps above to apply any additional filters as needed.