Add-on: Quality Assurance (QA) or Workforce Engagement Management (WEM)
The AutoQA dashboard tracks team performance across all AutoQA categories. It monitors quality performance and agent scores over time, providing breakdowns by agent and category. This dashboard is ideal for monitoring trends and identifying the training needs of your teams.
While manually submitted grades contribute to the agents’ Internal Quality Score (IQS), automated reviews are tracked using the Auto Quality Score (AQS).
This article contains the following topics:
- Understanding the main AutoQA dashboard cards (with examples)
- Understanding additional AutoQA dashboard cards
Understanding the main AutoQA dashboard cards (with examples)
The following quality indicators represent the main default cards displayed on the AutoQA dashboard:
Use the table below to understand the metrics for your main dashboard cards, using the values in the screenshot above as an example:
Metric | Description | Example (using the screenshot above) |
Auto-reviewed conversations | The number of conversations that had at least one category automatically scored without human intervention. | Using the above screenshot as an example, there are 404 conversations that have undergone at least one auto-review but have not received any manual reviews. |
Manually reviewed conversations | The number of conversations that were manually reviewed by humans. Note: This value is based on the date filter and may differ from the review date reported in the Reviews dashboard. AutoQA can be triggered in various circumstances, such as when an existing conversation with a status of Closed or Solved is updated. | In the screenshot above, 173 conversations were manually reviewed. Note that there may be overlap between conversations that were auto-reviewed and those that received manual reviews, because the conversations counted in this card need only include at least one manual review. |
Efficiency gain | The ratio of conversations AutoQA can cover compared to manual reviews, calculated as: Auto-reviewed conversations ÷ Manually reviewed conversations. | In the example above, the efficiency gain is 404 ÷ 173 ≈ 2.34. This indicates that for every manual review, you gain the equivalent of 2.34 conversations through auto-review (see the sketch after this table). |
Auto-reviewed per reviewee | The average number of auto-reviews conducted for each reviewee by AutoQA, calculated as auto-reviewed conversations divided by the number of reviewees who received an auto-review. This illustrates how AutoQA aids in providing feedback. | Note that the denominator reflects the number of users who received an auto-review, not the total number of users in a workspace. The numerator includes conversations with both auto and manual reviews: because this metric assesses auto-review coverage per reviewee, all conversations that have received an auto-review are counted. |
Manually reviewed per reviewee | The average number of manual reviews given to each reviewee, calculated as manually reviewed conversations divided by the number of reviewees who received a manual review. | This figure is based on the number of users who received a manual review, not the total number of users in a workspace. |
Acceptance rate | The percentage of AutoQA category scores accepted by users as accurate. A high acceptance rate indicates strong alignment between AutoQA scores and user assessments. For example, suppose AutoQA assigns a score of 100% for Category A and Category B, and N/A for Category C. If you submit a manual review keeping 100% for Category A, adjusting Category B to 50%, and assigning 100% for Category C (disagreeing with the AutoQA scores for both Category B and Category C), your acceptance rate for that review is 1 accepted category out of 3, or about 33% (see the sketch after this table). | This rate pertains specifically to conversations where a manual review replaced the auto-review for a particular reviewee. |
Modified AutoQA conversations | The number of conversations where a human has changed the AutoQA score in at least one category. | In the acceptance rate example above, the reviewer disagreed with AutoQA in two categories, so that conversation would be counted among the 79 Modified AutoQA conversations shown in the dashboard screenshot above. |
AQS | Average auto quality score of all conversations evaluated by AutoQA. |
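To make these calculations concrete, here is a minimal Python sketch that reproduces the efficiency gain and acceptance rate figures from the examples above, plus the per-reviewee average formula. Only the numbers (404, 173, and the Category A/B/C scores) come from this article; all variable and function names are illustrative assumptions, not part of any AutoQA API.

```python
# Sketch of the main AutoQA dashboard calculations.
# Figures come from the screenshot example in this article;
# all names here are illustrative, not an AutoQA API.

auto_reviewed = 404       # conversations with at least one auto-reviewed category
manually_reviewed = 173   # conversations with at least one manual review

# Efficiency gain: auto-review coverage per manual review.
efficiency_gain = auto_reviewed / manually_reviewed
print(f"Efficiency gain: {efficiency_gain:.2f}")  # -> 2.34

# Acceptance rate for a single manual review, using the Category A/B/C
# example from the table. None stands for an N/A AutoQA score.
auto_scores = {"A": 100, "B": 100, "C": None}
manual_scores = {"A": 100, "B": 50, "C": 100}

# A category counts as accepted only if the manual score matches AutoQA's.
accepted = sum(1 for cat in auto_scores if auto_scores[cat] == manual_scores[cat])
acceptance_rate = accepted / len(auto_scores)
print(f"Acceptance rate: {acceptance_rate:.0%}")  # -> 33%

def per_reviewee(total_reviews: int, reviewee_count: int) -> float:
    """Average reviews per reviewee.

    reviewee_count is the number of users who actually received a review
    of that type, not the total number of users in the workspace.
    """
    return total_reviews / reviewee_count
```

Run as-is, this prints 2.34 and 33%, matching the worked examples in the table above.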
Understanding additional AutoQA dashboard cards
Use the table below to understand the metrics for the additional AutoQA cards:
Metric | Description |
AQS over time | Difference in auto quality score over a selected period. This metric helps quickly identify drops in performance, allowing you to address issues promptly. |
Breakdown for top languages | This metric provides the ratio and count of conversation languages, helping you identify where to invest your support efforts. For example, if you notice an increase in volume for a specific language, it may be beneficial to include that language in your knowledge base. |
Category insights | Breakdown of AutoQA scores per category. |
Custom category insights | Breakdown of AutoQA scores per custom AutoQA category. |
Category scores over time | Line graph showing changes in category scores over a selected period. |
AutoQA conversations by category | The number of conversations that received an AutoQA rating. When a new category is added, only new incoming conversations with a Closed or Solved status are automatically analyzed and rated based on the category conditions. These are conversations that were closed with at least one public reply from both the agent and the customer (see the sketch after this table). Existing conversations with a Closed or Solved status that receive an update after the category is added are also automatically analyzed and rated. Consequently, you may notice a conversation that has already been analyzed and rated being assessed again in a new workspace. |
Category scores per reviewee | This metric provides information on reviewees and their automatically calculated average scores for specific auto categories over a selected time period. It helps quickly identify which agents are underperforming in those areas, allowing you to determine who needs training. Additionally, it provides a reference for relevant conversations to review for each agent. |
Root causes of spelling and grammar mistakes per reviewee | Information about reviewees and the root causes of mistakes identified by AutoQA's Spelling and Grammar categories. |
Root causes of tone mistakes per reviewee | This metric provides information about reviewees and the root causes of tone mistakes identified by AutoQA's Tone category. |
Repeated spelling and grammar mistakes per reviewee | This metric lists repeated errors, broken down by reviewee and their frequency. |
Last data update time (UTC) | The date and time of the last dashboard update. The AutoQA dashboard updates automatically every hour. |
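For the AutoQA conversations by category card above, the following sketch illustrates the eligibility conditions it describes: a conversation must be Closed or Solved and contain at least one public reply from both the agent and the customer. The Conversation and Reply types and every field name here are assumptions made for illustration; AutoQA's actual trigger logic is internal to the product.

```python
from dataclasses import dataclass, field

# Hypothetical data model; these types and fields are assumptions
# for this sketch, not part of any AutoQA API.
@dataclass
class Reply:
    author: str   # "agent" or "customer"
    public: bool

@dataclass
class Conversation:
    status: str                    # e.g. "closed", "solved", "open"
    replies: list = field(default_factory=list)

def eligible_for_autoqa(conv: Conversation) -> bool:
    """Mirrors the conditions described above: the conversation is Closed
    or Solved and has at least one public reply from both the agent and
    the customer."""
    has_agent_reply = any(r.public and r.author == "agent" for r in conv.replies)
    has_customer_reply = any(r.public and r.author == "customer" for r in conv.replies)
    return conv.status in {"closed", "solved"} and has_agent_reply and has_customer_reply

# Example: a solved conversation with public replies from both sides qualifies.
conv = Conversation("solved", [Reply("agent", True), Reply("customer", True)])
print(eligible_for_autoqa(conv))  # -> True
```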