This article describes different methods for reviewing customer support conversations and their benefits for your team.
This article contains the following sections:

- Manager reviewing agents' conversations
- Peer-to-peer reviews
- Self-review
- Reactive review
Manager reviewing agents' conversations
In this traditional approach, the Customer Support (CS) Manager or a Team Lead reviews agents' work and provides feedback. For larger teams, a dedicated Quality Reviewer or team may be responsible for this task.
This method works well for companies with structured teams and a hierarchical setup. Because the same people review everyone's work, it creates a consistent workflow, ensures uniform feedback, and makes it easy to compare agents' performance.
Peer-to-peer reviews
In peer-to-peer reviews, agents review each other's work. This method is effective for smaller teams and companies with an open culture. Agents can learn from their peers by seeing how others handle similar issues and sharing tips and experiences.
Receiving feedback from several different people gives agents fresh perspectives on their work, and involving more people in the process helps cover more conversations. This approach fosters a collaborative culture where agents support each other in improving their performance.
However, comparing agent performance is harder when multiple reviewers are involved, and training everyone to review conversations and keeping track of the reviews can be time-consuming. The benefits can still be significant, and a calibration process can help align reviewers on shared evaluation standards.
Self-review
In self-reviews, agents review their own conversations, critically evaluating their responses and various aspects of their job.
Since you invest time in hiring the right people, it makes sense to trust them to review and evaluate their work critically.
Reactive review
With a large volume of conversations, it may be practical to focus feedback efforts on cases where issues are known to have occurred, such as poor CSAT ratings, conversations with extensive back-and-forth, or long response times.
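If your help desk data includes ratings and response metrics, this selection can be automated with a simple filter. The sketch below is illustrative only: the field names (`csat_score`, `reply_count`, `first_response_minutes`) and the thresholds are assumptions, not the schema of any particular help desk.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Conversation:
    id: str
    csat_score: Optional[int]      # 1-5 survey rating; None if the customer never answered
    reply_count: int               # total messages exchanged in the thread
    first_response_minutes: float  # time until the first agent reply

def needs_reactive_review(conv: Conversation,
                          min_csat: int = 3,
                          max_replies: int = 10,
                          max_first_response: float = 60.0) -> bool:
    """Flag conversations that show a known problem signal."""
    poor_csat = conv.csat_score is not None and conv.csat_score < min_csat
    long_thread = conv.reply_count > max_replies
    slow_reply = conv.first_response_minutes > max_first_response
    return poor_csat or long_thread or slow_reply

# Build the reactive review queue from exported conversation data.
conversations = [
    Conversation("c1", csat_score=2, reply_count=4, first_response_minutes=12),
    Conversation("c2", csat_score=5, reply_count=3, first_response_minutes=8),
    Conversation("c3", csat_score=None, reply_count=14, first_response_minutes=95),
]
review_queue = [c.id for c in conversations if needs_reactive_review(c)]
print(review_queue)  # ['c1', 'c3']
```

In practice you would tune the thresholds to your own volume and route the flagged conversation IDs into your review queue.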
This approach can introduce bias into your internal quality score, but it is a quick way to see the positive impact of conversation reviews on your results.
It is important not to mix reactive and proactive (randomly selected) review results: reactive reviews will likely score lower, so they are not comparable to scores from a randomly selected sample.
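One simple way to keep the two result sets apart is to record how each reviewed conversation was selected and aggregate the scores per selection type. The snippet below is a minimal sketch under assumed field names and a 0-100 score scale; it is not tied to any specific quality tool.

```python
from collections import defaultdict

# Each review records how the conversation was selected and the score it received (0-100 assumed).
reviews = [
    {"selection": "reactive",  "score": 55},
    {"selection": "reactive",  "score": 40},
    {"selection": "proactive", "score": 85},
    {"selection": "proactive", "score": 90},
    {"selection": "proactive", "score": 80},
]

# Aggregate separately so low-scoring reactive reviews do not drag down the random-sample quality score.
scores_by_type = defaultdict(list)
for review in reviews:
    scores_by_type[review["selection"]].append(review["score"])

for selection, scores in scores_by_type.items():
    print(f"{selection} average score: {sum(scores) / len(scores):.1f}")
# reactive average score: 47.5
# proactive average score: 85.0
```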