About Satisfaction Prediction Scores


Satisfaction Prediction Scores are generated using your account’s customer support data, and can identify what characteristics are likely to result in your customer being satisfied. When you enable satisfaction prediction, you can integrate it into your business rules, and use scores to create views, triggers, and automations to draw attention to at-risk tickets. The prediction serves as an early warning system so you can turn things around before it’s too late.
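For illustration, here is a minimal sketch (not official Zendesk tooling) of the kind of filtering a view or trigger performs on these scores; the `satisfaction_probability` field name and the 0.3 cutoff are assumptions about how scores might be exposed on ticket records:

```python
# Hypothetical ticket records; field names are assumptions for illustration.
tickets = [
    {"id": 101, "subject": "Refund request", "satisfaction_probability": 0.22},
    {"id": 102, "subject": "Login help", "satisfaction_probability": 0.81},
    {"id": 103, "subject": "Feature question", "satisfaction_probability": None},
]

def at_risk(tickets, cutoff=0.3):
    """Keep tickets whose predicted satisfaction falls below the cutoff,
    mirroring a 'Satisfaction Prediction less than X' view condition."""
    return [
        t for t in tickets
        if t["satisfaction_probability"] is not None
        and t["satisfaction_probability"] < cutoff
    ]

print([t["id"] for t in at_risk(tickets)])  # → [101]
```

In the actual product you would express the same condition in the view, trigger, or automation builder rather than in code.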

Satisfaction prediction is available to Enterprise customers.


If you're ready to get started with satisfaction prediction on your account, see Working with satisfaction prediction.

How Satisfaction Prediction Scores are generated

The Satisfaction Prediction Score is an indicator of whether a ticket is likely to receive a good or bad satisfaction rating. A predictive model is built for your account using past customer support and satisfaction rating data. New tickets and ticket updates are evaluated against this model to determine if a customer is likely to be satisfied at the end of their interaction.

The predictive model takes into account the following data:
  • Time metrics, such as first reply time, full resolution time, and requester wait time.
  • Ticket text, drawn from the subject, description, and comments fields.
  • Effort metrics, including the number of replies, reopened tickets, and reassigned tickets.

A personalized model is created from this data that identifies what characteristics are likely to result in a satisfied customer.

(Artist's rendition of the prediction process.)

To create a reliable predictive model, you need a minimum of 200 satisfaction ratings per month, with a combination of both good and bad ratings, on tickets that have a first reply time greater than 0 (Chat and Talk tickets are excluded for this reason). If these criteria are met, a model can be built, and a validation and performance check is run on it. Provided the model meets a performance threshold, the feature becomes available for your account.
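Those minimums can be sketched as a simple check. The data shape below is hypothetical (not a Zendesk API), but the thresholds come from the paragraph above:

```python
def meets_prediction_criteria(monthly_ratings):
    """Check one month of ratings against the stated minimums:
    at least 200 ratings, a mix of good and bad, counting only
    tickets whose first reply time is greater than 0."""
    eligible = [r for r in monthly_ratings if r["first_reply_time"] > 0]
    scores = {r["score"] for r in eligible}
    return len(eligible) >= 200 and {"good", "bad"} <= scores

# 180 good + 40 bad rated tickets, all with a nonzero first reply time.
sample = ([{"score": "good", "first_reply_time": 15}] * 180
          + [{"score": "bad", "first_reply_time": 40}] * 40)
print(meets_prediction_criteria(sample))  # → True
```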

Note: Prediction scores are generated only for tickets created or updated after you enable Satisfaction Prediction. Tickets that existed before you enabled the feature will not receive prediction scores until they are updated.

Once you have enough ratings, and have enabled satisfaction prediction in your account, the score appears in your tickets, and you can add Satisfaction Prediction Scores to ticket views.

You can also view your Satisfaction Prediction Scores in the Prediction tab of the Insights dashboard. After enabling the feature, the Prediction tab appears in the dashboard within 24 hours. For information on the Insights Prediction tab, see Prediction tab reports.

How events affect Satisfaction Prediction Scores

Every time you take an action on a ticket, the prediction score is recalculated. This makes it easy to understand the impact each ticket update has. The updated score is attached to each ticket event, so you can view the score history throughout the life of the ticket.

When you're viewing a ticket, select the Show all events option. The ticket action directly before each prediction is the event that caused the prediction update.



  • Josh Greenwald

    Thanks for this detailed explanation Mike. Super helpful!

  •

    I'm wondering if there's more detailed information on how each type of data is weighted. We turned on the prediction model a couple of days ago, but so far it hasn't given us very reliable data.

    Is there a way to know what the predictions are based on, specifically? Thanks!

  •

    Hi Nienke!

    I don't know any specifics on how each part of the data set used is specifically weighted, but I do know that a machine learning process is used to train the model for your account.

    In general, when users experience the kind of issue you're seeing, it's because we just need more data to be able to build an accurate model. This usually happens because there are many varieties of tickets or not enough good/bad ratings in your account, and it's possible to run into this issue even if you otherwise meet the minimum criteria to use the feature.

    The good news is that all you need to fix it is a tincture of time. The system will continue gathering information and get progressively "smarter", resulting in better data.

    Please let me know if you have any other questions!

  •

    Is the machine learning process taking into account the real satisfaction rating once a ticket is solved and a customer rates?  For example, a predicted score of 43/100 based on touches or text, but the customer rating positively after the ticket is actually solved?

  •


    I tested this in my test account and it does not appear to take into account the satisfaction rating of a rated ticket.

    The Satisfaction Prediction model is based on ticket data and takes into account time metrics, ticket text, and effort metrics, but does not appear to be affected by the actual rating of a single ticket. Ticket ratings do, however, affect the model's predictions going forward for other tickets, but they do not affect the prediction for the rated ticket. Further, the prediction score of a rated ticket can change if more ticket events occur that the model considers.

    Edited by Rebecca
  •

    Rebecca nailed it: the ML process will consider this ticket and its rating and use them in building a more accurate model for future predictions. It won't directly affect or adjust the ticket that's been rated.

  •

    Thanks Jessie, that makes sense!

    The other thing I'm wondering about is how the general satisfaction ratings are calculated. I know it's not just based on good scores and bad ones, since we run a homemade script too and that gives us different results than the ones available in Insights.

    Does anyone have more knowledge to share with me?

  •

    Hi Mike,

    Well, it sounds interesting and I have 2 questions.

    1. Is this predictive model trained for each agent account or each customer account?

    2. About the text metrics you mention: I'm really curious how you relate those texts to the predictive model through machine learning. Thanks!


    Edited by 周凯
  •

    Hi 周凯,

    1) The model is trained for each Zendesk account, so for all of your tickets, from all of your customers, if that makes more sense?

    2) The machine learning combines three ML models. One handles all the comment text: it essentially does text analysis, correlating the words of the ticket against the words of similar tickets that have been rated badly or well.


  •

    hi Mike,

    I have a question that I think is similar to 周凯 above, but I would like you to clarify the answer.

    周凯 asked

    1. is this predictive model trained for each agent account or customer account?

    What I would like to know is if a ticket's Predictive Score is based on previous tickets from the same customer or tickets handled by the same Agent.  i.e. previous BAD Sat from this customer may mean a BAD Sat on this ticket.

    thanks, Mark

  •

    Hi, I was wondering how I should interpret the score. In other words, does a score of 45 mean that the ticket has a 45% chance of receiving a positive rating? Or if I'm looking at an average prediction of 65, can I expect my overall satisfaction rate to be around 65%?

  •

    Hi Sam, great question - I'll update the articles to be clearer about what the score means and how to use it because it's definitely missing that context. 

    The easiest and most effective way to use the score is to take your long term CSAT average (let's just say it's 85%). 100 - 85 = 15. So, in this situation a score of 15 or less is the clearest signal that the ticket satisfaction will be bad. Utilising the Insights reports, you could also dig a little deeper to find other patterns that can be useful when considering workflow changes as a result of the prediction score.
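    That rule of thumb is a tiny calculation. The 0–100 score range and the 85% example CSAT come from the explanation above; the helper name is just for illustration:

```python
def bad_sat_cutoff(long_term_csat_pct):
    """Derive the 'likely bad satisfaction' cutoff: 100 minus long-term CSAT."""
    return 100 - long_term_csat_pct

print(bad_sat_cutoff(85))        # → 15
print(42 <= bad_sat_cutoff(85))  # → False (a score of 42 isn't a clear bad signal)
```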

    Mark, sorry I never got back to you! The scores (and the model) are generated per account and take into consideration many many signals to determine the ultimate prediction score. Only one of those signals is the requester and depending on your model, this might be only a minor part of the overall prediction calculation.

    Thanks! -Mike

