About Satisfaction Prediction Scores

19 Comments

  • Josh Greenwald

    Thanks for this detailed explanation Mike. Super helpful!

  • Nienke

    I'm wondering if there's more detailed information on how each type of data is weighted. We turned on the prediction model a couple of days ago, but so far it hasn't given us very reliable data.

    Is there a way to know what the predictions are based on, specifically? Thanks!

  • Jessie Schutz

    Hi Nienke!

    I don't know the specifics of how each part of the data set is weighted, but I do know that a machine learning process is used to train the model for your account.

    In general, when users experience the kind of issue you're seeing, it's because we just need more data to be able to build an accurate model. This usually happens because there are many varieties of tickets or not enough good/bad ratings in your account, and it's possible to run into this issue even if you otherwise meet the minimum criteria to use the feature.

    The good news is that all you need to fix it is a tincture of time. The system will continue gathering information and get progressively "smarter", resulting in better data.

    Please let me know if you have any other questions!

  • John Miller

    Does the machine learning process take into account the real satisfaction rating once a ticket is solved and the customer rates it? For example, a ticket gets a predicted score of 43/100 based on touches or text, but the customer rates it positively after the ticket is actually solved.

  • Rebecca

    John-

    I tested this in my test account and it does not appear to take into account the satisfaction rating of a rated ticket.

    The Satisfaction Prediction model is based on ticket data and takes into account time metrics, ticket text, and effort metrics, but it does not appear to be affected by the actual rating of a single ticket. Ticket ratings do, however, affect the model's predictions for other tickets going forward; they just don't affect the prediction for the rated ticket itself. Furthermore, the prediction score of a rated ticket can still change if more ticket events occur that the model considers.

  • Mike Mortimer

    Rebecca nailed it: the ML process will consider this ticket and its rating and use them to build a more accurate model for future predictions. It won't directly affect or adjust the ticket that's been rated.

  • Nienke

    Thanks Jessie, that makes sense!

    The other thing I'm wondering about is how the general satisfaction ratings are calculated. I know it's not just based on good scores and bad ones, since we run a homemade script too and that gives us different results than the ones available in Insights.

    Does anyone have more knowledge to share with me?

  • 周凯

    Hi Mike,

    Well, it sounds interesting, and I have two questions:

    1. Is this predictive model trained for each agent account or each customer account?

    2. About the text metrics you mentioned: I'm really curious how you relate that text to the predictive model via machine learning. Thanks!

     

  • Mike Mortimer

    Hi 周凯,

    1) The model is trained for each Zendesk account - so for all of your tickets, from all of your customers - if that makes sense?

    2) The machine learning combines three ML models. One covers all the comment text: it essentially performs text analysis, correlating the words of the ticket against the words of similar tickets that have been rated badly or well.
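As an illustrative sketch only (a toy, not Zendesk's actual model; all ticket text here is invented), the word-correlation idea can be pictured like this:

```python
from collections import Counter

# Toy illustration of the text-analysis signal described above: words from
# previously rated tickets are correlated with good/bad outcomes, and a new
# ticket is scored by the word weight it shares with each group.
good_tickets = ["thanks quick fix works great", "resolved fast thank you"]
bad_tickets = ["still broken no reply", "waiting days still broken"]

good_words = Counter(w for t in good_tickets for w in t.split())
bad_words = Counter(w for t in bad_tickets for w in t.split())

def text_signal(ticket_text):
    """Rough 0-100 score: the share of word weight on the 'good' side."""
    words = ticket_text.split()
    good = sum(good_words[w] for w in words)
    bad = sum(bad_words[w] for w in words)
    if good + bad == 0:
        return 50  # no evidence either way
    return round(100 * good / (good + bad))

print(text_signal("thanks it works"))  # 100: matches only 'good' words
print(text_signal("still broken"))     # 0: matches only 'bad' words
```

A real model weighs many more signals, but this captures the "correlate words against rated tickets" idea.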

     

  • Mark Hinson

    Hi Mike,

    I have a question that I think is similar to 周凯 above, but I would like you to clarify the answer.

    周凯 asked:

    1. is this predictive model trained for each agent account or customer account?

    What I would like to know is whether a ticket's prediction score is based on previous tickets from the same customer or on tickets handled by the same agent - i.e., previous BAD Sat from this customer may mean a BAD Sat on this ticket.

    thanks, Mark

  • Sam Tilin

    Hi, I was wondering how I should interpret the score. In other words, does a score of 45 mean that the ticket has a 45% chance of receiving a positive rating? Or if I'm looking at an average prediction of 65, can I expect my overall satisfaction rate to be around 65%?

  • Mike Mortimer

    Hi Sam, great question - I'll update the articles to be clearer about what the score means and how to use it because it's definitely missing that context. 

    The easiest and most effective way to use the score is to take your long term CSAT average (let's just say it's 85%). 100 - 85 = 15. So, in this situation a score of 15 or less is the clearest signal that the ticket satisfaction will be bad. Utilising the Insights reports, you could also dig a little deeper to find other patterns that can be useful when considering workflow changes as a result of the prediction score.
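That arithmetic, using the example figures from this comment, comes down to:

```python
# Mike's rule of thumb with the example numbers above: a long-term CSAT
# average of 85% gives 100 - 85 = 15, and a prediction score at or below
# that threshold is the clearest signal of a likely bad rating.
long_term_csat = 85
threshold = 100 - long_term_csat  # 15

def likely_bad(prediction_score):
    """True when the predicted score signals a probable bad rating."""
    return prediction_score <= threshold

print(likely_bad(12))  # True: 12 <= 15
print(likely_bad(43))  # False
```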

    Mark, sorry I never got back to you! The scores (and the model) are generated per account and take into consideration many, many signals to determine the ultimate prediction score. Only one of those signals is the requester, and depending on your model, this might be only a minor part of the overall prediction calculation.

    Thanks! -Mike

  • Lee Reichardt

    What is the sample size for this to actually kick in?

  • Jessie Schutz

    Hey Lee!

    Per the article above:

    "To create a reliable predictive model, you need a minimum of 200 satisfaction ratings per month for three months, with a combination of both good and bad ratings, on tickets that have a First Reply Time greater than 0 (Chat and Talk tickets are excluded for this reason)".

    You won't be able to enable the feature until you've met that criteria. I hope that helps!
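The quoted criteria can be expressed as a mechanical check; the data layout below is invented purely for illustration, not a Zendesk API shape:

```python
# Toy check of the documented minimum: at least 200 ratings per month for
# three months, with both good and bad ratings, counting only tickets whose
# First Reply Time is greater than 0.
def meets_criteria(monthly_ratings):
    """monthly_ratings: three lists of (rating, first_reply_time) tuples,
    where rating is 'good' or 'bad' and first_reply_time is in seconds."""
    if len(monthly_ratings) < 3:
        return False
    for month in monthly_ratings:
        eligible = [(r, frt) for r, frt in month if frt > 0]
        kinds = {r for r, _ in eligible}
        if len(eligible) < 200 or not {"good", "bad"} <= kinds:
            return False
    return True

# 210 eligible ratings per month, both kinds present -> meets the minimum
month = [("good", 30)] * 150 + [("bad", 30)] * 60
print(meets_criteria([month] * 3))  # True
```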

  • Lijun Wu

    Is this article up to date?

    I have the following message from my Zendesk Prediction Score Settings:

    This feature is only available for Enterprise customers who receive a minimum of 500 satisfaction ratings per month. Learn more

     

  • Brett - Community Manager

    Hi Lijun,

    I reached out internally and confirmed that in some cases the minimum can be higher, so this message can be expected. When the feature was built there was a 500-rating minimum; however, the team was able to decrease this to 200 on occasion. We are looking into building a better data model for this feature, but we are still ironing out some bugs before we do so. The expectation for now is that the 200 minimum is just a guide and can be higher.

    I hope this clears up any confusion!

  • Rebecca Love

    Hi Guys, 

    Our CSAT prediction is yet to be activated.

    Is someone able to please look into this for us?

  • Rebecca Love

    Hi Guys, 

    Any luck on this?

  • Louise Dissing

    Hi! :-)

     

    I would love to know whether this feature could work together with a third-party satisfaction tool such as Surveypal.

    I'm looking forward to hearing from you!
