CSAT survey defaults to "Bad"



  • Official comment
    Tetiana Gron
    Zendesk Product Manager

    Hi all,

    I am excited to share that we've recently rolled out a solution to this problem. See our announcement: Announcing a spam prevention tool in CSAT.


  • NIC Inc

    At the Oregon E-Government Program we see the same thing, under very similar circumstances: the vast majority of our end users are state employees of various agencies, and some of those agencies run malware checkers that flip the feedback to bad without any action from the actual requester. Maybe just changing the order of the feedback links in the satisfaction survey email would help? I don't know, but it causes inconsistencies in our data too.


  • David Brown

    The Zendesk agent I was corresponding with provided a link to this article, which seems to describe the issue and a possible fix by altering the automation. I have not yet had time to clone the existing automation in my sandbox, make the revision, and test, so I am just passing that information along. Regardless, the out-of-the-box CSAT automation should work for all configurations, taking this scenario into account: corporate environments will almost certainly include defensive measures, and any that don't are leaving themselves vulnerable. I think the binary two-link system, where the satisfaction rating is determined simply by clicking a link, should be done away with entirely. Instead, provide a single trusted link with no rating presupposed, and let the end user select their rating, and provide their free-text feedback, at that CSAT destination URL.

  • Scott Allison
    Zendesk Product Manager

    Thank you for your feedback; it's truly appreciated. I want to confirm that Zendesk has made no changes to our CSAT feature that would cause this kind of effect. Additionally, we've undertaken a review of the data, and it's unclear how prevalent this issue actually is, or whether it's truly skewed towards negative. It looks like any non-human input is affecting both good and bad ratings equally. That said, if any customer believes they have data that may indicate otherwise, we'd be very interested to look at it. Unfortunately, this is a tricky issue and not one we can easily solve, given that so much here is outside our control.

    Our recommended guidance is to remove the direct links in the CSAT automation for good and bad, and, instead, to link to the CSAT landing page. For how to do that, read on.  

    Modify your survey automation to not include direct response links

    By default, the Request customer satisfaction rating (system automation) includes a block with both the good and bad response links. You can switch this out for a placeholder that contains a link to a separate page with these options instead.

    To accomplish this:

    1. In Admin Center, navigate to the Automations page.
    2. Locate the Request customer satisfaction rating automation, and click it to edit.
    3. Scroll down to Perform these actions to locate the email body section.
    4. Locate the {{satisfaction.rating_section}} placeholder and replace it with {{satisfaction.rating_url}}.

    For more information on placeholders available in Zendesk Support, see the article: Zendesk Support placeholders reference.
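    As a concrete sketch, the edit amounts to swapping one placeholder for the other in the automation's email body (the surrounding wording below is illustrative, not the exact default text):

```
Before:
    Please tell us about your support experience:
    {{satisfaction.rating_section}}

After:
    Please tell us about your support experience:
    {{satisfaction.rating_url}}
```

    The {{satisfaction.rating_section}} placeholder renders the direct Good/Bad links, while {{satisfaction.rating_url}} renders a single URL to a page where the end user chooses a rating, so a scanner that follows every link in the email can no longer set a score on its own.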

  • David Brown

    It looks like a community mod may have removed the quoted text from my original post. The summary is that anti-malware and anti-phishing software on corporate email servers may be the culprit, scanning through the links for safety. The "Bad" selection is the last link in the automation, and when a user clicks either the "Good" or "Bad" link, the rating is sometimes still set to "Bad" regardless of the actual selection. Perhaps calling these defensive software solutions the culprit is the wrong framing; they are doing their job. The problem is the CSAT automation's construction.

  • Dave Dyson
    Hi David, I just want to apologize if one of us deleted some of your text without leaving a note. We do generally remove content that conflicts with our Community Code of Conduct, but our practice is to leave a note explaining why, such as:
    This post was edited by the Zendesk Community Team [reason]
    Again, I'm sorry that didn't happen here. We do appreciate your feedback, and thanks again for that!
  • Andrew Chu

    Thanks David Brown for raising this post. In my group we've noticed the same trend: a huge spike of negative ratings in our CSAT with no reason provided. When we reach back out to our customers, they say they never even touched the survey email. Did you notice when the trend started to pick up? From our end, it started around the end of February.

    I would also ask the Zendesk team to conduct an internal investigation to see whether any internal factors are causing this. We've been using Zendesk for half a year now with the default rating format since day one, yet this huge spike only started recently.

  • David Brown

    Andrew Chu, I'm not precisely sure when it started to occur. Per my initial discussion with a Zendesk agent through messaging, this problem seems to have been identified in November 2021. On my end, adoption of our service is steadily increasing, so higher ticket volume means more CSAT survey answers coming in. One agent who handles a very high volume of tickets due to his area of expertise became concerned after checking his rated tickets. He reached out to his requesters to understand why they were giving bad ratings when the back-and-forth exchanges while working the tickets had been quite positive. That is when we discovered they were in fact not clicking the link to provide a bad rating, and sometimes not responding to the CSAT survey solicitation at all. This is what prompted my initial ticket via messaging and this community post: to raise awareness for other admins and get some priority put on resolving this issue for everyone.

  • Gareth Elsby

    I've seen this behaviour in the wild: in a previous role it affected our timesheet approvals process. The solution we came up with was to hide a hyperlink in the email designed as a 'honeytrap' for the bots.

    Essentially, if the hidden link was clicked, we could say with confidence that only a bot could have found and clicked it. Could Zendesk consider the same, whereby if the hidden third link is clicked, the CSAT response is nullified and the next click is expected to come from a human? This mitigates the risks identified:

    1. A bot clicks all links from top to bottom
    2. The negative CSAT option is usually the second option
    3. Zendesk records the last CSAT click as the final answer from the rater
    4. We don't want to increase customer effort by introducing a two-step rating process
    5. Zendesk polls results on an hourly/half-hourly basis, so won't be affected by multiple bot clicks.

    Could this option be explored by Zendesk as a solution to combat anti-spam link clickers?
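    To make the honeytrap idea above concrete, here is a rough sketch of the click-resolution logic. This is my own illustration, not anything Zendesk has implemented; the link names and the "last click wins" rule are assumptions drawn from this thread.

```python
# Hypothetical sketch of the honeytrap idea: a hidden third link is
# included in the survey email. Only an automated link scanner would
# find and click it, so a honeypot click nullifies the rating, and the
# next click afterwards is trusted as human.

def resolve_rating(clicks):
    """Given an ordered list of clicked link names ('good', 'bad',
    'honeypot'), return the rating to record, or None if no
    trustworthy click arrived."""
    rating = None
    bot_detected = False
    for link in clicks:
        if link == "honeypot":
            bot_detected = True   # only a bot could find this link
            rating = None         # discard anything clicked so far
        elif bot_detected:
            rating = link         # first click after the honeypot fires
            bot_detected = False  # is assumed to be human
        else:
            rating = link         # last click wins, per the thread
    return rating
```

    A scanner sweeping the email top to bottom would click good, bad, then the honeypot, leaving no rating recorded; a later genuine click from the requester would then set the real score.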

  • Rusty Wilson

    Scott Allison - you said, "We've undertaken a review of the data here and it's unclear how prevalent this issue actually is, or whether it's truly skewed towards negative."

    You are kidding, right? Have you not seen your own article and the "rage" it's created among a MASSIVE number of your customers? Everyone confirms the same thing: in or around October of 2021, Zendesk made *some* change that resulted in a massive skew of CSAT to the negative.

    Literally overnight, our CSAT dropped 20%.

    How much more evidence do you need that some change you made has created a terrible skew to the negative?

    ref: https://support.zendesk.com/hc/en-us/articles/4408831380122-Why-am-I-receiving-unexpected-bad-satisfaction-ratings-


  • David Brown

    While I am experiencing this problem, and started this discussion, I am not completely convinced that the issue lies with Zendesk. What seems to be happening is that incoming emails are scanned for malicious links, and this scan executes top to bottom. The default CSAT survey has the good rating first and the bad rating last in the email. The scanner likely sets the score to good first in its scan, then to bad, as that is the last hyperlink scanned. This is why the surveys are bot-clicked bad and not good: it is just the order of operations and the arrangement of the hyperlinks. If we were inclined to reverse the order of the survey links, I think we would still have bot clicks, but they would produce false positive ratings instead.

    What I am wondering now is whether the real culprit is the email scanner itself. It could be that those affected all happen to be using the same one, and an update to it has caused all of this pain.

    I am still testing in my sandbox a CSAT survey which links only to the survey form, without a default rating being set by the click-through from the initial solicitation email. So far, out of ~40 test tickets set to solved, each with two survey opportunities sent (an automation sends a second CSAT survey opportunity if there is no response to the first), no bot clicks have been detected. This solution relies on end users making just one additional click to provide their feedback: one to get to the survey, one to select their rating, and one to submit, whereas the default sets the rating based on the hyperlink clicked in the survey email. I'm hopeful this resolves my issue, but as others have said, it may result in fewer responses, hence the two-solicitation approach I am testing and aim to deploy.

  • David Brown

    I had >100 tests completed in my sandbox with 0 bot clicks using my new CSAT schema, which redirects end users to the survey site via a single link and does not auto-set the rating on click-through. I rolled this into production on April 14 with no bot-click incidents detected. Our email targeted-attack protection was the culprit: it scans all incoming emails for targeted phishing and checks linked sites for malware hosting or otherwise malicious intent. For the sake of my own company's security, I will not disclose publicly which security products we use; however, were Zendesk to contact me and ask directly, that conversation may be covered under our existing CDA.

  • Gwenaelle Bos

    Hi, to share our own experience with this issue: we have always known that some anti-virus software could click on links, and it has always represented a very small share of our Bad ratings.
    A few weeks ago, our company migrated its Zendesk instance; we moved from one contract to another. Since that day, we have been witnessing a HUGE increase in the number of Bad ratings we receive (+250%, to be exact). We have reached out to customers, who confirmed they never clicked the link on purpose. How would you explain that from one month to the next, all our clients (from different companies and different countries) would start using anti-virus software at the same time? We now receive dozens of fake Bad ratings, and this behaviour is clearly impacting our customer service. We hope this issue will be addressed!

  • Scott Allison
    Zendesk Product Manager

    Gwenaelle Bos, thanks for your message, and I'm sorry to hear that this is affecting you right now. We do have a known solution for it that does work. It's the same one listed at the top of this page, and there's also a comment on this article from one of our Vice Presidents which explains a little more about it.

  • Rusty Wilson

    Scott Allison - appreciate the info - but once again, Zendesk has completely sidestepped the *QUESTION*.

    I'll post the question Gwenaelle asked, as it's the same question I asked a few months ago.

    Please explain how the number of "false bad" ratings can change dramatically overnight.

    In their case, when they changed contracts, they saw an immediate increase.

    In my case - all I have to do is look at the historical metrics to see that in Oct of last year there was a massive, immediate increase.

    We made no changes; at that time we were in a configuration freeze.

    I'm sorry but I do not expect that thousands of our customers chose to install an email scanning system on exactly the same day.

    There is absolutely something Zendesk did on their side that exacerbated the issue.

    I appreciate the pointer to a workaround, but the continual avoidance of actually answering the base question is concerning. It means either you don't care, or you don't know.

  • Scott Allison
    Zendesk Product Manager

    Rusty Wilson, thanks for the follow-up question. First up, I want to make sure you have seen the response from one of our Vice Presidents on the main article about CSAT ratings. I'll copy it below for others' benefit.

    My name is Pablo Kenney, and I'm a vice president of product here at Zendesk. I wanted to take a moment to jump into this thread and lay out a few responses on this issue. 

    First, the relative lack of response on this thread is not an indication of our lack of interest in the issue. We are aware of this issue, and appreciate the difficulty of the situation for those experiencing this problem. If you hop over to the Support Topic, you'll see that we have been engaging on this topic over there. 

    Here's what I can tell you: 

    1) We have not changed how CSAT operates in any way that would lead to the issues mentioned in this thread. Instead, as mentioned above, what is likely happening in these instances is the end-user has a link expander, like an anti-virus checker, installed on their machine or running on their mail server. 

    2) We have seen that a set of customers have received false CSAT scores over time, increasing in the October-December 2021 period. This is likely tied to a change in link-expansion behavior, but we have not tied it to a specific service or set of services. Obviously, the impact is significant for those who are experiencing the issue, but it is not widespread and does not seem related to how the Zendesk product behaves. 

    3) We’ve thoroughly investigated this issue, and we continue to recommend the suggestions in the article listed above. The suggestions in that article will resolve this for anyone experiencing the issue. 

    4) We have been thinking about how to improve CSAT functionality (including avoiding issues like this one) and look forward to delivering those improvements in the future. Those investments remain in the planning stage. 

    Note: In the interest of keeping related conversations in the same space, product managers primarily respond to product feedback in the feedback topic threads, so while we will continue to allow comments on this article for questions, if you wish to provide feedback on this issue, please do so in the Support Feedback Topic.

    A special thanks to those who have provided their knowhow to help others implement the workarounds from the article.

    I also want to reiterate Pablo Kenney's comment that we at Zendesk have not made changes that would affect CSAT in this way.

    To answer your question of why this is happening, our working hypothesis is that third-party email security vendors have changed something, for example the algorithms they use to decide whether or not to click on links.

  • Rusty Wilson

    Thank you very much Scott Allison and Pablo Kenney

  • Nathan Purcell

    An obvious side effect of this problem is that Zendesk admins have no way to change a user's rating.

    Adding such a feature clearly allows it to be gamed (only to one's own detriment), but it should definitely be an option in either the UI or the API. It's unprofessional to have to explain the platform's weakness to a customer after asking them for feedback on a negative rating, and then to ask them to either undo it or replace their bad rating with a good one.

    I intend to adopt the stop-gap approach, but there is more to do here, Zendesk.
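    In the meantime, admins can at least audit for the pattern this thread describes. Below is a minimal sketch against the Zendesk Support API's satisfaction ratings endpoint; the subdomain, email, and token are placeholders, and treating comment-free bad ratings as suspicious is my own heuristic, not an official Zendesk signal.

```python
import base64
import json
import urllib.request


def flag_suspicious(ratings):
    """Return bad ratings that arrived with no comment -- a rough
    heuristic for scanner-generated clicks, not an official signal."""
    return [r for r in ratings
            if r.get("score") == "bad" and not r.get("comment")]


def fetch_bad_ratings(subdomain, email, api_token):
    # GET /api/v2/satisfaction_ratings with a score filter is part of
    # the documented Zendesk Support API; authentication here uses the
    # standard email/token basic-auth scheme.
    url = (f"https://{subdomain}.zendesk.com"
           "/api/v2/satisfaction_ratings.json?score=bad")
    creds = base64.b64encode(f"{email}/token:{api_token}".encode()).decode()
    req = urllib.request.Request(
        url, headers={"Authorization": f"Basic {creds}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["satisfaction_ratings"]
```

    Running flag_suspicious over the fetched ratings gives a list of candidates to follow up on with requesters, much as several commenters here did by hand.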


Post is closed for comments.
