CSAT survey default to "Bad"
Completed
Posted Mar 09, 2022
I am an admin and have received reports from my agents that end-users' CSAT surveys are pre-populated as Bad. I reached out for help, and through that discussion found out that this is indeed a known problem for a configuration like mine. Our configuration is inward facing: all of our agents and end-users are corporate employees, and all email traffic ultimately comes in through our corporate email services, where it is scanned for malware, malicious links, phishing attempts, etc.
I have lost confidence in our CSAT ratings. Our end-users should not be seeing a CSAT survey that defaults to Bad when they click the in-line link in the email they receive. This really needs to be sorted out and the default automation changed.
7
18
18 comments
Official
Tetiana Gron
Hi all,
I am excited to share that we've recently rolled out a solution to this problem. Check out our announcement: Announcing a spam prevention tool in CSAT.
0
Shawn Amsberry
We see the same thing at the Oregon E-Government Program, under very similar circumstances: the vast majority of our end users are state employees of various agencies, and some of those agencies run malware checkers that set the rating to Bad without any action from the actual requester. Maybe just changing the order of the feedback links in the satisfaction survey email would help? I don't know, but it causes inconsistencies in our data too.
1
David Brown
The Zendesk agent I was corresponding with provided a link to this article, which seems to describe the same issue and a possible fix that involves altering the automation. I have not yet had time to clone the existing automation in my sandbox, make the revision, and test, but I am passing the information along. Regardless, the out-of-the-box CSAT automation should work for all configurations and take this scenario into account. Corporate environments will almost certainly include defensive email measures; if they don't, they are leaving themselves vulnerable. In my view, the binary two-link system, where the satisfaction rating is set simply by clicking a link, should be done away with entirely. Instead, provide a single trusted link with no rating pre-selected, and let end-users choose their rating and provide their free-text feedback at the CSAT destination URL.
1
Scott Allison
Thank you for your feedback, it's truly appreciated. I want to confirm that Zendesk have made no changes to our CSAT feature that would cause this kind of effect. Additionally, we've undertaken a review of the data here and it's unclear how prevalent this issue actually is, or whether it's truly skewed towards negative. It looks like any non-human input is affecting both good and bad ratings equally. That said, if any customer believes they have data that may indicate otherwise, we'd be very interested to look at that. Unfortunately this is a tricky issue and not one we can easily solve given that there is so much here that is outside our control.
Our recommended guidance is to remove the direct links in the CSAT automation for good and bad, and, instead, to link to the CSAT landing page. For how to do that, read on.
Modify your survey automation to not include direct response links
By default, the Request customer satisfaction rating (system automation) includes a block with both the good and bad response links. You can switch this out for a placeholder that contains a link to a separate page with these options instead.
To accomplish this, find the {{satisfaction.rating_section}} placeholder in the automation's email body and replace it with {{satisfaction.rating_url}}.
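For illustration, here is a minimal before-and-after sketch of the email body. The surrounding wording is a simplified example rather than the exact default text; only the two placeholders above come from the product.

Before:
Hello {{ticket.requester.name}}, we'd love to hear what you think of our customer service. Please take a moment to answer one simple question by clicking either link below:
{{satisfaction.rating_section}}

After:
Hello {{ticket.requester.name}}, we'd love to hear what you think of our customer service. Please take a moment to rate your experience here:
{{satisfaction.rating_url}}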
For more information on placeholders available in Zendesk Support, see the article: Zendesk Support placeholders reference.
0
David Brown
It looks like a community mod may have removed the quoted text from my original post. The summary is that anti-malware and anti-phishing software in use on corporate email servers may be the culprit, as it scans through the links in each message for safety. The "Bad" selection is the last link in the automation, and when a user clicks either the "Good" or "Bad" link it sometimes still sets the rating as "Bad" regardless of the actual selection. Perhaps calling these defensive software solutions the culprit is the wrong framing; they are doing their job. The problem is the construction of the CSAT automation.
2
Dave Dyson
This post was edited by the Zendesk Community Team [reason]
Again, I'm sorry that didn't happen here. We do appreciate your feedback, and thanks again for that!
0
Andrew Chu
Thanks, David Brown, for raising this post. In my group we notice the same trend: a huge spike of negative ratings in our CSAT with no reasons provided. When we reach back out to our customers, they say they never even touched the survey email. I want to check whether you noticed when the trend started to pick up; on our end, it has been happening since the end of February.
I would also ask the Zendesk team to conduct an internal investigation and see whether any internal factors are causing this. We've been using Zendesk for half a year now with the default rating format since day one, yet only recently has such a huge spike appeared.
1
David Brown
Andrew Chu, I'm not precisely sure when it started to occur. Per my initial discussion with a Zendesk agent through messaging, this problem seems to have been identified in November 2021. On my end, adoption of our service is steadily increasing, so higher ticket volume means more CSAT survey answers coming in. One particular agent, who tends to have a very high volume of tickets due to his area of expertise, became concerned after checking his rated tickets and reached out to his requesters to understand why they were giving bad ratings when the back-and-forth exchanges while working the tickets had been quite positive. That is when we discovered that they were in fact not clicking the link to provide a bad rating, and in some cases were not responding to the CSAT survey solicitation at all. This is what prompted my initial ticket via messaging and this community post, to raise awareness among other admins and get some priority put on resolving this issue for everyone.
1
Gareth Elsby
I've seen this behaviour in the wild in a previous role, where it affected our timesheet approvals process. The solution we came up with was to hide a hyperlink in the email that was designed as a 'honeytrap' for the bots.
Essentially, only a bot would ever find and click the hidden link, so if it was clicked we could say with confidence that a bot had processed the email. Could Zendesk consider the same, whereby if a hidden third link is clicked the CSAT response is nullified, and only a subsequent click, expected to come from a human, is counted? That would mitigate the risk identified here; see the sketch below.
Could this option be explored by Zendesk as a solution to combat anti-spam link clickers?
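To make the idea concrete, here is a rough, hypothetical sketch of what such a honeytrap could look like in the survey email's HTML. The nullification URL is invented purely for illustration and nothing like it exists in Zendesk today; only the {{satisfaction.rating_url}} and {{ticket.id}} placeholders are real.

<!-- Visible rating link that a human recipient would click -->
<a href="{{satisfaction.rating_url}}">Rate your support experience</a>

<!-- Hidden honeytrap link: a human never sees it, but a scanner that
     follows every link in the message will request it. If this URL is
     hit for a ticket, the next rating click on that ticket could be
     discarded as a likely bot click. -->
<a href="https://example.com/csat/honeytrap?ticket={{ticket.id}}"
   style="display:none" aria-hidden="true">not for humans</a>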
1
Rusty Wilson
Scott Allison - you said, "We've undertaken a review of the data here and it's unclear how prevalent this issue actually is, or whether it's truly skewed towards negative."
You are kidding, right? Have you not seen your own article and the "rage" it has created among a MASSIVE number of your customers? Everyone confirms the same thing: in or around October 2021, Zendesk made *some* change that resulted in a massive skew of CSAT to the negative.
Literally overnight, our CSAT dropped 20%.
How much more evidence do you need that some change you made has created a terrible skew to the negative?
ref: https://support.zendesk.com/hc/en-us/articles/4408831380122-Why-am-I-receiving-unexpected-bad-satisfaction-ratings-
0
David Brown
While I am experiencing this problem, and started this discussion, I am not completely convinced that the issue lies with Zendesk. What seems to be happening is that incoming emails are scanned for malicious links, and this scan executes in a top-to-bottom approach. The default CSAT survey email has the Good rating link first and the Bad rating link last. The scanner likely sets the score to Good first, then to Bad as the last hyperlink it scans. This is why the surveys are bot-clicked Bad and not Good: it is just the order of operations and the arrangement of the hyperlinks. If we reversed the order of the survey links I think we would still get bot clicks, but the result would be false positive ratings instead.

What I am wondering now is whether the real culprit is the email scanner itself. It could be that those affected all happen to be using the same one, and that an update to it is what has caused all of this pain.

I am still testing, in my sandbox, a CSAT survey that links only to the survey form, without a default rating being set by the click-through from the initial solicitation email. So far, out of ~40 test tickets set to Solved, each with two survey opportunities sent (an automation sends a second CSAT solicitation if there is no response to the first), no bot clicks have been detected. This solution relies on end-users making an additional click to provide their feedback: one to get to the survey, one to select their rating, and one to submit, whereas the default sets the rating based on the hyperlink clicked in the solicitation email itself. I'm hopeful that this resolves my issue, but as others have said, it may result in fewer responses, hence the two-solicitation approach I am testing and aim to deploy.
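Roughly, the follow-up solicitation automation I'm testing looks something like the sketch below. The condition and action names are paraphrased from the Zendesk automation builder, and the 72-hour delay is just my test value, so treat this as an outline rather than exact settings.

Meet ALL of the following conditions:
  Ticket: Status | Is | Solved
  Ticket: Hours since solved (calendar) | Is | 72
  Ticket: Satisfaction | Is | Offered to requester

Actions:
  Notifications: Email user | (requester)
  Email body: a short reminder containing only the {{satisfaction.rating_url}} link, with no pre-set Good or Bad links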
0
David Brown
I had completed more than 100 tests in my sandbox, with zero bot clicks, using my new CSAT schema, which redirects end-users to the survey site via a single link and does not auto-set the rating on click-through. I rolled this into production on April 14 with no bot-click incidents detected. Our email targeted-attack protection was the culprit: it scans all incoming email for targeted phishing and follows links to check for malware hosting or otherwise malicious or threatening intent. For the sake of my own company's security I will not disclose publicly which security products we use; however, were Zendesk to contact me and ask directly, that conversation may be covered under our existing CDA.
1
Gwenaelle Bos
Hi, to share our own experience with this issue: we have always known that some anti-virus software could click on links, and it has always represented a very small share of our Bad ratings.
A few weeks ago our company migrated its Zendesk instance; we moved from one contract to another. Since that day we have been witnessing a HUGE increase in the number of Bad ratings we receive (+250%, to be exact). We have reached out to customers, who confirmed they never clicked the link on purpose. How would you explain that from one month to the next, all of our clients (from different companies and different countries) would start using anti-virus software at the same time? We now receive dozens of fake Bad ratings, and this behaviour is clearly impacting our customer service. We hope this issue will be addressed!
1
Scott Allison
Gwenaelle Bos Thanks for your message, and I'm sorry to hear that this is affecting you right now. We do have a known solution for this, and it does work. It's the same one listed at the top of this page, but there's also a comment on this article from one of our Vice Presidents that explains a little more about it.
0
Rusty Wilson
@... - appreciate the info - but once again, Zendesk has completely sidestepped the *QUESTION*.
I'll post the question Gwenaelle asked, as it's the same question I asked a few months ago.
Please explain how the number of "false bad" ratings can change dramatically overnight.
In their case - when they changed contract they saw an immediate increase.
In my case - all I have to do is look at the historical metrics to see that in Oct of last year there was a massive, immediate increase.
We made no changes - at this time - we were in a configuration freeze.
I'm sorry but I do not expect that thousands of our customers chose to install an email scanning system on exactly the same day.
There is absolutely something Zendesk did on their side that exacerbated the issue.
I appreciate the pointer to a workaround, but the continual avoidance of actually answering the underlying question is concerning. It means you either don't care or you don't know.
0
Scott Allison
Rusty Wilson Thanks for the follow-up question. First up, I want to make sure you have seen the response from one of our Vice Presidents on the main article about CSAT ratings. I'll copy it below for others' benefit.
I also want to reiterate Pablo Kenney's comment that we at Zendesk have not made changes that would affect CSAT in this way.
To answer your question of why this is happening, then: our working hypothesis is that third-party email security vendors have changed something, for example the algorithms they use to decide whether or not to click on links.
0
Rusty Wilson
Thank you very much @... and Pablo Kenney.
0
Nathan Purcell
An obvious side effect of this problem is that Zendesk admins have no method of changing a user's rating.
Adding such a feature clearly allows it to be gamed (only to one's own detriment), but it should definitely be an option in either the UI or the API. It's unprofessional to have to explain the weakness of the CS platform to a customer after asking them for feedback on a negative rating, and then to have to ask them to either undo it or replace their bad rating with a good one.
I intend to adopt the stop-gap approach, but there is more to do here, Zendesk.
0