Why am I receiving unexpected bad satisfaction ratings?


52 Comments

  • Zulq Iqbal

    Hi

    We have recently started receiving "false" ratings from our customers, as many as 7 in 30 days. Prior to this, in over 3 years and 1,800 CSAT responses, we never received a single false rating (we always follow up on bad ratings to improve our service, in case you were wondering how I know that).

    Every case where I have discussed these false bad ratings with my customers has ended with the same answer: they all use MessageLabs, and they all use its Phishing Defense feature. Surely Zendesk can get in touch with MessageLabs to have Zendesk URLs trusted and excluded from URL checking in the first place, rather than us having to make sacrifices by changing the rating system?

    The fact that this has only just started happening for us makes me think something actually changed within Zendesk that no one is admitting to. Nonetheless, it would be such a shame for us, for Zendesk, and for our customers if the rating system had to be made more difficult for everyone.

    I would love to hear whether Zendesk will look into this further.

    Thanks

    Zulq

     

  • Mike Squance

    Hi,

    We have also received several bad ratings in the last month or so where the customers did not intend to provide them. The information about MessageLabs is interesting, and we will try to confirm whether that is the cause in our cases. Like Zulq, we had none until recently.

    I hope Zendesk will consider that suggestion.

    Mike.

  • Michael Abraham

    Hi,

    We recently started receiving bad ratings from just certain clients as well. We haven't looked into what type of phishing software they are using, but I suspect something similar is happening. We reached out previously, and the client wasn't even aware they were sending survey responses. I went ahead and modified the survey automation and will see how that goes.

    As mentioned above, did something change recently that we should be aware of? 

     

    Thanks,

    Mike

  • Greg

    Hi Michael! As mentioned in the article, this is usually a result of anti-virus software scanning the email and then selecting the last available option, which is the bad rating. If you reach out to your customers and can confirm that this is not what is happening, send us a ticket at support@zendesk.com and we'll dig into this further for you!

  • Zulq Iqbal

    Hi Greg

    Just to let you know, I raised two tickets with your support team and received the same answer twice, quite similar to your response. No further digging was carried out.

    If you read my comment above, you will see that my experience of this issue is with MessageLabs and their Phishing Defense feature. I'm not sure if other customers have experienced the same, but when I advised your support team, they said they would look into it. That was over 2 months ago. This has been a horrible experience for us and our customers, and asking administrators to amend the CSAT system is frankly disappointing.

    The reason the one-click system works for us is that customers have a quick and easy way to leave feedback. If we remove the direct links, we are forcing customers to click twice, maybe three times, before they leave feedback, and by that point they just can't be bothered. I'm certainly not: as one of those people who receives around 5 CSAT surveys a day from suppliers, I can tell you that if I have to click more than once, I won't leave a rating.

     

    Zulq

  • Chris Swinney

    I personally think that if you want to give bad feedback, then a two-click system should be employed. You do NOT want a customer to leave bad feedback without giving a reason (I would personally consider that quite rude). However, we too are suffering from these issues, and the current system is clearly not functional or acceptable.

    A customer has reported that they apparently gave bad feedback when they switched on their Outlook out-of-office (OOO) auto-reply. Clearly, something is wrong in the way Zendesk processes these responses.

  • Nicole - Community Manager

    Hi Chris and Zulq - 

    Our team is aware of this issue and has it in their backlog. Unfortunately, it may be a while before they are able to resolve it, as there are several more business-critical things they have to deal with first. But thank you for raising it and making us aware of this issue.

  • Rob Baker

    Since migrating to Zendesk in August, we are now seeing a 2% false-negative rating rate, where Customers claim they did not "click on anything". This erodes Customer confidence and leads to very awkward and unnecessary bad-CSAT follow-up conversations.

  • Elizabeth Ronquillo

    We are also seeing false negative ratings where customers said they did not "click" on anything.

  • Crystal Little

    We saw this issue a few months ago and have had several again this week. 

    Any update on the Zendesk side for a fix? 

  • Chris Swinney

    Unfortunately, we have raised several support tickets with Zendesk about this very thing. Several points came to light from those conversations:

    1. Zendesk do not consider this a bug (which it clearly is).
    2. Zendesk would rather engage organisations paying for support via their public forums than via their support channels.
    3. It seems the support team have no direct access to, or influence on, the R&D or product teams, which IMHO is appalling.
    4. The support team care little for this particular issue, which seems to affect multiple subscribers and their customers.

    I would like to say something positive came out of the support tickets, but unfortunately I can't. Zendesk's position is that the product is behaving by design. When I pointed out that the design is therefore flawed, which in itself is a bug, they simply brushed this aside. A bug in software is an "error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways." I would suggest this issue fits squarely in that bracket.

    To be honest, we have never seen genuine bad feedback submitted without an additional comment. Indeed, my suggestion was to let Zendesk customers ensure that no bad feedback can be submitted without a comment; bad feedback submitted without a comment is completely pointless anyway. This would stop any system that follows the embedded links from triggering negative feedback automatically (although other safeguards should be in place to ensure this doesn't happen either). And yes, we have designed our CSAT email to include separate links, as per Zendesk's guidelines.

    One of our most valued customers (who has NEVER knowingly left bad feedback) says these false negatives seem to be raised when they switch on their OOO auto-responders.

    This absolutely needs to be treated as a bug, and fixed.

  • Nicole - Community Manager

    Hi Rob, Zulq, Elizabeth, and Crystal, 

    Thanks for adding your voices to the conversation. We do not have any updates at this time, other than to say that this issue has been raised with the product team that works on the CSAT survey, and it's something they're looking into. 

    The comments here have not been ignored, but as the team has been digging into this, it has become clear that there are several challenges involved in changing this behavior (such as how to deal with the huge variety of auto-responders and filters), and further analysis will be required to find the solution that works best for the majority of users.

    To speak to why it seems like there has been a delay: the team has been hard at work on some other fixes to CSAT that were determined to be more business-critical for a larger number of users. To that end, they've announced that they'll be rolling those out starting next week. You can read the announcement about that here.

    I'll continue to engage with them regarding this issue as they go to work planning the next round of improvements, and have flagged this comment thread for review. 


    Chris, I see in your ticket that you're currently engaged with the Voice of the Customer team regarding your specific concerns, so I'll leave them to respond to you there. 

    However, to speak to the questions raised regarding how we collect and handle feedback from users: the support team does provide data and feedback to the product teams, but it is done in aggregate rather than on a case-by-case basis. We receive hundreds of pieces of product feedback every day across all of our channels, so we cannot simply fire an email off to the product teams every time we hear something. We collect the feedback and responses from all of those channels, aggregate them, and send everything through to the Voice of the Customer team, which then synthesizes that information, looking for trends, patterns, and business-critical issues that impact a lot of users, and passes that data to the product teams to take into account during their planning sessions.

    This is one of the reasons that the community is currently the official channel through which to submit product feedback, and why you may have been directed there when providing feedback in the context of a ticket or call.

    Our Customer Advocates are instructed to direct users to the product feedback topics in the community so that users can engage with one another and vote on ideas, which helps us determine which issues affect many users and which are anomalous to just a few. For example, the related feedback conversation on forcing end-users to provide a reason for a bad satisfaction rating has received 4 votes, in addition to the comments on this thread, so it has not been communicated to us as a high priority for many users. Other aspects of CSAT have received much higher engagement, or been determined to have greater business-critical impact, and have therefore been prioritized for development first, such as the improvements for mobile-responsive surveys being rolled out next week.

    That project came about in response to a significant number of users being negatively impacted by a non-responsive survey, and the feedback we heard helped us to prioritize that. We do really value and appreciate everyone's input, and it's clear that you care deeply about this issue. Thank you for your comments. 

     

  • Rob Baker

    Thanks for the reply, Nicole. TBH, as painful as it is to handle this small number of false negatives, I much prefer that over the behavior having the opposite effect, where an actual bad CSAT gets silently changed to good. I also disagree that every rating should require a comment. We use comments as a measure of added Customer engagement.

    :thumbs up: + comment - Above and beyond the call of duty

    :thumbs up: - nicely done

    :thumbs down: - Customer is not happy, and it should be obvious why from the ticket

    :thumbs down: + comment - Customer is pissed, and you had better come back to them with an informed follow-up to see how to make the situation better.

  • Josh Fulton

    Sharing a workaround that we've employed for this. We started delivering two versions of the satisfaction survey via automations:

    1. Our default survey is the embedded form within the email because it has the highest response rate: {{satisfaction.rating_section}}
    2. Our alternate survey is an external link to the survey form: {{satisfaction.rating_url}}

    We determine who receives which survey based on the presence of an organization-level tag. If we confirm that a client has a virus checker that is triggering these unexpected bad ratings, we can tag that organization and they'll then receive the external link for future tickets.

    This should allow us to keep response rates high and CSAT scores accurate.
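
    For anyone who wants to script the tagging step, here is a minimal sketch against the Zendesk Organizations API (the subdomain, credentials, and tag name below are placeholder examples, not our real values):

        # Sketch: add the survey-routing tag to an organization via the
        # Zendesk Organizations API. All identifiers here are examples.
        import requests

        SUBDOMAIN = "example"                      # your Zendesk subdomain
        AUTH = ("agent@example.com/token", "YOUR_API_TOKEN")
        TAG = "external_csat_link"                 # example tag name

        def tag_organization(org_id: int) -> None:
            url = f"https://{SUBDOMAIN}.zendesk.com/api/v2/organizations/{org_id}.json"
            org = requests.get(url, auth=AUTH).json()["organization"]
            tags = sorted(set(org["tags"]) | {TAG})  # append, don't overwrite
            requests.put(url, auth=AUTH, json={"organization": {"tags": tags}})

        tag_organization(12345)

    Since Zendesk applies organization tags to new tickets, the two automations can then branch on a tag condition to decide which placeholder goes out.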

  • Yener Adal

    Hi everyone, 

    We're experiencing the same issue with our clients that use MessageLabs. No doubt we've received a few inadvertent 'Good' ratings too, but we don't follow up on those. Why would you? It seems great!

    Anyway, our workaround is to adjust the email template so that the rating links point to a hidden page on our public website, with the good or bad satisfaction rating URL appended to it.

    Once our hidden page is hit, we perform a GeoIP lookup to determine which country the visitor is from. If they are from Australia, we redirect them to the appended URL, either the good or the bad one. If they're outside Australia, we do nothing.

    Seeing as 99.9% of the ratings we receive are from Australia, this is a good option for us. It still ensures that our customers can provide feedback with a single click, and it keeps our CSAT integrity high.

    This will do for us until Zendesk can sort out a better, more integrated solution.
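
    In case it helps anyone, here is a minimal sketch of what the hidden page does, written with Flask and the geoip2 package against a MaxMind GeoLite2 country database (the library choices, route, and parameter names are illustrative, not our exact implementation):

        # Sketch: follow the appended rating URL only when the visitor's IP
        # geolocates to Australia; otherwise swallow the click. Names are
        # examples, not details from the comment above.
        from flask import Flask, abort, redirect, request
        import geoip2.database
        import geoip2.errors

        app = Flask(__name__)
        reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

        @app.route("/csat")
        def csat():
            target = request.args.get("rating_url", "")
            # Only follow Zendesk rating URLs, to avoid an open redirect.
            if not target.startswith("https://example.zendesk.com/"):
                abort(400)
            try:
                country = reader.country(request.remote_addr).country.iso_code
            except geoip2.errors.AddressNotFoundError:
                country = None
            if country == "AU":
                return redirect(target)
            return ("", 204)  # likely a scanner: do nothing

    The obvious trade-off is that a genuine rating from an overseas customer is dropped too, which, given our numbers, we can live with.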

  • Karie Wohlgemuth

    We have had a similar experience, and it is frustrating and very discouraging for our support team. We also have a workflow to follow up on all negative ratings, which creates a lot of unnecessary work for scores that were never intended. We have ended up excluding users and organizations from the survey to keep the false negatives down, and we are probably missing an opportunity to capture true feedback.

    A bigger issue I have, though, is that our data in the system is inaccurate. I believe Zendesk uses this data for their benchmarks, but it's bad data. We can't change the ratings even when we get confirmation that a survey response was invalid or inaccurate. Our team is bonused on their scores, which means that every month I have to go through the responses by hand, figure out the real scores, keep track of tickets and side conversations and the like, and report in spreadsheets on how the team is really doing. This is a waste of time and technology. I don't believe every support rep should be able to change scores, but why not allow an Admin to change an incorrect score (and audit the change, of course) so that the scores, results, and history are actually accurate?
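
    For what it's worth, the monthly recalculation boils down to something like the sketch below, assuming an export with a score column and a hand-maintained confirmed_false column (both column names are assumptions, not a Zendesk export format):

        # Sketch: recompute CSAT after dropping responses a customer has
        # confirmed were never intended. Column names are examples.
        import csv

        def adjusted_csat(path: str) -> float:
            good = bad = 0
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    if row["confirmed_false"] == "yes":
                        continue  # confirmed-invalid rating: exclude it
                    if row["score"] == "good":
                        good += 1
                    elif row["score"] == "bad":
                        bad += 1
            total = good + bad
            return 100.0 * good / total if total else 0.0

        print(f"Adjusted CSAT: {adjusted_csat('csat_export.csv'):.1f}%")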

  • Nicole - Community Manager

    Thank you for sharing your thoughts, Karie.

  • Rob Baker

    To add to what Karie has mentioned, and as I stated previously, we now see a full 50% false-negative rate in our CSAT using the built-in Zendesk surveys. Although our members aren't bonused on CSAT, we too have closed-loop feedback mechanisms, KPI targets, and board-level visibility. It not only makes us look bad to have to explain the false negatives, it reflects negatively on Zendesk as our choice of tool for this purpose.

    Notes
    Unsuspended member account (errant Bad rating?)
    IE saga 
    Bad rating due to ZD account setup and not answer
    Member unable to access Community and connect social channels via SSO
    ZD false negative
    Member unable to reconnect their LN account (Service and Product improvement)
    ZD false negative
    Verbose member product feedback 
    ZD false negative
    ZD false negative
    ZD false negative
    ZD false negative

     

  • Tal Admon

    To add to the above, we noticed that each time a customer gave a "bad" score without knowing it, the response arrived at minute 0 or 30 past the hour (e.g. 10:00:02 or 16:30:05).
    When we break the responses down by minute of response time, we get the following picture:

    [chart not included in this post: CSAT responses broken down by minute of the hour]

    This raises a few questions, because if it were a virus scanner in the majority of cases, I'd expect a different split between bad and good scores.

    Also, since Zendesk Support hasn't addressed the timing issue: could other customers please export their CSAT responses and analyze them by minute of response? I'm curious to see whether others are experiencing the same.
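
    Here is roughly the analysis I mean, as a minimal sketch that assumes an export with an ISO-8601 created_at timestamp column (the column name is an assumption about your export):

        # Sketch: bucket CSAT responses by minute of the hour to make any
        # 0/30-minute spikes visible. The CSV column name is an example.
        import csv
        from collections import Counter
        from datetime import datetime

        counts = Counter()
        with open("csat_export.csv", newline="") as f:
            for row in csv.DictReader(f):
                ts = datetime.fromisoformat(row["created_at"])
                counts[ts.minute] += 1

        for minute in range(60):
            print(f"{minute:02d} {'#' * counts[minute]}")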

  • Rob Baker

    What an astute observation, Tal! Here is the distribution of events by minute for our confirmed false negatives:

    [chart not included in this post: confirmed false negatives by minute of the hour]

    That definitely looks like a statistical anomaly, and it contradicts the premise that the false negatives are caused by email scanners at the time the email is opened or delivered. The pattern seems to be the same for some tickets we assumed were false negatives but were not able to prove out.

  • Tal Admon

    Adding to my previous comment: *all* of the CSAT answers at 0 and 30 minutes contain no textual comment, which supports the assumption that they have an automated origin.

    @Zendesk - is there any more data you can pull from your DB about the user agents or referrers of the responses at 0 and 30 minutes versus the others?
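
    In the meantime, here is a minimal sketch of the flagging heuristic these observations suggest: a bad rating, with no comment, recorded exactly at minute 0 or 30 (the field names are assumptions about an exported data set, not a Zendesk API):

        # Sketch: flag ratings consistent with an automated click, i.e. bad,
        # commentless, and recorded at :00 or :30. Field names are examples.
        from datetime import datetime

        def is_suspect(rating: str, comment: str, created_at: str) -> bool:
            ts = datetime.fromisoformat(created_at)
            return rating == "bad" and not comment and ts.minute in (0, 30)

        print(is_suspect("bad", "", "2019-03-07T16:30:05"))  # True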

     

  • Sean Cusick

    Hey Tal, it might be best for you to open a ticket with us at support@zendesk.com so that we can investigate further. We have seen instances of virus scanners triggering bad satisfaction scores, but those should occur uniformly across the hour, since each email is scanned as it is received and then passed on to the intended recipient's inbox. The behavior you are seeing appears to have a different source. I'd like to look at a cross-section of those specific recipients and their server systems to see whether there is something we can investigate further; perhaps a periodic function that runs twice an hour on some systems is causing this. Please mention this article and comment thread when you open the ticket. We appreciate your assistance with this.

  • Tal Admon

    Thanks Sean - already done: ticket ID 4476173.

  • Sean Cusick

    Hey Tal, thanks for sending that. We do recommend that you follow the recommendations made in that ticket. Please let us know if that does not resolve the issue for you.

  • Chris Swinney

    @Sean, as has been mentioned numerous times, this doesn't resolve the issue.

  • Tal Admon

    Thanks, but solution 1 will reduce the response rate, and solution 2 is not something I'd like to explore at this point, nor is a 3rd-party add-on like Customer Thermometer.

    I think the responsibility for filtering out spam and irrelevant clicks on the CSAT falls on Zendesk (in a similar way to how you keep the ticket queues clean by filtering out spam and OOO responses).

    The CSAT KPI is one of the most important KPIs we use in the team, and I'd like to have confidence that the data is kept sanitized.

     

  • Sean Cusick

    Hello all, I will pass this feedback on to our Product and Dev teams. Because this is a relational issue, there might not be a complete solution on either side of the problem, though we will see what we can discover. It does seem that something programmatic is opening these links. We might not be able to stop that from happening, we might not be able to detect a pattern in the types of systems causing it, and there may be inadvertent responses mixed in with legitimate ones. I'm not sure that filtering suspect bad responses alone would maintain the integrity of the survey. We have offered a workaround that does work for some customers; we apologize that it might not be the best fit for you. Our Product team will need to weigh in on this first. If I find out any more, I will update this comment thread. We appreciate all of your input and assistance with this.

  • Rob Baker

    I would prefer that we not mix the two issues. One has to do with a definite false-negative CSAT problem: in every instance, the users we have followed up with have no idea what we are talking about (including savvy users who have provided feedback previously), and the responses suspiciously gravitate to specific time intervals at the top and halfway through the hour.

    The other has to do with a philosophical belief by some that Customers should provide a reason for their dissatisfaction, presumably to improve the efficiency of the follow-up process rather than to act as a flag that triggers it.

    Solving the first issue should be the highest priority; making CSAT require a comment or reason should be a configurable option to address the second.

  • Mark Powell

    After raising 2 tickets about this, I followed "Solution One" above. It is really suboptimal, though.

    This needs to gain more traction. It is not right that we cannot trust CSAT.

    For most businesses, CSAT is the #1 KPI for measuring successful customer service, and if it is not 100% accurate, can we really trust any of the statistics?

  • Sean Cusick

    Hello everybody,

    I was able to confirm that ratings appearing on tickets in bulk at the top and half of every hour is expected behavior. It relates to virus scanning only insofar as these were ratings made solely by clicking/opening the URL links in the satisfaction email: any rating where the user adds a comment, or subsequently clicks the Submit button on the landing page to confirm the rating, is added immediately, while the rest are stored as intentions and added to tickets twice an hour.

    Rob, you are correct. I should not have made a suggestion for any change in behavior. I was thinking of ways that this issue might be mitigated and had only hoped to offer a helpful idea. I have removed that suggestion from my comment. 

