Customer satisfaction customization

22 Comments

  • Dan Cooper

    Hi Robert,

    Satisfaction surveys used to ask for confirmation before recording the rating. However, many ratings never got captured: someone would rate a ticket but never confirm, so nothing was recorded, resulting in fewer satisfaction scores overall. Having seen both in action, and seeing what you describe in my own Zendesk, I prefer the current way because I get more data than I did with a second prompt.

    However, that’s just my opinion. You have this posted in the right place for the product team to see. If enough people prefer your approach, it’s possible to sway the direction of the product design.

    0
  • Robert Jerina

    I fail to see how inaccurate feedback is better than accurate feedback.

    2
  • Dan Cooper

    Hi Robert, 

    Reading into your original post, it sounds like your main problem is ensuring you have accurate feedback data.  Given what Zendesk provides today, there are tools you can use to improve the data you look at.  It's always nice to have things built in and seamless, but as Zendesk likes to say in their marketing material, "Relationships are complicated," and it's tough to get everyone to follow the rules that create perfect data.

    We've gone through a similar process.  We identified that people were misclicking satisfaction ratings, and it happened on both good and bad surveys.  Some people clicked good even though they didn't like what we did; they just chose to re-open the ticket, or open a new one, in hopes of getting a different agent. Many refuse to give a bad rating because they don't want to tattle on someone, but they always rate positive for good tickets.

    Here is what my team did to try to get closer to accurate data.  

    We monitor re-opens and follow-up tickets.  Did we actually solve the problem, and how do we improve our processes so we solve as many tickets right the first time as we can?  If there's an outlier in our customer base, we reach out directly to understand what we could do better for their needs.

    A trigger re-opens bad-satisfaction tickets, and we work to resolve any lingering issues and collect feedback on what could have gone better.  When we re-solve, we use a macro to provide the ticket link again, with instructions on how they can update their rating if they are satisfied after our follow-up. We can then report on these bad-to-good conversions in Insights.
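    The re-open half of that workflow can be sketched as a Triggers API payload. This is a minimal sketch assuming the standard Zendesk Triggers API; the condition and action field names below should be verified against your own account before use:

```python
import json

def build_reopen_trigger():
    """Payload for POST /api/v2/triggers.json that re-opens a solved
    ticket when a bad satisfaction rating comes in, so an agent can
    follow up. Field names assume the standard Triggers API."""
    return {
        "trigger": {
            "title": "Re-open tickets rated bad",
            "conditions": {
                "all": [
                    # Fires when the requester submits a bad rating
                    {"field": "satisfaction_score", "operator": "is", "value": "bad"},
                    {"field": "status", "operator": "is", "value": "solved"},
                ]
            },
            "actions": [
                # Put the ticket back into the agent's queue
                {"field": "status", "value": "open"}
            ],
        }
    }

print(json.dumps(build_reopen_trigger(), indent=2))
```

    The macro half (re-sending the rating link after a re-solve) stays an ordinary Zendesk macro; only the re-open step needs the trigger.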

    Hopefully some of this helps you out in your quest to get more accurate feedback data for your team.

    1
  • Robert Jerina

    I think we would rather customers provide us feedback in the first place on why they feel they are unsatisfied rather than Zendesk just making it as bad because they clicked a link. 95% of our bad satisfaction ratings in the last 12 months are because end users misclick the unsatisfied link in the e-mail. That is a problem. I shouldn't have to work around a poor design to get accurate data the first time around.

    0
  • Dan Cooper

    Hi Robert, 

    95% is a really high rate of misclicks and is definitely a problem.  Have you customized your satisfaction notifications?  Ours includes two simple links: "Good, I'm satisfied" and "Bad, I'm unsatisfied".

    If that isn't working, you might also be able to customize the automation that sends your surveys. More information is located here: https://support.zendesk.com/hc/en-us/articles/203662256-Using-customer-satisfaction-rating

    Under the header "Using customer satisfaction rating placeholders," you'll find placeholders you can use to separate out your positive and negative links.  Updating the verbiage in your survey may help reduce misclicks.
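    For reference, splitting the links looks something like this in the automation's email body. The wording around the placeholders is illustrative; the placeholders themselves are the standard satisfaction placeholders:

```
Hi {{ticket.requester.first_name}},

How did we do on your request "{{ticket.title}}"?

Good, I'm satisfied: {{satisfaction.positive_rating_url}}
Bad, I'm unsatisfied: {{satisfaction.negative_rating_url}}
```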

    0
  • Robert Jerina

    When a ticket is solved, the customer gets an e-mail notifying them that it has been solved, with the two simple links "Good, I'm satisfied" and "Bad, I'm unsatisfied". We just got another one; two in two days. Even if they are real, we still WANT the customer to be required to tell us why they are unsatisfied. Not only would that fix the misclick problem, it would also make those who are unsatisfied think about, and explain to us, why they are unsatisfied, so we can follow up with them properly instead of just calling them blindly.

    I really don't think updating the verbiage is going to stop people from accidentally clicking a link. The problem is that they aren't reading in the first place, so they don't realize they're clicking a link they don't want to click. If they aren't reading it, we can change it to say whatever we want and it won't stop someone from accidentally clicking.

    1
  • Nichole Harrop

    I understand wanting to customize the survey more. However, if you require a customer to leave a comment explaining why they rated good or bad, won't you have inaccurate data at that point, because some users who would have otherwise rated "good" with no comment will abandon the survey due to the requirement? Just a thought.

    0
  • Robert Jerina

    Nichole: No. That would be incomplete and therefore wouldn't even be recorded, which is fine. If they really want to leave positive or negative feedback, they should take the time to help us understand why they are giving that rating. An incomplete survey is no data instead of inaccurate data, which we are fine with. I don't see a problem with giving us the option to require a message on the survey; if that doesn't work for others, they don't have to enable it. We would rather have accurate data than inaccurate data. This week so far we've had three unsatisfied links clicked that were mistakes (we verified with the customers).

    1
  • Andrew J

    Hello Robert,

    You could consider trying Mr. Smiley and Mr. Sad; that is a very visual CSAT option.  If people are clicking a sad face during or after an interaction with your team, that is a fairly good measure of the customer's attitude towards you at that moment.  Of course it isn't perfect; we get people who don't really realise what they are doing, but it's only a small number.  Hence you never set your CSAT target at 100%.  Depending on what you do and how you want to measure it, you make your target something meaningful to your agents.

    Trying to enforce a comment is never going to work, and may even annoy people into being less satisfied.  Anyone can input junk content to get around a 'text required' field.

    You also might need to be aware that CSAT should be offered AFTER a solution is applied or a request is completed.  If you are getting 95% bads... stop solving tickets.  Instead, try sending an email stating exactly what you have done to answer the request and inviting them to 'Let me know if this is now resolved'

    I'm looking at whether you could make a dummy CSAT that allowed them to create an email reply with comments. Just a concept at this point, but I can see some applications to your type of situation.

    1
  • Robert Jerina

    Yeah, we definitely don't expect to be at 100%, but when 100% of our negative satisfaction clicks in the last month are all verified misclicks, that becomes a little enraging. Having more ability to customize how CSAT functions would be beneficial for us. What people need to realize (Zendesk especially) is that not every Zendesk customer has the same type of support user base, and each uses Zendesk quite differently.

    I don't see a problem with WHEN we offer CSAT links to customers. As I mentioned, in the past month ALL of our unsatisfied ratings have been misclicks by our customers, randomly clicking around an e-mail without reading anything. WHEN or HOW we offer it to them is NOT going to fix this problem for us. We need a way to make people verify what they are doing when they click these links. It may work differently for you, but this is how it works for US.

    0
  • Andrew J

    Hello Robert,

    If your customer base is quite distinct, or you use Zendesk differently, you might want to consider sharing a bit more of the scenario.  I certainly do understand that use varies... A LOT!  I administer an IT support helpdesk, but also run a retail support one with social media etc.

    Can you quantify the number of misclicks?  How many?  If 50 out of 50 people click Bad instead of Good, something is definitely wrong.

    I guess I can only go so far offering free consultancy with limited information, but how about only giving them a GOOD link to click for a week or two? You can customize it that much. That way, if they click GOOD but wanted BAD, they can change it, and chances are they might explain why they changed it.

    I'd suggest that the CSAT offering possibly doesn't suit your use case, in which case I'd be looking to modify it.  There is quite a bit you can do, presuming this isn't restricted by your plan.  The easiest way would be to take it outside of Zendesk but capture the ticket number so you can tie it back.

    0
  • Dan Ross

    Robert, have you considered an external service for your CSAT? Many, such as CheckMarket, offer surveys that link to Zendesk via the API and can update the ticket with responses. You can control when the survey is sent with triggers and automations, and provide the user with a button or link in the email to respond. This might reduce the number of misclicks.
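    As a sketch of the write-back side of that: the external tool's result can be posted to the ticket through the satisfaction rating endpoint of the Zendesk Support API. The subdomain and ticket ID below are hypothetical, and the request must be authenticated as the ticket's requester:

```python
import json
import urllib.request

SUBDOMAIN = "example"  # hypothetical Zendesk subdomain

def rating_request(ticket_id: int, score: str, comment: str) -> urllib.request.Request:
    """Build a POST to /api/v2/tickets/{id}/satisfaction_rating.json
    recording the score ("good" or "bad") with an optional comment."""
    url = (f"https://{SUBDOMAIN}.zendesk.com"
           f"/api/v2/tickets/{ticket_id}/satisfaction_rating.json")
    body = json.dumps({"satisfaction_rating": {"score": score, "comment": comment}})
    return urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        method="POST",
        headers={"Content-Type": "application/json"},
    )

req = rating_request(12345, "good", "Resolved on the first try.")
print(req.get_method(), req.full_url)
```

    Add your own authentication (e.g. an OAuth token for the end user) before sending; the sketch only builds the request.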

    0
  • Phil Havlik

    I say this not to brag, but we have a great track record of maintaining extremely high CSAT ratings, often hovering between zero and one or two negatives. We work to follow up on any negatives received, attempt to resolve the concerns, and have the clients reverse them when possible. In fact, we have an automation established to follow up with anyone who marks unsatisfied.

    We broadcast our ratings both on our marketing site and on our Guide home page to highlight the great work of our support team.

    We have been receiving multiple negative ratings as of late. When we follow up, the clients are completely clueless about them. They never opened the emails containing the surveys, and they certainly did not click a survey option one way or another. To the best of our understanding, this is caused by anti-virus software that scans and tests links within the emails. The software opening the link appears to be registering unsatisfied ratings and in turn impacting our numbers.

    All of this is to say: if, as Dan mentioned above, confirmation clicks used to be standard, I would like to see sites be able to enable or disable the confirmation step. The alternative workarounds appear to be creating our own URLs and using the API to pull the data (with a CAPTCHA to confirm submission), or using a third-party vendor. I would really prefer to remain within Zendesk.

    Thanks

    2
  • Nicole - Community Manager

    Hey Phil - 

    That sounds like it may be a bug and something our devs should know about, so I'm going to create a ticket for you so they can see if there's a way to fix it. Congrats on your great CSAT scores! 

    1
  • Robert Jerina

    Yeah, I'm not tooting our own horn either; we actually have a very low amount of negative feedback, mostly because we're a smaller company working with a few select people inside other, larger companies. Our work is very hands-on and customer-oriented. So when 90-95% of the negative feedback we've gotten over the last six months is either misclicks or mistakes, when it definitely hadn't been that way before, something somewhere changed with how the satisfaction surveys work. We NEVER got a mistaken negative survey prior to the last six months or so; now we get these mistaken ones constantly. We follow up with all of our negative feedback whether we believe it to be a mistake or not. It sucks because these mistakes ding our really high satisfaction rating. Since we can't rely on the rating being correct, we don't publish it, even though we strive to be as close to 100% satisfaction as we can get. It's increasingly frustrating that something changed somewhere, and no one at Zendesk seems to think it's a problem; instead, we must be doing something wrong.

    We haven't changed the way we do satisfaction ratings since we implemented them in 2015, when we moved to Zendesk. Now we can't rely on them at all, and we're paying Zendesk a nice chunk of change for features like this. They should work, be accurate, and not produce silly errors like this. Instead I'm sent to this community to make a post and try to get votes for them to fix it, so this feature can work how it used to (aka how it should work).

    0
  • Chris

    I was told to "me too" on this thread, so: yes. I can't speak to misclicks, but that falls under the umbrella of the following:

    Let your customers customize the satisfaction survey landing page.

    I'm not sure why this is the only element of Zendesk (end-user facing) that is completely locked and static.

    At minimum, allow the ability to hide the free-form comment field, as it appears customers use it to "continue the email chain" (causing chaos), which, looking at other Zendesk forum posts, is a common phenomenon for many of your customers. Thanks.

    1
  • Mark Powell

    @Nicole We have found the same issue as Phil Havlik. We are seeing "bad sat" from some customers, and these customers assure us that they do not personally click the "bad sat" hyperlink.  They have standard corporate email protections that call hyperlinks within emails to check for maliciousness. As "Bad, I'm unsatisfied" is the last option, it is the last URL called by their corporate email protections.

    Please could you consider how to fix this and let us know if there is a bug ID that we can follow...

    0
  • Bulent Yazici

    We have the same problem as well. One client is consistently giving us "bad" reviews. We've spoken with them, and they say that they are extremely happy with the tickets and have no idea why they are automatically giving us bad reviews; they haven't even clicked the links in the emails. But they do have an overzealous email AV scanner, which must be following the links.

    Any ideas on how to stop this happening would be welcome because they are starting to skew our figures.

    0
  • Mark Powell

    This is only a workaround, but you can follow this article:

    https://support.zendesk.com/hc/en-us/articles/115012836948-Why-am-I-receiving-unexpected-bad-satisfaction-ratings-

    Unfortunately it means your customers have to make an extra click to make their rating.
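    The reason the extra click helps is that link scanners issue plain GETs, so nothing should be recorded until an explicit submit. A hypothetical, self-hosted sketch of the same idea (a concept outside Zendesk, not how Zendesk implements its own rating page):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Confirmation form shown on GET; nothing is recorded at this point.
FORM = b"""<html><body>
<p>Please confirm your rating:</p>
<form method="post" action="/rate">
  <button name="score" value="good">Good, I'm satisfied</button>
  <button name="score" value="bad">Bad, I'm unsatisfied</button>
</form>
</body></html>"""

class RatingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A GET (human click or email link scanner) only renders the form.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FORM)

    def do_POST(self):
        # Only an explicit button press lands here; record the rating,
        # e.g. by writing it back to the ticket via the API.
        length = int(self.headers.get("Content-Length", 0))
        submitted = self.rfile.read(length).decode("utf-8")  # e.g. "score=bad"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Thanks, your rating was recorded.")

# To serve it locally:
# HTTPServer(("localhost", 8080), RatingHandler).serve_forever()
```

    Capturing the ticket number in the form URL would let the POST handler tie the rating back to the original ticket.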

    0
  • Aravind Ragunathan

    Just having the ability to make the comment field mandatory on the satisfaction page would avoid a lot of 'Unsatisfied' misclicks. When a customer clicks "Bad, I'm unsatisfied", if I could force a comment, the customer would either leave a good comment or give more verbose feedback on why the experience was bad.

    0
  • Andrew J

    Last csat link in the email gets the hit, so just make the good link last and enjoy a few accidental good csats. Any objections?

    0
  • Ruth Nogueron

    We are definitely interested in additional customization options for the feedback survey. We would like the ability to require a comment or reason selection in order for negative feedback to be submitted. We find that either the end user selected the bad rating by mistake, or their selection of bad was not due to agent error; by selecting a reason or adding a comment, they would let us quickly tell whether the bad satisfaction is tied to the agent or they're unhappy for other reasons. Although we follow up with each bad rating, unfortunately we don't always get a response back, and even when we get a reply saying they actually were not unsatisfied, the end user still doesn't always change the rating as requested. Having the reason in black and white helps us provide the correct backstory when giving ratings reports to our executive team.

    Any chance this is going to go anywhere?

    2
