Fine Tuning: Taking action on Customer Satisfaction
This Fine Tuning session is about turning Customer Satisfaction surveys into positive actions, including:
- Setting up business rules to manage CSAT
- Best practices and recommendations for processing your feedback, following up on it, and turning bad ratings into good
Aaron Pewtherer, a Launch Manager on the Zendesk Customer Success team, has been with Zendesk since July 2011, starting as a Customer Advocate and moving through Tier 2 Support and Professional Services before joining Customer Success.
See all the Fine Tuning series discussions.
Part 1: Setting up business rules to manage CSAT
When you enable customer satisfaction, a default automation, Request customer satisfaction rating (System Automation), sends the survey after 24 hours.
This setup works well for most companies that have a longer response cycle. You can change this easily by editing the value in the Ticket: Hours since solved field.
If your support response cycles are typically shorter (e.g., most tickets take 5 minutes to answer), you might want to reduce the amount of time you wait before surveying customers. If your response cycles are measured in hours or days, 24 hours or longer might be adequate. Adjusting your timing to fit your response cycle can improve your CSAT response rate.
In addition to customizing when the survey is sent, you can also use business rules to limit who it’s sent to. We recommend excluding as few customers as possible. Avoid the temptation to cherry-pick customers or tickets that you think are more likely to give you a positive CSAT. The idea behind CSAT is that asking a wide array of customers, not a selected subset, gives you the most accurate and useful results.
That being said, there are certainly some situations when you might not want the user to get a CSAT survey, like the following examples:
- Misfires : Tickets created by people who are not customers. For example, here at Zendesk, we sometimes get tickets from people who confuse Zendesk with one of the companies that uses Zendesk to support its product.
- Out of scope : Support interactions where the ticket remained open (stale) for an extended period (bug fix, feature request, etc) or where a conversation stalled because of an edge case request.
- Strong silent types : Requesters who specifically ask not to be contacted.
- Group exceptions : Tickets created in specific groups where satisfaction is less of a concern or unwanted, such as Legal, Security, etc.
Agents can add a tag, such as no_csat, to tickets matching the situations above. To prevent CSAT surveys from being sent in these cases, add conditions to your Request customer satisfaction rating automation to exclude tickets with a certain tag or, for the last example, group.
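If you manage business rules programmatically, the same exclusions can be expressed as conditions on the automation through the Zendesk REST API. The sketch below is an assumption-laden illustration, not the exact schema: the `"SOLVED"` hours field name and the group ID are placeholders, so fetch your real automation with `GET /api/v2/automations.json` and copy its condition fields before updating anything.

```python
# Sketch: restricting the CSAT automation via the Zendesk Automations API.
# Condition field names here ("SOLVED", "current_tags", "group_id") are
# assumptions -- verify them against your own automation's JSON first.

def build_csat_automation_update(hours_since_solved, excluded_tag, excluded_group_id=None):
    """Return an automations-API payload that sends the survey only for
    tickets solved at least `hours_since_solved` hours ago that do not
    carry `excluded_tag` (e.g. no_csat)."""
    all_conditions = [
        # "Hours since solved" condition (field name assumed).
        {"field": "SOLVED", "operator": "greater_than", "value": str(hours_since_solved)},
        # Skip tickets agents have flagged as survey exceptions.
        {"field": "current_tags", "operator": "not_includes", "value": excluded_tag},
    ]
    if excluded_group_id is not None:
        # Skip a whole group (e.g. Legal or Security).
        all_conditions.append(
            {"field": "group_id", "operator": "is_not", "value": str(excluded_group_id)}
        )
    return {"automation": {"conditions": {"all": all_conditions, "any": []}}}

payload = build_csat_automation_update(24, "no_csat")
# PUT this to /api/v2/automations/{id}.json with admin credentials, e.g.:
# requests.put(f"{base_url}/api/v2/automations/{automation_id}.json",
#              json=payload, auth=(email + "/token", api_token))
```

Building the payload separately from the HTTP call makes it easy to review the conditions before touching a live automation.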
Part 2: Using satisfaction responses
Now that you live in a perfect world where you’ve received a high percentage of valuable feedback from an equal cross-section of customers, what do you do?
Keeping up with feedback
After you’ve started gathering your satisfaction responses, you should review customer feedback on a regular basis. Administrators can view the overall customer satisfaction score and individual ratings and comments in the Customer Satisfaction dashboard.
From the dashboard, administrators can review ratings, click through a rating to review its associated ticket, and export ratings and feedback to share with the rest of the organization. For details, see Viewing customer satisfaction ratings.
Agents can’t access the CSAT dashboard. They can see an overview of satisfaction ratings in the agent dashboard, but can only access ticket-by-ticket details in a view. The view Rated tickets from the last 7 days is automatically activated when you enable CSAT. You can customize this view or create a new one using the Satisfaction column.
Another way to share CSAT information across your organization is by sending it to an external target, such as Yammer or Geckoboard. This helps your entire organization, including departments that don't typically view tickets in Zendesk, see what customers are saying and stay involved with the support process. For details on how to set this up, see Notifying external targets.
Notifying managers of bad ratings
You can set up a trigger to notify the agent’s manager when a bad CSAT rating comes in. This allows managers to get emails alerting them to potentially problematic tickets. Managers can then review the tickets and follow up with agents as needed.
For example, your trigger might use the condition Ticket: Satisfaction > Is > Bad, with an action that sends an email notification to the agent's manager.
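For teams that manage triggers through the API, such a trigger could be sketched as follows. The `satisfaction_score` condition field matches the Zendesk Triggers API; the manager's user ID and the notification wording are placeholders of my own, not values from the article.

```python
# Sketch of a trigger that emails a manager when a bad rating arrives,
# via the Zendesk Triggers API. The manager user ID and message text
# are placeholders.

def build_bad_rating_trigger(manager_user_id):
    return {
        "trigger": {
            "title": "Notify manager of bad satisfaction rating",
            "conditions": {
                # Fire when the ticket's satisfaction rating is bad.
                "all": [{"field": "satisfaction_score", "operator": "is", "value": "bad"}],
                "any": [],
            },
            "actions": [
                # Email a specific agent; value is [user_id, subject, body].
                {
                    "field": "notification_user",
                    "value": [
                        str(manager_user_id),
                        "Bad CSAT rating on ticket {{ticket.id}}",
                        "Ticket {{ticket.id}} ({{ticket.title}}) received a bad "
                        "rating. Please review and follow up with the assignee.",
                    ],
                },
            ],
        }
    }

trigger = build_bad_rating_trigger(12345)
# POST to /api/v2/triggers.json to create the trigger.
```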
Create a custom drop-down ticket field with reason codes corresponding to departments or processes
Customers might leave bad CSAT ratings for a variety of reasons. For example, they might be unhappy with how the agent handled the ticket. Or, they might have been satisfied with the agent’s service but are unhappy with the product’s limitations. To help you follow up efficiently on feedback, you can create a custom drop-down field for the reason:
Agents or managers can review the ticket when the CSAT rating comes in and select the appropriate reason. You can then add the field to a view and review it weekly to identify patterns. For detailed instructions, see Adding and using custom ticket fields.
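Such a drop-down field can also be created through the Zendesk Ticket Fields API, where drop-downs use the `tagger` type and each option carries a display name plus a tag-safe value. The reason codes below are examples only; substitute your own departments or processes.

```python
# Sketch: creating a "CSAT reason" drop-down via the Ticket Fields API.
# The field title and reason codes are illustrative assumptions.

def build_reason_field(reasons):
    """Build a ticket-field payload; drop-down fields use type "tagger",
    and each option needs a display name plus a tag-safe value."""
    return {
        "ticket_field": {
            "type": "tagger",
            "title": "CSAT reason",
            "custom_field_options": [
                {"name": name, "value": name.lower().replace(" ", "_")}
                for name in reasons
            ],
        }
    }

payload = build_reason_field(["Agent handling", "Product limitation", "Billing process"])
# POST to /api/v2/ticket_fields.json with admin credentials.
```

Because each option's value doubles as a ticket tag, the reason codes can also be used directly in views and triggers.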
Following up to turn bad ratings into good
When a bad satisfaction rating comes in, be sure to reach out to the customer to follow up as soon as possible. Apologize for the bad experience and, if possible, fix their original problem. Once you’ve taken steps to address this, give the customer a chance to rate the ticket again.
One way to do this is by using the placeholder {{satisfaction.positive_rating_url}} to send the customer a URL they can click to change the ticket rating. After following up on the bad rating and resolving the issue, agents can add this placeholder to the ticket.
You can also automate this process by adding a tag, such as csat_followup, to tickets with a bad rating that have been followed up on. Then, configure a trigger to send a notification to the customer that gives them the opportunity to change their rating. See Joseph's community tip, Using triggers to get the most out of customer satisfaction - Re-evaluation, for step-by-step instructions.
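The tag-plus-trigger workflow above could be sketched as an API payload like this. The condition fields follow the Zendesk Triggers API; the email wording is my own placeholder, while the `csat_followup` tag and the `{{satisfaction.positive_rating_url}}` placeholder come from the article.

```python
# Sketch of the re-rating trigger: when an agent tags a followed-up,
# badly rated ticket with csat_followup, email the requester a link
# they can click to change the rating to good.

FOLLOWUP_TAG = "csat_followup"

def build_followup_trigger():
    return {
        "trigger": {
            "title": "Offer re-rating after follow-up on a bad CSAT",
            "conditions": {
                "all": [
                    # Only badly rated tickets the team has followed up on.
                    {"field": "satisfaction_score", "operator": "is", "value": "bad"},
                    {"field": "current_tags", "operator": "includes", "value": FOLLOWUP_TAG},
                ],
                "any": [],
            },
            "actions": [
                # Email the requester; the placeholder renders as a
                # one-click "change my rating" URL in the outgoing email.
                {
                    "field": "notification_user",
                    "value": [
                        "requester_id",
                        "We'd love a second chance",
                        "Thanks for your patience while we resolved your issue. "
                        "If you're happy with the outcome, you can update your "
                        "rating here: {{satisfaction.positive_rating_url}}",
                    ],
                },
            ],
        }
    }

followup = build_followup_trigger()
# POST to /api/v2/triggers.json to create the trigger.
```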
-
The door is open!! Bring us your CSAT stories, ideas and concerns. All are welcome!
-
We have been using 4 hours since solved, as most of our tickets are solved in 12-24 hours. This felt like the right balance given the requester's current engagement; however, we sometimes get users telling us they were surveyed before they saw the email with our actual answer. Anyone else have this problem? Does 4 hours after solved seem too short for other organizations?
-
@Anthony: 4 hours seems a little short, as it may not account for normal response times, weekends, or holidays. Also, some inbox clients group similar emails together, making it appear that the CSAT email arrived at the same time as the answer.
-
Thanks, we actually had been at 1 hour but curious to hear anyone else's experience. Would you consider 12 hours the "minimum" amount of time to wait before sending?
-
@Anthony: I would recommend 24 hours (or 1 business day), to allow for the possibility that the requester has left for the day.
-
We use 48 hours. While many of our tickets (about 85%) can be solved with the first response in the same day the ticket is received, we also receive a fair amount of technical escalations. Many of these are left as pending, but sometimes we solve the ticket with our first response when we are confident in the resolution. If the customer requires additional clarification, though, they may not reopen the ticket immediately. So, we like to give our customers 24 hours to check back in with us after solving a ticket and then prompt them for a satisfaction rating about 24 hours later.
We also receive a lot of requests from "repeat support users". To avoid hounding them for satisfaction ratings, we have filters in place to avoid asking a user more than once in most months. By asking new support users for their rating, we're ensuring that our new customer interactions are starting off on the right foot. Periodically (every 3 months), we turn the satisfaction rating on for all users to get a complete picture of our customer satisfaction. We use user fields and tags to manage all this.
-
We use 24h, and have a 40% response rate.
It feels a bit frustrating not to be able to specify a time of day to send the satisfaction form. The Insights module has a report where you can see the response rate depending on the hour the survey is sent, but you cannot really change that. It feels a bit like "you could have a better rate if it were always sent at 8am". We tried to offset it a bit (with a delay like 28 or 29 hours instead of 24) but it doesn't really change anything in the end. Maybe sending all forms "the next day at 8am" would improve the response rate, though.
That said, 40% is probably enough for an accurate estimation. :)
@Jennifer: very good idea to disable the satisfaction survey for a few months once a user has answered one. I'll try to implement that.
-
@Jennifer. Great tip! Do you use a trigger to update their user profile? How do you know it has been 3 months, and not 3 days, since the last CSAT survey when you reset?
-
@Tristan. Good point. Although there isn't a "send at XX time" setting, you could use the Task > Due Date field to accomplish something similar, since it sets the date to noon according to your account's time zone.
However, I'm not sure the time the CSAT is sent is the only issue. You can create a report to see which ticket types get the most responses, and which get the fewest.
Or, send an NPS survey from time to time to gauge the overall sentiment of your customers, that is, whether they are promoters or detractors.
-
@Aaron, We have custom user fields to indicate which automated emails are sent. While we use the CSAT, we also have a few other automated emails that we send with more of a targeted -account-based marketing angle. So, the user fields help us (and Zendesk) keep track of which emails have been sent to each user.
We don't use a trigger to update the user. Rather, we use the CSAT automation itself. One of the actions is to apply the user field, as well as tag the ticket.
The automation criteria includes screening for the user field as well as the ticket field. That way, if the ticket is reopened and resolved, we don't ask for a satisfaction rating again.
Unfortunately we haven't completely automated the task of cycling the responses every 3 months. Rather, we update the automation to either (1) send if neither the user nor the ticket has been asked before, or (2) send if the ticket hasn't been asked before. We use the second option during our periodic (3-month) check-ins.
I hope that makes sense!
-
@Jennifer. Thanks for the clarification! Nice work to incorporate multiple approaches to the CSAT.
The default automation for CSAT has the condition "Satisfaction is unoffered" to ensure that, per ticket, the survey only goes out once.
Anyone else use a similar approach, to use other conditions to regulate the survey frequency? Or, have alternate messaging?
-
We use 48 calendar hours from solved.
Because we use a lot of Light Agents and they will submit tickets, we exclude our own Org, and if I or the other support rep (the two Enterprise Agents) are the ticket requesters, we exclude that too. We initially confused the heck out of our reps...but it was kinda fun :)
We also have certain exclusions based upon certain ticket types, for instance we collect a lot of media uploads for our yearbook customers and we DO want to know for all the tickets they submit that they were satisfied with the upload time, etc. It's a lot, but they get used to it. Sometimes they submit funny things in the comments to see if I read them.
We do NOT want customers who are submitting their full book digitally through support to get a feedback survey though, because we are just the vehicle their rep is using to get the file here. Again, it would be confusing, so since we have that as a custom field, we exclude those folks. That is done on the ticket, not the user - so if the user changes what he is requesting of support, he would get a survey.
We also use a "do_not_contact" tag, mostly when we want no more emails to go to the customer for an interaction - and we use it in this survey situation sometimes.
You can tell by the tonality of an exchange that no matter what you do, you're just barely going to solve this issue, and the customer is not going to be satisfied with anything more you do, and you need to be done and not bother them any more.
Since I run the show [RHIP!!], one customer's rating is not going to make or break scores. I don't post them anywhere. I do know how well we are doing overall and I send those Happy Grams to our reps when they are mentioned. I also respond to every Bad Satisfaction as an opportunity...and most times it was something another vendor in the chain of events did - but to the customer it doesn't matter, so I always take that opening to personally help them a bit further.
Diane
-
@Diane. Excellent points! Thanks for reminding me about excluding an organization. We do that internally at Zendesk, so agents do not get unnecessary survey notifications, since agents are not allowed to rate tickets.
-
@ALL ... Part 2: Using satisfaction responses ... has been unleashed! :)
-
These articles are a big help! Our company is just setting up for Zendesk. I was wondering if we can add another customer option besides just Good or Bad. If I wanted to add an "It was okay" middle ground option how would I do that?
-
@Jon: Our pleasure! The CSAT feature was designed by asking our customers what they were trying to get out of their surveys. Overwhelmingly, the response was: to get a response! Thus, the simplest method, with the least friction, is Good/Bad with an optional comment.
What does "it was okay" mean to you? Can you take action on that? Can you provide better training and feedback to your agents with it?
Personally, when I see a survey with more than a few questions, or multiple pages, I don't complete the survey.
However, if more detailed surveys are important to your workflow, take a look at some official integrations: https://www.zendesk.com/apps#surveys-feedback
-
@Aaron, I completely agree, the most important thing about the CSAT surveys is for customers to answer them.
That being said, and though we agree with the simplicity logic of Good/Bad answers, we do feel that these surveys aren't really visually appealing and can go unnoticed by the client. We have a very low response rate and are looking into how to improve that.
We were wondering if there's any way to personalise these surveys a bit more, design wise. For example, being able to choose a background image that resonates with our brand, changing the colours of the options offered (good=green, bad=red), were a few things that we would like to be able to do.
Not sure if something like this is possible?
-
Customizing the CSAT landing page is not an option at this time.
However, have you considered adding Mr. Smiley and Mr. Sad to the notification? https://support.zendesk.com/hc/en-us/community/posts/211668888-Up-spec-your-C-Sat-with-Mr-Smiley-and-Mr-Sad-
-
@Aaron, thanks for the tip!
That will do for the moment, though it would be great if in the future we could customize this a bit more...