Using Liquid Markup to A/B Test Your Triggers
Published April 22, 2019
Here’s the scenario: You want to make a change to how one of your workflows operates. Maybe you want to try out some new messaging for when a customer writes in about a particular issue. Or maybe you want to try routing tickets to a different group of associates. The easy part is doing it, but how do you validate whether or not the test was successful?
Normally, you would need to run one option for a few weeks or months, then run the other for the same amount of time. Not only is that an awfully long time to spend on a single test, it also doesn’t isolate the variable, leaving you with poor data to draw conclusions from. Instead of running consecutive tests, you can use a little bit of Liquid Markup and JSON to run an actual, concurrent A/B test in Zendesk.
Our Case
In our case, we require users to choose an issue type when submitting a ticket to our Zendesk. We wanted to test splitting tickets for one issue type between our own pre-written email responses and our partner's machine learning system. We knew our existing re-open rate and set an ambitious goal to beat it.
The Process
Step 1: Create an HTTP Target
Before you start building anything, you need to create an HTTP Target that points back to your own Zendesk account.
- Navigate to Admin > Extensions.
- Create a new HTTP Target extension.
- Enter https://subdomain.zendesk.com/api/v2/tickets/update_many.json?ids={{ticket.id}} as your URL, replacing subdomain with your own Zendesk subdomain.
- Select PUT as the target method.
- Select JSON as the content type.
- Add your username and password to the Basic Authentication section, then click Save.
This has a drawback, though: if the user whose username and password you're using to authenticate this request leaves the company or otherwise has their Zendesk access deactivated, your target will stop working. If at all possible, dedicate one of your Zendesk seats for these kinds of callback requests to prevent outages. (Or come up with a transition plan for when your admins change.)
Also, note that we’re using the “tickets/update_many” endpoint here rather than the regular tickets endpoint. That’s because we’ll be appending tags to the ticket with the additional_tags property, which is only available on the bulk update endpoint.
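If you want to sanity-check the endpoint before wiring up the trigger, you can reproduce the call the target will eventually make yourself. Here’s a minimal sketch using Python’s requests library; the subdomain, credentials, and ticket ID are placeholders, and the body is the JSON your trigger’s Liquid template will render.

# Minimal sketch of the request the HTTP Target sends, with placeholder values.
import requests

ZENDESK = "https://subdomain.zendesk.com"
AUTH = ("agent@example.com", "password")  # whatever you entered on the target

payload = {"ticket": {"additional_tags": ["experiment"]}}

resp = requests.put(
    f"{ZENDESK}/api/v2/tickets/update_many.json",
    params={"ids": "1234"},  # Zendesk substitutes {{ticket.id}} here
    json=payload,            # the body your trigger renders via Liquid
    auth=AUTH,
)
resp.raise_for_status()
print(resp.json())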
Step 2: Create Your A/B Test Trigger
Now that you have a JSON notifier in place, you need to build a trigger that places tickets into your A/B test.
- Create a new trigger and add “Ticket > Is > Created” as a condition to the ALL section.
- Optionally, if you don’t want all of your tickets to get tossed into the A/B test, add any other criteria here. In our case, we’re only A/B testing tickets submitted with a particular tag, so we added a tag condition.
- Set your action to Notify Target and choose the HTTP Target you just set up.
- Add this as the JSON body:
{% assign randomizer = ticket.id | modulo:2 %}
{% case randomizer %}
{% when 0 %}
{"ticket": {"additional_tags":["control"]}}
{% when 1 %}
{"ticket": {"additional_tags":["experiment"]}}
{% endcase %}
Wait, What?
Let’s break down what that JSON payload actually does. Liquid is a template language created by Shopify and written in Ruby that allows you to incorporate a handful of logical expressions into your Zendesk triggers. In this case, we’re using Liquid to accomplish a few things:
First, we assign each new ticket a value by running its ticket ID through Liquid’s modulo filter and storing the result in a variable called randomizer. Specifying modulo:2 means there are only two possible values, 0 and 1.
When randomizer comes out to 0, we’re calling that ticket the Control and adding a control tag to the tag list using the additional_tags property. The Control is the current behavior that you’re testing against. These tickets should act as normal, working through your existing workflow or process.
When randomizer comes out to 1, we’re calling that ticket the Experiment and adding an experiment tag to the tag list the same way. The Experiment will use your new behavior.
This effectively splits your ticket volume, 50/50, into two buckets that can be routed in two different ways. Want to weight the test differently? Increase the modulo number and add more when statements. They don’t all need to map to different buckets: you could use modulo:4 and have three “when” statements that give you the control and only one that gives you the experiment (splitting your volume 75/25), like this:
{% assign randomizer = ticket.id | modulo:4 %}
{% case randomizer %}
{% when 0 %}
{"ticket": {"additional_tags":["control"]}}
{% when 1 %}
{"ticket": {"additional_tags":["control"]}}
{% when 2 %}
{"ticket": {"additional_tags":["control"]}}
{% when 3 %}
{"ticket": {"additional_tags":["experiment"]}}
{% endcase %}
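If you want to convince yourself of how the split behaves, here’s a small sketch that reproduces the same modulo arithmetic in Python and counts how many tickets land in each bucket, assuming roughly sequential ticket IDs (which is how Zendesk assigns them).

# Simulate the modulo-based bucketing that the Liquid template performs.
from collections import Counter

def bucket(ticket_id: int, modulo: int = 4) -> str:
    # Mirrors {% assign randomizer = ticket.id | modulo:4 %}:
    # values 0-2 map to the control, 3 maps to the experiment (75/25 split).
    return "experiment" if ticket_id % modulo == modulo - 1 else "control"

counts = Counter(bucket(ticket_id) for ticket_id in range(10_000, 20_000))
print(counts)  # Counter({'control': 7500, 'experiment': 2500})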
Step 3: Add Your New Behavior
Now that you’ve split your volume, you can create a trigger that only acts on that experiment tag. You may want to update any existing triggers that would otherwise act on these tickets to not fire on the experiment tag, just to be safe.
In our case, because we wanted to test our partner's response system, we created a new trigger with their JSON target and added the experiment tag as a required condition. We also added the control tag as a condition on the existing trigger, to prevent it from firing on experiment tickets.
That's a relatively simple use case. Anything that can act on a ticket based on the presence of a tag—triggers, automations, SLAs, skills—can be tested with this method.
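If you prefer to manage these routing triggers through the API rather than the admin UI, the sketch below shows roughly what an experiment-only trigger could look like when created via the Zendesk Triggers API. The title, target ID, and notification body are placeholders, and the condition and action field names reflect my reading of the Triggers API, so check them against the current docs before relying on this.

# Rough sketch: create a trigger that only fires on tickets tagged "experiment".
import requests

ZENDESK = "https://subdomain.zendesk.com"
AUTH = ("agent@example.com", "password")

trigger = {
    "trigger": {
        "title": "A/B test: route experiment tickets to partner",
        "conditions": {
            "all": [
                {"field": "update_type", "operator": "is", "value": "Create"},
                {"field": "current_tags", "operator": "includes", "value": "experiment"},
            ],
            "any": [],
        },
        "actions": [
            # "12345" is a placeholder target ID; the second element is the JSON body.
            {"field": "notification_target", "value": ["12345", '{"ticket_id": "{{ticket.id}}"}']},
        ],
    }
}

resp = requests.post(f"{ZENDESK}/api/v2/triggers.json", json=trigger, auth=AUTH)
resp.raise_for_status()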
Reporting on Your Test
The beauty of this solution is that you can easily report on the control and experiment for as long as the test is running. In Insights or Explore (whichever you happen to be using), decide on your KPI—is it reopens? average handle time?—and build a report that shows that metric for both the control and the experiment.
For our test, we wanted to track total tickets received and total reopens for both our control and our experiment. From that, we can derive the percentage reopened. If the experiment’s reopen percentage is less than the control’s reopen percentage, we’ve got ourselves a winner. Otherwise, we go back to the drawing board.
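If you’d rather pull the raw counts outside of Insights or Explore, something along these lines can give you a quick read on the split. This is a rough sketch using the Zendesk Search API and the tags added above; the “reopened” tag in the query is an assumption, so swap in however your own workflow marks reopened tickets.

# Rough sketch: pull per-bucket ticket counts with the Zendesk Search API.
import requests

ZENDESK = "https://subdomain.zendesk.com"
AUTH = ("agent@example.com", "password")

def count(query: str) -> int:
    # The search results payload includes a total "count" field.
    resp = requests.get(f"{ZENDESK}/api/v2/search.json", params={"query": query}, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["count"]

for bucket in ("control", "experiment"):
    total = count(f"type:ticket tags:{bucket}")
    reopened = count(f"type:ticket tags:{bucket} tags:reopened")  # hypothetical reopen tag
    pct = 100 * reopened / total if total else 0
    print(f"{bucket}: {total} tickets, {reopened} reopened ({pct:.1f}%)")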