Tip: calculating Next Reply Time (non-messaging/chat channels)
Posted Oct 24, 2023
Hi everyone!
This method got me extremely close to an accurate Next Reply Time (NRT) metric in Explore, without having to rely on ticket status updates.
It only applies to the usual asynchronous channels, however (it doesn't work for messaging or chat, since the method depends on trigger 'Comment' conditions).
Disclaimer on accuracy
Please bear in mind that this is a 'nice-to-have'. My own tests came pretty close to reality as I confirmed things across many individual tickets: my NRT durations diverged from the actual values by approximately 2% on average, and only in a few cases. That said, please note that results can differ between accounts and depend on external factors (e.g. how closely agents adhere to conventional usage of ticket statuses and updates, Explore's own calculation limitations, etc.).
In other words, I highly recommend you take some time to properly analyze the data and confirm things on your end.
The setup
This configuration requires a custom field and a few triggers to be set up in Support, and then a standard calculated metric in Explore (under the Updates history dataset).
Custom field
Create a drop-down custom field with five values: FRT active, FRT fulfilled, NRT active, NRT inactive, and done. In my setup the field is called '⏱️ Reply Metric' (you'll see that name again in the Explore formula below).
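The drop-down options map to tag values. The three NRT/done tags below are the ones that appear in the Explore formula further down; the two FRT tags are my assumption following the same naming pattern, so adjust them to whatever your field actually uses:
FRT active     → kpi_reply_frt-active      (assumed)
FRT fulfilled  → kpi_reply_frt-fulfilled   (assumed)
NRT active     → kpi_reply_nrt-active
NRT inactive   → kpi_reply_nrt-inactive
Done           → kpi_reply_done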
Triggers
These triggers should change our custom field to the appropriate value, depending on the scenario:
- Activate FRT. When an inbound ticket is created, activate FRT
- Fulfill FRT. When an agent updates the ticket with the first public comment (i.e. there is still no NRT applicable)
- Activate NRT. When the end-user replies and the custom field's value is 'FRT fulfilled' or 'NRT inactive' (see the sketch after this list)
- NRT inactive. When an agent responds and the custom field's value is 'NRT active'
- Set to ‘Done’. If a ticket is updated as Solved and any drop-down option is active, fulfilled or inactive, set the drop-down value to 'done'
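To make this more concrete, here's a rough sketch of how the 'Activate NRT' trigger could be laid out (simplified for illustration; adapt the conditions to your own workflow, and see the blog article linked below for my full trigger examples):
Trigger: Activate NRT
Meet ALL of the following conditions:
  Ticket | Is | Updated
  Comment | Is | Public
  Current user | Is | (end user)
Meet ANY of the following conditions:
  ⏱️ Reply Metric | Is | FRT fulfilled
  ⏱️ Reply Metric | Is | NRT inactive
Actions:
  ⏱️ Reply Metric | NRT active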
Explore metric
Create the following standard calculated metric (I'm multiplying by 60 to convert minutes to seconds so that I can display the result in the duration format, but that is optional of course):
IF [Changes - Field name]="⏱️ Reply Metric"
AND ([Changes - Previous value] = "kpi_reply_nrt-active"
OR [Changes - Previous value] = "kpi_reply_done")
AND ([Changes - New value] = "kpi_reply_nrt-inactive"
OR [Changes - New value] = "kpi_reply_done")
THEN VALUE(Field changes time (min))*60
ENDIF
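As a side note, the same pattern should also yield a first reply time by measuring the change from 'FRT active' to 'FRT fulfilled' instead. A minimal sketch, assuming the kpi_reply_frt-active and kpi_reply_frt-fulfilled tags from the field above (I haven't validated this variant as thoroughly, so treat it as a starting point):
IF [Changes - Field name]="⏱️ Reply Metric"
AND [Changes - Previous value] = "kpi_reply_frt-active"
AND [Changes - New value] = "kpi_reply_frt-fulfilled"
THEN VALUE(Field changes time (min))*60
ENDIF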
Limitations
- Can’t calculate this NRT for Messaging/Chat tickets
- The calculation isn’t retroactive; it will only apply to new tickets where those triggers have fired
- The NRT calculation isn’t 100% accurate because (1) some seconds are excluded from Explore calculations (details in the article linked below) and (2) there are very specific scenarios our triggers won't measure, which can lead to an incorrect time measurement or to the triggers not firing at all
- Can’t calculate NRT in business hours, but please do feel free to upvote this feature request!
For a more in-depth explanation and analysis, examples of the triggers above and more, please check out this blog article (external link to my blog).
I'd love to know how it goes for you, of course, so please do share your results below 🙂
Happy Zendeskin'!
3 comments
Jennifer Rowe
Excellent, Pedro Rodrigues! Thanks for sharing this.
Erin Willis
Hey Pedro Rodrigues,
I'm trying to create NRT Brackets using the SLA dataset. I'm wondering how accurate it might be if the NRT policy is applied to the same ticket more than once. If on the first reopen the NRT is 6 hours but on the second reply it's 12, how would this be accounted for? Any suggestions?
Pedro Rodrigues
Hi Erin Willis, it would depend on how you choose to build and visualize your report, but in this case there would be two instances of the metric being counted: one for the first "0-6" bracket, another one for the third "8-15" bracket.
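For context, a bracket attribute along those lines could be a standard calculated attribute roughly like the sketch below; the metric name is just a placeholder, so swap in whatever completion-time metric you're actually bracketing on, and adjust the thresholds to your own brackets:
IF (VALUE(SLA metric completion time (min)) <= 360) THEN "0-6"
ELIF (VALUE(SLA metric completion time (min)) <= 480) THEN "6-8"
ELIF (VALUE(SLA metric completion time (min)) <= 900) THEN "8-15"
ELSE "15+"
ENDIF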
Here's your standard attribute applied to a real example. The following SLA ticket had multiple reopens and agent replies, therefore we can expect several instances of the metric:
We can see that there are sixteen first-bracket completions vs. one third-bracket completion. Three of these completions have really long durations, which will certainly impact the average.
So if we try to build a report showing only these brackets, we will see one average calculated for each bracket:
As you can see, the first bracket shows only the average completion time for all the metric instances where the NRT for that agent reply matched 0-6 hours. If not for those three specific longer replies, this average would be under 10 minutes.
Let's modify your metric for a second look at this:
The result:
As you can see, because there is one instance of the NRT metric that is 260 minutes, there is now a new bracket being shown for that completion.
To conclude, this kind of reporting can be a bit "foggy" in terms of a more granular analysis, especially if you have many reopens per ticket and agent replies with very different completions.
Additionally...
If we're monitoring these brackets, then I'd say it's always a good idea to build additional reporting to try and identify the specific events just to make sure no outlier situation is left unattended.
For example, let's try to track the global individual instances of a metric without having to filter per ticket ID.
We can use the SLA policy unique ID (a string combo of ticket ID + SLA policy ID) and join it with the SLA event ID, obtaining a unique string we can call "SLA event unique ID":
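A minimal sketch of that calculated attribute, assuming those attribute names match your SLA dataset and that simple string concatenation works here (double-check against the Explore formula reference for your dataset):
[SLA policy unique ID]+"-"+[SLA event ID]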
We can now check all our tickets, filter for the NRT metric and obtain a list of the "worst offenders" in terms of events:
That first row shows a ticket with a reply time of 19 days. Imagine that ticket has another reply whose NRT is 30 minutes... the ticket ID's average NRT would be 9.2k minutes (6 days), which is a lot different from reality... but if we have this table with the unique event IDs for NRT, we can identify the specific event where an NRT reached unexpected values.
So, just as food for thought, you could build your brackets chart considering all the events to calculate the average completions, and complement it with this kind of table where you'd see the "Top N offending events", for example.
Hope this helps!