
Enable reporting on AI feature usage



Posted Jan 02, 2024

Overview

We'd benefit from being able to report on which tickets used which AI features, like text expansion and tone shift.

User Stories

As a customer service manager, I want to use Explore to compare metrics like CSAT and Handle Time based on different AI Features used so that I can understand the effectiveness of those features.

As a customer service QA analyst, I want to see if an agent used AI text-editing features so I can better educate agents who sent a suboptimal response.

Latest Impact

Some leaders on our team had hesitations about enabling the EAP without the ability to identify which tickets used these AI features.

Workarounds

Our best workaround would be to have agents use macros/tags when using AI features to help us track usage. We do not currently plan to do that due to the additional effort it would require.
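If we did adopt the tagging workaround, usage could at least be counted outside Explore with the Zendesk Search API. A minimal sketch, assuming agents apply a tag like `ai_text_expand` (our own hypothetical naming convention, not a Zendesk one):

```python
# Sketch of the macro/tag workaround: if agents tag tickets when they use
# an AI feature, the Zendesk Search count endpoint can report how many
# tickets carry that tag. The tag name is hypothetical.
from urllib.parse import urlencode


def ai_usage_search_url(subdomain: str, tag: str) -> str:
    """Build a Zendesk Search API URL counting tickets with an AI-usage tag."""
    query = f"type:ticket tags:{tag}"
    params = urlencode({"query": query})
    return f"https://{subdomain}.zendesk.com/api/v2/search/count.json?{params}"


url = ai_usage_search_url("example", "ai_text_expand")
print(url)
```

The returned JSON contains a `count` field, so a scheduled script could snapshot usage per tag over time; it is no substitute for proper Explore attributes, but it needs no Explore changes.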

Ideal Solution

  • When viewing ticket events, below comments, there's a display of the AI text-editing features used for the comment. This would solve the use case of a QA user needing to understand AI usage for a specific ticket.
  • In Explore, there are attributes for AI feature usage that we could use to filter metrics by. I'm not sure if the best structure for this would be a Ticket Field Change in the Ticket Updates dataset or something else.


10 comments

Official


Jake Bantz

Zendesk Product Manager

Hey everyone!

I just wanted to check in here that Explore reporting for the agent generative AI features is something we're actively working on. We're looking to release in the next couple of months.

Please stay tuned to our product announcements.




Tara McCann

Community Product Feedback Specialist

Hi Alan, 
 
We appreciate you posting your feedback here. 
 
This has been logged for our PM team to review. For others who may be interested in this feature request, please add your support by upvoting this post and/or adding your use case to the comments below. Thank you again!



Absolutely. We need some way to track which tickets had an interaction with the generative AI features so they can be located and provided to our QA team, which can evaluate whether these tools are being used properly in our organization.

Thank you.



I am looking for a way to report on the "was this helpful" feature when end users are interacting with the generative AI bot. Any suggestions on how to get this information would be greatly appreciated. If we can't report on the helpfulness of suggestions, how do we make our responses better? I feel like I am missing a key component.



+1 on this. We'd love to see the number of tickets in which generative AI was used, and be able to build reports on which agents are using it more frequently, and any impacts to CSAT.




Jake Bantz

Zendesk Product Manager

Tina Yates since you're looking at some enhancements around the EAP generative bot experience, I highly recommend starting a new post, or seeing if you can find a similar thread in this topic, so our bots team can address your feedback.



Hi, I'd also benefit from this.

To expand on this a bit, I'm particularly interested in these metrics:

  • How many times per month did our staff click 'Summarise' in the intelligence panel? This would help us check internal adoption, and ultimately value maximisation of the AI addon.
  • How many times were 'Enhance writing' and 'Tone shift' generated? This would also help us check internal adoption, and the tone-shift data could give us feedback on the cultural considerations of the AI.
  • How many times was a public reply sent using un-edited generative AI via Enhance? A mix of checking adoption and a QA indicator.
  • How many times was a public reply sent using edited generative AI via Enhance? Similar to the above.
  • How many times per month was a suggested macro used, versus a non-suggested one? This would help us check internal adoption, but also the quality of the macro recommendations.
  • How many times did the Zendesk Chatbot generate a reply (summarising a help centre article) versus following the default rule-based response? This would help us check whether customers appreciate the summary versus the default three-article carousel. It would also help us test and validate where a rule, intent, or answer in the flow could be unintentionally blocking a generative reply from being generated.



Hi Jake Bantz, thank you for implementing the reporting on AI usage last year.

I have a part 2 to ask about, please, either as a feature request or a request for some knowledge.

We use Explore datapoints for AI Usage Types: 

  • Expand
  • Make more formal / Make more friendly
  • Summarize

 

However, we are also keenly interested in the datapoints for AI Usage Types:

  • Auto Assist usage
  • Suggested First Reply usage (or partial usage)
  • Suggested Macro usage
  • Similar tickets usage
  • Merging Suggestion usage
  • Quick Answer generation, thumbs up, thumbs down

I wondered if there is currently a way to get these in the dataset, or via tags? Or are these items on the release roadmap?





Pedro Cerqueira

Zendesk Product Manager

Hi Tim Sulzberger,

Just wanted to let you know we are working to build Explore dashboards where you will be able to track AI usage for:

  • Auto Assist
  • Suggested First Replies
  • Suggested Macros
  • Similar tickets
  • Merging Suggestions
  • Quick Answers

We expect these to be available during H1.



Thanks Pedro, that is good to hear. If there are any other AI usage reporting tips or other AI reporting enhancements to share, I'd be interested to see them.

One question in particular is what kind of triggers and event tagging we might have available as a workaround for some of this, if there is a usage event or similar that we can leverage.


