Lack of tracking for reporting

5 comments

  • Vinicius Henrique da Silva
    Zendesk Luminary

    I completely agree with that.

    I didn't find any way to assess the impact or quality-check the use of Generative AI.


  • Ryan Boyer

    I agree with this feedback. From an article perspective, it is not apparent that changes came from AI. The edits (in Revisions or View History) will just show the agent who clicked Save/Publish. This presents an inherent problem for auditing who actually made the text updates (the AI or the user).

  • Michael Yuen

    Another vote for the ability to track if Generative AI features have been used. (In particular, the Enhance Writing feature). We can see if agents have applied macros; it would be similarly helpful to see if agents have used these AI features, too.

  • Jake Bantz
    Zendesk Product Manager

    Reporting against these agent-centric generative features is certainly something we are looking into. I can see it's important here to know which agents are using the features, and on which tickets. But are there also particular metrics you would like to see? A couple of examples I've heard so far are first reply time and resolution time, but what other metrics or info would be useful for assessing the impact of the generative agent tools?

  • John Ellery

    Jake Bantz Customer satisfaction score would also be really interesting. For example: an agent is 87% overall, but tickets with Expand usage are at 89% vs. tickets without at 85%.
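    The segmented comparison above could be computed from a ticket export along these lines. This is a minimal sketch: the `satisfied` and `used_expand` fields are hypothetical, since Zendesk does not currently expose an AI-usage flag on tickets, which is exactly the gap this thread is asking to close.

    ```python
    def csat_breakdown(tickets):
        """Compare CSAT overall vs. tickets with/without an AI feature.

        `tickets` is a list of dicts like
        {"satisfied": True, "used_expand": False}.
        Field names are illustrative, not actual Zendesk export columns.
        """
        def rate(rows):
            # Percentage of satisfied tickets, rounded; None if no rows.
            return round(100 * sum(t["satisfied"] for t in rows) / len(rows)) if rows else None

        with_ai = [t for t in tickets if t["used_expand"]]
        without = [t for t in tickets if not t["used_expand"]]
        return {
            "overall": rate(tickets),
            "with_expand": rate(with_ai),
            "without_expand": rate(without),
        }
    ```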

