Lack of tracking for reporting
Posted Oct 27, 2023
There seems to be a lack of trackability when using Summarize, Expand, and Tone Shift, which makes it difficult to assess the impact of Generative AI and to quality-check its use. I understand that applying a tag when one of these features is used would not necessarily mean the feature shaped the final reply; for example, you could apply Expand and then revert to your original text. Nonetheless, we need clear trackability here, or we would just be setting these features loose without any control or measurement mechanisms in place.
As an example, I'm trying to assess the impact of the trial, and I have to rely on agents remembering to use the features. I cannot track whether the features have been used, so I can't remind agents who are not using them. At the end of the trial I cannot say with certainty that there was an impact, because I have no way of knowing whether agents used the features at all. Why would we commit to paying a significant amount of money for this if we cannot properly assess ROI?
5 comments
Vinicius Henrique da Silva
I completely agree with that.
I didn't find any way to assess the impact and quality check the use of Generative AI.
Ryan Boyer
I agree with this feedback. From an article perspective, it is not apparent that the changes came from AI: the edits (in Revisions or View History) just show the agent who clicked Save/Publish. This creates an inherent problem when auditing who actually made the text updates (the AI or the user).
Michael Yuen
Another vote for the ability to track whether Generative AI features have been used (in particular, the Enhance Writing feature). We can see if agents have applied macros; it would be similarly helpful to see whether they have used these AI features.
Jake Bantz
Reporting on these agent-centric generative features is certainly something we are looking into. I can see that it's important to know which agents are using the features, and on which tickets. But are there also particular metrics you would like to see? A couple of examples I've heard so far are first reply time and resolution time; what other metrics or information would be useful for assessing the impact of the generative agent tools?
John Ellery
Jake Bantz Customer satisfaction score would also be really interesting: for example, an agent might be at 87% CSAT overall, but 89% on tickets where Expand was used versus 85% on tickets without.
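To illustrate the kind of segmented reporting being asked for, here is a minimal sketch. The data and field names (`expand_used`, `satisfied`) are entirely hypothetical, since no such tracking field exists today; the point is just comparing CSAT on tickets with and without an AI-feature flag:

```python
# Hypothetical ticket records: whether Expand was used on the reply and the
# CSAT outcome (1 = satisfied, 0 = unsatisfied). Field names are illustrative.
tickets = [
    {"expand_used": True,  "satisfied": 1},
    {"expand_used": True,  "satisfied": 1},
    {"expand_used": False, "satisfied": 0},
    {"expand_used": False, "satisfied": 1},
]

def csat(subset):
    """Percentage of satisfied tickets in a subset."""
    return 100 * sum(t["satisfied"] for t in subset) / len(subset)

with_expand = [t for t in tickets if t["expand_used"]]
without_expand = [t for t in tickets if not t["expand_used"]]

print(f"Overall CSAT:   {csat(tickets):.0f}%")
print(f"With Expand:    {csat(with_expand):.0f}%")
print(f"Without Expand: {csat(without_expand):.0f}%")
```

None of this is possible yet, of course, precisely because usage of the AI features isn't recorded anywhere we can query.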