
Zendesk Team

Joined Oct 28, 2021 · Last activity Jan 28, 2025

Following: 0 · Followers: 2 · Total activity: 278 · Votes: 0 · Subscriptions: 0

ACTIVITY OVERVIEW

Latest activity by Zendesk Team

Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

A subscription to our status page will keep you up to date on ongoing incidents.

Subscribing

For the best user experience, we recommend following these steps when setting up your subscription:

  1. Click the Subscribe to updates button in the top right corner of the status page
  2. Enter the email that you would like the updates to be sent to, then hit Subscribe via email
  3. Click Select none to unselect all of the boxes, then select:
    • Dashboard
    • Backend Integration, if your dialogues use API integration
    • Your CRM platform
  4. If you use Giosg, select the following instead:
    • Dashboard
    • Backend Integration, if your dialogues use API integration
    • Public API
  5. Click Save
  6. Confirm your subscription in the email you receive

And now you're subscribed! Anytime our status changes, you will be updated.

Manage Subscription

To manage your status page subscription, follow these steps:

  1. Click the Subscribe to updates button in the top right corner of the status page
  2. Enter the email that you're currently subscribed with, then click Subscribe via email
  3. Remove/add any current or possible subscriptions. If you want to unsubscribe entirely, you can also click Unsubscribe from updates

 

Edited Feb 10, 2025 · Zendesk Team

0

Followers

1

Vote

0

Comments


Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

Everyone likes an experience that is visually appealing and easy to use; however, rich messaging, such as buttons and carousels, is not natively supported by Salesforce.

We believe the user experience is so important that we have created our own plug-in and scripts for you. By installing our plug-in (reach out to your CSM to request it), you can leverage these dynamic designs as rich messages in your dialogue flows and implement them within your Salesforce embedded chat experience.

The customizations this unlocks include the buttons, carousels, and HTML examples covered below, all of which can be customized to match your brand's look and feel.
In this article, we will cover how to leverage all the rich options within our dialogue builder.

To achieve these rich options, follow the Salesforce Rich Messaging Plugin instructions together with a Salesforce admin.

 

Buttons

Buttons were already available with Salesforce, so what's the difference with ours?

They are customizable and can be formatted to suit your brand and the experience you want to design. Take a look at the different possibilities below.

Type & How to Use Visual Example

Default Vertical

 

How to use

This will be the standard view if no version is specified.

Screenshot_2023-01-24_at_09.50.30.png

Horizontal Binary Buttons

 

How to use

Use &&version1&& in the AI agent message before the button.

Screenshot_2023-02-03_at_17.19.28.png

Screenshot_2023-01-24_at_09.52.37.png

Horizontal Row Buttons

 

How to use

Use &&version2&& in the AI agent message before the button.

 

Screenshot_2023-02-03_at_17.17.16.png

Screenshot_2023-01-24_at_09.50.42.png

CSAT - Stars

 

How to use

Use &&version3&& in the AI agent message before the buttons.

The buttons should be labeled 1-5, plus a “not now” option.

Screenshot_2023-02-03_at_17.23.00.png

Screenshot_2023-01-24_at_13.26.23.png

CSAT - Emojis

 

How to use

Use &&version4&& in the AI agent message before the buttons.

Screenshot_2023-02-03_at_17.23.37.png

Screenshot_2023-01-24_at_13.26.34.png

 

Carousels

Carousels let you display more content in a visual manner, with the visitor choosing the option relevant to them. They are larger, more dynamic, and more visual than buttons.
Below you can find the versions we unlock with our plug-in.

 

Type & How to use Visual Example

Default Carousel

 

How to use

This could be used with or without images.

When using images, you can add an array of image URLs to create a “collage”.

Supports regular buttons + external buttons.

Screenshot_2023-01-24_at_10.09.15.png

Wide and Dark Carousel

 

How to use
The image acts as a background for the entire card. Supports multiple images per card, similar to the default carousel. 

Use &&version2&& in the title of the first card of the carousel. This also works with dynamic carousels.

Screenshot_2023-01-31_at_12.55.03.png

Screenshot_2023-01-24_at_09.53.07.png

Imageless, formatted offer-style Carousel

 

How to use

Typically used for carousels that show multiple “packages”, levels, or other offers.

Use &&version3&& in the title of the first card of the carousel (this also works with dynamic carousels).

Use the following HTML structure in the description of the card:

7.99€/month

1 Week free Trial

then 9.99€/month. Cancel anytime
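The original markup for the card description did not survive as tags, so as a rough sketch only (the element and class names here are our assumptions, not the plug-in's actual API), the structure might look like:

```html
<!-- Hypothetical structure; class names are illustrative, not the plug-in's API -->
<div class="offer">
  <p class="price">7.99€/month</p>
  <p class="headline">1 Week free Trial</p>
  <p class="subtext">then 9.99€/month. Cancel anytime</p>
</div>
```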



Screenshot_2023-01-31_at_12.56.02.png

Chat.png

 

HTML Examples

If the options we've covered so far aren't enough, you can also convert HTML into rich messages and transform plain text.

 

Type & How to use Visual Example

External Link Button


How to use

When using a single “external link” button, it will render like in the example to the left

Screenshot_2023-01-24_at_13.33.33.png

Embedded Media

 

How to use

Put the HTML in the AI agent message without any other content.

Screenshot_2023-01-24_at_13.32.20.png

Full-Width Button

How to use

To stretch the button to the width of the entire AI agent, add the class "btn-full" to the button element.

Some text
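The button markup itself was stripped from this page; as a minimal sketch (the tag choice and the href are our assumptions, while "btn-full" is the class named above), it could be:

```html
<!-- Hypothetical markup: "btn-full" is the documented class; the rest is illustrative -->
<a href="https://example.com" class="btn btn-full">Some text</a>
```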

Screenshot_2023-01-24_at_13.32.35.png

Information Box with Buttons

How to use

Custom AI agent message with two levels of text hierarchy and a button.
This can be used to provide important information (like a delivery date), some side information, and a big CTA (external link).

This could be combined with the V2 button as well; in that case, put the version tag within the AI agent message.


🚚 Delivered on 30 June 2022


📦 4 item(s) shipped by Ultimate




Haven't received it yet? Your parcel may have been delivered to a neighbor or pickup-point.
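The markup for this information box was also stripped; as an illustrative sketch only (every element and class name below, including the button label, is our assumption), it might be structured like:

```html
<!-- Hypothetical structure for the information box; all names are illustrative -->
<div class="info-box">
  <p class="primary">🚚 Delivered on 30 June 2022</p>
  <p class="secondary">📦 4 item(s) shipped by Ultimate</p>
  <a href="https://example.com/track" class="btn">Track my parcel</a>
  <p class="note">Haven't received it yet? Your parcel may have been delivered to a neighbor or pickup-point.</p>
</div>
```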

Screenshot_2023-02-03_at_17.21.21.png

Screenshot_2023-01-24_at_13.31.46.png

 

This can, of course, be customized to your heart's content; the HTML components serve as inspiration for whatever you might want to achieve.

Edited Feb 11, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

Rich messaging, such as buttons and carousels, is not natively supported by Salesforce. Therefore, we have created our own scripts for you to add in Salesforce as static resources to enable buttons as rich messages in dialogue flows. This should be done by a Salesforce admin.

Plug-in

These scripts enable all the customization features and the CSS to render rich messaging. 

Download the Plug-in

  1. Download the assets.zip from here
  2. Unzip the file to find a folder called rich-message-plugin containing two files:
    • CustomEvents_AddButtons.js
    • CustomEvents_AddButtons_Stylesheet.css

Upload the Plug-in

The two files need to be added as static resources. To do so:

  1. In Salesforce, go to Setup > Custom Code > Visualforce Pages > Developer Console (this opens a new window)
  2. In the new window, select File > New > Static Resource
  3. Set the fields as below:
    • Name: CustomEvents_AddButtons
      • Note: You can rename this if you wish; this name is the reference you will use in the widget code later on. Do not include the .js or .css extension.
    • MIME Type: text/javascript or text/css, depending on which file you are uploading
  4. Click Submit
  5. Repeat steps 2-4 for the file CustomEvents_AddButtons_Stylesheet.css

Edit the chat widget to reference custom plug-in

  1. In your website's source code, locate the snap-in widget snippet
  2. Add the following lines above embedded_svc.init():
embedded_svc.settings.externalScripts = ["CustomEvents_AddButtons"];
embedded_svc.settings.externalStyles = ["CustomEvents_AddButtons_Stylesheet"];
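To see where these lines sit relative to the rest of the snippet, here is a minimal sketch; the embedded_svc object below is a stand-in stub for Salesforce's real snap-in global, which your page already defines:

```javascript
// Stand-in stub for Salesforce's snap-in global; on a real page this object
// already exists and embedded_svc.init() receives your org's parameters.
const embedded_svc = { settings: {}, init() { /* the real widget boots here */ } };

// The two lines from this article, placed above embedded_svc.init():
embedded_svc.settings.externalScripts = ["CustomEvents_AddButtons"];
embedded_svc.settings.externalStyles = ["CustomEvents_AddButtons_Stylesheet"];

embedded_svc.init();
```

Note that the values reference the static resource names from the previous section, without file extensions.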

 

Lightning Component

The lightning component enables the typing indicator and replaces the standard chat message component with one that can understand HTML logic.

Download the Lightning component

Within the zip folder you downloaded earlier, there is a rich-message-lightning-web-component folder.

Install and Deploy the Component

If this is your first time deploying a web component, check out the Trailhead guide here.

Install

  1. Create the project in VS Code - you will need the Salesforce extension enabled and Salesforce CLI downloaded

    1. Create a project by selecting SFDX: Create Project from View > Command Palette

      Accept the standard template and give it the project name lwcchatpack.
    2. Under force-app/main/default, right-click the lwc folder and select SFDX: Create Lightning Web Component.

      vs_code_salesforce.png
    3. Enter lwcchatpack for the name of the new component and accept the defaults

    4. Replace the files that were auto-created with the ones downloaded from the Google Drive and add the new CSS file. Save everything.

Deploy

    1. Under View > Command Palette, enter SFDX: Authorize an Org.
      When prompted, accept the Project Default and press Enter to accept the default alias. If prompted to allow access, click Allow.

    2. Right-click on the default folder and select SFDX: Deploy this Source to Org.

If deploying from your code editor doesn't work, you can do it from the CLI by entering the following:

cd Filepath/where/project/lives
sfdx force:source:deploy -p force-app/main/default

 

Update Salesforce Settings

For this new component to be utilized, it needs to be enabled on the chat widget.

This is controlled through the "embedded service" you are already using today (this is the widget itself).

To update the settings, navigate to Settings > Embedded Service Deployment, select the relevant widget, click View, then edit the chat settings.

You will then see the "Customize with Lightning component" section where you can replace the default "chat message" component with the one we've installed above - remember to use the lwcchatpack project name that we used earlier.

Screenshot_2023-01-25_at_10.22.57.png

Remove HTML from Chat Transcript

Now that we have added all this customization and rich-message capability, we want to make sure the chat transcript is still readable.

We can do that by adding a Chat Transcript trigger that decodes the escaped HTML entities in the transcript body.

Setup > Object Manager > Chat Transcript > Trigger > New

trigger regexReplace on LiveChatTranscript (before insert, before update) {
    for (LiveChatTranscript chatTranscript : Trigger.new) {
        if (String.isNotBlank(chatTranscript.Body)) {
            // Decode escaped HTML entities so the transcript reads as plain text
            chatTranscript.Body = chatTranscript.Body
                .replaceAll('&lt;', '<')
                .replaceAll('&gt;', '>')
                .replaceAll('&quot;', '"')
                .replaceAll('&amp;', '&');
        }
    }
}


Recommendations 

Adjust the chat box width

For a better experience, we recommend increasing the width of the chat box. As a minimum we suggest 350px; 450px would be ideal.

To adjust this, find the line below in the widget code and set the value you would like.

embedded_svc.settings.widgetWidth = 
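For instance (a sketch with a stubbed embedded_svc global; the "450px" value is simply the ideal width suggested above, and we assume the setting takes a CSS size string):

```javascript
// Stub of the snap-in global for illustration; your page already defines the real one.
const embedded_svc = { settings: {} };

// Hypothetical example: set the chat box to the suggested ideal width.
embedded_svc.settings.widgetWidth = "450px";
```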

Adjust the default color and font

Within the CSS file in the widget code, find the color selector. By default it will be purple, but this can be adjusted in the CustomEvents_AddButtons_Stylesheet.css we uploaded in the beginning.

If you ever need to override the color you have set, for example, to have yellow stars while keeping your brand purple for the rest of the colors, you just need to add the !important; tag.
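As a sketch of such an override (the selector name here is hypothetical; check the actual class names in CustomEvents_AddButtons_Stylesheet.css):

```css
/* Hypothetical selector: force yellow CSAT stars while the rest stays brand purple */
.csat-star {
  color: #FFD700 !important;
}
```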

Edited Feb 11, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

A/B testing separates your visitors into groups so their experiences can be differentiated. It helps you understand the impact of changes to the AI agent experience on your most valuable CX KPIs before removing a previous version, so you can make data-driven iterations.

There are a few ways to achieve A/B testing within the AI agents - Advanced add-on.

API/Label based separation

Based on a field coming from an API, or on a label set within the dialogue flow, you can use a conditional block to take visitors down different paths.

Examples of separation criteria could be based on:

  • Whether someone triggers a default reply; after that point they get different replies.
  • If buttons are used after the welcome message rather than intent recognition, different messages are shown.
  • From the CRM you can use any field you want to divide the group. It could be a tailored choice like customer status or something more random such as location.

AI agent/channel-based separation

Differentiating experiences based on channels is already a recommendation that we make. However, the communication style can be A/B tested across social channels, either by having different AI agents or by separating by the messaging source, for example, Facebook or WhatsApp.

Traffic_split API 

Note - If you would like to utilize this capability, reach out to your CSM to enable this feature.

This is run using a fake integration called trafficSplit, which enables content-based A/B testing; no actual data is transferred, since the logic is hosted in our dashboard. The fake integration is required to support the randomized assignment of visitors to control groups.

Setting up the division of groups

This fake integration uses a parameter called split.

The [split] parameter dynamically distributes your users across the number and share of control groups of your choosing. You don't need to make the shares add up to 100; just ensure they are proportional. Below you can find some examples of split proportions.

1 = 1 control group of 100%
1,1 = 2 control groups with an equal share of 50% each
1,1,1 = 3 control groups with an equal share of roughly 33.3% each
1,2,1 = 3 control groups: one with a 50% share and two with 25% each (the control group and one variant get the 25% shares)

You can also set them up as percentages (e.g. 50,50); what matters is the ratio between the values.
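The proportional mapping can be sketched in a few lines (illustrative only; this is our own reimplementation of the arithmetic, not the product's code):

```javascript
// Illustrative: how proportional split values map to group shares.
// splitShares("1,2,1") -> [0.25, 0.5, 0.25] (control, variant_1, variant_2)
function splitShares(split) {
  const parts = split.split(",").map(Number);
  const total = parts.reduce((a, b) => a + b, 0);
  // Each group's share is its value divided by the sum of all values
  return parts.map(p => p / total);
}
```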

The groups are always named in this fashion: the first group is [control], the second [variant_1], the third [variant_2], and so on. The first group is always the [control] group.

Dialogue set-up

You can set this parameter as late as the fake integration call. To ensure your user base is evenly distributed without bias, we recommend adding it to your Welcome Reply if the CRM supports this; however, it can also be set on the individual reply or replies you want to run tests on.

  1. Set the split parameter as a string on the conversation data and a label to identify the conversations that make it to the splitting of visitors.
  2. Add an API Integration block and select trafficSplit as the integration source
  3. Collect parameter split: If you have not set this parameter already, select it in the Collect Parameters branch at the latest. Unless you need it, you can hide this branch by deleting the AI agent message and collapsing the collection.
    Scenario results: Use however you please. This scenario does only one thing: assign your conversation to a control group.
    Scenario apiError: This is very unlikely to trigger since it's not a real API. Still, add a fallback so that the customer experience stays seamless and the AI agent can continue to function even if the fake integration is inhibited. You can give out a Welcome Reply like any other; just make sure you set all parameters that might be required to advance in dialogues later on.

  4. Save your parameter to the conversation data by adding a label for value {{variant}}.

Screenshot_2023-01-31_at_10.26.47.png

Within the scenario success path, it is also recommended to add a conversation document label to identify the entire batch of conversations that were assigned to a control group during your A/B test.

Use trafficSplit Variants

Technically you can branch off immediately after your fake integration results have been applied - but you don’t have to. Now that you have a parameter [split] with the individual [variant] outcomes of [control], [variant_1], [variant_2] etc. you can branch off this at any time you’d like via Conditional Blocks. 

Screenshot_2023-01-31_at_10.18.13.png

In this trafficSplit, we give out three different solutions to the customer: two different self-service links, and one in-AI agent API. All users are routed through a conditional block based on their randomly assigned {{variant}} outcome. The Fallback is there to support the apiError edge case; build it out in a way that is seamless to customers, and tag it in the conversation logs to easily locate and troubleshoot it down the road.

Now you only have to define a success metric, e.g. CSAT or AI agent-Handled, and run your [variant] results against it using a label set on the variant paths. Alternatively, you can do this via Tableau.

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

Optimizing an AI agent means analyzing it to determine how to build a better AI model, so the AI agent understands more messages, and how to improve the dialogues to create a better customer experience.

There are a few areas to check and some strategies to implement to improve performance.

Analytics

The Analytics area is a great place to see where improvements can be made.

Key metrics to focus on are: AI agent Understood %, AI agent-Handled %, Escalation %, Automation % - specifically how they relate to one another, as well as Intent-specific content analytics.

Improve AI Agent Understanding

Confusion matrix

Have a look at the Confusion Matrix to see if there is significant confusion between Intents. Fix confusion by moving expressions or by training the intents further to make them stronger.

If there is a lot of confusion between two Intents, we can also think about merging them into one and use conditional blocks to guide the users through different paths inside the dialogue.

One example could be "My account has been locked" vs. "I'm locked out of my account". Here, the first expression is about the company locking the customer's account due to too many incorrect password attempts or similar, while the second is about the customer having forgotten their password or username.

Another example is courier issues vs. issues with deliveries: two different topics with different answers, which the AI agent can easily confuse.

 

Intents

Make sure you have enough expressions in your Intents overall.

The best practice in a chat AI agent is 50-300 expressions and in a ticket AI agent 80-300. The number of expressions should reflect the frequency of the Intent. Keep your intent structure smart by getting rid of useless Intents if they exist.

Filter the Conversation Logs for not understood messages, read through them, and figure out whether it's a training issue (the intents exist but don't get recognized) or whether you are missing Intents. If the latter, you can think about creating new ones. Not every 'not understood' message needs an intent, but if repetitive ones are found that could be answered, one should be created.

In a ticket AI agent, we need to take into consideration the noisiness of incoming messages; by this we mean the forwarded messages, signatures, disclaimers, etc. that emails often contain. We can use entities to sanitize emails so that the noise isn't taken into consideration in the AI model. We also need to make sure the expressions of the intents match the incoming data, so expressions need to be cleaned up as well once we start sanitizing incoming messages.
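As a rough illustration of what "sanitizing" means here (this is our own sketch, not the product's entity logic, and the patterns are simplified assumptions):

```javascript
// Illustrative only: strip common email noise before intent prediction.
function sanitizeEmail(body) {
  return body
    // Keep only the text before a signature delimiter ("-- ") or a forwarded header
    .split(/^--\s*$|^From: /m)[0]
    // Drop quoted reply lines ("> ...")
    .replace(/^>.*$/gm, "")
    .trim();
}
```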

 

Improve Deflection Rate


Read through conversation logs to see how well dialogues work and pay attention to:

  • Do users break dialogues?
    If yes, can you add intent listening or free text to guide them through the flow?
    They might break a dialogue by, e.g., not using buttons or asking where to find their order number mid-escalation flow.
  • Are they not understanding the instructions? Try making the message shorter and easier to follow. People don't always read long messages.
  • Are you missing key information they need to get through the flow? Think about what the agent would do when the conversation gets escalated, and add as much of that into the dialogue as possible.

Activities to improve deflection:

  • Adjust the Default Reply to help manage expectations and guide the visitor to flows that cause more confusion.
  • Conduct Content Coverage Analysis to identify new potential Intents that can be automated
  • Can an API integration automate more conversations? Identify new, suitable use cases for an API to increase automation/deflection rate. 

Custom Resolution Rate

Resolution States are a great tool for understanding where your AI agent could be performing better, especially when looking at trends over time. Using the escalated and not resolved states, you can identify the most troublesome conversations by filtering for them in the Conversation Logs.

Smarter Dialogues

Backend Integrations

Think about adding backend integrations to fetch data the AI agent could easily provide to users. This won't apply to every customer, but add them where relevant.

Conditional Blocks

You can also use conditional blocks to jump from one flow into the middle of another, directly providing the correct answer when certain keywords are recognized. Likewise, you can set the AI agent to reply differently the second and third time to make it smarter, reduce repetitiveness, and provide less guidance before escalation if the visitor has been through the flow before.

confidence_score

Use the native confidence_score parameter as a fallback in replies where the AI agent might be less confident: if confidence_score is below 90%, the AI agent can confirm the customer's intention in a more straightforward way (“Gotcha. Just to make sure I understood you correctly: you've lost your password and want to generate a new one. Is that correct?”).
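The branching described above amounts to a simple threshold check. In the product this would be a conditional block in the dialogue builder, not code; the sketch below (with names of our own choosing) just makes the logic explicit:

```javascript
// Illustrative threshold check mirroring the conditional block described above.
// 0.9 corresponds to the 90% confidence threshold from the article.
function replyStyleFor(confidenceScore) {
  return confidenceScore < 0.9 ? "confirm-intent-first" : "answer-directly";
}
```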

For Ticket Automation AI agents, if the AI agent is less confident about a topic, you might want to exclude the reply and trigger just the actions instead.

Escalation Templates

To streamline Intent replies it can be better, faster, and easier to manage the escalation process in one centralized place, rather than in each specific flow. This should utilize Operating Hours and Team Availability to manage expectations and escalate in the appropriate manner.

Note - Team availability is not available for every CRM

A/B Testing

A/B testing is a great way to optimize dialogue flows with data-driven decisions. There are a few ways to achieve this; to learn about the different options, check out the how-to guide.

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced
This article applies only to language support for expression-based AI agents. For details about broader language support, see Languages supported by AI agents - Advanced.

Most people choose to start their AI agent journey with a single language. However, as you grow, you may wish to increase the number of languages the AI agent can reply in.

What you need to consider:

Determine which languages you need

This might sound like a no-brainer, but it is good to first make a list of the markets the AI agent will be taking care of and think about the languages used in those markets, as this affects not only the structure of your dialogues but also the copywriting.

For example, would you like to use standardized German for the entire DACH area (Germany, Austria, and Switzerland), or is it more personable to use different German variants for different countries?

Prioritize the languages and decide on a default

Once you have a list of the languages needed, it's time to prioritize: decide which one is going to move the needle and start with that one.

Think about which language has the largest scalability potential (i.e. reusable processes/dialogues) or impact (i.e. biggest support message volume = most support data).

Treat active/inactive as your best friend

It can be a nightmare to keep track of which reply in which language is ready to go and which ones are not.

By setting replies as active or inactive, you can easily differentiate between what is ready to go live and what isn't. (Goodbye, cluttered Template Replies; hello, better content management!) In addition, setting languages active and inactive adds another layer that brings you peace of mind.

Here's what a proposed workflow looks like: 

  1. Language added

  2. Build dialogues

  3. During the building and review process, set replies as active after each final review. (Don’t worry, nobody will see it until the language is set to active). 

  4. Set the language as active to launch. (It's okay if some infrequent intents have inactive replies; they will remain invisible to chat visitors)

 

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

If you find yourself asking the question "Why is this message being recognized as intent A and not B?" then you're in the right place. Particularly if your AI agent has been live for a while.

The reason an AI agent is confused between two Intents is usually that the trained expressions under that pair of intents overlap. This is where the Confusion Matrix comes in.

The Confusion Matrix is a great tool to help you understand whether your AI agent's model is performing well in terms of intent recognition, and to resolve confusion by moving trained expressions between an Intent pair. It is there to show possible inconsistencies in manually trained expressions.


Read more about it in Confusion Matrix Explained.

 

Where is Confusion Matrix? 

From the left side menu, go to Training Center > Confusion Matrix. Once there, you will see two tabs on top: List of Issues and Confusion Matrix. We suggest starting from List of Issues.

Screenshot_2023-02-01_at_16.34.34.png

How to use Confusion Matrix?

In the List of Issues, sort by Priority and start with the high and medium ones. Click Solve Issue > Manage Expressions to get to the expressions management view (see below).

Solve_Issue_Confusion_Matrix.gif

 

The purpose here is to make sure the expressions under the two intents do not overlap, by moving expressions between intents. They can be moved in bulk or one by one.

With our AI superpowers, we have made this easier for you by highlighting the expressions that need to be managed. After going through the highlighted messages carefully, these are the actions you can take:

  • Fix 1: Untrain them or move them to the other intent
  • Fix 2: Create a new intent with those expressions
  • Fix 3: Merge the intents if you realize they should be the same
  • Fix 4: Help your AI model learn by training more expressions into the two confused intents. Only do this if you are absolutely certain the two intents are very different and should be separated

Once you're done, click Mark as Solved to keep track of your progress. 

Checkmarks are gone once a new model is trained.

Highlighted expressions

Highlighted expressions are the expressions that confuse your AI model. Expressions are highlighted when:

  1. The highlighted expressions were trained to the incorrect intent
  2. The highlighted expressions were trained with the correct intent, but their words are very similar to another intent's expressions.

Search for a specific intent

If you want to know if and how a specific intent causes confusion with other intents, you can search for it in the search bar in the top right corner.

Advanced filters

Here you can define the confusion level more precisely. In addition, if there is more than one intent that you would like to view, they can all be selected here.

All these features can also be accessed via the Confusion Matrix tab by clicking on one of the cells.

 

List of Issues vs. Confusion Matrix

Both the List of Issues and the Confusion Matrix offer a clear overview of which intents are "confused", meaning they potentially have overlapping expressions.

The difference is that the List of Issues gives you a clear idea of which ones should be tackled first by automatically sorting the issues by priority: High, Medium, and Low.

The Confusion Matrix, on the other hand, emphasizes how well the model is doing overall. A good model, like the example below, should have a dark line running diagonally across the table. You can read more about it in Confusion Matrix Explained.

 

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article · Using AI agents - Advanced Add-on · AI agents - Advanced

You can safely test your AI agents and dialogues with the testing options provided within the platform. By using the “Test AI agent” and “Test Dialogue” buttons at the top of the Dashboard, you can go through the dialogue flows you have set up for your AI agents and test:

  • If a message can be recognized by the AI agent
  • If the built dialogue flows work as expected

When clicking “Test AI agent” or “Test Dialogue”, a chat window will pop up in the bottom right corner. You can then start chatting with the AI agent as a visitor to understand the AI agent's behavior.

Testing AI agents

With “Test AI agent” you can look at the end-to-end experience of interacting with the AI agent, always starting from the Welcome Reply (in the case of chat AI agents), triggering the different responses based on the identified intent.

Testing dialogues

The “Test Dialogue“ option is only available while editing dialogues in the Dialogue Builder. This option allows you to start the testing process from the dialogue you are editing.

Testing branches

Branch testing allows you to start testing from a specific block in the Dialogue Builder. All blocks can be used as a starting point for testing, with the exception of "Visitor Message" and "Link to another Reply" blocks. You can start testing branches with the "Test branch" button in the header of each block.

When you click on “Test Dialogue” or "Test branch", the Session Parameters modal will automatically open. You can add parameters and define their values here, so you don’t have to add them manually as a visitor message or rely on API calls each time you test your dialogue. As an example, if a node expects an order ID as a parameter, you can set that up as a session parameter. The params are stored locally, separately for each dialogue and user.

You can also choose to “Test Without Parameters”; in this case, the already set-up parameters are ignored, and you don't have to delete them. You can trigger the Session Parameters modal from the widget too.

Finding test conversations

Once you are done testing in the test widget, you can navigate to the Conversation Logs to find your test conversations. You can also use the “Open in Conversation Logs” option in the test widget to jump to the conversation.

In the Conversation Logs, test conversations are labeled, and you can apply filters to show only test conversations or to hide them.

View predicted intents for visitor messages

If you want to see which intent is detected for a specific visitor message, you can do so while testing the AI agent, in any test mode, for both ticket and chat AI agents. Hover over the visitor message you would like to see the prediction for and click the "View predicted intents" button. In the "Top Predicted Intents" modal, you will see how confident the AI agent is in identifying the top intents for that message.

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article: Using AI agents - Advanced
Add-on AI agents - Advanced

During the testing phase, you may want a safe environment where you can test the AI agent without making it live for customers or impacting the day-to-day work of your human agents. To do this, create a test email address for your testers to send their inquiries to, so that only requests sent to that address are answered by the AI agent.

 

Pre-requisites

  - AI agent group created
  - CRM integration completed in AI agents - Advanced

 

Creating an Email for Zendesk

To create a new email address for your Zendesk Support account, do the following:

  1. In Admin Center, click Channels in the sidebar, then select Talk and email > Email.
  2. In the Support addresses section, click Add address, then select Create new Zendesk address.


  3. Enter an address you'd like to use for receiving support requests.
  4. Click Create now.
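If you prefer to script this step, Zendesk also exposes support addresses through its REST API (the Support Addresses endpoint, POST /api/v2/recipient_addresses.json). The sketch below only builds the request body; the subdomain and address are placeholders, you may also need a display name depending on your setup, and you would still send the request with your own credentials:

```python
import json

ZENDESK_SUBDOMAIN = "yourcompany"  # placeholder: your Zendesk subdomain

def build_support_address_payload(local_part):
    """Request body for POST /api/v2/recipient_addresses.json."""
    email = f"{local_part}@{ZENDESK_SUBDOMAIN}.zendesk.com"
    return {"recipient_address": {"email": email}}

payload = build_support_address_payload("ai-agent-test")
print(json.dumps(payload))
```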

Create a Test View

To add a view in Zendesk, follow the tutorial here.

Give the view conditions that match tickets received at the newly created test email address and assigned to the AI agent group.

Create a Trigger

Add a trigger to ensure that only tickets sent to this email address go to the AI agent and that other agents aren't notified.

The trigger's conditions should match tickets that are created for the test email address via the email or web form channel. Make sure the checkbox for notifying all agents is unchecked to avoid spamming the full support team.
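For reference, the same trigger can also be created through Zendesk's Triggers API (POST /api/v2/triggers.json). The sketch below only builds the request body; the test address, group ID, and the via_id channel values are placeholders and assumptions you should verify against your own account and the Triggers API reference:

```python
import json

# Placeholders -- replace with your own test address and AI agent group ID.
TEST_ADDRESS = "ai-agent-test@yourcompany.zendesk.com"
AI_AGENT_GROUP_ID = 123456

def build_test_trigger_payload():
    """Request body for POST /api/v2/triggers.json.

    The field names ("update_type", "recipient", "via_id", "group_id")
    follow Zendesk's Triggers API; the via_id values for Email (4) and
    Web form (0) should be checked against the API reference.
    """
    return {
        "trigger": {
            "title": "Route AI agent test email",
            "conditions": {
                "all": [
                    # Ticket is created...
                    {"field": "update_type", "operator": "is", "value": "Create"},
                    # ...and was received at the test support address.
                    {"field": "recipient", "operator": "is", "value": TEST_ADDRESS},
                ],
                "any": [
                    {"field": "via_id", "operator": "is", "value": "4"},  # Email
                    {"field": "via_id", "operator": "is", "value": "0"},  # Web form
                ],
            },
            # Only assign the ticket to the AI agent group; deliberately no
            # "email all agents" notification action, so the team isn't spammed.
            "actions": [
                {"field": "group_id", "value": str(AI_AGENT_GROUP_ID)},
            ],
        }
    }

print(json.dumps(build_test_trigger_payload(), indent=2))
```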

Once you have completed the test, you can simply switch back to the email address you would normally use.

Edited Feb 10, 2025 · Zendesk Team



Zendesk Team created an article,

Article: Using AI agents - Advanced
Add-on AI agents - Advanced

Testing your AI agent is a very important step in your onboarding journey to ensure your AI agent is ready to graduate and become a fully functioning member of your customer support team. 

To ensure you are ready to launch, your test should include the following phases:

Note: To do this exercise as a group offline, we have a handout attached at the bottom of the page.

Prepare the Test Environment

There are three main ways to test your AI agent. The first and second options give better outcomes; the third is available when the others are not.

  1. Connect to CRM and Launch to a Test Environment
  2. Connect to your CRM with a Test Email Trigger
  3. Use the Test AI agent button

Connect to CRM and Launch to a Test Environment

We recommend connecting to your CRM and embedding the widget in a sandbox, staging, or testing environment of your website or help center. This provides as 'real' an experience as possible, where you can assess your AI agent's impact and test the Actions and Triggers you have set up in their entirety. To ensure your CRM reporting is not affected, create a group with a test email so the tickets are handled separately from the day-to-day work.

Connect to your CRM with a Test Email Trigger

If you don’t have a testing environment, you can still test by connecting to your CRM, but you will need to be creative about where you place the widget, or route it with a rule based on employee email addresses.

You can follow the instructions to set this up in Zendesk Support here.

Use the Test AI agent button

This lets you quickly review dialogues and the conversation logs to track recognition capabilities and assess dialogue structure. We have an article on how to use the Test AI agent button here.

Prepare your Testers

How do you determine who should be a tester?

Ideally, you don't want the same people who built the flows: they will be biased by knowing how the flows are built, and may overlook issues such as typos or formatting in their own work.

It then comes down to making the testing valuable for the testers themselves: people are more likely to dedicate time to testing if they understand its value. We therefore recommend two types of stakeholders: a process-focused group and an experience-focused group. This ensures you get specific feedback on the two main factors each group is best positioned to assess and has a vested interest in. These people can come from anywhere in the business. For process, we think operations and training team members are best; for experience, members of social and marketing teams can best assess whether the AI agent aligns with your brand image and tone of voice, and can anticipate what could happen if the AI agent makes a mistake.

Additional people to include from your support team to help with the validation are those who are relatively new to the team. Why?

They bring a fresh set of eyes: they are not influenced by past events and are likely knee-deep in the documentation, so they can spot discrepancies and ensure everything is up-to-date and correct. They also get to learn your company's tone of voice and how requests are answered in the first line of defense, which will accelerate their onboarding.

What do they need to know? 

It is important to brief your testers on the intents the AI agent understands and what exactly you are asking them to review. Especially if you ask them to focus on the experience and tone of voice, you may wish to share with them the persona you built earlier in the onboarding to ensure they match. Testing can be as robust and detailed as you like with a team as large or small as you like, but we do have some tried and tested rules we would recommend you ask your testers to follow.

Golden Rules
❌ Don’t troll the AI agent.
E.g. “How to take the best selfie with my trainers”
✅ DO simplify your questions and ask one at a time
✅ DO ask the same question in different ways
✅ DO try to get through entire dialog flows and test their different stages and options (for example, when the AI agent fails to recognize your issue right at the beginning, record this, but then do another run trying to get past that point and deeper into the dialog flow).
✅ DO take screenshots to accompany your recommendations and feedback.
✅ DO take screenshots of any and all scenarios in which the AI agent fails to understand your query and/or simply delivers a bad user experience.

Collecting and Iterating on Feedback

Collecting feedback is the most important part of the testing process. The more detailed the feedback, the easier the AI agent builders' lives will be: they can make improvements and quickly add more expressions based on how each of your testers would ask about a topic. The end goal is getting your AI agent ready to be set live.

Feedback should be collected with screenshots and should note whether the response was accurate, matched the persona's tone, and met expectations, so you can identify areas for improvement.
Create a file shared between your testers and your builders so they can track the progress of the test, see and upvote feedback from others to avoid duplication, and track whether a resolution has been implemented.
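As a minimal sketch of such a shared file, the columns below are only a suggestion (the column names and statuses are hypothetical, not a product feature), and a shared spreadsheet works just as well as a CSV:

```python
import csv
import io

# Suggested columns for a shared feedback tracker (hypothetical schema).
FIELDNAMES = [
    "tester", "test_message", "screenshot_link",
    "response_accurate", "matched_persona_tone", "met_expectations",
    "upvotes", "resolution_status",
]

def new_feedback_sheet():
    """Return a CSV string with the header row plus one example entry."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerow({
        "tester": "Alex",
        "test_message": "Where is my order?",
        "screenshot_link": "https://example.com/shot1.png",
        "response_accurate": "yes",
        "matched_persona_tone": "no",
        "met_expectations": "no",
        "upvotes": "2",
        "resolution_status": "open",
    })
    return buf.getvalue()
```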

In addition, your builders can review the conversations in the Conversation Logs to ensure everything, such as actions, happened as intended.

Once all feedback has been reviewed and you are happy with your AI agent's performance, it is ready to be launched as a full member of your support team.

Edited Feb 10, 2025 · Zendesk Team
