Zendesk Team created an article,
A subscription to our status page will keep you up to date on ongoing incidents.
For the best user experience, we recommend following the steps below when setting up your subscription:
And now you're subscribed! Anytime our status changes, you will be updated.
To manage your status page subscription, follow these steps:
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
Everyone likes an experience that is visually appealing and easy for visitors to use. However, rich messaging, such as buttons and carousels, is not natively supported by Salesforce.
We believe the user experience is so important that we have created our own plug-in and scripts for you. By installing our plug-in (reach out to your CSM to request it), you can leverage these dynamic designs as rich messages in your dialogue flows and implement them within your Salesforce embedded chat experience.
The customizations this unlocks include:
These capabilities can also be customized to match your brand's look and feel.
In this article, we will cover how to leverage all the rich options within our dialogue builder.
To achieve these rich options, please follow the Salesforce Rich Messaging Plugin enablement instructions together with a Salesforce admin.
Buttons were already available with Salesforce, so what's the difference with ours?
They are customizable and can be formatted to suit your brand and the experience you want to design. Take a look at the different possibilities below.
| Type | How to use |
| --- | --- |
| Default Vertical | The standard view if no version is specified. |
| Horizontal Binary Buttons | Use the corresponding version. |
| Horizontal Row Buttons | Use the corresponding version. |
| CSAT - Stars | Use the corresponding version. The buttons should be 1-5 plus "not now". |
| CSAT - Emojis | Use the corresponding version. |
Carousels let you display more content in a visual manner, with the visitor choosing the option relevant to them. They are larger, more dynamic, and more visual than buttons.
Below you can find the versions we unlock with our plug-in.
| Type | How to use |
| --- | --- |
| Default Carousel | Can be used with or without images. When using images, you can add an array of image URLs to create a "collage". Supports regular buttons and external buttons. |
| Wide and Dark Carousel | Use the corresponding version. |
| Imageless, formatted offer-style Carousel | Typically used for carousels that show multiple "packages", levels, or other offers. Use the corresponding version and place the HTML structure in the description of the card. |
If the options covered so far aren't enough, we've got you covered: you can convert HTML into rich messages as well as transform plain text.
| Type | How to use |
| --- | --- |
| External Link Button | When using a single "external link" button, it renders in its own distinct style. |
| Embedded Media | Put the HTML in the AI agent message without any other content. |
| Full-Width Button | To stretch the button to the width of the entire AI agent widget, add the corresponding class to the button. |
| Information Box with Buttons | A custom AI agent message with two levels of text hierarchy and a button. Can be combined with the V2 button as well, this time with the version inside the AI agent message. |
This can, of course, be customized to your heart's content; the HTML components are an inspiration for whatever you might want to achieve.
Edited Feb 11, 2025 · Zendesk Team
Zendesk Team created an article,
Rich messaging, such as buttons and carousels, is not supported by Salesforce. Therefore, we have created our own scripts for you to add in Salesforce as static resources, enabling buttons as rich messages in dialogue flows. This should be done by a Salesforce Admin.
These scripts enable all the customization features and the CSS to render rich messaging.
The two files need to be added as static resources. Once they are uploaded, reference them in the chat widget snippet, before the call to embedded_svc.init():

```js
embedded_svc.settings.externalScripts = ["CustomEvents_AddButtons"];
embedded_svc.settings.externalStyles = ["CustomEvents_AddButtons_Stylesheet"];
```
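For orientation, here is a minimal sketch of where those lines sit in a typical embedded-service snippet; every URL, ID, and name below is a placeholder, not a value from this guide:

```js
var initESW = function (gslbBaseURL) {
  // Reference the plugin files uploaded as static resources
  embedded_svc.settings.externalScripts = ["CustomEvents_AddButtons"];
  embedded_svc.settings.externalStyles = ["CustomEvents_AddButtons_Stylesheet"];

  // ...your existing embedded_svc.settings assignments...

  embedded_svc.init(
    "https://example.my.salesforce.com", // placeholder: your org URL
    "https://example.my.site.com",       // placeholder: your community URL
    gslbBaseURL,
    "00D000000000000",                   // placeholder: your org ID
    "Example_Deployment",                // placeholder: your deployment name
    {}                                   // deployment-specific settings
  );
};
```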
The Lightning component enables the typing indicator and replaces the standard chat message component with one that can understand HTML logic.
Within the zip folder you downloaded earlier, there is a rich-message-lightning-web-component folder.
If this is your first time deploying a web component, check out the Trailhead guide here.
Create the project in VS Code - you will need the Salesforce extension enabled and the Salesforce CLI downloaded.
Create a project by selecting SFDX: Create Project from View > Command Palette.
Accept the standard template and give the project the name lwcchatpack.
Under force-app/main/default, right-click the lwc folder and select SFDX: Create Lightning Web Component.
Enter lwcchatpack for the name of the new component and accept the defaults.
Replace the files that were auto-created with the ones downloaded from the Google Drive and add the new CSS file. Save everything.
Under View > Command Palette, enter SFDX: Authorize an Org.
When prompted, accept the Project Default and press Enter to accept the default alias. If prompted to allow access, click Allow.
Right-click on the default folder and select SFDX: Deploy this Source to Org.
If deploying from your code editor doesn't work, you can do it from the CLI by entering the following:

```sh
cd Filepath/where/project/lives
sfdx force:source:deploy -p force-app/main/default
```
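On the newer unified Salesforce CLI (sf), the equivalent deploy command is:

```sh
sf project deploy start --source-dir force-app/main/default
```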
For this new component to be utilized, it needs to be enabled on the chat widget.
This is controlled through the "embedded service", which is the chat widget you are already using today.
To update the settings, navigate to Settings > Embedded Service Deployment, select the relevant widget, click View, then edit the chat settings.
You will then see the "Customize with Lightning component" section where you can replace the default "chat message" component with the one we've installed above - remember to use the lwcchatpack project name that we used earlier.
Now that we have added the capability for all this cool customization and rich options, we want to make sure the transcript of the chat is still readable.
Out of the box, the transcript stores the rich messages as escaped HTML; we can change that so the HTML is rendered by adding a Chat Transcript trigger.
Setup > Object Manager > Chat Transcript > Trigger > New
```apex
trigger regexReplace on LiveChatTranscript (before insert, before update) {
    for (LiveChatTranscript chatTranscript : Trigger.new) {
        if (String.isNotBlank(chatTranscript.Body)) {
            // Unescape the HTML entities so the rich-message markup renders in the transcript
            chatTranscript.Body = chatTranscript.Body
                .replaceAll('&lt;', '<')
                .replaceAll('&gt;', '>')
                .replaceAll('&quot;', '"')
                .replaceAll('&amp;', '&');
        }
    }
}
```
For a better experience, we recommend increasing the width of the chat box. As a minimum we would suggest 350px; however, 450px would be ideal.
To adjust this, find the line below in the widget code and set the value you would like:

```js
embedded_svc.settings.widgetWidth = "450px"; // minimum suggested: 350px; ideal: 450px
```
Within the CSS file in the widget code, find the color selector; by default it will be purple, but this can be adjusted in the CustomEvents_AddButtons_Stylesheet.css we uploaded at the beginning.
If you ever need to override a color you have set - for example, you want yellow stars but your brand purple for the rest of the colors - you just need to add the !important tag to that rule.
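As a rough sketch, assuming a star-rating rule exists in the stylesheet (the selector name here is hypothetical - check CustomEvents_AddButtons_Stylesheet.css for the real one), an override could look like:

```css
/* Hypothetical selector: force yellow stars while everything else keeps the brand purple */
.csat-star-button {
  color: #ffd700 !important;
}
```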
Edited Feb 11, 2025 · Zendesk Team
Zendesk Team created an article,
A/B testing splits your visitors into groups so you can differentiate the experience between them. It helps you understand the impact of changes to the AI agent experience on your most valuable CX KPIs before removing a previous version - thus making data-driven iterations.
There are a few ways to achieve A/B testing within the AI agents - Advanced add-on.
Based on a field coming from an API, or a label set within the dialogue flow, you can use a conditional block to take visitors down different paths.
Examples of separation criteria could be based on:
Whether someone triggers a default reply: after that point, they get different replies.
If buttons are used after the welcome message rather than intent recognition, different messages are shown.
From the CRM you can use any field you want to divide the group. It could be a tailored choice like customer status or something more random such as location.
Differentiating experiences based on channels is already a recommendation that we make. However, the communication style can be A/B tested across social channels either by having different AI agents or by separating by messaging source, for example, Facebook or WhatsApp.
Note - If you would like to utilize this capability, reach out to your CSM to enable this feature.
This is run using a fake integration called trafficSplit to compile content-based A/B testing; no actual data is transferred, as the logic is hosted in our dashboard. The fake integration is required to support the randomized assignment of visitors to control groups.
This fake integration uses a parameter called split.
The parameter [split] will dynamically distribute your users across the number and share of control groups of your choosing - the shares don't need to add up to 100; they only need to be proportional to each other. Below you can find some examples of split proportions.
1 = 1 control group of 100%
1,1 = 2 control groups with an equal share of 50% each
1,1,1 = 3 control groups with an equal share of roughly 33.3% each
1,2,1 = 3 control groups, one with a 50% share and the other two with 25% each - the control group and one variant get the 25% splits.
You can also set them up as percentages (e.g. 50,50); what matters is the relation to each other.
The groups will always be named in this fashion: the first group is [control], the second [variant_1], the third [variant_2], and so on. The first group will always be the [control] group.
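To illustrate the proportional logic (a sketch of the idea only, not the platform's actual implementation), a weighted random assignment could look like this:

```js
// Sketch: assign a visitor to a group based on proportional weights, e.g. [1, 2, 1].
// The first group is "control", then "variant_1", "variant_2", and so on.
function assignSplit(weights) {
  const total = weights.reduce((sum, w) => sum + w, 0);
  let pick = Math.random() * total; // weights only need to be proportional
  for (let i = 0; i < weights.length; i++) {
    pick -= weights[i];
    if (pick < 0) return i === 0 ? "control" : "variant_" + i;
  }
  return "control"; // guard against floating-point edge cases
}

// [1, 2, 1] gives control 25%, variant_1 50%, variant_2 25%
console.log(assignSplit([1, 2, 1]));
```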
You can set this parameter at the latest upon the fake integration call. To ensure your user base is evenly distributed without bias, we recommend adding it to your Welcome Reply, if supported by the CRM; however, it can also be set on the individual reply or replies you want to run tests on.
Collect parameter split: If you have not set this parameter already, select it at the latest in the Collect Parameters branch. Unless you need it, you can hide this branch by deleting the AI agent message and collapsing the collection.
Scenario results: Use it however you please. This scenario does only one thing - assign your conversation to a control group.
Scenario apiError: This is very unlikely to trigger as it's not a real API; however, make sure to add a fallback so that the customer experience is seamless and the AI agent can continue to function even if the fake integration is inhibited. You can give out a Welcome Reply as any other; just make sure you set all params that might be required to advance in dialogues later on.
Within the scenario success path, it is also recommended to add a conversation label to identify the entire batch of conversations that were assigned to a control group during your A/B test.
Technically you can branch off immediately after your fake integration results have been applied - but you don’t have to. Now that you have a parameter [split] with the individual [variant] outcomes of [control], [variant_1], [variant_2] etc. you can branch off this at any time you’d like via Conditional Blocks.
In this trafficSplit, we give out three different solutions to the customer - two different self-service links, and one in-AI agent API. All users will be assigned to a conditional block based on their randomly assigned {{variant}} outcome. The Fallback is here to support an ApiError edge case - build it out in a way that is seamless to the customers, and tag it in conversation logs to easily locate and troubleshoot down the road.
Now you only have to define a success metric, e.g. CSAT or AI agent-Handled, and run your [variant] results against it using a label set on the variant paths. Alternatively, do this via Tableau.
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
Optimizing an AI agent means analyzing it to determine how to build an even better AI model, so the AI agent understands more messages, and how to improve the dialogues, creating an even better customer experience.
There are a few areas to check and some strategies to implement to improve performance.
The Analytics area is a great place to see where improvements could be made.
Key metrics to focus on are: AI agent Understood %, AI agent-Handled %, Escalation %, Automation % - specifically how they relate to one another, as well as Intent-specific content analytics.
Have a look at the Confusion Matrix to see if there is significant confusion between Intents. Fix confusion by moving expressions or training intents more to make them stronger.
If there is a lot of confusion between two Intents, we can also think about merging them into one and using conditional blocks to guide the users through different paths inside the dialogue.
One example could be "My account has been locked" vs. "I'm locked out of my account". Here, the first expression is about the company locking the customer's account due to too many incorrect password attempts or similar, whilst the second is about the customer having forgotten their password or username.
Another example is courier issues vs. delivery issues: two different topics, with different answers, which the AI agent can easily confuse.
Make sure you have enough expressions in your Intents overall.
The best practice in a chat AI agent is 50-300 and in a ticket AI agent 80-300. The number of expressions should be reflective of the frequency of the Intent. Make sure your intent structure is smart by getting rid of useless Intents if those exist.
Filter the Conversation Logs for 'not understood' messages, read through them, and work out whether it's a training issue (the intents exist but don't get recognized) or whether you are missing Intents. If the latter, you can think about creating new ones. Not every 'not understood' message needs an intent, but if repetitive ones are found that could be answered, one should be created.
In a ticket AI agent, we need to take the noisiness of incoming messages into consideration. By this, we mean forwarded messages, signatures, disclaimers, etc. that emails often have. We can handle that by using entities to sanitize emails. When we sanitize an email, the noise won't be taken into consideration in the AI model. We also need to make sure the expressions of the intents match the incoming data, so expressions also need to be cleaned up when we start sanitizing incoming messages.
Read through conversation logs to see how well dialogues work and pay attention to:
Activities to improve deflection:
Resolution States are a great tool for understanding where your AI agent could perform better, especially when looking at trends over time. Using the escalated and not resolved states, you can identify which conversations are most troublesome by filtering for them in the Conversation Logs.
Think about adding backend integrations to fetch data the AI agent could easily provide to users. This is not relevant for every customer, but consider it where it applies.
You can also use conditional blocks to jump from one flow into the middle of another, directly providing the correct answer when certain keywords are recognized. You can likewise set the AI agent to reply differently the second and third time, making it smarter, reducing repetitiveness, and providing less guidance before escalation for visitors who have been through the flow before.
Use the native parameter confidence_score as a fallback in replies where the AI agent might be less confident: if confidence_score is below 90%, the AI agent can confirm the customer's intention in a more straightforward way ("Gotcha. Just to make sure I understood you correctly: you've lost your password and want to generate a new one. Is that correct?").
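As a sketch of that threshold logic (illustration only - in practice this is configured with a conditional block on confidence_score in the dialogue builder, and the helper below is hypothetical):

```js
// Illustration of a low-confidence confirmation step
const answerDirectly = (summary) => "Here is how to " + summary + ".";

function buildReply(intentSummary, confidenceScore) {
  if (confidenceScore < 0.9) {
    // Below the 90% threshold: confirm the intention before answering
    return "Gotcha. Just to make sure I understood you correctly: " +
           intentSummary + ". Is that correct?";
  }
  return answerDirectly(intentSummary);
}

console.log(buildReply("reset a lost password", 0.72));
```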
For Ticket Automation AI agents, when the AI agent is less confident about a topic, you might want to exclude the reply and trigger just the actions instead.
To streamline Intent replies it can be better, faster, and easier to manage the escalation process in one centralized place, rather than in each specific flow. This should utilize Operating Hours and Team Availability to manage expectations and escalate in the appropriate manner.
Note - Team availability is not available for every CRM.
A/B testing is a great way to optimize dialogue flows with data-driven decisions. There are a few ways to achieve this - to learn about the different options check out the how-to guide.
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
Starting off with a single language is typically how most people choose to start their AI agent journey. However, as you grow, you may wish to increase the number of languages the AI agent can respond in.
What you need to consider:
This might sound like a no-brainer, but it is good to first make a list of the markets the AI agent will be taking care of and think about the languages used in those markets, as this affects not only the structure of your dialogues but also the copywriting.
For example, would you like to use standardized German for the entire DACH area (Germany, Austria, and Switzerland), or is it more personable to use a different German variant for each country?
Once you have a list of the languages needed, it's time to prioritize: decide which one is going to move the needle and start with that one.
Think about which language has the largest scalability potential (i.e. reusable processes/dialogues) or impact (i.e. biggest support message volume = most support data).
It could be a nightmare to keep track of which reply in what language is ready to go and which ones are not.
By setting replies as active or inactive, you can easily differentiate between what is ready to go live and what is not. (Goodbye, cluttered Template Replies; hello, better content management!) In addition, setting languages as active or inactive adds another layer that brings you peace of mind.
Here's what a proposed workflow looks like:
Language added
Build dialogues
During the building and review process, set replies as active after each final review. (Don't worry, nobody will see them until the language is set to active.)
Set the language as active to launch. (It's OK if some infrequent intents still have inactive replies; they will remain invisible to chat visitors.)
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
If you find yourself asking the question "Why is this message being recognized as intent A and not B?" then you're in the right place. Particularly if your AI agent has been live for a while.
The reason an AI agent is confused between two Intents is usually that the trained expressions under that pair of intents overlap. This is where the Confusion Matrix comes in.
The Confusion Matrix is a great tool to help you understand whether the AI model of your AI agent is performing well in terms of intent recognition, and to solve confusion by moving trained expressions between an Intent pair. It is there to show possible inconsistencies in manually trained expressions.
This article covers the following topics:
Read more about it in Confusion Matrix Explained.
From the left side menu, go to Training Center > Confusion Matrix. Once there, you will see two tabs on top: List of Issues and Confusion Matrix. We suggest starting from List of Issues.
In List of issues, sort by Priority and start from the high and medium ones. Click Solve Issue > Manage Expressions to get to the expressions management view (see below).
The purpose here is to make sure the expressions under the two intents do not overlap, by moving expressions between intents. They can be moved in bulk or one by one.
With our AI superpower, we have made this easier for you by highlighting the expressions that need to be managed. Here are the actions you can take after going through the highlighted messages carefully:
Once you're done, click Mark as Solved to keep track of your progress.
Highlighted expressions are the expressions that confuse your AI model. Expressions are highlighted when:
If you want to know if and how a specific intent causes confusion with other intents, you can search for it in the search bar in the top right corner.
Here you can define the confusion level more precisely. In addition, if there is more than one intent you would like to view, they can all be selected here.
All the features can be accessed via the Confusion Matrix tab as well, by clicking on one of the cells.
Both List of Issues and Confusion Matrix offer a clear overview of which intents are "confused", meaning potentially having overlapping expressions.
The only difference is that List of Issues gives you a clear idea of which ones should be tackled first by automatically sorting the issues based on priority High, Medium, and Low.
The Confusion Matrix, on the other hand, puts the emphasis on how well the model performs overall. A good model should show a dark line running diagonally across the table. You can read more about it in Confusion Matrix Explained.
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
You can safely test your AI agents and dialogues with the testing options provided within the platform. By using the “Test AI agent” and “Test Dialogue” buttons at the top of the Dashboard, you can go through the dialogue flows you have set up for your AI agents and test:
When clicking “Test AI agent” or “Test Dialogue”, a chat window will pop up in the bottom right corner. You can then start chatting with the AI agent as a visitor to understand the AI agent's behavior.
With “Test AI agent” you can look at the end-to-end experience of interacting with the AI agent, always starting from the Welcome Reply (in the case of chat AI agents), triggering the different responses based on the identified intent.
The “Test Dialogue“ option is only available while editing dialogues in the Dialogue Builder. This option allows you to start the testing process from the dialogue you are editing.
Branch testing allows you to start testing from a specific block in the Dialogue Builder. All blocks can be used as a starting point for testing, with the exception of "Visitor Message" and "Link to another Reply" blocks. You can start testing branches with the "Test branch" button in the header of each block.
When you click on “Test Dialogue” or "Test branch", the Session Parameters modal will automatically open. You can add parameters and define their values here, so you don’t have to add them manually as a visitor message or rely on API calls each time you test your dialogue. As an example, if a node expects an order ID as a parameter, you can set that up as a session parameter. The params are stored locally, separately for each dialogue and user.
You can also choose to “Test Without Parameters”; in this case, the parameters already set up will be ignored, and you don’t have to delete them. You can trigger the Session Parameters modal from the widget too.
Once you are done testing in the test widget, you can navigate to the Conversation Logs to find your test conversations. You can also use the “Open in Conversation Logs” option in the test widget to jump to the conversation.
In the Conversation Logs, test conversations are labeled, and you can apply filters to show only test conversations or to hide them.
If you want to test what intent is detected for a specific visitor message, you can do it while testing the AI agent, with any test mode, in both ticket and chat AI agents. To do that, hover on the visitor message you would like to see the prediction for and click on the "View predicted intents" button. In the "Top Predicted Intents" modal, you will see how confident the AI agent is in identifying the top intents for the visitor message.
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
During the testing phase, you may wish to have a safe environment to test the AI agent without having it live for customers and impacting the day-to-day work of your human agents. The way to do this is to create a test email address for your testers to send their inquiries to, so that only requests sent to that address are responded to by the AI agent.
CRM Integration completed in AI agents - Advanced
To create a new email address for your Zendesk Support account you will need to do the following:
To add a view in Zendesk, there is a tutorial here.
You will want it to have the condition: when a ticket is sent to the newly created test email address, assign it to the AI agent group.
Add a Trigger to ensure that only this email address goes to the AI agent and other agents aren't notified.
The trigger conditions should be: the ticket is created for the test email address and arrives on the email or web form channels. Ensure that the checkbox for notifying all agents is unchecked to avoid spamming the full support team.
Once you have completed the test, you can just change the email back to the normal email address you would use.
Edited Feb 10, 2025 · Zendesk Team
Zendesk Team created an article,
Testing your AI agent is a very important step in your onboarding journey to ensure your AI agent is ready to graduate and become a fully functioning member of your customer support team.
To ensure you are ready to launch, your test should include the following phases:
Note - To do this exercise as a group offline, we have a handout attached at the bottom of the page.
There are three main ways to test your AI agent. The first and second options give the better outcomes; the third is available when the others are not.
We recommend connecting to your CRM and embedding the widget onto a sandbox, staging, or testing environment of your website or help center. This will provide as much of a 'real' experience as possible - where you can assess your AI agent's impact. This means testing the Actions and Triggers you have set up in their entirety. To ensure your CRM reporting is not impacted, create a group with a test email to have the tickets handled separately from the day-to-day.
If you don’t have a testing environment you can still test by connecting to your CRM but will need to be creative as to where you locate the widget or have it routed off of a rule based on employee email addresses.
You can follow the instructions to set this up in Zendesk Support here.
This is where you can quickly review dialogues and the conversation logs to track recognition capabilities and quickly assess dialogue structure. We have an article on how to use the Test AI agent button here.
Ideally, you don't want the same people who built the flows doing the testing: they will be biased by knowing how the flows are built and may not see issues such as typos or formatting in their own work.
Then it comes down to showing testers the value of helping - people are more likely to dedicate time to testing if they understand the value. We would therefore recommend two types of stakeholders: process-focused and experience-focused groups. This ensures you get specific feedback from each group on the factor they are best positioned to assess and have a vested interest in. These people can come from anywhere in the business; however, for process, we think operations and training team members are best, and for experience, members from social and marketing teams can best assess whether the AI agent aligns with brand image and tone of voice and can anticipate what might happen if it makes a mistake.
Additional people to include from your support team for the validation are those who are relatively new to the team. Why?
They have a unique attribute that makes this task especially suited to them: a fresh set of eyes. They are not influenced by past events and are likely knee-deep in the documentation, so they can spot discrepancies and ensure everything is up to date and correct. In addition, they get to learn your company's tone of voice and the way requests are responded to in the first line of defense - which will accelerate their onboarding process.
It is important to brief your testers on the intents the AI agent understands and what exactly you are asking them to review. Especially if you ask them to focus on the experience and tone of voice, you may wish to share with them the persona you built earlier in the onboarding to ensure they match. Testing can be as robust and detailed as you like with a team as large or small as you like, but we do have some tried and tested rules we would recommend you ask your testers to follow.
Golden Rules
❌ Don’t troll the AI agent.
E.g. “How to take the best selfie with my trainers”
✅ DO simplify your questions and ask one at a time
✅ DO ask the same question in different ways
✅ DO try to get through entire dialog flows and test their different stages and options (i.e. when the AI agent fails to recognize your issue right at the beginning, record this, but then do another run trying to get past that point and deeper into the dialog flow).
✅ DO take screenshots to accompany your recommendations and feedback.
✅ DO take screenshots of any and all scenarios in which the AI agents fail to understand your query and/or simply deliver a bad user experience.
Collecting feedback is the most important part of the testing process. The more detailed the feedback, the easier the AI agent builders' lives will be: first to make improvements, but also to quickly add more expressions based on how each of your testers would ask about a topic. This is all with the end goal of getting your AI agent ready to be set live.
Feedback should be collected with screenshots and should cover whether the response was accurate, matched the persona's tone, and met expectations, to identify areas for improvement.
You will want to create a shared file for your testers and your builders so they can track the progress of the test, see feedback from others (to upvote it and avoid duplicates), and track whether a resolution has been implemented.
In addition to this, your builders can review the conversations that happened in the Conversation Logs to ensure everything happened as intended, such as actions.
Once all feedback has been reviewed and you feel happy with your AI agent's performance, they are ready to be launched and become a full member of your support team.
Edited Feb 10, 2025 · Zendesk Team