This article includes the following topics:
- How is natural language processed?
- How are articles selected for recommendations?
- Common misconceptions
How is natural language processed?
AI agents use artificial intelligence to evaluate articles, which means they can mimic human behavior. The AI agent uses natural language processing (NLP) to read every article in your help center and to understand the main concept behind each article. It then takes the concepts from all the articles and places them on a map. Each concept gets its very own “address” on the map so that it lives near other, similar concepts. However, instead of just a street, city, and zip code, this address has 500 parts. Whenever a new question comes in, the AI agent does its best to understand the concept the question is asking about and uses the map to determine the closest existing article.
For example, here are some concepts that might be extracted from a few questions:
| Question | Possible concept |
|---|---|
| How do I dump my tickets to a file? | Exporting Data |
| I’m locked out of my account | Account Access / Password Reset |
| How do I create a crane? | Folding Origami Birds |
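The concept-map lookup described above can be sketched as a nearest-neighbor search over vectors. This is a minimal illustration, not Zendesk's implementation: the real map uses learned 500-dimensional vectors, while the 3-dimensional vectors, article names, and numbers here are all invented for the example.

```python
import math

# Toy "concept map": each article concept lives at a fixed point.
# Hand-picked 3-D vectors stand in for the real ~500-part "address".
ARTICLE_CONCEPTS = {
    "Exporting Data": [0.9, 0.1, 0.0],
    "Account Access / Password Reset": [0.1, 0.9, 0.0],
    "Folding Origami Birds": [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    """Higher value = the two concepts sit closer together on the map."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def closest_article(question_vector):
    """Return the (title, vector) pair nearest to the question's concept."""
    return max(ARTICLE_CONCEPTS.items(),
               key=lambda item: cosine_similarity(question_vector, item[1]))

# A question about dumping tickets to a file lands near "Exporting Data":
title, _ = closest_article([0.8, 0.2, 0.1])
print(title)  # Exporting Data
```

The point of the sketch is only the mechanism: the question is mapped to a vector, and the recommendation is whichever article concept is its nearest neighbor.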
Note that the AI agent automatically detects the language used in an email by running language prediction on the combined subject and description. This may cause suggestions to appear in a language that doesn't match the one set in the end user's profile.
How are articles selected for recommendations?
When an incoming question closely matches an existing article, the two become “neighbors” on the map (as described above), and it’s clear that the AI agent should recommend the article. However, when the closest match is a few streets over, or in a nearby neighborhood, it is less certain that the concepts are related.
The data science team at Zendesk monitors this carefully and has fine-tuned it over time by adjusting a “threshold knob.” This threshold is not adjustable by admins or agents; it’s accessible only to the Zendesk development teams. The threshold knob is a global control, meaning it affects all accounts. It determines how close two concepts must be on the concept map to be considered similar.
When the threshold knob is turned up, the AI agent becomes more conservative: it recommends fewer articles, but the recommendations are more likely to be relevant to the question. However, this also means more questions will go without any recommended articles or help center content. When the threshold knob is turned down, more content is presented, but it's less likely to be relevant to the end user.
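The trade-off the threshold controls can be sketched as a simple similarity cutoff. This is an illustration only, under the assumption that each candidate article carries a similarity score; the article titles and scores below are invented.

```python
# Candidate articles paired with invented similarity scores (0 to 1),
# standing in for "how close the concepts are on the map".
candidates = [
    ("Exporting Data", 0.92),
    ("Managing Views", 0.71),
    ("Folding Origami Birds", 0.40),
]

def recommend(scored_articles, threshold):
    """Keep only articles whose similarity clears the global threshold."""
    return [title for title, score in scored_articles if score >= threshold]

# Knob turned up: fewer, more relevant recommendations.
print(recommend(candidates, threshold=0.9))  # ['Exporting Data']

# Knob turned down: more content, but less likely to be relevant.
print(recommend(candidates, threshold=0.3))  # all three titles
```

Because the knob is global, a single threshold value applies to every account rather than being tuned per help center.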
Common misconceptions
There are some common misconceptions that can lead to confusion. In this section, we’ll address these misconceptions and hopefully clear some things up.
- Does the AI agent learn based on end-user feedback? Isn’t that where the machine learning comes in?
- Is AI-powered search always better than a keyword search?
- Can I “train” the AI agent by asking the same question over and over again, and responding with “Yes” or “No” to mark an article as relevant or irrelevant?
- If I add labels to my articles, is that like adding a keyword to the article? Can this be done to boost how often an article is suggested?
- If I can’t use the “improve answers” button to improve performance, how can I improve performance?
Does the AI agent learn based on end-user feedback? Isn’t that where the machine learning comes in?
Although it's powered by a machine learning model, the AI agent is not constantly learning. The model does not incorporate feedback from end users or agents in real time, so that feedback has no influence on which articles are recommended.
The end user feedback is captured and used in a number of ways:
- It is displayed to agents to provide additional context on what articles were viewed, marked as “not helpful,” or used to resolve a case
- It is exposed in reporting for admins to track performance
- It is evaluated by the data science team at Zendesk
If you see that incorrect articles are repeatedly being recommended, the best thing to do is modify the titles and the first 75 words of the articles to make the main concept clearer. You can also use labels to create a list of articles to draw from, so that suggestions come from a subset of articles.
Is AI-powered search always better than a keyword search?
Overall, AI-powered article recommendations are more accurate and relevant than a keyword search, especially when the question is asked as a full sentence (instead of one to three words).
Can I “train” the AI agent by asking the same question over and over again, and responding with “Yes” or “No” to mark an article as relevant or irrelevant?
No. The AI agent will consistently recommend the same articles regardless of any feedback from agents or end users. It is specifically built so it doesn’t require any training to get started; it’s already pre-trained to understand natural language. If you test out a phrase or question and the wrong articles are recommended, the best thing to do is modify the titles and the first 75 words of the articles to make the main concept clearer.
If I add labels to my articles, is that like adding a keyword to the article? Can this be done to boost how often an article is suggested?
Labels are a great way to create a list of approved articles to pull from. However, labels do not influence the weight given to each article. See Best Practices: Using labels to optimize your article recommendations.
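In other words, labels act as an allow-list filter rather than a ranking boost: unlabeled articles are excluded from consideration, but the labeled articles keep their original similarity ordering. A minimal sketch of that distinction, with invented titles, labels, and scores:

```python
# Invented candidates: each has a similarity score and a set of labels.
articles = [
    {"title": "Internal Runbook", "labels": set(), "score": 0.95},
    {"title": "Exporting Data", "labels": {"ai-agent"}, "score": 0.92},
    {"title": "Password Reset", "labels": {"ai-agent"}, "score": 0.71},
]

def recommendable(articles, required_label):
    """Labels filter the candidate pool; they never change the weights."""
    eligible = [a for a in articles if required_label in a["labels"]]
    return sorted(eligible, key=lambda a: a["score"], reverse=True)

# The highest-scoring article is skipped because it lacks the label,
# but the remaining articles are still ranked purely by similarity:
print([a["title"] for a in recommendable(articles, "ai-agent")])
# ['Exporting Data', 'Password Reset']
```

This is why adding a label can stop an article from being suggested in the wrong place, but cannot make it rank higher than a better-matching labeled article.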
If I can’t use the “improve answers” button to improve performance, how can I improve performance?
The best way to improve AI agent performance is to consider the following:
- Monitor your autoreply with articles activity: Use Explore to see which articles are your best- and worst-performing.
- Consider the structure of existing articles: Look at your help center articles and make sure that the content is concise and well organized. Each title should be phrased as a short sentence or a question.
- Use Content Cues: Use machine learning technology and article usage data to help you discover opportunities and tasks that will improve the health of your knowledge base.