Generative AI is shaking up the world of customer support. Here’s how to ensure your customer service help center is set up for success to help you see the most value from this game-changing tech.
This article contains the following topics:
- Generative AI in a customer service setting
- What happens when a knowledge source is fed to a generative AI agent
- Tips for formatting your help center from our in-house AI researchers
- Help center architecture best practices for generative AI
- Formatting your articles for optimal clarity
Related articles:
- Optimizing your help center content for AI agents
- 5 strategies for building up your help center content for AI
Generative AI in a customer service setting
Large language models (LLMs) and generative AI have taken the world by storm. The natural, conversational experience these technologies provide has truly raised the bar. And having seen how human-like AI-powered interactions can be, customers now expect the same in customer support settings.
One of the most innovative use cases for generative AI in customer support is instantly pulling information from a knowledge source like your help center. Plug this tech into your help center and you’ll have more accurate, human-like support conversations in minutes. By connecting an LLM to your help center or FAQ page, you can instantly serve the most up-to-date support information to your customers, no training required.
But to see the most value from generative AI agents, it’s essential that the data the LLM has access to is presented as concisely and coherently as possible. To help you get your customer service help center ready for generative AI, there are some best practices to follow.
Let's start with what actually happens when you feed generative AI a knowledge source, then cover help center architecture best practices, and finally drill down into more detailed formatting tips.
What happens when a knowledge source is fed to a generative AI agent
When you feed your knowledge source into a generative AI agent, the text of the knowledge source is intelligently broken down into “chunks” of text. This is the first step in a framework called Retrieval-Augmented Generation (RAG). RAG enables your LLM to access information beyond its original training data, such as your carefully crafted help center articles.
These chunks are then stored in a database that’s organized by semantic meaning. When a user message is sent to the AI agent, the meaning of that message is compared with the meaning of the chunks in the database to surface the best match. That information is then used by your AI agent (in accordance with its instructions, persona and tone of voice, and safety guardrails) to answer the user’s message.
Here’s how generative AI uses your help center to answer customer questions in more detail:
- Breaking it down: Initially, your help center is imported and segmented into what we call "chunks." These chunks vary in size, tailored to capture both the length and the intrinsic meaning of your content.
- Understanding through numbers: Each chunk then receives its unique numerical signature—a vector representing the semantic meaning of the chunk. Essentially, it’s translating your text into a mathematical language that the AI agent can understand and store efficiently in a vector database.
- Matching wits: When a user poses a question to your AI agent, the system compares the semantic meaning of the question with these stored vectors to find the best match. This process ensures that the most relevant chunks are retrieved to provide precise and informed answers.
- Et voilà: Finally, your AI agent uses the retrieved information to answer the user’s query, in accordance with its instructions, persona and tone of voice, and safety guardrails.
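The four steps above can be sketched in a few lines of code. This is a deliberately simplified illustration: it uses a toy bag-of-words vector in place of a learned embedding model and an in-memory list in place of a vector database, and the chunks and question are made-up examples. Real RAG systems differ in scale and sophistication, but the retrieve-by-meaning logic is the same.

```python
# Toy sketch of RAG retrieval: embed chunks, embed the question,
# return the closest match by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector (stand-in for a real model)."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Steps 1-2: break the help center into chunks and store their vectors.
chunks = [
    "How do I reset my password? Click 'Forgot password' on the login page.",
    "How do I update my billing details? Go to Settings, then Billing.",
    "Our support hours are 9am to 5pm, Monday to Friday.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# Step 3: compare the user's question against every stored vector.
question = "How can I reset my password?"
q_vec = embed(question)
best_chunk, _ = max(index, key=lambda item: cosine_similarity(q_vec, item[1]))

# Step 4: the best-matching chunk is what gets handed to the LLM,
# which drafts the reply according to its instructions and guardrails.
print(best_chunk)
```

Note that the retrieved chunk, not the whole help center, is what the LLM sees when answering, which is why well-formed, self-contained chunks matter so much.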
By understanding the main point here—that chunks are the basis of your AI agent's replies—you can better prepare your help center to be more compatible and effective when integrated into your generative AI agent.
And while LLMs and RAG are at the forefront of today's technological advancements, capturing well-deserved attention with their innovative capabilities, we don’t have Artificial General Intelligence (that is, AI that can carry out all tasks that a human can) just yet. So as you integrate generative AI into your workflow, remember that it draws its insights entirely from the text chunks created from your connected knowledge sources rather than browsing or carrying out research in the background.
Tips for formatting your help center from our in-house AI researchers
Before we get into general best practices, here are our AI researchers’ top two tips for prepping your help center:
- Make sure each article directly answers a customer question. Not only will this help the LLM perform better, it’ll also make life easier for your human users when they search your help center for an answer.
- Align questions and topics with their solutions. If the question (e.g., "How do I XYZ?") or topic (e.g., "Steps to perform XYZ") appears only in the title, it might not always remain attached to its corresponding answers or instructions during the chunking process. So to keep the context clear, it’s a smart move to repeat the question or the key statement near the steps or information in the body of the article. This helps ensure that each chunk of the article is comprehensive and remains useful on its own.
This practice not only keeps the question linked to its answer within the chunk, but also improves the likelihood that the retrieval system will present complete and contextually accurate responses.
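To see why a question can drift away from its answer, consider what a simple chunker does to a long article. The snippet below uses a deliberately naive fixed-size splitter and a made-up article; real chunkers are smarter, but the same risk applies whenever a long stretch of background text sits between a title and its steps.

```python
# Illustration: with enough text between the question and the steps,
# the chunk containing the steps no longer mentions the question.
article = (
    "How do I export my data?\n"
    "Some background about data exports and why you might want one, "
    "including notes on formats, limits, and retention policies.\n"
    "1. Open Settings. 2. Choose Export. 3. Click Download."
)

def chunk(text, size=120):
    """Naive fixed-size chunking (for illustration only)."""
    return [text[i:i + size] for i in range(0, len(text), size)]

chunks = chunk(article)

# The chunk holding the numbered steps has lost the question, so on
# its own it's ambiguous about which problem it solves.
steps_chunk = next(c for c in chunks if "Open Settings" in c)
print("question present?", "export my data" in steps_chunk.lower())
```

Repeating the question (or a key statement) just above the steps means every chunk that contains the steps also carries the context needed to retrieve it.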
Help center architecture best practices for generative AI
If you're using a generative AI agent and you want it to function at its peak, it's crucial to refine the architecture of your help center. Here are some straightforward guidelines to get you started:
- Eliminate redundancies: Sift through your content and remove any duplicate or conflicting information. Remember, the accuracy of the AI agent hinges on the quality of the data it receives. Always prioritize the most recent and pertinent content.
- Depth and focus: Structure your articles to be hyper-focused. Each support topic should be thoroughly covered within your help center. Unlike humans, generative AI agents cannot browse external web pages or follow links to gather additional context. Hence, it's vital that all necessary information is self-contained.
- Label your content: Implement labels on your help center content. This becomes particularly beneficial when you want to tailor content visibility based on user attributes, like geographical location, using search rules.
- Opt for text: Ensure there is a text-only version of every article. Since generative AI interprets text and saves its meaning into a database, anything in your help center that’s not text (images, videos, or diagrams) will neither be read nor saved in the database.
Formatting your articles for optimal clarity
An organized structure can significantly enhance the accessibility and usability of your content. Remember, each chunk is stored and retrieved based on its meaning, so the clearer that is, the better.
- Use clear hierarchy: Use titles and subtitles effectively, and structure the content with action-oriented steps. As per the tip above, avoid separating topics or questions from their answers.
- Avoid nested instructions: If multiple solutions exist for a problem, present each as a separate instruction rather than sub-steps within a broader step. This clarity will aid both your users and the LLM in finding solutions quickly.
- Include introductions: Each article should begin with an introduction that outlines the relevance and the problems it aims to solve. This is beneficial for human users currently, and will become increasingly relevant for LLMs as their contextual understanding evolves.
- Keep it simple: Keep paragraphs short, focused on answering specific questions directly or explaining topics concisely. Likewise, sentences should be direct and to the point. This also aids in better translations.
- Structure lists: Use bullet points to list facts or tips, and number your steps when detailing a process. Worth noting: although text-based tables (actual text, not an image) can be read by generative AI agents, typically they're harder for an LLM to understand and assign meaning to than information laid out in normal sentences. So it’s better to stick to natural language everywhere you can.
- Clarify terminology: Always spell out terms in full, with the abbreviation in parentheses when first introduced. This ensures clarity for all users.
- Know your audience: If your help center (or part of it) was created for your own agents rather than as a customer-facing resource, consider whether the information needs to be rewritten in a way that’s appropriate for your AI agent to deliver to your customers.
As with implementing any new technology, a little bit of prep goes a long way. So get your customer service help center ready before plugging in a generative AI solution. You’ll reap the rewards: faster time-to-value and more accurate automated support.
And finally, it's important to remember that a generative AI agent isn't a silver bullet that can solve all of your support issues. Instead, this technology should be used as part of a broader, well-planned CX strategy: where generative AI, situation-based automation, and (of course) human agents all play to their strengths and work together to deliver the best experience for your customers.