Creating, Flagging, and Publishing with Knowledge Capture, Oh My!
I wanted to share how our team is doing Knowledge Sharing today. We have a lot going on in our knowledge process, so I'm not going to list out specific steps on how to build this, but I'll be giving a high level overview of most of our process. Some of the steps may make a lot of sense for you and your teams, others might not. However, I hope that this helps some of you improve your knowledge experience.
We've had variations of this since before it was even a concept being looked at within Zendesk. Our team worked closely with our account rep to introduce them to Knowledge Centered Support (KCS) methodologies, and we heard that many teams at Zendesk completed this training themselves and started their own knowledge journeys. Over the following years there have been many versions of this app.
We are just getting to a place where we can use the features in Zendesk, but the tips should still apply to your teams with the newest release from the Knowledge team.
We've also done all kinds of crazy stuff like writing articles in the ticket description and using the Ticket to Help Center app to create content. Don't do that, it was rough. We've also had dedicated sections for parts of our workflow to hold articles (this even included archived articles before we could actually archive articles) and we've kept some of this around.
Our workflows require some setup in Zendesk to support them.
- We have a Category for all new articles to pass through. This category has two sections: Unconfirmed and In Review. Unconfirmed can be managed by all agents. End users cannot see these articles. In Review and all other sections can only be managed by Guide managers. Visibility is determined by user segments. In Review is still limited to agent eyes only.
- We have two templates for Question and Answer format. One is for internal articles, the other is for external articles.
- We use the Zendesk features for knowledge sharing.
- We have a customized Web Widget used solely for article feedback on our articles.
- We have a Form called Knowledge Article.
- We have a role that allows some of our agents to Manage Guide.
- We have a group that includes our knowledge managers.
- We have Triggers in place that automatically set web widget and Knowledge Capture tickets to the Knowledge Article Form and assign them to our knowledge group.
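The routing trigger from the last bullet could be expressed as a payload for the Zendesk Triggers API (POST /api/v2/triggers.json). This is a hypothetical sketch, not our exact trigger: the tag names, form ID, and group ID below are placeholders, and it assumes web widget and Knowledge Capture tickets arrive with identifying tags.

```python
# Hypothetical sketch of the routing trigger described above, as a payload
# for the Zendesk Triggers API. Tag names and IDs are placeholders; substitute
# the values from your own instance.

def build_knowledge_routing_trigger(form_id, group_id):
    """Route web widget and Knowledge Capture tickets to the knowledge team."""
    return {
        "trigger": {
            "title": "Route knowledge tickets to Knowledge group",
            "conditions": {
                "all": [
                    # Assumes these tickets arrive tagged; with the "includes"
                    # operator, matching any listed tag satisfies the condition.
                    {"field": "current_tags", "operator": "includes",
                     "value": "knowledge_capture article_feedback"},
                ],
                "any": [],
            },
            "actions": [
                {"field": "ticket_form_id", "value": str(form_id)},
                {"field": "group_id", "value": str(group_id)},
            ],
        }
    }

payload = build_knowledge_routing_trigger(form_id=123456, group_id=654321)
```

The payload would then be POSTed with an authenticated request; building it in one place keeps the trigger definition reviewable alongside the rest of the setup.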
Creating Article Workflow
Most articles are created from Knowledge. This is done using one of our templates. Agents can only create articles in Unconfirmed. We create all articles as internal, published articles. We don’t use drafts because we want our articles to be constantly iterated on through our flagging process.
We support “good enough” article creation. If an article doesn’t exist, we’d rather an article exist than be perfect. An example article might be:
How do I solve this problem?
Ask Daniel for help.
It’s not the solution, but it is a solution to get the agent on the right step.
Only our Knowledge team can create articles in sections outside of Unconfirmed. If an article needs to be pushed into the Help Center faster, an agent can create a Knowledge Article ticket to request the article is updated and published in a certain category/section.
Flagging Article Workflow
We promote flagging a lot. If an article needs improvement we put the ownership on the entire team to help improve knowledge. Agents can add comments to an article and a ticket is created to update the article. If an article is in Unconfirmed, the agent can update the article directly.
Each flagged article creates a ticket. This ticket is routed to our Knowledge team, who review each article for duplicates, style guide, template compliance, etc. They apply the changes indicated by the flag. If the article still seems incomplete, they may elect to leave it until another flag comes in, or, if they know the answer, they can resolve it themselves.
The reason we don’t push for 100% perfection is that some articles aren’t used that often. We want to spend our energy on content that is in use, so we make updates based on current feedback and only on feedback. This works for us since most of our articles are internal facing only.
In addition, we have the web widget embedded into Zendesk. It is hidden by default, but it shows up when you click a link at the bottom of each article that asks for Article Feedback. This lets us capture the URL of the article when the web widget ticket is created and allows non-agents to provide feedback on articles. In these cases, we’ll make the article as perfect as we can since it’s externally facing. Web widget tickets are routed via trigger to the right form and group for knowledge updates.
Sometimes newly created articles need to move quickly from Unconfirmed into a more prominent section. We allow our team to submit Knowledge Article tickets to request this. In addition, we have workflows in place to review our Unconfirmed sections for the most viewed articles. If they are being used, we clean them up and move them into the greater knowledge base. If an article has too few views, we don’t put the effort in.
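The Unconfirmed review above boils down to a simple triage rule: promote what's being used, leave the rest. A minimal sketch, assuming a hypothetical list of articles with view counts (Zendesk doesn't hand you this exact structure, and the threshold is an arbitrary stand-in):

```python
# Illustrative triage of the Unconfirmed section: promote articles with
# enough views, leave the rest until feedback arrives. The data shape and
# threshold are assumptions for the sketch, not part of the real workflow.

VIEW_THRESHOLD = 50  # assumed cut-off; tune to your own traffic

def triage_unconfirmed(articles, threshold=VIEW_THRESHOLD):
    """Split Unconfirmed articles into (promote, leave) lists by view count."""
    promote = [a for a in articles if a["views"] >= threshold]
    leave = [a for a in articles if a["views"] < threshold]
    return promote, leave

articles = [
    {"title": "Reset a password", "views": 240},
    {"title": "Obscure edge case", "views": 3},
]
promote, leave = triage_unconfirmed(articles)
```

The point of keeping the rule this dumb is deliberate: effort follows usage, so low-view articles never consume cleanup time.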
If an article needs additional review from a subject matter expert, it may sit in the In Review section for a while. This section is internal, but it is locked down for edits while it is primed for release to the broader Help Center. Subject Matter Experts don't have to be on the Knowledge team, they just have to know if the information in the article is correct and valid.
For external content, we have a style guide on how articles should look. In this case, we check all the boxes and make sure articles look their best before publishing them.
Linking and Reporting
This is the area where we are looking to gain the most.
We look at link rate, flag rate, and create rate to understand our knowledge usage. We also started doing some tracking with Link Accuracy. This isn’t automatic in Zendesk, but we are pulling a few random tickets out to validate that the knowledge process was followed appropriately. Based on these random samples, we’ll give a score for Link Accuracy to our agents.
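Since Link Accuracy isn't automatic in Zendesk, the sampling-and-scoring step could be sketched like this. This is a hedged illustration, not our actual tooling: the sample size, the `linked_correctly` field, and the 0–1 score scale are all assumptions.

```python
# A minimal sketch of Link Accuracy sampling: pull a random sample of an
# agent's tickets, mark each as having followed the knowledge process or not,
# and compute a per-agent score. Field names and sample size are hypothetical.
import random

def link_accuracy_score(tickets, sample_size=10, seed=None):
    """Return the share of sampled tickets where the knowledge process was followed."""
    rng = random.Random(seed)
    sample = rng.sample(tickets, min(sample_size, len(tickets)))
    correct = sum(1 for t in sample if t["linked_correctly"])
    return correct / len(sample)

tickets = [{"id": i, "linked_correctly": i % 4 != 0} for i in range(40)]
score = link_accuracy_score(tickets, sample_size=8, seed=1)
```

Fixing the seed makes a given review reproducible, which matters if an agent wants to see exactly which tickets fed their score.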
We also have a service level set on our article tickets: a Pausable Update target of 3 days. This promotes active improvement on our articles over time and allows us to pause the service level clock if we just can’t progress an article ticket. Our knowledge team has these interwoven into their standard tickets.
One thing that is lacking is view counts. Link metrics get us by a bit with these because most of our articles are used by agents, however, we do have a limited view into our customer viewing habits. Google Analytics is a solution, but I’d rather see a baked in ability to see views so I could rank which articles are worth working on.
A lot of our processes are about not spending time on articles with low usage. This is important for my team because we are much smaller than we used to be. Adding knowledge workflows on top of normal work can add a lot of overhead. Our message is that knowledge isn’t in addition to our work, but a core part of how we work.
Daniel, you are amazing! Thanks so much for sharing this.
I'm sure it'll be helpful to other users who are setting up the app.
Daniel, what about detaching articles? Or how do you proceed in this scenario:
1. The agent finds a symptom, but there is no article for it, so they create a new one.
2. Later, the agent finds an additional symptom and an existing article, which gets linked to the ticket.
How do you proceed in such cases?
We link multiple articles as needed and don’t fret the incorrect links on most tickets. In many cases those articles were used by our agent, even if it wasn’t the “true” answer.
We recently launched a link accuracy initiative to review a random sample of tickets for each agent, and we build a score per agent from that sample. This is mostly manual, but we feel it’s important to do the process right; while we can’t check every ticket, we can sample and get an idea of whether we need to train more. We are literally at our first review stage for this, so we haven’t scored the first batch, but our goal is to identify opportunities and coach towards reducing the gap of inaccurate links.
This is really helpful, Daniel. Thank you!
Maybe you'd like to share your workflows as well, Zac!? :)
Just some part of how QA works in our case:
We follow RQI principles to evaluate agents' KCS activity, which includes Link Accuracy and Participation; we also check articles against the style guide and content standard. We use the PlayVox module for that, plus tags to distinguish and review tickets with new/known articles and tickets without articles. I can describe it in detail if someone needs it.
To Daniel: could you please describe reporting in more detail — report examples, how you analyze them, and how it helps you? That's the most interesting part, I believe :)
Our reporting is really simple now. Our biggest challenge has been with adoption. We've gone through probably 3-4 relaunches of KCS in Zendesk. Early on our challenges were in the technology. We had very complex processes to get knowledge created and we tried too hard to be fully KCS compliant. We learned that it's okay to do some of it and to move really slowly to get it right. It helped our team get more involved and didn't push expectations onto the team too fast like we did in earlier attempts. The Knowledge Capture app also makes it so much easier in Zendesk to manage our workflows. It's not perfect, but it's much better than what we had a few years back when we were really trying to hack the system to work for us.
As I said before, we look at link rate, flag rate, and create rate. I also have service levels for Knowledge Articles that I watch, but I look at that as an equal to our other tickets so I roll that into an overall SLA score. Link Accuracy is next for us. Aside from Link Accuracy, the new reporting with the KC app appears to be influenced by some of our feedback on what we look at, so for my team it has the metrics we were looking at from the agent level.
We also look at article usage: which articles have the most links, or which section has the most links. This is good information to help drive where to focus attention for proactive work. However, we are still new in this space with our workflows and do have some opportunities here. I'd love a built-in view metric to help us out.
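The usage rollup described above amounts to counting link events per article or section. A small sketch under assumed field names (Zendesk's actual export shape will differ):

```python
# Hedged sketch of the article-usage rollup: count ticket links per Help
# Center section to see where proactive attention should go. The event
# structure here is hypothetical, not a real Zendesk export format.
from collections import Counter

def links_per_section(link_events):
    """Count how many ticket links each section's articles received."""
    return Counter(e["section"] for e in link_events)

events = [
    {"article": "Reset a password", "section": "Billing"},
    {"article": "Update a card", "section": "Billing"},
    {"article": "Install the agent", "section": "Setup"},
]
counts = links_per_section(events)
```

`Counter.most_common()` then gives a ready-made ranking of where links concentrate, which stands in for the missing view metric when most readers are agents.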
In the future we may look at going beyond link accuracy and digging into an Article Quality Index which should include content standard, style guide, etc. However, we are okay going slow for now. It helps that most articles are internal to us. If we had a huge need for external content, we'd likely need to move faster on improving workflows and grading in those spaces. I don't know that I'd ever expect reporting direct from Zendesk at this level, once you get into quality the variations are enormous and no one does it the same. However, I could see workflows where "quality" tickets are created with links to an article or ticket to review. Then we could pull those tickets and the fields within for a quality score.
Thank you Daniel. This is very insightful.
As a fellow KCS advocate, I thank you for helping forge a way ahead for the methodology in Zendesk. We're in the early phases of an Adoption (about to launch our Wave 1 team) and clearly have benefitted from your expert feedback for the Knowledge Capture App.
Hopefully, we'll see Guide KCSv6 certified one day. We're clearly headed in the right direction as a community.
Glad it’s helping, Ryan! My team is a customer of our Knowledge Management team within our own organization, so we work closely with them to refine our process. We have other knowledge bases internally for different teams; my team is the primary driver for the methodology in Zendesk, and we’ve had to learn a lot. We are also a small team, which can make some of the adoption harder because we don’t have the advantage of scale. Even having 10 people that really WANT to do KCS in a large organization can be huge. For us, 10 people is what we have to work with, so adoption and doing things right this time is really important for us.
Nice post! Thanks for sharing your implementation details, Daniel.
Great insight Daniel, thank you for sharing!
Some great insight, but this article does not indicate how to set up flagging for new articles. I've been able to successfully test ticket generation for feedback on existing articles, but not for new articles altogether.
How can I set this up so a new article generated using the KC app generates a ticket, and saves the article as a draft until an admin can review and publish?
If you'd like to have your Agents suggest new articles be created, we would recommend creating an article template for that. Within that article you can list the information you're asking those Agents to provide as the foundation for those articles, which your team can fill out in the same manner as they would when flagging an existing article for review.
Your team can then route the flags from that Request Article Creation template to the appropriate team to have them work on creating those initially.
I've switched careers since I originally posted and am not using Knowledge Capture in my new role (yet). However, if I'm understanding you correctly you are looking for a ticket to be created on new article creation from the app. This was a feature that I would have loved as well, and when I inquired about it a few months back I was told that it wasn't on the roadmap. I suspect there are other goals to add article workflow components into Guide which in the long run could be better than a ticket based system which I can understand. I was part of an earlier beta of the Knowledge Capture app that did have the ticket functionality for new articles and can attest that it was very useful for our workflows. If Zendesk can pull off a full article workflow in Guide, I'm all for them taking the steps to do so.
In the short term, I think you may be able to implement a workflow with your admin to review the Manage Articles section of Guide for new draft content and review/publish regularly.
I'm also intrigued by James's idea. It may be worth looking into flagging the template for a new article. That is something I probably would have sunk some time into assessing if I were still working in an instance that had Knowledge Capture.
Thanks for the great information, Geoff.
Can someone explain the steps for flagging an article or point me to an article that has the steps? My searches have brought up articles that talk about flagging, but so far I have not found one that explains exactly how to flag.
Here's the article with steps:
Let us know if you need anything else.