- The search crawler lets you implement federated search without developer resources. You can set up multiple crawlers in your help center to crawl and index different content in the same or different websites.
- The Federated Search API is available if you have developer resources. This REST API lets you ingest records of your external content into the Zendesk search indexes to implement federated search in your help center.
After you set up federated search, you need to configure the content that you want to include in help center search results (see Including external content in your help center search results).
Setting up the search crawler for federated search for your help center
The search crawler lets you implement federated search in your help center without developer resources. You can set up multiple crawlers in your help center to crawl and index different content in the same or different websites.
To set up the search crawler for federated search
- In Guide, click the Settings icon in the sidebar, then click Search settings.
- Under Crawlers, click Manage.
- Click Add Crawler.
- In Name this crawler, enter the following:
- Name that you want to assign to the crawler. This is an internal name that identifies your search crawler on the crawler management list.
- Owner who is the Guide admin responsible for crawler maintenance and troubleshooting. By default, the crawler owner is the user who creates the crawler; however, you can change this to any Guide admin.
Crawler owners receive email notifications both when the crawler runs successfully and when errors occur, such as problems with domain verification, sitemap processing, or page crawling.
- In Add the website you want to crawl, verify ownership of your domain by configuring the following:
- Website URL - Enter the URL of the website that you want to crawl.
- Domain ownership verification - Click Copy to copy the HTML tag to your clipboard, then paste the tag into the <head> section of the HTML code on your site's non-authenticated home page. You can do this after you complete the crawler setup, and you can always find the verification tag on the edit crawler page. See Managing search crawlers.
Note: Do not remove the tag after it is in place, as the crawler must successfully verify the domain each time it runs.
- In Add a sitemap, in Sitemap URL, enter the URL for the sitemap you want the crawler to use when crawling your site.
The sitemap must follow the sitemaps XML protocol and list the pages within the site that you want to crawl. It can be the site's standard sitemap containing all of its pages, or a dedicated sitemap that lists only the pages you want crawled (see the sketch after this procedure). All sitemaps must be hosted on the domain that the crawler is configured to crawl. The search crawler does not support sitemap indexes.
You can set up multiple crawlers on the same site, each using a different sitemap that defines the pages you want the search crawler to crawl.
- In Add filters to help people find this content, configure the source and type filters that your end users can use to filter search results. Source refers to the origin of the external content, such as a forum, issue tracker, or learning management system. Type refers to the kind of content, such as a blog post, tech note, or bug report.
- Source - Click the arrow, then select a source from the list or select + Create new source to add a name that describes where this content lives.
- Type - Click the arrow, then select a type from the list or select + Create new type to add a name that describes what kind of content this is.
- Click Finish.
The search crawler is created and pending. Within 24 hours, the crawler verifies ownership of the domain and then fetches and parses the specified sitemap. After the sitemap is processed successfully, the crawler begins crawling the listed pages and indexing their content. If the crawler fails either during domain verification or while processing the sitemap, the crawler owner receives an email notification with troubleshooting tips to help resolve the issue, and the crawler tries again in 24 hours.
Note: Zendesk/External-Content is the user agent for the search crawler. To prevent the crawler from failing because a firewall blocks its requests, add Zendesk/External-Content to your allowlist (whitelist).
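For reference, a dedicated sitemap is a plain XML file that follows the sitemaps protocol. The sketch below generates one with Python's standard library; the example.com page URLs and the sitemap.xml output filename are hypothetical placeholders for pages on your own domain.

```python
# Minimal sketch: generate a dedicated sitemap that follows the sitemaps XML
# protocol (https://www.sitemaps.org/protocol.html). The URLs below are
# hypothetical placeholders; list only the pages you want the crawler to index,
# and host the resulting file on the same domain the crawler is configured to crawl.
import xml.etree.ElementTree as ET

PAGES = [
    "https://www.example.com/docs/getting-started",
    "https://www.example.com/docs/troubleshooting",
    "https://www.example.com/blog/release-notes",
]

NAMESPACE = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(urls):
    """Return a <urlset> element containing one <url><loc> entry per page."""
    urlset = ET.Element("urlset", xmlns=NAMESPACE)
    for url in urls:
        url_el = ET.SubElement(urlset, "url")
        ET.SubElement(url_el, "loc").text = url
    return urlset

if __name__ == "__main__":
    tree = ET.ElementTree(build_sitemap(PAGES))
    # Writes sitemap.xml with an XML declaration, ready to upload to your site.
    tree.write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```

The resulting file contains a urlset element with one url/loc entry per page, and it must be hosted on the same domain that the crawler is configured to crawl.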
After you set up the search crawler, you need to select the content that you want to include in or exclude from help center search results. See Including external content in your help center search results.
You can also include external content in search results in the knowledge section of the context panel for agents. See Configuring Knowledge in the context panel.
Using the API to configure federated search for your help center
Your developers can set up federated search in your help center using the Federated Search API. This method requires that your developers build and maintain a middleware layer that integrates the service or site hosting the external content with your help center.
- Build your own integration with the Zendesk REST API, then ingest the content you want to appear in your search results, as in the sketch below. See the Federated Search API reference documentation.
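As an illustration, the following is a minimal sketch of a middleware job that pushes a single record of external content into the Zendesk search index. It assumes the External Content Records endpoint and payload shape described in the Federated Search API reference (verify the exact paths and attributes there); the subdomain, credentials, IDs, and record values are hypothetical placeholders.

```python
# Minimal sketch of a middleware job that ingests one external content record.
# The endpoint path and payload fields follow the Federated Search API reference
# (verify them there before relying on this sketch); the subdomain, API token,
# and record values are hypothetical placeholders.
import requests

ZENDESK_SUBDOMAIN = "yoursubdomain"      # placeholder
ADMIN_EMAIL = "admin@example.com"        # placeholder
API_TOKEN = "your-api-token"             # placeholder

BASE_URL = f"https://{ZENDESK_SUBDOMAIN}.zendesk.com/api/v2/guide/external_content"
AUTH = (f"{ADMIN_EMAIL}/token", API_TOKEN)   # API token authentication

def ingest_record(title, body, url, source_id, type_id, locale="en-us"):
    """Create one external content record so it can appear in help center search."""
    payload = {
        "record": {
            "title": title,
            "body": body,            # text used for search matching
            "url": url,              # link shown to users in search results
            "source_id": source_id,  # ID of a source created via the API
            "type_id": type_id,      # ID of a type created via the API
            "locale": locale,
        }
    }
    response = requests.post(f"{BASE_URL}/records", json=payload, auth=AUTH)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # The source and type IDs are placeholders; create them first with the
    # sources and types endpoints of the Federated Search API.
    ingest_record(
        title="How to reset your router",
        body="Step-by-step instructions for resetting the router to factory settings.",
        url="https://support.example.com/articles/reset-router",
        source_id="01EXAMPLESOURCEID",
        type_id="01EXAMPLETYPEID",
    )
```

In practice, the middleware would typically run on a schedule, creating, updating, and deleting records as the content in the external system changes.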
After you set up federated search, you need to select the content that you want to include in or exclude from help center search results. See Including external content in your help center search results.
You can also include external content in search results in the knowledge section of the context panel for agents. See Configuring Knowledge in the context panel.