20:39 UTC | 13:39 PT
We saw a 10-minute period of intermittent availability and search issues on Pod 14. The services have now recovered.
An unexpected spike in search queries of increasing length resulted in very expensive Elasticsearch operations. This caused high load on some of the Elasticsearch data nodes, which in turn produced a spike in service latency. To prevent this from happening again, we are adding additional capacity and data nodes to Pod 14.
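Alongside adding capacity, a common application-side guard against this class of problem is to cap query length before the request ever reaches Elasticsearch. The sketch below is illustrative only, not a description of Zendesk's actual mitigation; the limit value is a hypothetical assumption to be tuned per workload:

```python
MAX_QUERY_CHARS = 256  # hypothetical cap; tune to your own workload


def sanitize_query(raw_query: str, max_chars: int = MAX_QUERY_CHARS) -> str:
    """Trim whitespace and cap query length so a single pathological
    search cannot become an arbitrarily expensive operation on the
    Elasticsearch data nodes."""
    query = raw_query.strip()
    if len(query) > max_chars:
        # Truncate rather than reject, so the user still gets results
        # for the leading portion of their query.
        query = query[:max_chars]
    return query
```

Enforcing the cap in the application layer means the cluster never sees the oversized query at all, which is cheaper than relying on Elasticsearch-side circuit breakers to abort the work after it has started.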
FOR MORE INFORMATION
For current system status information about your Zendesk, check out our system status page. During an incident, you can also receive status updates by following @ZendeskOps on Twitter. The summary of our post-mortem investigation is usually posted here a few days after the incident has ended. If you have additional questions about this incident, please log a ticket with us.