14:40 UTC | 07:40 PT
We're happy to report that the performance issues on POD 14 have been fully resolved. Please reach out to us if you're still experiencing issues.
14:19 UTC | 07:19 PT
POD 14 service is currently stable after mitigation steps were applied. We continue to monitor closely while we implement a permanent fix.
14:00 UTC | 07:00 PT
We have identified the cause of the issues impacting POD 14 and are actively working on a fix.
13:31 UTC | 06:31 PT
We are investigating potential causes of the performance issues affecting POD 14; more to follow.
13:14 UTC | 06:14 PT
We are currently investigating reports of performance issues affecting POD 14; more to follow.
On June 25th, 2018 at 12:55 UTC, we discovered that the database master in one of the database clusters on Pod 14 was overwhelmed by slow-running queries, which resulted in increased latency in services that depend on the shared database. This was caused by an unusual pattern of endpoint usage that allowed part of the code to generate more expensive queries than necessary, overwhelming the master database. To mitigate the impact on the accounts located on Pod 14, our Engineering team killed the queries, which stabilized performance from our database monitoring perspective. From that point on, general performance kept improving until testing confirmed that optimal performance had been restored at 13:08 UTC. The All Clear, however, was only called at 14:37 UTC, once a fix had been deployed on all Pods to prevent this specific pattern of queries from impacting any Zendesk account in the future. The fix aims to improve load balancing across our master and slave databases and to reduce the weight of the queries.
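For readers curious what "killing the queries" can look like in practice, the sketch below shows one common way to find and terminate long-running read queries on a MySQL-style master. It is an illustration only, not the procedure our Engineering team used: the host name, credentials, and 30-second threshold are hypothetical placeholders, and it assumes the processlist is accessible through the pymysql driver.

```python
# Minimal sketch: locate and kill long-running SELECT queries on a database master.
# Hypothetical host, credentials, and threshold; assumes MySQL and the pymysql driver.
import pymysql

SLOW_THRESHOLD_SECONDS = 30  # hypothetical cutoff for "slow-running"

conn = pymysql.connect(
    host="db-master.example.internal",  # placeholder host
    user="ops",
    password="***",
    database="information_schema",
)
try:
    with conn.cursor() as cur:
        # PROCESSLIST lists every running thread with its elapsed time and statement.
        cur.execute(
            "SELECT id, time, info FROM processlist "
            "WHERE command = 'Query' AND time > %s",
            (SLOW_THRESHOLD_SECONDS,),
        )
        for thread_id, elapsed, statement in cur.fetchall():
            # Only terminate read queries; never kill writes mid-flight.
            if statement and statement.lstrip().upper().startswith("SELECT"):
                print(f"killing thread {thread_id} running for {elapsed}s")
                cur.execute(f"KILL {thread_id}")
finally:
    conn.close()
```

In a real mitigation this kind of step only buys time; the lasting fix described above is to route expensive reads away from the master and make the queries themselves cheaper.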
FOR MORE INFORMATION
For current system status information about your Zendesk, check out our system status page. During an incident, you can also receive status updates by following @ZendeskOps on Twitter. The summary of our post-mortem investigation is usually posted here a few days after the incident has ended. If you have additional questions about this incident, please log a ticket with us.