This Content Moderation report provides transparency to Zendesk service recipients about the content moderation activities Zendesk has engaged in during the prior calendar year (from January 1, 2025, through December 31, 2025), as required by the EU Digital Services Act.
If you have any questions regarding this report, or wish to request a copy of a prior year's report, please contact dsa@zendesk.com.
Description of Zendesk-initiated content moderation
Zendesk does not generally engage in proactive content moderation. As a service provider to other businesses, we allow and expect our Subscribers to engage in content moderation in line with the Zendesk User Content and Conduct Policy, and their own policies.
Zendesk uses automated tools to identify and filter certain content. These tools are used primarily to combat spam: emails sent to Zendesk customer accounts pass through email spam filters, and the resulting spam verdict determines whether an incoming email creates a ticket or is held in a review queue for the customer to review. In addition, comments on Help Center articles, as well as posts and comments in our Community product, are run through an automated spam filter to detect obvious spam. Zendesk also uses automated tools to identify and suspend fraudulent accounts based on indicators of fraud.
Zendesk does engage in reactive content moderation upon receiving notice from legal authorities, reports submitted by members of the public, or internal escalations. Our content moderation activities are intended to align with the Zendesk User Content and Conduct Policy and are conducted by a centralized team of abuse analysts within Zendesk tasked with addressing abuse on the service. This team receives general training upon hire, as well as periodic training on Digital Services Act compliance.
Content moderation primarily consists of terminating customer accounts, which effectively removes the content; it may also consist of removing only the offending content.
Zendesk-initiated content moderation by reason

  Reason           Number
  Spam/phishing    4,189
Detection methods used

  Detection method              Number
  AI detection tool             1,324
  Abuse analyst observations    2,865
Type of restriction applied

  Type of restriction                 Number
  Removal from the Zendesk Service    4,189
Appeals for redress from affected service recipients
Number of complaints received through internal complaint-handling systems