Current as of: January 1, 2026

About our Content Moderation Report

This Content Moderation Report provides transparency to Zendesk service recipients about the content moderation activities Zendesk engaged in during the prior calendar year (January 1, 2025, through December 31, 2025), as required by the EU Digital Services Act.

If you have any questions regarding this report, or wish to request a copy of a prior year's report, please contact dsa@zendesk.com.

This report includes the following sections:

  • Orders from EU Member State authorities
  • Notices submitted by individuals
  • Zendesk-initiated content moderation
  • Appeals for redress from affected service recipients

Orders from EU Member State authorities

Orders by type of alleged illegal content

Type of alleged illegal content | Number of orders
None | 0

Orders by Member State

Member State | Number of orders
None | 0

Median response time

Median time to confirm receipt of the order to the issuing authority | N/A
Median time to inform the issuing authority, or any other authority specified in the order, of the order's receipt and to give effect to the order | N/A

Notices submitted by individuals

Notices by type of alleged illegal content

Type of alleged illegal content | Number of notices
Intellectual Property (IP) | 1,256
Fraud | 4
Other illegal conduct | 1

Notices submitted by trusted flaggers

Notices submitted by trusted flaggers | 0

Basis for actions taken

Basis of action | Number
Actions taken based on violation of the User Content and Conduct Policy and the law | 822

Notices processed using automated means

Notices processed using automated means | 0

Median response time

Median time needed to take action on a notice | 1 day

Zendesk-initiated content moderation

Description of Zendesk-initiated content moderation

Zendesk does not generally engage in proactive content moderation. As a service provider to other businesses, we allow and expect our Subscribers to engage in content moderation in line with the Zendesk User Content and Conduct Policy, and their own policies.

Zendesk uses automated tools to identify and/or filter certain content. These tools are primarily used in the spam context: emails sent to Zendesk customer accounts are run through email spam filters, and the spam verdict determines whether an incoming email creates a ticket or is held in a queue for the customer to review (a simplified sketch of this routing appears below). In addition, comments made on Help Center articles, as well as posts and comments in our Community product, are run through an automated spam filter to detect obvious spam. Zendesk also leverages automated tools to identify and suspend fraudulent accounts by looking at indicators of fraud.
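The following Python sketch illustrates the verdict-based routing described above. It is illustrative only: the names (SpamVerdict, InboundEmail, classify, route_inbound_email) and the toy keyword filter are assumptions made for this example, not Zendesk's actual implementation, which relies on production-grade spam filtering.

    """Illustrative sketch of verdict-based email routing.

    NOT Zendesk's implementation: all names and the toy keyword
    filter below are assumptions made for illustration only.
    """

    from dataclasses import dataclass
    from enum import Enum


    class SpamVerdict(Enum):
        CLEAN = "clean"          # deliver normally
        SUSPECTED = "suspected"  # hold for human review
        BLOCKED = "blocked"      # obvious spam, reject outright


    @dataclass
    class InboundEmail:
        sender: str
        subject: str
        body: str


    def classify(email: InboundEmail) -> SpamVerdict:
        """Toy stand-in for a spam filter; real systems use trained models."""
        text = f"{email.subject} {email.body}".lower()
        if "win a free prize" in text:
            return SpamVerdict.BLOCKED
        if "unsubscribe" in text:
            return SpamVerdict.SUSPECTED
        return SpamVerdict.CLEAN


    def route_inbound_email(email: InboundEmail) -> str:
        """Turn the spam verdict into a routing decision, as described above."""
        verdict = classify(email)
        if verdict is SpamVerdict.CLEAN:
            return "create_ticket"    # message becomes a support ticket
        if verdict is SpamVerdict.SUSPECTED:
            return "hold_for_review"  # held in a queue the customer can review
        return "reject"               # never reaches the account


    if __name__ == "__main__":
        email = InboundEmail("a@example.com", "Win a free prize!", "Click here")
        print(route_inbound_email(email))  # -> "reject"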

Zendesk does engage in reactive content moderation upon receiving notice from legal authorities, through reports submitted by members of the public, or through internal escalations. Our content moderation activities are intended to align with the Zendesk User Content and Conduct Policy and are conducted by a centralized team of abuse analysts tasked with addressing abuse on the Zendesk platform. This team receives general training upon hire, as well as periodic training on Digital Services Act compliance.

Content moderation primarily consists of terminating customer accounts, which effectively removes the content, but may also be limited to removing the offending content itself.

Zendesk-initiated content moderation by reason

Reason | Number
Spam/phishing | 4,189

Detection methods used

Detection method | Number
AI detection tool | 1,324
Abuse analyst observations | 2,865

Type of restriction applied

Type of restriction applied | Number
Removal from the Zendesk Service | 4,189

Appeals for redress from affected service recipients

Number of complaints received through internal complaint-handling systems | 0
