Current as of: January 1, 2025
About our Content Moderation Report
This EU Content Moderation report provides transparency to Zendesk service recipients about the content moderation activities Zendesk engaged in during the prior calendar year (January 1, 2024 through December 31, 2024), as required by the EU Digital Services Act.
If you have any questions regarding this report, please contact dsa@zendesk.com.
This report includes the following sections:
- Orders from EU Member State authorities
- Notices submitted by individuals
- Zendesk-initiated content moderation
- Appeals for redress from affected service recipients
Orders from EU Member State authorities
Orders by type of alleged illegal content
Type of alleged illegal content | Number of orders
Fraud | 1
Orders by Member State
Member State | Number of orders
France | 1
Median response time
Median time to confirm receipt of the order to the authority issuing the order | 1 day |
Median time to inform the authority issuing the order, or any other authority specified in the order, of its receipt, and to give effect to the order | 1 day |
Notices submitted by individuals
Notices by type of alleged illegal content
Type of alleged illegal content | Number of notices
Intellectual Property (IP) | 332 |
Fraud | 9 |
Other illegal conduct | 2 |
Notices submitted by trusted flaggers
Notices submitted by trusted flaggers | 0 notices |
Basis for actions taken
Basis of Action | Number |
Actions taken based on violation of the Zendesk User Content and Conduct Policy and applicable law | 150
Notices processed using automated means
Notices processed using automated means | 0 |
Median response time
Median time needed to take action on a notice | 3.7 days
Zendesk-initiated content moderation
Description of Zendesk-initiated content moderation
Zendesk does not generally engage in proactive content moderation. As a service provider to other businesses, we allow and expect our Subscribers to engage in content moderation in line with the Zendesk User Content and Conduct Policy, and their own policies.
Zendesk uses automated tools to identify and filter certain content. These tools are primarily used to combat spam: emails sent to Zendesk customer accounts are run through email spam filters, and the resulting spam verdict determines whether an incoming email creates a ticket or is held in a queue for the customer to review. In addition, comments made on Help Center articles, as well as posts and comments in our Community product, are run through an automated spam filter to detect obvious spam. Zendesk also leverages automated tools to identify and suspend fraudulent accounts based on indicators of fraud.
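To illustrate the kind of gating described above, here is a minimal, hypothetical sketch of how a spam verdict can determine whether an inbound email becomes a ticket or is held for review. This is not Zendesk's actual implementation; all names and the toy scoring rule are invented for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    """Possible outcomes of the spam filter (hypothetical)."""
    CLEAN = "clean"
    SPAM = "spam"


@dataclass
class InboundEmail:
    sender: str
    subject: str
    body: str


def score_email(email: InboundEmail) -> Verdict:
    # Hypothetical stand-in for a real spam filter. A production filter
    # would weigh content, sender reputation, and behavioral signals;
    # this keyword check exists only to make the example runnable.
    if "wire transfer" in email.body.lower():
        return Verdict.SPAM
    return Verdict.CLEAN


def route_email(email: InboundEmail) -> str:
    # The spam verdict gates what happens next: clean mail creates a
    # ticket, anything else is held in a queue for customer review.
    if score_email(email) is Verdict.CLEAN:
        return "create_ticket"
    return "hold_for_review"


if __name__ == "__main__":
    msg = InboundEmail("user@example.com", "Broken widget", "My widget stopped working.")
    print(route_email(msg))  # -> create_ticket
```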
Zendesk does engage in reactive content moderation upon receiving notice from legal authorities, through reports submitted by members of the public, or through internal escalations. Our content moderation activities are designed to align with the Zendesk User Content and Conduct Policy and are conducted by a centralized team of abuse analysts within Zendesk tasked with addressing abuse on our services. This team receives general training upon hire, as well as periodic training on Digital Services Act compliance.
Content moderation primarily consists of terminating customer accounts, which effectively removes the content, but it may also consist of removing only the offending content.
Zendesk-initiated content moderation by reason
Reason | Number |
Spam/phishing | 803 |
Detection methods used
Detection methods | Number |
AI detection tool | 340 |
Abuse analyst observations | 463 |
Appeals for redress from affected service recipients
Number of complaints received through internal complaint-handling systems | 0 |