WeChat DSA Transparency Report 2025

Published on: 14 February 2025

This transparency report is published by Tencent International Service Europe B.V. (the “Company”) in relation to the offering of WeChat messaging and calling and WeChat Moments (collectively, the “Services”) to European Union (“EU”) users. This transparency report is responsive to the obligations under Article 15 of Regulation (EU) 2022/2065 (the Digital Services Act, or “DSA”) and contains information regarding content moderation engaged in by the Company during the period from 17 February 2024 to 31 December 2024 (the “Relevant Period”).

1. Introduction

1.1. The Company offers the Services to EU users to encourage sharing, expression and communication between them. At the same time, we are committed to protecting the safety and privacy of our users and to this end, we perform content moderation on the Services in accordance with the WeChat Community Guidelines.

2. Content moderation engaged in by the Company

Content moderation policies

2.1. The WeChat Community Guidelines explain the types of content and behaviour we allow and prohibit on the Services, and are updated from time to time in response to new behaviours and risks. All users of the Services (including those located in the EU) are required to abide by these guidelines, which prohibit or restrict the following types of content or behaviour:

      • fraud or scams
      • nudity or sexual content
      • hateful, spam or other inappropriate behaviour
      • violent content
      • behaviour that undermines account integrity
      • intellectual property infringement
      • violations of minor safety
      • terrorism, violent extremism and other criminal behaviour
      • personal data violation
      • other inappropriate content

Reporting violating content or behaviour

2.2. We encourage users of the Services to report any potential violations of the WeChat Community Guidelines to us via the in-app user reporting function on the WeChat app. Once we receive a user report, our content moderation team will review it to determine whether there has been a violation of the WeChat Community Guidelines. We will only take action against a reported user if we determine that there has been such a violation. A reported user against whom we take action will receive a notice of the action(s) taken, together with brief reasons, and will have the opportunity to appeal our decision via the appeal button within that notice.

2.3. In addition to in-app reporting channels, users and non-users can report any violating or illegal content or behaviour that they come across on the Services via the feedback form located in the WeChat Help Center. The same feedback form can also be used to raise questions about the WeChat Community Guidelines.

Enforcement of the WeChat Community Guidelines

2.4. We detect violations of the WeChat Community Guidelines through user reports as well as proactive identification of violating content or behaviour. We enforce the WeChat Community Guidelines with the help of our content moderation team and technology. Our content moderation team undergoes regular and ad hoc training to ensure that its members are equipped with the necessary knowledge and tools to perform content moderation in accordance with the relevant policies. To ensure a high level of consistency and accuracy in our content moderation, we have internal escalation processes whereby our content moderation team can escalate more complex issues to their team leaders or to other relevant experts, such as our trust and safety or legal teams, where appropriate.

2.5. We also use automated tools, including machine learning models and logic-based rules, to identify and moderate violating content such as nudity, fraud and gambling. These tools weigh various factors to determine whether to take action against content or a user that has violated our content moderation policies. For example, our automated tools assign scores to violating content and user behaviour, and those scores are tabulated to trigger the appropriate enforcement action. We continue to improve the accuracy of these automated tools so that we can detect and moderate violating content and behaviour more effectively, while also reducing the number of incorrect content moderation decisions. We calculate the error rate of these automated tools by dividing the number of content moderation decisions reversed upon appeal by users by the total number of content moderation decisions made; this rate is less than 1%.
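The report does not disclose the actual models, factors or thresholds used. Purely to illustrate the mechanism paragraph 2.5 describes (scores tabulated against thresholds to select an enforcement action, and an error rate defined as reversed decisions over total decisions), the following is a minimal sketch; every category name, threshold and figure in it is a hypothetical assumption.

```python
# Illustrative sketch of the score-based enforcement described in paragraph
# 2.5. All categories, thresholds and figures below are hypothetical
# assumptions; the report does not disclose the actual models or rules.

from dataclasses import dataclass


@dataclass
class Signal:
    category: str  # e.g. "nudity", "fraud", "gambling"
    score: float   # automated tool's confidence of a violation, 0.0-1.0


def enforcement_action(signals: list[Signal]) -> str | None:
    """Tabulate per-signal scores and map the total to an action."""
    total = sum(s.score for s in signals)
    if total >= 2.0:   # hypothetical thresholds
        return "account suspension/termination"
    if total >= 1.0:
        return "function restriction"
    if total >= 0.5:
        return "warning"
    return None        # below every threshold: no enforcement action


def error_rate(reversed_on_appeal: int, total_decisions: int) -> float:
    """Error rate as defined in 2.5: decisions reversed on appeal / total."""
    return reversed_on_appeal / total_decisions


# Hypothetical example: 6,000 reversals out of the 698,763 automated
# decisions reported in Table 2 would give an error rate of about 0.86%,
# i.e. below the 1% stated in the report.
print(enforcement_action([Signal("fraud", 0.9), Signal("fraud", 0.7)]))
print(f"{error_rate(6_000, 698_763):.2%}")
```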

2.6. We may remove or change the visibility of any content on the Services that violates the WeChat Community Guidelines. In cases of severe or repeated violations by a user, we may suspend or terminate the violating user’s access to part or all of the Services, restrict access to any data, information, media or other content that the user or other users submit, upload, transmit or display in connection with the use of the Services, or take any other appropriate action in accordance with the WeChat Terms of Service and the WeChat Acceptable Use Policy.

Content moderation data

2.7. During the Relevant Period, the Company moderated content on the Services as follows:

Table 1: Instances of content moderation performed by the Company on the Services during the Relevant Period, by violation category (detected via user reports). The three right-hand columns break down each category’s decisions by the type of action taken.

Category | Number of content moderation decisions | Account suspension/termination | Function restriction | Warning
Hateful, spam and other inappropriate behaviour | 29,167 | 11,199 | 14,255 | 3,713
Fraud or scams | 16,188 | 6,162 | 9,495 | 531
Account integrity | 2,045 | 813 | 1,167 | 65
Minor safety | 451 | 220 | 190 | 41
Terrorism, violent extremism and other criminal behaviour | 7 | 3 | 2 | 2
Nudity or sexual content | 7 | 3 | 3 | 1
Intellectual property infringement | 12 | 2 | 10 | 0
Violent content | 0 | 0 | 0 | 0
Personal data violation | 0 | 0 | 0 | 0
Other inappropriate content¹ | 6,289 | 3,452 | 2,416 | 421
Grand total | 54,166 | 21,854 | 27,538 | 4,774

¹ As set out in the Community Guidelines.

Table 2: Instances of content moderation performed by the Company on the Services during the Relevant Period, by violation category (detected via automated tools). The three right-hand columns break down each category’s decisions by the type of action taken.

Category | Number of content moderation decisions | Account suspension/termination | Function restriction | Warning
Hateful, spam and other inappropriate behaviour | 18,184 | 14,640 | 3,544 | 0
Fraud or scams | 50,548 | 46,090 | 4,458 | 0
Account integrity | 586,559 | 230,341 | 356,218 | 0
Minor safety | 0 | 0 | 0 | 0
Terrorism, violent extremism and other criminal behaviour | 0 | 0 | 0 | 0
Nudity or sexual content | 3,097 | 3,092 | 5 | 0
Intellectual property infringement | 0 | 0 | 0 | 0
Violent content | 0 | 0 | 0 | 0
Personal data violation | 0 | 0 | 0 | 0
Other inappropriate content² | 40,375 | 4,301 | 32,051 | 4,023
Grand total | 698,763 | 298,464 | 396,276 | 4,023

² As set out in the Community Guidelines.
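As a quick arithmetic check, the sketch below recomputes the grand totals of Tables 1 and 2 from their category rows, and confirms that each category’s total equals the sum of its three action columns; it uses only the figures published above.

```python
# Arithmetic check: recompute the grand totals of Tables 1 and 2 and verify
# that each category's total equals the sum of its three action columns.
# Row format: (total decisions, suspension/termination, function restriction, warning)

table1 = [
    (29_167, 11_199, 14_255, 3_713),  # hateful, spam and other inappropriate behaviour
    (16_188, 6_162, 9_495, 531),      # fraud or scams
    (2_045, 813, 1_167, 65),          # account integrity
    (451, 220, 190, 41),              # minor safety
    (7, 3, 2, 2),                     # terrorism, violent extremism, other criminal behaviour
    (7, 3, 3, 1),                     # nudity or sexual content
    (12, 2, 10, 0),                   # intellectual property infringement
    (0, 0, 0, 0),                     # violent content
    (0, 0, 0, 0),                     # personal data violation
    (6_289, 3_452, 2_416, 421),       # other inappropriate content
]
table2 = [
    (18_184, 14_640, 3_544, 0),
    (50_548, 46_090, 4_458, 0),
    (586_559, 230_341, 356_218, 0),
    (0, 0, 0, 0),
    (0, 0, 0, 0),
    (3_097, 3_092, 5, 0),
    (0, 0, 0, 0),
    (0, 0, 0, 0),
    (0, 0, 0, 0),
    (40_375, 4_301, 32_051, 4_023),
]

for name, rows, totals in [
    ("Table 1", table1, (54_166, 21_854, 27_538, 4_774)),
    ("Table 2", table2, (698_763, 298_464, 396_276, 4_023)),
]:
    # Column-wise sums must match the published "Grand total" row.
    assert tuple(sum(col) for col in zip(*rows)) == totals
    # Each category total must equal the sum of its three action columns.
    assert all(t == s + f + w for t, s, f, w in rows)
    print(f"{name}: grand totals and row sums are consistent")
```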

3. Notices submitted regarding the presence of illegal content

3.1. During the Relevant Period, 12 notices were submitted to us notifying us of the presence of illegal content on the Services. None of these notices resulted in further action on our part, as they did not contain sufficient information. No reports were received from trusted flaggers during the Relevant Period.

4. Orders received pursuant to the DSA

4.1. During the Relevant Period, the Company did not receive any orders from judicial or administrative authorities pursuant to the DSA.