TikTok is announcing updates to its Community Guidelines designed to make the app a safer environment for users. The company revised the guidelines to make them easier to understand and added new policies on AI and climate misinformation. TikTok also added more details about its existing policies regarding civic and electoral integrity and age restrictions.
The updated guidelines, which go into effect on April 21, come as TikTok CEO Shou Zi Chew is set to appear before Congress on March 23 amid growing security concerns.
As part of the updated guidelines, synthetic or manipulated media that depicts realistic scenes must be clearly disclosed. The company says this can be done with a sticker or caption, such as “synthetic,” “fake,” “not real,” or “altered.” TikTok notes that while it welcomes the creativity that AI unlocks, the technology can make it difficult to distinguish fact from fiction, which carries risks for viewers.
“We do not allow synthetic media that contains the likeness of a real private figure,” the new guidelines read. “While we give more leeway to public figures, we don’t want them to be subject to abuse, or people to be misled about political or financial issues. We do not allow synthetic media of public figures if the content is used for endorsement or violates any other policy. This includes a ban on hate speech, sexual exploitation and serious forms of harassment.”
TikTok defines synthetic media as content created or modified by AI that shows “highly realistic” digitally created depictions of real people, such as a video of a real person speaking words that have been modified or altered. The company defines a public figure as anyone 18 years of age or older in an important public role, such as a government official, politician, business executive or celebrity.
Under this policy, TikTok does not allow synthetic media depicting realistic scenes unless it is prominently disclosed or labeled in the video. The app also doesn’t allow material that has been edited or combined in a way that could mislead viewers about real-world events.
Synthetic media showing a public figure in certain contexts, including educational content, is allowed. For example, the app allows videos of a celebrity doing a TikTok dance or a historical figure appearing in a history lesson.
The new guidelines for AI go beyond TikTok’s current policy on the technology, which simply states that it does not allow synthetic or manipulated media that misleads users by distorting the truth of events and causes significant harm to the subject of the video, other persons, or society.
The updated policy around AI comes as the technology has surged in popularity thanks to products such as DALL-E 2 and ChatGPT. Researchers and misinformation experts have sounded the alarm about the dangers the technology poses, so it’s no surprise that TikTok is updating its policies to add specific guidelines related to AI.
“The world is changing,” Julie de Bailliencourt, TikTok’s global head of product policy, said during a briefing with reporters. “Our society is changing. We see new trends come and go, and we believe we need to update these guidelines regularly to meet the expectations of people who come to work with us.”
TikTok has also added a new section under its misinformation policy to address climate misinformation. The updated guidelines note that TikTok will not allow misinformation about climate change that “undermines established scientific consensus,” such as denying the existence of climate change or the factors contributing to it. However, the app does allow discussion of climate change, such as the pros and cons of particular policies or technologies, or personal views on specific weather events, as long as it doesn’t undermine scientific consensus.
The new policy comes after a report last year revealed that TikTok search results were flooded with misinformation about climate change. The report found, for example, that searching for “climate change” surfaced results promoting climate science denial.
Adding transparency to existing policies
“The first thing you notice when you open the new version of the Community Guidelines is that they look quite a bit different,” Justin Erlich, TikTok’s global head of issue policy and partnerships, said at the press conference. “That’s because we’ve completely revised the way we organize our rules into different topic areas with clear headings. In the new format, we have a short section of each rule where we explain what we do and don’t allow along with the rationale.”
The company added more details about how it protects civic and electoral integrity, as well as its approach to government, politician and political party accounts.
TikTok’s current community guidelines simply state that it does not allow content that misleads community members about elections or other civic processes. The new guidelines specify that TikTok does not allow misinformation about civic and electoral processes, including misinformation about how to vote, how to register to vote, candidates’ eligibility requirements, the processes for counting ballots and certifying elections, and the final outcome of an election. Unverified claims about the outcome of a still-unfolding election, which may be false or misleading, will not appear in the app’s For You feed.
The guidelines outline that while TikTok treats content from government and politician accounts the same as any other account, it takes a different approach to content-level enforcement for these accounts. For example, these public-interest accounts will be banned for any severe content violation, such as a threat of violence. Accounts with repeated, less severe content violations will temporarily be ineligible to appear in the For You feed, and in some cases may be temporarily restricted from posting new content.
TikTok has also added a new section explaining age restrictions for young users. The app already restricts mature content so that it can only be viewed by adults 18 and over, and the guidelines now further explain what content is restricted.
Age-restricted content includes significant body exposure by adults, seductive performances by adults, sexualized poses by adults, sexual innuendo by adults, human and animal blood, consumption of excessive amounts of alcohol and tobacco by adults, activities likely to lead to physical injury, and cosmetic surgery shown without risk warnings.
The new section follows TikTok’s introduction of a “Content Levels” system last year, designed to prevent overtly adult-themed content from reaching users ages 13 to 17.
The guidelines explain that content created by an account belonging to a user under the age of 16 is not eligible to appear in the app’s For You feed. Moderate body exposure by young people, along with intimate kissing or sexualized poses by young people, is also not eligible for the For You feed. The policy means this content will still be available in the app if you follow the user or go directly to their profile.
TikTok has also added a new section to its scam policy noting that using multiple accounts to deliberately circumvent the rules is not allowed. The anti-circumvention policy prohibits attempts to evade an account ban by spreading violating content across multiple accounts, as well as the use of alternate accounts to continue the behavior that previously got another account banned.
The company says it will provide additional training to its moderators over the coming months to effectively enforce these updated rules as they roll out.
The updated community guidelines come as the Biden administration continues its efforts to ban the app in the US. The administration has backed a bipartisan bill that could ban the app in the country and recently threatened a ban if the company doesn’t divest from its Chinese ownership.