Snapchat adds new parental controls that block “sensitive” and “suggestive” content for teens

Snapchat launched parental controls on its app last year through its new “Family Center” feature. Today, the company announced via a post on its Privacy and Safety Hub that it is adding content filtering capabilities that let parents prevent their teens from being exposed to content identified as sensitive or suggestive.

To turn on the feature, parents can toggle the “Restrict Sensitive Content” filter in Snapchat’s Family Center. Once enabled, teens will no longer see the blocked content on Stories or on Spotlight, the platform’s short-form video section. Text below the toggle notes that enabling the filter will not affect content shared in Chat, Snaps, or Search.

In addition to this change, Snapchat is publishing its content guidelines for the first time to help creators on Stories and Spotlight understand what kinds of posts can be recommended on the platform and what content is now considered “sensitive” under its community guidelines. The company said it had already shared these guidelines with a range of creators in its Snap Stars program and with its media partners, but it is now making them available to everyone through a page on its website.

The company already bans hateful content, terrorism, violent extremism, illegal activities, harmful false or misleading information, harassment and bullying, threats of violence, and more on its platform. The new guidelines go further by specifying what content is considered “sensitive” across different categories. This is content that may be eligible for recommendation but can be blocked for teen users under the new controls, or for other users in the app based on their age, location, or personal preferences.

For example, under the category of sexual content, Snap explains that content is considered “sensitive” if it contains “all nudity, as well as all images of sexual activity, even if clothed, and even if the images are not real” (as in the case of AI-generated images). The category also covers “explicit language” describing sexual acts and other sex-related matters, such as sex work, taboos, genitals, and sex toys, along with “overtly suggestive imagery,” “insensitive or degrading sexual content,” and “manipulated media.”

The guidelines also address what is considered sensitive in other categories, including harassment, distressing or violent content, false or misleading information, illegal or regulated activities, hateful content, terrorism and violent extremism, and commercial content (public solicitation to buy from unapproved sellers). This covers a wide range of material, such as depictions of drugs, engagement bait (“wait for it”), self-harm, body modifications, gore, violence in the news, graphic imagery of human physical ailments, animal suffering, sensationalized coverage of disturbing incidents such as violent or sex crimes, dangerous behavior, and much more.

The changes come long after a 2021 congressional hearing where Snap was grilled about surfacing adult content in the app’s Discover feed, such as invites to sexualized video games and articles about going to bars or porn. As senators rightly pointed out, Snap’s app was listed as 12+ on the App Store, yet the content it surfaced was clearly aimed at a more mature audience. Even some of the video games being advertised were rated for older users.

“We hope these new tools and guidelines help parents, caregivers, trusted adults and teens not only personalize their Snapchat experience, but empower them to have productive conversations about their online experiences,” the social media company said in a blog post.

While the new feature could go a long way toward keeping sensitive content away from teen viewers in some parts of the app, it doesn’t address one of the areas Congress singled out: the Discover feed. There, Snap features content from publishers, including those whose posts may be considered “sensitive” under the guidelines. Frankly, much of it is clickbait. And yet this area is not covered by the new controls.

In addition, the feature requires parents to opt in by flipping a switch most of them probably don’t know exists.

In short, this is yet another example of how the lack of laws and regulations governing social media companies has led to self-monitoring that doesn’t go far enough to protect young users from harm.

In addition to the content controls, Snap said it’s working on adding tools to give parents more “visibility and control” around teens’ use of the new My AI chatbot.

Last month, the social network launched the chatbot, powered by OpenAI’s GPT technology, as part of its Snapchat+ plan. Notably, Snapchat’s announcement comes after the chatbot went off the rails while chatting with a Washington Post columnist who was posing as a teenager. The bot reportedly advised the columnist on how to hide the smell of weed and alcohol at a birthday party. Separately, researchers at the Center for Humane Technology found that the bot offered sex advice to a user posing as a 13-year-old.

The additional tools aimed at the chatbot have not yet been rolled out.
