Snapchat is launching new tools, including an age-appropriate filter and parental insights, to make its AI chatbot experience safer.
Days after Snapchat launched its GPT-powered chatbot for Snapchat+ subscribers, a Washington Post report highlighted that the bot was responding in an unsafe and inappropriate manner.
The social giant said it found after launch that people were trying to “trick the chatbot into providing answers that didn’t follow our guidelines.” So Snapchat is launching a few tools to keep AI responses in check.
Snap has built a new age filter that lets the AI know the user's date of birth so it can provide age-appropriate answers. The company said the chatbot will "consistently take their age into account" when talking to users.
Snap also plans to give parents and guardians more insight into their children's interactions with the bot through the Family Center, which launched last August. In the coming weeks, a new feature will show whether their teens are interacting with the AI and how frequently. Both the guardian and the teen must opt in to Family Center to use these parental controls.
In a blog post, Snap explained that the My AI chatbot is not a "true friend" and that it uses conversation history to improve its answers. Users are also informed about data retention when they start a chat with the bot.
The company said only 0.01% of the bot's responses used "non-conforming" language. Snap counts as "non-conforming" any response that contains references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or the marginalization of underrepresented groups.
The social network stated that in most cases, these inappropriate responses were the result of the bot parroting whatever users said. It also noted that it will temporarily block access to My AI for users who misuse the service.
“We will continue to use these lessons to improve My AI. This data also helps us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which allows us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service,” Snap said.
Given the proliferation of AI-powered tools, many people are concerned about their security and privacy. Last week, an ethics group called the Center for Artificial Intelligence and Digital Policy wrote a letter to the FTC urging the agency to halt the rollout of OpenAI's GPT-4 technology, accusing the new technology of being "biased, deceptive, and a risk to privacy and public safety."
Last month, Senator Michael Bennet also wrote a letter to OpenAI, Meta, Google, Microsoft, and Snap expressing concerns about the safety of generative AI tools used by teens.
It is now clear that these new chatbot models are susceptible to malicious input and can, in turn, produce inappropriate output. While tech companies may want to roll out these tools quickly, they need to make sure there are enough guardrails in place to prevent the chatbots from going rogue.