Snapchat may have failed to properly assess the privacy risks its artificial intelligence chatbot poses to children, Britain’s data watchdog said Friday, adding that it would consider the company’s response before making a final enforcement decision.
The Information Commissioner’s Office (ICO) said that if the US company fails to adequately address the regulator’s concerns, “My AI”, which was launched in April, could be banned in Britain.
“The preliminary findings of our investigation indicate a concerning failure by Snap to adequately identify and assess privacy risks to children and other users before launching ‘My AI,’” said Information Commissioner John Edwards.
The findings do not necessarily mean that the instant messaging app used largely by young people has broken UK data protection law or that the ICO will ultimately issue an enforcement notice, the regulator said.
Snapchat said it was reviewing the ICO’s notice and that it remained committed to user privacy.
“My AI went through a robust legal and privacy review process before being made publicly available,” a Snap spokesperson said. “We will continue to work constructively with the ICO to ensure they are comfortable with our risk assessment procedures.”
The ICO is investigating how “My AI” processes the personal data of Snapchat’s approximately 21 million UK users, including children aged 13 to 17.
“My AI” is powered by OpenAI’s ChatGPT, the best-known example of generative AI, a technology that policymakers worldwide are seeking to regulate amid privacy and security concerns.
Social media platforms, including Snapchat, require users to be 13 or older, but have had mixed success in keeping younger children off their services.
Reuters reported in August that the regulator was gathering information to determine whether Snapchat was doing enough to remove underage users from its platform.