Snapchat (NASDAQ: SNAP) has received a warning from the U.K. Information Commissioner’s Office (ICO) over a potential failure by the platform to address privacy risks stemming from the launch of its artificial intelligence (AI) chatbot.
The ICO issued a preliminary enforcement notice against Snap, Inc. and Snap Group Limited, Snapchat's parent companies, signaling the start of a more extensive investigation. According to the ICO, there are concerns that the company failed to conduct a thorough risk assessment before launching its AI chatbot, overlooking privacy risks in particular.
Going forward, the ICO will focus its investigation on the privacy risks facing Snapchat's younger users, aged 13 to 17. While the ICO says the warning is merely provisional, a final enforcement notice would force the company to withdraw its AI chatbot pending the conclusion of a new risk assessment.
In February, Snapchat unveiled "My AI," an AI chatbot built on OpenAI's ChatGPT technology and integrated into the social messaging platform with a range of functionality for users. After an initial testing stage, the integration was rolled out to a larger audience, earning the company plaudits as a pioneer in linking generative AI with a messaging platform.
“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI,’” Information Commissioner John Edwards said. “Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”
In response to the notice, Snapchat mounted a spirited defense, saying its AI offering had undergone multiple legal and privacy review processes. A spokesperson added that the company will engage with the ICO to reach an amicable resolution.
The notice against Snapchat follows a general ICO warning to enterprises keen on integrating generative AI into their processes to put user privacy at the forefront of their operations.
“Spend time at the outset to understand how AI is using personal information, mitigate any risks you become aware of, and then roll out your AI approach with confidence that it won’t upset customers or regulators,” said Stephen Almond, the ICO’s Executive Director of Regulatory Risk.
In April, the regulatory watchdog issued a similar warning to AI developers, urging strict adherence to extant data handling and collection rules.
Setting the pace on AI regulation
The U.K. has revealed its intention to be the leading light for AI regulation, taking preliminary steps to achieve its lofty ambitions. To draft a blueprint, the government launched an AI task force in April with up to $1 billion set aside to ensure safe and innovative usage of the technology.
The U.K.’s Competition and Markets Authority (CMA) has also waded into the ecosystem to prevent large AI developers from monopolizing the space. Lawmakers in the country are calling for a global approach toward regulation “to advance a shared international understanding of the challenges of AI.”
The U.K. government is expected to host a global AI Summit in November amid calls by the opposition to handle AI with an iron fist. Regulators in the U.K. are increasing their scrutiny over AI in the same manner as the heightened supervision of digital currencies by the Financial Conduct Authority.
Watch: Konstantinos Sgantzos Talks AI and BSV Blockchain with CoinGeek