
Investigation into chatbot - a 'warning shot for industry'

The ICO has concluded its investigation into Snap Inc.’s use of its 'My AI' chatbot feature on the Snapchat platform.

The My AI chatbot is described by Snap Inc. as an experimental, friendly chatbot, enabling Snapchat users to raise queries via text, image or audio (voice notes) and to receive responses generated by OpenAI's ChatGPT. My AI can answer trivia questions, recommend restaurants based on the user's location and respond in a group chat as if it were another user of the platform.

Timeline of events

Shortly after its launch in February 2023, the ICO opened an investigation into My AI following concerns that Snap Inc. had not met its legal obligation to adequately assess the data protection risks posed by the new chatbot. 

Although Snap Inc. produced four versions of a Data Protection Impact Assessment (DPIA) during the course of 2023, the DPIA contained only a cursory, high-level risk assessment in tabular format, with no explanation of the basis for its conclusions on the likelihood and severity of harm or the overall risk level. The ICO levelled this criticism at the fourth version of Snap Inc.'s DPIA; the first version had been produced only three days before My AI was launched.

The ICO's investigation concluded Snap Inc. had failed to carry out a DPIA which complied with the requirements of Article 35(7) UK GDPR before its launch. If the ICO had escalated matters beyond its preliminary enforcement notice and adopted its final enforcement notice, it could have required Snap Inc. to stop processing the personal data of Snapchat users in the UK for any purpose connected to My AI until a revised DPIA had been carried out.

Assessing the data protection risks

The assessment of data protection risk is particularly important in this context, which involves both the use of innovative technology and the processing of the personal data of 13 to 17-year-old children.

External commentators have previously raised concerns that:

  • 'Young users … may not understand the implications of data collection or make educated decisions about what information is safe to share' (BBC Bitesize)
  • ‘As interactions with AI can feel like conversations with real people, young people may rely too heavily on the chatbot for emotional support or entertainment’ (ChildNet.com)
  • ‘It is not always clear to children they are conversing with a computer and some of the responses were “creepy”’ (CNN.com)

Following a year of engagement, the ICO has now concluded its investigation and determined that Snap Inc. has taken ‘significant steps to carry out a more thorough review of the risks posed by My AI and demonstrate to [the ICO] that it had implemented appropriate mitigations’. The ICO is now satisfied the revised DPIA complies with the requirements of Article 35 UK GDPR. In particular, the fifth version of the DPIA now:

  • contains a significantly more detailed breakdown of the processing operations performed for the purposes of generating My AI’s responses;
  • considers the extent to which Snap Inc.’s use of generative AI technology differs from its existing processing activities and thus impacts the necessity and proportionality of its processing related to My AI, including addressing how the use of such technology could impact the nature and volume of special category data shared by users;
  • adopts a structured and detailed assessment of the risks posed by the processing activities carried out in connection with My AI, with specific consideration of the risks posed to 13-17-year-old users; and
  • clearly identifies measures which Snap Inc. envisaged would address the risks to the rights and freedoms of users of My AI, including an explanation of how and to what extent such measures would mitigate the risks it had identified.

Consequently, there are no grounds to issue an enforcement notice which would require Snap Inc. to take, or refrain from taking, specified steps to bring its processing operations into compliance with Article 35 UK GDPR. 

Points to note

The ICO's final decision is a helpful reminder that generative AI remains a key priority for the regulator and that the industry must innovate responsibly. An effective DPIA allows you to identify and fix problems early, demonstrating good information governance and bringing broader benefits for individuals and your organisation.

Data controllers should consider the higher standards of care that apply to children's data and digital services, and be prepared to make the necessary operational changes to comply with UK GDPR. Failure to carry out a DPIA when required may leave you open to enforcement action, including a fine of up to £8.7 million or, if higher, 2% of global annual turnover.

If you would like assistance with completing your DPIA or to discuss the proposed roll-out of a chatbot function, contact me to help plan your next steps.

“Our investigation into ‘My AI’ should act as a warning shot for industry. Organisations developing or using generative AI must consider data protection from the outset, including rigorously assessing and mitigating risks to people’s rights and freedoms before bringing products to market. We will continue to monitor organisations’ risk assessments and use the full range of our enforcement powers – including fines – to protect the public from harm.”


Tags

data protection, chatbot, digital services, children, information commissioner, data privacy impact assessment, regulation, generative ai, charities, education, health and social care, housing, local government, social business