Out-Law Analysis

Businesses looking to use AI systems must consider data privacy risks


Businesses adopting artificial intelligence (AI) need to be proactive about data privacy and ensure data collection, processing and storage are in line with both internal policy and data protection laws.

The way in which AI systems acquire data from multiple sources could present major privacy risks for companies. While some data provided directly by users may be obtained with their consent, in many instances data collected through behind-the-scenes methods, such as cookies and tracking technologies, is obtained without individuals’ consent or knowledge. This uncertainty around how data is acquired is especially evident in the widely adopted generative AI system ChatGPT.

User prompts are a particular concern for organisations using generative AI systems, as these systems may learn from users’ questions and instructions, and store user prompts in the AI system’s database. Where users are unaware that their prompts may be stored and used to answer similar questions from other users, businesses face the risk of confidential or commercially sensitive information being leaked.

Proactive steps for businesses adopting AI systems

Businesses adopting AI should review the AI system’s terms of use and privacy policy. They should also ensure that the collection, processing and storage of data by the AI system comply with the business’s internal privacy policy, as well as applicable data protection laws. Businesses should:

  • determine whether individuals have been duly informed and whether the appropriate consent has been obtained;
  • consider how the data will be used and who else is likely to access the data. For example, businesses should be aware of the possibility that data could be shared with other organisations or be made available to the vendor’s researchers or partner organisations;
  • assess the AI system’s data security measures, and work towards putting their own cybersecurity protocol and crisis management plan in place, in the event of a data breach;
  • consider whether the business’s cybersecurity protocol and crisis management plan have taken into account the use of the AI system; and
  • determine how best to correct outdated or inaccurate information, and explore whether any personal data collected can be anonymised.

In practical terms, businesses should review and update their website data privacy notices to reflect the extent to which they decide to use the AI system.

They should also consider how best to maintain human oversight of AI processes, and provide employee training on them, particularly in sensitive areas, so that errors and unexpected outcomes can be addressed. By implementing internal guidelines and standards for AI applications, businesses can help ensure fairness, transparency and accountability.
