
Organisations should put policies in place governing the use of AI tools to guard against the risk of data breaches, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has warned.

According to a recent communication from the AP, the number of data breaches reported in the Netherlands arising from the use of AI chatbots such as ChatGPT and Copilot in the workplace has increased.

The authority said that AI tools are widely and increasingly used by both employers and employees and can bring efficiency benefits to organisations. Among other things, they can be used to answer customer requests or summarise large files. However, the use of AI tools also brings risks, including from a privacy and data protection perspective.

According to the AP, AI chatbots are often used on employees' own initiative, outside the employer's instructions or policies, and many businesses do not yet have policies in place. In some cases, employees have shared the personal data of patients and customers with AI chatbots.

Stephanie Dekker, an Amsterdam-based employment law expert at Pinsent Masons, said: “Employers should develop policies around what is and is not allowed and which AI tools may be used by their employees. This is to protect personal data, company trade secrets and confidential information. Given the rapidly occurring developments in this area, it is also important to ensure that your employees are aware of the risks involved in unauthorised use of AI.”

Entering sensitive personal data or confidential information into an AI tool such as a chatbot gives the company providing the tool access to that information. Sensitive information can therefore end up in the hands of tech companies without the knowledge or consent of the data subjects involved. Moreover, data or confidential business information entered into an AI tool may be used for training purposes, making it available to others as well.

Where personal data is involved, this may constitute a data breach within the meaning of the EU’s General Data Protection Regulation (GDPR). In the event of a data breach, organisations are often required to notify both the Dutch data protection authority and the affected individuals. However, employers and employees are not always aware of when a data breach has occurred and what their reporting obligations are.

Nienke Kingma, an Amsterdam-based data protection expert at Pinsent Masons, said: “Given the risks of potential data breaches, it’s recommended to also include examples of data breaches relating to the unauthorised use of personal data in such AI tools in your data breach policy and training programme to raise awareness. If the company does not yet have a data breach policy in place, this would be a great opportunity to do so now. Especially now data breach notifications following from the use of AI tools have the DPA’s attention. Not adhering to the data breach notification requirements under the GDPR may lead to enforcement by the Dutch DPA, including the imposition of administrative fines.”

In one of the data breaches reported to the AP, an employee of a medical practice had entered patients' medical data into an AI chatbot, contrary to workplace agreements. “Medical data is highly sensitive and is given extra legal protection for a reason. Simply sharing that data with a tech company is a major violation of the privacy of the people involved,” the AP said.

The AP also received a report from a telecoms company where an employee had entered a file containing customer addresses into an AI chatbot.
