Out-Law News 1 min. read
13 May 2024, 5:28 am
At a Privacy Awareness Week (PAW) event held at Pinsent Masons in Melbourne, Veronica Scott, a privacy and cyber law expert at Pinsent Masons, highlighted the importance of human-centred privacy impact assessment tools in helping businesses identify and manage the privacy compliance risks involved in adopting and using generative AI tools.
“Businesses seeking to adopt generative AI tools need to be aware of, and navigate, legal risks – including around privacy. They should carefully consider the opportunities and benefits against the risks they are taking on as users of the technologies and be clear which use cases are acceptable and the problems they are trying to solve,” Scott said.
“There are a number of existing laws that must be complied with depending on the context and the decisions the business is relying on the technology to make. Workplace use cases have their own unique legal risks. The fundamental principles of privacy compliance are a good starting place. Leveraging human-centred privacy impact assessment tools will be important to effectively identify the impacts and the new or amplified risks and how to manage them.”
The event featured an expert panel discussion on this year’s PAW theme of ‘Privacy and Technology: improving transparency, accountability and security’ with a focus on AI in the workplace.
Scott chaired the event with panellists Victorian Privacy and Data Protection Deputy Commissioner Rachel Dixon and Pinsent Masons’ technology law expert James Arnott and employment law expert Ben McKinley.
The discussion covered the need for businesses to invest early in ensuring the data sets they used were trustworthy and high-quality, and securing their sensitive systems with up-to-date access controls. It also covered the importance of implementing effective employee training and upskilling, and developing clear policies on the use of AI in the workplace.
McKinley said that deploying AI required an understanding that risks could emerge across multiple areas of the enterprise – not only in data use and privacy, but also in areas such as supply chain risk and workplace disputes.
Arnott highlighted the risk of overreliance on AI models. He said that as their investment in such tools increased, organisations should have an ‘exit strategy’ in place so they can continue to operate if they need to stop using these technologies.