Out-Law Analysis

Businesses should consider data protection principles when working with AI vendors


Businesses should be proactive in managing sensitive data when adopting and developing artificial intelligence (AI) systems, including by taking data protection rules in the applicable jurisdiction into account.

There are several key issues that businesses must consider when implementing fast-moving technologies.

AI regulatory guidelines  

It is important that local regulatory guidelines on AI adoption are taken into account. 

In Hong Kong SAR, for example, the Office of the Privacy Commissioner for Personal Data (PCPD) has published two sets of guidelines for companies that are procuring external AI services or developing in-house systems.

There is currently no comprehensive legislation directly addressing AI in Hong Kong SAR. However, these guidelines offer a contractual framework and highlight key issues when adopting AI. 

Role of parties 

The roles of all parties involved in outsourcing AI development to a vendor must be clearly defined, including identifying which party is the data user and which is the data processor in the arrangement.

It is vital to define clearly who has control and ownership over both the raw data and any derived data.

Use of data

Companies should generally avoid sharing sensitive data, such as personal data or confidential data, with AI vendors, and should only share such data, in anonymised form, where it is necessary to do so.
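By way of illustration only, the snippet below is a minimal Python sketch of one step involved in sharing data "in anonymised form": direct identifiers are dropped and the record is keyed by a salted hash instead. The field names and salt are hypothetical, and genuine anonymisation will usually require more than removing direct identifiers, since the remaining data may still allow re-identification.

```python
# Illustrative sketch only: strip direct identifiers and pseudonymise a record
# before it is shared with an external AI vendor. Field names ("name", "email",
# "phone", "purchase_history") and the salt are hypothetical examples.
import hashlib

DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace them with a salted, non-reversible token."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    token = hashlib.sha256((salt + record["email"]).encode()).hexdigest()[:16]
    cleaned["subject_token"] = token  # stable token; not reversible without the salt
    return cleaned

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_history": ["A", "B"]}
print(pseudonymise(record, salt="keep-this-secret"))
```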

Sensitive data can be used during the design stage to develop and train the AI product, but it must always be used within the scope agreed between the parties. Parties should therefore set out the usage parameters clearly in the contract.

If the use of personal data is unavoidable, and its use diverges from the original purpose of collection, then separate consent from individuals may be required.

Data transfer and sharing

A company anticipating cross-border transfer of data may be required to enter into a data transfer agreement or include data transfer clauses in any AI agreement.

Mainland Chinese law imposes certain restrictions on cross-border data transfers, including requiring a China-based processor to enter into a standalone standard contract with the overseas recipient of the data.

These contracts are intended to set out the obligations of each party in protecting the transferred data. If an AI service is provided through a cloud platform or a data centre hosted overseas, the extent of the parties' liability could be affected.

Data security 

Companies should have contractual terms in place that hold the AI vendor accountable for protecting sensitive data. When negotiating the contract, it is also crucial to consider how personal data is processed and moved throughout the AI system's ecosystem, to ensure that adequate safeguards are in place to reduce the risk of a data leak.

Data subject rights

All parties involved in the development of AI systems should be aware that individuals are entitled to exercise their data subject rights in accordance with the law. Where the gathering and use of personal data forms part of the AI development process, it is important to understand individuals' rights based on where and how the data has been collected, and the laws of the jurisdiction in which the system is being developed.

When drafting the AI contract, it is therefore important to set out clearly the roles of the parties in handling any requests involving personal data.

