Out-Law Analysis 6 min. read
15 Apr 2025, 10:48 am
Fresh guidance to help organisations determine whether data they handle is subject to UK data protection laws has been published by the Information Commissioner’s Office (ICO).
UK data protection law applies to the processing of personal data. Organisations sometimes seek to remove information from which individuals can be identified from data sets, to reduce the restrictions and compliance burdens affecting how they can use and share that data. Doing so also reduces the risk posed to individuals.
The ICO’s new guidance on anonymisation is designed to help organisations understand the various ways in which they can do this, the relative strengths and weaknesses of the various approaches, and their suitability to different scenarios. The guidance, which updates the ICO’s 2012 code of practice on anonymisation and comes after draft changes were published in 2021 and 2022, makes clear that this is a complex exercise.
Below, we explore some of the central concepts in the ICO’s new guidance and the tests the authority has outlined to help businesses understand if data they are responsible for is truly anonymised, for the purposes of UK law, or if it should be handled as personal data.
Anonymisation ensures that the risk of identification is sufficiently remote to minimise the risks to people arising from the use of their information. Identifiability is a wide concept. A person can be identifiable from many factors that can distinguish them from someone else, not just a name. The concept of identifiability should be taken into account in its broadest sense in an organisation’s anonymisation process.
In its guidance, the ICO states that the identifiability of information exists on a spectrum. At one end of the spectrum, information is directly related to identified or identifiable individuals, making it personal data at all times. At the other end, it is impossible for the organisation to link the information to any identifiable person, rendering it anonymous in that organisation’s possession.
In between these two extremes, information may sit closer to one end of the spectrum or the other, depending on the identifiability of the information concerned.
The concept of a ‘spectrum of identifiability’ offers organisations a useful way to analyse whether a piece of information in their possession would be categorised as personal data or anonymous information.
Organisations must be able to demonstrate that disclosing or sharing apparently anonymous information will not lead to an inappropriate disclosure of personal data. Therefore, organisations must assess the risk of the information being identified.
The ICO’s new guidance introduces the concept of a ‘reasonably likely’ test. This follows the approach established under EU case law by the Court of Justice of the EU (CJEU) in Patrick Breyer v Bundesrepublik Deutschland, in which the CJEU ruled that dynamic IP addresses can constitute personal data.
The test involves the following assessment:
If an organisation assesses that there are means which it, or anyone who might gain access to the information, is “reasonably likely” to use to identify someone, then the information is personal data. If no means are “reasonably likely” to be used, then the information is anonymised.
The concept of “reasonably likely” is rooted in UK data protection law. Recital 26 of the UK General Data Protection Regulation states that, “to determine whether a natural person is identifiable, account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly”.
According to the new guidance, the more feasible and cost-effective a method becomes, the more organisations should consider it as a means that is reasonably likely to be used. Data protection law does not require organisations to adopt an approach that takes account of every hypothetical or theoretical chance of identifiability. It is not always possible to reduce identifiability risk to a level of zero, and data protection law does not require organisations to do so. What is important is how organisations assess what is “reasonably likely” relative to the circumstances, not what may be independently “conceivably likely”.
The ICO said that the cost and time involved in carrying out identification, the technology available at the time of processing and how that might change in future, and the specific risks that different types of data release present are all relevant factors organisations should consider when assessing whether identification of information in their possession is technically and legally possible. It said organisations should also consider, as part of that assessment, how the risk of identifiability may change as information moves from one environment to another.
The greater the likelihood that someone may attempt to identify a person from within a dataset, the more care organisations should take to ensure anonymisation is effective.
Once the likelihood of identifiability has been confirmed, organisations must:
Data protection law does not prescribe how organisations should determine whether the anonymous information they disclose is likely to result in the identification of a person. Organisations must consider all practical steps and means that are reasonably likely to be used by someone motivated to identify people whose personal data was used to derive anonymous information.
This is known as the ‘motivated intruder’ test, a concept that existed in the 2012 code and which has been recognised by both the ICO and the First-tier Tribunal in information rights cases.
In applying the motivated intruder test, organisations should consider the:
A ‘motivated intruder’ is someone who wishes to identify a person from the anonymous information that is derived from their personal information. The test assesses whether the motivated intruder is likely to be successful. Organisations should assume that a motivated intruder is someone who:
The intruder is therefore someone who is motivated to access the personal data organisations hold in order to establish whether it relates to people and, if so, to identify them. Such intruders may intend to use the data in ways that may pose risks to organisations and the rights and freedoms of people whose data organisations hold.
As additional explanation to its 2012 code, the ICO has now said that organisations should consider the following in relation to the intruder:
Organisations should assume that the greater the perceived value of the data from the perspective of the motivated intruder, the greater the capabilities, tools and resources that are at the intruder’s disposal.
The ICO said that generative AI tools are now among the “obvious sources of information” that a motivated intruder should be considered to have at their disposal.
It added that obvious motivations for a motivated intruder may include: malicious reasons or financial gain; causing mischief by embarrassing others, or to undermine the public support for release of data; revealing newsworthy information about public figures; or for political or activist purposes. It also said the motivated intruder could simply be curious or seeking to demonstrate that it is possible to identify a person from the information available.
Again, supplementing its earlier 2012 code, the ICO states that the way data is released has an impact on the factors organisations must consider when assessing the identifiability risk and the robustness and effectiveness of anonymisation measures.
With a public release, it said organisations should have a very robust approach to anonymisation. This is because, it said, when information is released to the public, organisations lose control of the data and it can be almost impossible to retract the data if it later becomes clear that it relates to people that are identifiable. It also highlighted that organisations would not have control over the actions and intentions of anyone who receives that information.
With a release to defined groups, the ICO said organisations should consider what information and technical know-how is available to members of that group. It said contractual arrangements and associated technical and organisational controls will play a role in the overall assessment. Organisations are urged to consider the possibility that the data may be accessed by an intruder from outside the group, or that it may be shared inappropriately, and to apply physical and technical security controls to seek to prevent this.