Out-Law Analysis

Works created by AI image generators pose copyright risks


The advent of artificial intelligence (AI) software that can generate images has raised several legal questions over who owns the copyright in these works – or whether they can be protected by copyright at all.

‘Computer-generated work’

The Copyright, Designs and Patents Act 1988 (CDPA) defines ‘computer-generated work’ as work that is “generated by a computer in circumstances such that there is no human author of the work”. In this context, “author” means the person who creates a work. Therefore, at first glance at least, UK law appears to have anticipated the possibility of a computer having at least some creative input into a work that could attract copyright protection.

But this definition is difficult to apply to an AI tool which generates images from pre-existing photographs based on text prompts provided by a human. It could be argued that there is some human authorship in that situation, since it is the human who comes up with the idea for the text prompt and who creates the work using the AI as a tool – much as a painter uses a paintbrush to create their work.

In addition, if an AI tool uses pre-existing photographs taken by human photographers to create a composite image which is similar to those pre-existing photographs, perhaps those human photographers could be said to have some authorship stake in the final work. The fact that media companies are already claiming copyright infringement when their original material is being used to train AI could also point to an element of human authorship in AI-generated works.

AI authorship

Given the mix of human and AI ‘authorship’ in AI-generated art today, it is difficult to delineate where the human authorship ends and the AI’s begins. How great does the human contribution have to be to amount to putting in place the arrangements necessary for the creation of the work? If a person simply presses a button or inputs limited text, will they have done enough to meet that threshold?

The law is yet to provide clear guidance on these questions. At the time of writing, AI has no legal personality and is incapable of owning IP rights, so it will likely be the company developing the AI, or possibly the user of the AI tool, that will own any copyright in a generated image. However, as AI becomes more powerful, it may begin generating works without any human intervention. If these are deemed original for the purposes of copyright law, AI IP rights will need to be recognised.

Originality

It is also unclear whether a computer-generated work, within the meaning of the CDPA, is capable of being original, whether or not that work is generated with minimal human input. The UK requirement of originality is that of “sufficient skill, labour, and judgement”, with the qualifier that “the author originated it by his efforts rather than slavishly copying it from the work produced by the efforts of another person”.

The 2009 Infopaq case specified that the standard of originality required is “the author’s own intellectual creation”, which appears to confirm that there needs to be an element of human input in order for this criterion to be satisfied. The Court of Justice of the EU has additionally required that “the author expresses his creative ability in an original manner by making free and creative choices… and thus stamps his own ‘personal touch’”.

It is difficult to see how an AI-generated work could satisfy any of the above formulations of originality, even with some element of human involvement. Given that anyone can use AI to generate an image in a matter of moments with the click of a button, it may be unreasonable to say that there is sufficient labour, skill, and judgement in the creation of that work. Furthermore, it might be difficult to prove that the resulting work is the “author’s own intellectual creation”, precisely because of the uncertainty around where a human’s contribution ends and an AI’s contribution begins.

It is unclear whether the fact that a human user conceived the idea for a text prompt would be enough to satisfy the requirement of “intellectual creation”, since it is the AI tool that ultimately generates the image. Moreover, if an AI tool can be considered to contribute to a work in a material way, the question arises of whether an AI tool is even capable of making its own “intellectual creation”. At the time of writing, AI cannot think or understand ideas – the fact that we can identify the expression of a work does not mean we can assume it has been generated by intellect or an idea.

Creativity concerns and ‘deepfakes’

It is also unclear what impact AI might have on human creativity. While AI systems currently struggle to accurately generate certain images – such as those containing human figures – over time they could become so powerful that they can generate perfect pieces of art or music in seconds. These creations would be ostensibly original and not based upon, or similar to, pre-existing work.

In such circumstances, would humans ever endeavour to create something themselves if they know an AI tool can do it in a fraction of the time? Some people see AI as a tool to aid their creativity, but others worry that a world saturated by AI-generated works would threaten human ingenuity and innovation.

The growth of so-called ‘deepfake’ technology is another area of concern for some observers. A ‘deepfake’ is a piece of synthetic media in which a person in an existing image or video is replaced with someone else’s face or voice using deep-learning AI. While the technology is still in its infancy, ‘deepfaked’ images and videos have already found their way into mainstream media.

Online streaming service ITVX recently released a series called Deep Fake Neighbour Wars, which uses AI technology to graft the likenesses of well-known celebrities onto the faces of actors. While some of the deepfakes in the series are easy to spot, others are uncomfortably convincing. As the technology advances, it might become practically impossible to recognise whether footage has been deepfaked.

It is unclear at this point whether this will lead to human actors being displaced from the business, as current technology still requires an actor to speak their lines and act out the scenes, with the AI then grafting the deepfake onto their face. However, the technology may well be moving in that direction, and paying smaller amounts of money for less well-known “faces” – quite literally – could save film studios millions.

Nevertheless, famous actors, or their estates if they are deceased, may also own the rights to their likeness – we have seen franchises such as Star Wars bring deceased actors back to life. Actors or estates may see an opportunity for lucrative licensing of their likeness, even if it is to be deepfaked onto the face of someone less well-known. However, the extent to which the law protects the likeness of an individual currently differs between jurisdictions, so disputes over whether a licence is required to deepfake are likely.

Ultimately, it is clear that the current definition of computer-generated work in the CDPA is not sufficient to provide clarity on copyright protection for AI-generated works, whether it is defining exactly how much human input is required in a computer-generated work, or setting out the originality requirement.

Co-written by Concetta Scrimshaw of Pinsent Masons.
