Out-Law Analysis

PODCAST: Inside an AI company's copyright defence, and how finance firms can cope with AI regulation

Photo by Stefani REYNOLDS / AFP


Cerys Wyn Davies uses a court filing to analyse how AI companies are defending themselves against huge copyright infringement claims, and Luke Scanlon sets out the steps finance firms need to take to stay on the right side of growing finance-specific AI regulation, ahead of delivering training for financial services senior managers.


  • Transcript

    Hello and welcome to The Pinsent Masons Podcast with me, Matthew Magee. It’s a bit of an artificial intelligence special this week – not deliberately, but it’s just everywhere really, isn’t it? So we’ll get an unusually detailed look at one AI developer’s copyright defence, and we’ll find out what UK financial services managers have to know about the UK’s regulation of AI. But first, some business law news:

    EU clarifies ‘essential use’ meaning for forever chemicals

    Infrastructure spending cut in Victoria and

    Dutch data protection authority web scraping ruling could impact the business model of information services providers.

    The European Commission has clarified what ‘essential use’ means in relation to PFAS, or ‘forever’ chemicals, in a move that could result in the removal of the most harmful chemicals from EU markets for non-essential uses, such as in consumer products. EU laws on chemicals rely on the concept of ‘essential use’, and the guidance clarifies what that means. It says that two criteria must be met when determining whether a use of “a most harmful substance” is essential for society. The first is that the use is necessary for health or safety or is critical for the functioning of society, and the second is that there are no acceptable alternatives. Katie Hancock of Pinsent Masons said that the EU’s aim of phasing out the use of the most harmful substances is moving forward, but cautiously. She said: “wholesale banning is neither practical nor possible. This is particularly so given the critical use of some such substances – for example, in the green transition. A balancing exercise will have to be carried out, to encourage advances and innovation to find alternatives and at the same time to limit the harm which may be caused while that is done.”

    The state of Victoria in Australia will cut its budget for infrastructure spending from $24 billion to $16 billion over the next four years, in a move which one expert said would cause uncertainty for projects in the state. The cuts were announced in the state’s budget for 2024-25 and could result in delays to major infrastructure projects, including the Melbourne Airport rail link. Other delayed projects include new campuses for the Royal Melbourne and Royal Women's hospitals. Plans to build the campuses in the Arden precinct have been scrapped in favour of further developing the existing hospital sites in Parkville and redeveloping the Arden site, including to accommodate new homes. New hospitals in Diamond Creek/Eltham, Emerald Hill and Torquay have also been delayed. Construction expert Rebecca Dickson said the delays may be a concern for the construction industry, which now faces greater uncertainty over future projects. She said: “the news that projects identified as vital for Victoria – including a rail link to Melbourne Airport – are now delayed, may spur further industry frustration with an uncertain project pipeline and procurement timeframes.”

    Recently issued guidance from the Dutch data protection authority on web scraping may impact the business model of information services providers and cause concern for those who rely on those services, an expert has said. The authority warned that the practice of web scraping to acquire personal data for purposes such as training artificial intelligence (AI) models is “almost always a violation of the General Data Protection Regulation (GDPR)”. Technology expert Wouter Seinen said: “the regulator decided to zoom out and look at data harvesting practices as a general theme” and came to the conclusion that “many practices that are currently quite commonplace may actually be violating GDPR. This may impact businesses and make companies nervous who rely on services varying from anti-money laundering to know-your-customer controls to direct marketing.” The Dutch guidance said that firms are required to show a legitimate interest in processing personal data even where the information scraped is publicly available online. Whether an interest is legitimate will depend on the way in which the data is processed; the purpose of processing the personal data; and any safeguards in place to protect the interests of the data subjects.


    A battle has raged for a couple of years now between the tech companies redesigning all our futures by creating mind-boggling generative AI technology, and the owners of the acres of online content used to train and refine those systems. Newspaper publishers, photographers, musicians and artists have said that the mass scraping of their content from the web represents copyright infringement. AI developers say that their use of the material is essential and doesn’t breach copyright law because of legal exemptions, such as the fair use exemption for educational purposes. One of these cases is in the UK, where photo library Getty Images is suing Stability AI for copyright infringement over its generative AI systems, such as Stable Diffusion, which creates images from text or picture prompts. Stability AI has filed its defence with the High Court in London and we’ve taken a look – it gives an unusually detailed picture of the intricacies of how AI companies might defend themselves from these claims, and it includes a couple of legal arguments that haven’t been tried in the courts before. London-based Cerys Wyn Davies told me what the case is about.

    Cerys Wyn Davies: So, at its broadest this dispute is about AI developers generally, but in this case specifically Stability AI, being able to access and use the data that is widely available on the internet, and to take that information, that content generally, and use it to develop their AI tools, particularly the generative AI tools that have become very popular over the last 24 months or so.

    Matthew Magee: Stability AI says it's not breaking the law and it has three main arguments.

    Cerys: The first one that is worth mentioning is the ‘outside the jurisdiction of the UK courts’ defence. What they say there is that the research and development work on that part of the tool that Stability AI and its employees were involved in supporting took place outside the UK, and therefore outside the jurisdiction of the UK courts and our copyright legislation. Now, that particular issue is still to be examined, because the judge who considered a strike-out request has said that more evidence needs to be seen to determine whether that is in fact the case. But it certainly raises a really interesting challenge: if developers could all avoid the jurisdictions where infringement might be held to occur, then AI development would simply happen outside the UK, and outside any other jurisdiction that took the decision to prevent this sort of use of content.

    Another interesting defence is around the exclusions or exceptions to copyright infringement when the images are actually being generated. What Stability AI is saying is that it has a defence of pastiche, which means it intends to mimic existing content, authors, artists and so on, but not in a way that takes commercial benefit from those works, and not in a way that is intended to be a reproduction that challenges those works; rather, the tool accesses a collection of different works and creates what might be described as a pastiche from all of those sources. Now, in the UK, and indeed in the EU, we really haven’t seen this particular exception tested, so it produces a really interesting examination of the breadth of that exception to copyright infringement as a form of fair dealing with content and copyright works.

    The other issue they raise concerns the two sorts of uses that are made of the tool. In the case of a text input used to create an image, they say there can’t be copyright infringement because the tool, having developed and learned, will access lots of different pieces of content to create an image, so the result doesn’t amount to a substantial copy of any one piece of content or copyright work. In relation to those users who actually input an image in order to create the work, they say that isn’t an infringement by Stability AI or by the tool itself, but rather by the user, who has decided to bring the image into the tool and set the constraints around that.

    Matthew: The jurisdiction argument and the pastiche argument are new, so it will be fascinating to see what the court in London makes of them. The legal picture won't fully emerge until the case is heard in the middle of 2025, and that probably won't be the end of it. Much is at stake, and appeals are likely. But what this case and these defences do is illustrate really neatly the stark dilemma facing governments and policymakers, because the courts aren't the only places where this will be settled. Governments around the world are busy trying to formulate new laws governing all sorts of aspects of AI and its development, including intellectual property, and it isn't easy. Many countries, and the UK in particular, want to attract, develop and retain lucrative creative industries like music, film, game and television production, and they want to be centres of AI development. They use laws and regulations as tools to attract investment and talent, but the problem is that on this issue they can't come up with a law that pleases both industries. Create more protection for rights owners like publishers or photographers, and you alienate the AI industry; loosen the laws to allow freer AI development, and you undermine the creative industries, which are entirely based on intellectual property rights. Cerys says this is a real challenge, but existing international agreements might help.

    Cerys: The UK government and other governments around the world are looking at the ability of AI developers to access and use this all-important data: quality data and breadth of data are absolutely key to AI development and to good AI tools, and yet there needs to be recognition of content developers' valuable content and their ability to commercialise that content. So, what government has been looking at is striking a balance between those two positions. There has been mention of potential collective licensing or other licence arrangements being put in place. We have conventions whereby countries around the world respect the copyright of other countries, that is, of the businesses and individuals within those countries, and have effectively agreed to enforce that copyright in their own jurisdictions, so they respect each other's rights of enforcement. So, generally, copyright has been seen as one of the intellectual property rights that can validly be enforced multi-jurisdictionally. Generally, the discussion around this is that we really need, in all jurisdictions on a worldwide stage, to come to some consistent laws, not just in relation to copyright and intellectual property, but on all of the other issues that are key to the use of AI.


    Matthew: Staying with artificial intelligence, and moving on to what it actually does – well, it’s transforming many business processes in all kinds of industries, and financial services is no exception. What marks that part of the economy out is the extent of regulation it faces: to protect investors, clients and customers, financial services firms are tightly regulated not just in what kind of business they can and can't do, but in the mechanics of how they go about it. So I talked to London-based financial services and technology expert Luke Scanlon about how financial services AI is being regulated, and how that will change in the months ahead. The first thing to get straight is the particular set of risks that AI creates for financial services firms.

    Luke Scanlon: What does it mean for financial services businesses? Well, there's a specific set of risks they need to think about, because first and foremost they'll be thinking about their clients or their customers, and there are many different regulatory duties they need to comply with, ranging from vulnerable customers through to the consumer duty through to other requirements around disclosure and transparency. So financial institutions really need to think about how they use this technology while meeting all of their different regulatory requirements. And there's a likelihood that a lot of regulatory obligations will attach to their use of AI, particularly if they are involved in the development of the AI system. So there's some work that needs to be undertaken to understand: what's your definition of AI as an organisation, and when is the organisation using AI?

    Matthew: Regulators and lawmakers are already putting controls in place for this kind of activity. The European Union has its AI Act, a wide-ranging piece of legislation that is quite prescriptive in how it requires companies to control AI. And in the UK the financial regulators are developing policies and rules that firms expect to see come into force in the next year or two. So the first thing companies have to do is find out exactly what they are doing, which isn't always as easy as you might think.

    Luke: So there's a number of different use cases. I think some of them are similar to other sectors and others are more specific. Financial crime, fraud detection and anti-money laundering are one key area where it's being used. Other areas include, as is the case in other sectors, code and the development of technology itself, and then customer onboarding and customer complaints. Complaints is a really interesting area: it's something that needs to be dealt with across the whole sector, and really automating and simplifying that process, right through to generating the responses to complaints, is an efficiency win for a lot of businesses.

    Matthew: One thing that managers will need to get their heads around is that regulation on AI might encompass technology that they're already using and that they don't really think of as AI at all.

    Luke: So step one really is understanding when and where they are using AI technology, because up until now a lot of this technology may not have been classified within the organisation as AI, and now, or in the future, it might fall within the regulatory definitions of AI. If you look at the regulatory definitions, they're very broad. If we take the EU perspective, it is based on a risk assessment: understanding what's a high-risk application and what's a low-risk application, so different steps will need to be taken depending on the risk profile of the AI technology. So you need to look back at what technology has been developed by the organisation, or procured in the past, and reassess whether it would need remediation in light of upcoming regulatory requirements.

    The next step is to think about the governance processes that need to be put in place: who is the owner of this risk within the organisation? In the UK, the regulators, the FCA and the PRA, have discussed the topic of senior management functions, and so which senior manager within the organisation will be responsible for AI risk. They haven't concluded that it should be a specific senior manager, but there is an expectation that senior managers who are subject to the senior managers regime would be responsible for AI risk. So that takes some planning and some discussion.

    Matthew: Luckily, financial firms have a model they can follow. When lots of their operations moved to the cloud, regulators realised they had to have absolute clarity about who was responsible for what, given that systems and data would be distributed across networks and even countries. The way companies dealt with that should be a model for how they deal with AI regulation, says Luke.

    Luke: Yeah, I think it's just following the same processes and putting together the same structures that were put in place to address cloud risks. So that's looking at the AI-specific controls, from a security perspective and from a data perspective, that need to be put in place; understanding how operational resilience more generally is affected by the use of AI; and then also thinking about customer contracts and the different ways in which a financial institution can gain assurance about whether it is their third-party provider, rather than their own use of the technology, that is causing any issues.

    Matthew: Whether or not they follow the same approach as with the move to the cloud, financial firms can't ignore AI regulation, and skilling up is not optional, says Luke, particularly for senior managers in the UK, whose regulation makes them personally responsible for specific tasks.

    Luke: So, there is a growing expectation, in the UK and the EU in particular, that senior managers will have the competence necessary to deal with AI risk, and this is something that really needs to be acquired over the next 12 months, even while regulatory rules are still being formulated. That means really understanding all of the key issues: bias and discrimination; transparency and explainability, that is, the extent to which you can explain the use of the technology; understanding IP as well, meaning how to protect your business, and that of trusted partners, when using AI; and the many data risks which are impacted by the use of AI.

    Matthew: Well, thanks for joining us, and I hope you stuck with us for our little trot through how AI is made and how it's used, and the various disputes and regulations around it. If you think this would be useful to colleagues, friends or family, please do share, do subscribe, and maybe leave us a little review if you have the time. And remember, we publish business law news and analysis every day at pinsentmasons.com, coming from our team of reporters all around the world. Sign up to our tailored newsletter at pinsentmasons.com/newsletter, and do join us again in two weeks' time. I hope to see you then. For now, bye bye. The Pinsent Masons podcast was produced by Matthew Magee for international professional services firm Pinsent Masons.