Out-Law Analysis 3 min. read
02 Aug 2023, 8:50 am
The World Ethical Data Foundation (WEDF) has published proposed new guidelines for the ethical and safe use of artificial intelligence (AI).
In an open letter published last week, the WEDF set out questions and considerations that illustrate how multi-faceted decision-making around AI can become, and how holistic it needs to be to comprehensively consider the impact of using an AI tool. The document covers several important topics for the developing technology and invites feedback from AI stakeholders.
Importantly, the WEDF’s questions use accessible language and reflect the lifecycle of an AI tool. The questions are arranged according to the stages of development and use, which is highly beneficial because it acknowledges that the risks arising from the development and use of an AI tool differ at each stage.
The release of these guidelines comes at a time of increasing scrutiny around the transparency and explainability of decision making by AI systems, and a rise in the number of AI-related privacy breach and copyright infringement cases. In further developments last week, Anthropic, Google, Microsoft and OpenAI launched the Frontier Model Forum, with the aim of “ensuring the safe and responsible development of frontier AI models” through collaboration, research and the establishment of best practice for “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models”.
While the landmark Getty Images litigation continues, it is good to see a particular focus on protection of copyrighted material at all stages outlined in the open letter: training, building and testing the AI tool. It is also encouraging to see an emphasis on consideration of systems for managing data sources throughout.
The WEDF guidelines will encourage companies to consider practically how to determine whether there is copyrighted material in the outputs produced by a tool they have used, and whether that has resulted from their own inputs or from the tool developer’s use of copyrighted material at the development or training stages, before taking the tool to market or updating it.
This consideration is likely to result in increased demand for assurance products that can determine whether copyrighted works have been reproduced in AI outputs without consent, or in a more risk-averse approach by customers, such as internal policies restricting the business data that can be used.
There has been an increase in the number of companies publishing their own codes of ethics to govern the responsible development and use of AI. At the same time, more customers are also looking to these documents as part of their supplier due diligence. There have already been instances where customers have requested contractual assurances that an AI system operates in compliance with these documents.
The questions posed by WEDF are user-friendly and could be utilised by companies to further refine and improve their policies to address some of the key factors that should be considered when implementing the use of any AI. For companies that intend to produce a code of AI ethics and are yet to do so, this is a useful starting point.
The WEDF’s emphasis on front-end teams taking responsibility for – or at least being aware of – ‘explainability’ issues is key. In the UK, the Department for Culture, Media and Sport has already stated that: “accountability for the outcomes produced by AI and legal liability must always rest with an identified or identifiable legal person – whether corporate or natural”.
In many large organisations, this accountability might sit with individuals who are one step or more removed from the development of an AI system. It has previously been suggested that companies could implement an AI ‘explainability’ appraisal process for internal, regulatory and consumer use. The questions posed by the WEDF are very relevant and could be used as building blocks to develop these governance processes.
Despite many positive elements, the WEDF’s new guidelines are still very general. This will invite companies to use them as a starting point for the development of their own bespoke approaches to responsible use of AI – a process that will be determined by the specific context and the complexity of the technology’s use.
Overall, the way the guidelines are structured is very helpful, and will help to ensure that they remain relevant even as the technology and its means of development change. For example, in the context of generative AI, the WEDF’s guidelines on data integrity and sourcing will remain relevant regardless of whether a model is trained using real sources obtained through web scraping, or using synthetic data. Development of the WEDF guidelines is also an open forum project, so they will improve with greater and more diverse engagement over time.
Co-written by Krish Khanna and Marina Goodman of Pinsent Masons.