Out-Law Analysis 5 min. read
16 Mar 2023, 5:07 pm
ChatGPT – which has already wowed users with its ability to answer queries, pass graduate-level exams and even write computer code – has enormous potential applications in the life sciences and medtech industries too.
Developed by OpenAI, ChatGPT is a natural language processing tool launched in November 2022, built on a large language model trained on vast amounts of text and fine-tuned for dialogue. Its ability to predict the next words in a sequence based on the context of what has come before means ChatGPT can “answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests”, according to OpenAI. Users simply access ChatGPT online and ask it to write or produce almost anything.
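The underlying idea of next-word prediction can be illustrated with a toy example. The sketch below is purely illustrative and assumes nothing about OpenAI's actual implementation: it builds a simple bigram model that predicts the most likely next word from counts in a tiny sample text, whereas ChatGPT's model does the same kind of scoring over enormous vocabularies and contexts.

```python
# Illustrative sketch only: a toy next-word predictor, NOT OpenAI's model.
# It counts which word follows each word in a small corpus and predicts
# the most frequent successor - the same idea as next-token prediction,
# at a vastly smaller scale.
from collections import Counter, defaultdict

corpus = "the patient was given the treatment and the patient recovered".split()

# Record, for every word, how often each other word follows it (a bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "patient" follows "the" most often in this corpus
```

A real language model replaces raw counts with learned probabilities over billions of parameters, but the task is the same: given the context so far, score the candidates for what comes next.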
Despite producing seemingly convincing responses, ChatGPT does have some limitations. For example, its responses may appear plausible but, on closer examination, prove incorrect or nonsensical. According to OpenAI, this is because there is no source of objective truth for the AI software to rely on during its training. At the same time, trainers can sometimes err on the side of caution, leading ChatGPT to decline questions that it could in fact answer correctly.
Supervised training also misleads the model because the ideal answer depends on what the model knows, rather than what the human trainer knows. ChatGPT can be overly sensitive to how questions are phrased and can also be excessively verbose – an issue arising from biases in the training data, because trainers often prefer longer answers that look more comprehensive.
There are myriad ways in which AI chatbots may be used in the future in the life sciences and medtech spaces. The fact that ChatGPT can predict the properties of new compounds, identify potential targets for treatment, and model other factors such as bioavailability, means it could be used for drug discovery. At the same time, the software could be used to support human clinical decision-making in the healthcare space by analysing patient data, suggesting diagnoses, and providing treatment information.
As a natural language processing tool, ChatGPT could generate text for scientific papers by outlining the structure, suggesting language, and identifying relevant information. It can also automatically analyse and interpret the results of experiments, as well as identify patterns and make predictions arising from that data. Other potential applications for ChatGPT include summarising patients’ medical records, answering basic queries in a GP-style setting, and even predicting whether a patient is at increased risk of disease based on large amounts of pre-existing data for similar demographic groups.
Clearly, ChatGPT and AIs like it have the potential to make an enormous positive impact for both medicine providers and patients. They could improve access to information, empowering patients to make their own informed healthcare decisions, and might also improve efficiency and save costs in already overburdened health systems.
Medtech and life sciences businesses should be mindful of how ChatGPT will disrupt the sector. AI technology has already been used in the healthcare sector for a number of years, but ChatGPT will make these technologies more accessible and easier to use for both healthcare providers and patients alike. Despite the hype, however, businesses should also be aware of the ethical limitations and keep in mind that AI is still a relatively unregulated technology at the time of writing.
Healthcare providers will need to be familiar with the limits of liability and think carefully about how much trust they are willing to place in AI systems, given that these systems are not immune to error – and such errors could have grave consequences for patients. There are also data protection issues, since ChatGPT will be processing and interpreting vast amounts of confidential healthcare data.
There are also interesting legal questions around whether AI systems are capable of owning IP rights. Notably, the question of whether AI systems can own and transfer patent rights has been considered by courts across the globe, and is due to be heard by the UK Supreme Court in March 2023. The UK Supreme Court is the first supreme-level court in the world to hear the arguments in this case, and the rise of ChatGPT further emphasises the need for clarity on this issue.
If, for example, ChatGPT was asked by a human to draft a patent application, arguments could arise over whether the human is entitled to be named as the inventor in the application. The development of such AI technologies will prompt further international conversations as to the alignment of patent laws.
At the same time, ChatGPT can generate works which may attract copyright protection, such as song lyrics, articles, and even computer code. However, it is still uncertain whether its works even qualify for copyright protection in the first place. ChatGPT itself states: “I do not own the content that I generate. I am a machine learning model developed and owned by OpenAI, and the content generated by me is subject to OpenAI’s license [sic] and terms of use”. As it currently stands under UK law, AI has no legal personality and therefore cannot own property rights, including IP rights.
ChatGPT cannot create something which is truly “original” in the purest sense of the word, given that as an AI model, it is trained on pre-existing, and perhaps even copyrighted, information, creating responses based on that information. However, it can elaborate on that pre-existing information and create what appears to be a “new” answer to a query. It is also important to consider whether copyright protection subsists in any prompt which the human user inputs into ChatGPT. Such prompts might be sufficiently complex to qualify for protection themselves.
Under the current version of OpenAI’s terms of use, it is the human user who owns the copyright in any inputs, as well as “all its right, title and interest” in the outputs generated by ChatGPT. Interestingly, the terms of use make it clear that ChatGPT’s outputs might be similar or even identical for multiple users, in which case it becomes difficult to envisage how such copyright would be enforced.
There is currently no guidance on where human input stops and AI input begins for the purposes of AI-generated content with an element of human input. In January 2023, Getty Images brought legal action against Stability AI for allegedly infringing its copyright by training an AI picture generator on millions of Getty’s images without its consent. Getty suspected its pictures had been used because its iconic grey watermark appeared in images generated by the AI software.
If the courts find in favour of Getty, suggesting that enough of its work is incorporated into the AI-generated work such that copyright is infringed, then it will be difficult to argue that an AI is producing anything truly original. While there is no guidance from the courts on such issues at this time, this is yet another example of the English courts’ flexibility and innovation in their ability to apply the existing law to new and significant technologies such as AI.
Co-written by Concetta Scrimshaw of Pinsent Masons.