Businesses developing artificial intelligence (AI) systems must “build in security” to the technology to avoid repeating the mistakes made when the internet was developed, the head of the UK’s National Cyber Security Centre (NCSC) has said.
In a speech on Wednesday, Lindy Cameron, chief executive of the NCSC, said the digital infrastructure relied on today was “never designed with security at its heart” and was therefore “built on foundations that are flawed and vulnerable”. She said there was a risk that “a similarly flawed ecosystem for AI” could be built unless action was taken now to embed a ‘security by design’ approach into its development. In doing so, she said, AI developers must “predict possible attacks and identify ways to mitigate them”.
Cameron said: “We cannot rely on our ability to retro-fit security into the technology in the years to come nor expect individual users to solely carry the burden of risk. We have to build in security as a core requirement as we develop the technology.”
“Like our US counterparts and all of the Five Eyes security alliance, we advocate a ‘secure by design’ approach where vendors take more responsibility for embedding cybersecurity into their technologies, and their supply chains, from the outset. This will help society and organisations realise the benefits of AI advances but also help to build trust that AI is safe and secure to use,” she said.
In her speech, Cameron highlighted security principles for machine learning that the NCSC has produced, as well as its separate cybersecurity guidance on large language models (LLMs). Those resources, she suggested, could help organisations using AI “understand the risks they are running by using it – and how to mitigate them”.
“It’s vital that people and organisations using these technologies understand the cybersecurity risks – many of which are novel,” Cameron said.
“For example, machine learning introduces an entirely new category of attack: adversarial attacks. As machine learning is so heavily reliant on the data used for the training, if that data is manipulated, it creates potential for certain inputs to result in unintended behaviour, which adversaries can then exploit. And LLMs pose entirely different challenges. For example, an organisation’s intellectual property or sensitive data may be at risk if their staff start submitting confidential information into LLM prompts,” she said.
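By way of illustration only – the sketch below is not drawn from the NCSC guidance – the simplest form of the training-data manipulation Cameron describes is label flipping, where an attacker silently corrupts a portion of the labels before a model is trained. The hypothetical Python example assumes scikit-learn and NumPy are available and shows how a model trained on poisoned data can behave worse on clean inputs than one trained on untampered data.

```python
# Toy illustration (not from the NCSC guidance): a label-flipping
# "data poisoning" attack against a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for any ML training set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# The attacker flips the labels of 20% of the training examples.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

# Train one model on clean data and one on the manipulated data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

# Both are evaluated on the same clean test set.
print(f"accuracy trained on clean data:    {clean_model.score(X_test, y_test):.3f}")
print(f"accuracy trained on poisoned data: {poisoned_model.score(X_test, y_test):.3f}")
```

Real-world poisoning attacks are typically far subtler, targeting specific inputs rather than degrading accuracy across the board, but the principle is the same: if the training data cannot be trusted, neither can the model’s behaviour.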
While Cameron said the NCSC’s guidance “provides pragmatic steps that can be taken to secure AI as it is implemented”, she added that there was a need to go further, calling on corporate chief executives to factor security considerations into investment decisions about AI.
Cyber risk expert Eleanor Ludlam of Pinsent Masons said: “The concept of security by design is not new in the EU as we see equivalent obligations under data protection legislation. It is positive that this is being emphasised in relation to AI technologies given the pace of development in this arena.”
“We have already seen examples of litigation in relation to AI cropping up globally, from the Getty copyright claim against Stability AI in the UK, to the world’s first ‘robot lawyer’ being sued in California for professional negligence. As such, it is important that governments legislate for the potential risks and exposures around AI, and it is helpful that the NCSC is highlighting concerns too. We are watching with interest the progress of the EU’s tripartite model of the AI Act, the AI Liability Directive and the Product Liability Directive, which seek to address AI challenges by treating security and liability as two sides of the same coin.”