Out-Law News
08 Jun 2022, 9:42 am
Singapore has launched the world's first artificial intelligence (AI) governance testing framework and toolkit to help AI developers assess their systems in an objective and verifiable manner.
The testing framework and toolkit, known as A.I. Verify, brings together open source testing tools. It generates reports for developers, management and business partners of companies deploying AI.
A.I. Verify aims to promote transparency between companies and their stakeholders via a combination of technical testing and process checking. It is currently a minimum viable product (MVP), meaning it is functional enough for early adopters to test and provide feedback for product development. This follows the launch of Singapore's Model AI Governance Framework (second edition) in Davos in 2020.
A.I. Verify allows AI system developers and owners to conduct self-testing to show that their systems meet business requirements, while providing a common basis for declaring the results. However, there is no guarantee that any AI system tested under this pilot framework will be free of risk or bias, or completely secure.
Technology expert Sarah Cameron of Pinsent Masons said: “It is currently not possible to assess the MVP toolkit in its entirety unless you are piloting it. However, feedback from some of those who have piloted it was positive at a panel session at the recent ATxAI conference in Singapore. Various UK bodies such as the Office for Artificial Intelligence (OAI), the Information Commissioner's Office (ICO) and the Alan Turing Institute have of course produced a considerable volume of excellent guidance, standards and toolkits, but these are still fragmented. Work is needed to bring all this together into a comprehensive, navigable framework. The Centre for Data Ethics and Innovation's (CDEI) AI Assurance Roadmap and the promise of an internationally recognised assurance ecosystem can play a key role in achieving this.”
“It will be interesting to cross check these with the Singapore output to see whether they have produced a more comprehensive overall offering for public and private sector and personal and non-personal data aspects,” Cameron said.
Mark Tan of Pinsent Masons MPillay, the Singapore joint law venture between MPillay and Pinsent Masons, said: “Without a doubt, the use of AI is a very powerful tool, and its success is often tied to its ability to augment and not just automate. The bottom line of what it seeks to do is to outperform human capabilities, and this is arguably where it is most productive. Therefore, with the launch by Singapore of the world’s first AI testing framework and toolkit, it is likely that this will enable companies to be more transparent about their AI products and services, and thereby build trust with the various stakeholders, given that there is now an objective and verifiable method of assessment. Organisations will correspondingly be able to use A.I. Verify to undertake self-assessment of their AI systems, products or services.”
“Naturally, it is hoped that over the course of time and through such self-assessment, companies are more likely to further improve in their deployment of such advanced technologies, and rack up greater success as more and more stakeholders subscribe to the idea of trustworthy and reliable AI,” he said.
According to a document (12-page/508KB) released by Singapore's Infocomm Media Development Authority (IMDA), the development of A.I. Verify has been aligned with internationally accepted AI ethical principles. The framework organises 11 key AI ethical principles into five pillars. An initial set of eight principles was selected for the MVP, with at least one principle chosen from each of the five pillars.
The five pillars cover aspects that system owners and developers demonstrate to customers to build trust: transparency in AI use, meaning customers can choose whether to use an AI system once they know AI is being used; understanding how AI models reach decisions, which explains to customers the factors leading to a model's output; ensuring the AI system is safe and reliable; ensuring fairness and no unintended discrimination in the AI system; and ensuring proper management and oversight of AI systems.
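The MVP's explainability tests are not publicly documented in detail, but one common technique for surfacing the factors behind a model's output is to perturb each input and observe how the result shifts. The Python sketch below illustrates the idea with a hypothetical loan-scoring model; the function, feature names and weights are invented for illustration and are not taken from A.I. Verify:

```python
# Illustrative sketch only: a perturbation test of the kind an explainability
# check might use to show which factors drive a model's output.
# The scoring function below is a hypothetical toy model, not A.I. Verify code.

def score(applicant):
    """Toy loan-scoring model: higher income and longer employment raise
    the score, higher debt lowers it."""
    return (0.5 * applicant["income"]
            - 0.8 * applicant["debt"]
            + 0.2 * applicant["years_employed"])

applicant = {"income": 60.0, "debt": 20.0, "years_employed": 5.0}
baseline = score(applicant)

# Perturb each input by 10% and report the change in output, a simple way
# of documenting which factors lead to the decision.
for feature, value in applicant.items():
    perturbed = dict(applicant, **{feature: value * 1.1})
    delta = score(perturbed) - baseline
    print(f"{feature}: {delta:+.2f}")
# Expected output with this toy data:
#   income: +3.00
#   debt: -1.60
#   years_employed: +0.10
```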
The eight selected ethical principles of AI can be assessed by a combination of technical testing and process inspection.
Among them, three principles (explainability, safety and fairness) can be evaluated by both technical testing and process inspection.
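The toolkit's own fairness tests are likewise not published at the MVP stage, but technical fairness testing commonly compares outcome rates across demographic groups. The following Python sketch shows one such check; the predictions, group labels and tolerance are hypothetical and chosen purely for illustration:

```python
# Illustrative sketch only: a simple demographic parity check of the kind a
# technical fairness test might run. The data and threshold are hypothetical,
# not part of A.I. Verify's published test suite.

def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-outcome rates between groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        total, positives = rates.get(group, (0, 0))
        rates[group] = (total + 1, positives + (1 if pred == 1 else 0))
    positive_rates = [positives / total for total, positives in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical binary predictions and the demographic group of each subject.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 with this data

# A deployer might flag the system if the gap exceeds an agreed tolerance.
if gap > 0.1:  # hypothetical tolerance
    print("Potential unintended discrimination: investigate before deployment.")
```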
The other five principles are assessed via process checks alone. Transparency is assessed by examining documentary evidence that appropriate information is provided to individuals who may be affected by the AI system, including its use, intended use, limitations and risk assessment, without compromising intellectual property, security and system integrity.
Reproducibility is assessed through a process check of documentary evidence, including evidence of AI model provenance, data provenance and the use of versioning tools. Security is assessed through process inspection of documentary evidence of significance and risk assessments, including how known risks to the AI system are identified and mitigated. Accountability is assessed via process examination of documentary evidence, including evidence of clear internal governance mechanisms for proper management oversight of the development and deployment of AI systems.
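The framework does not prescribe particular tooling, but the kinds of documentary evidence described for reproducibility (model provenance, data provenance and the use of versioning tools) can be generated programmatically. A minimal Python sketch of one approach, assuming a saved model file and training dataset exist at the hypothetical paths shown:

```python
# Illustrative sketch only: recording model and data provenance of the kind a
# reproducibility process check might ask for. All file names are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def file_sha256(path):
    """Hash an artefact so later runs can prove they used the same version."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

provenance = {
    "model_artifact": "model_v1.pkl",             # hypothetical artefact
    "model_sha256": file_sha256("model_v1.pkl"),
    "training_data": "training_data.csv",         # hypothetical dataset
    "data_sha256": file_sha256("training_data.csv"),
    "code_version": "git:9f3c2a1",                # e.g. output of `git rev-parse`
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

# An append-only log that can be shown to assessors as documentary evidence.
with open("provenance_log.jsonl", "a") as log:
    log.write(json.dumps(provenance) + "\n")
```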
Human agency and oversight is evaluated by process inspection of documentary evidence that the AI system has been designed in a way that does not reduce humans' ability to make decisions or to take control of the system.
In December 2021, Singapore and the UK concluded negotiations on the UK-Singapore Digital Economy Agreement (UKSDEA). According to the final agreement explainer on the UKSDEA published in February, the two countries will promote interoperability and compatibility between their different regulatory regimes, including via mechanisms such as mutual arrangements or a broader international framework.