Out-Law Analysis
17 Nov 2023, 11:46 am
The popularity and capabilities of artificial intelligence (AI) have grown rapidly in recent years, with machine learning products such as ChatGPT capturing the public’s imagination as to what is possible.
The legal sector, and employment law in particular, has not been immune to the changes brought by AI. The technology has had significant impacts, both positive and negative, on equality, diversity and inclusion (EDI) efforts.
AI has the potential to remove the biases of the humans involved in recruiting employees, particularly in the sourcing and screening of candidates. For example, some AI tools have been used to help draft job descriptions, identifying problematic words and phrases and suggesting more inclusive alternatives.
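To illustrate the principle, the short Python sketch below flags potentially gender-coded terms in a job advert and suggests alternatives. It is a minimal, rule-based illustration only: the word list and suggested replacements are invented for this example, and commercial tools rely on far more sophisticated language models.

```python
# Minimal, rule-based sketch of an inclusive-language check for job adverts.
# The term list and suggestions are illustrative only, not a real product's data.
import re

GENDER_CODED_TERMS = {
    "ninja": "expert",
    "rockstar": "high performer",
    "aggressive": "proactive",
    "dominant": "leading",
    "manpower": "workforce",
}

def review_job_description(text: str) -> list[tuple[str, str]]:
    """Return (term, suggested alternative) pairs found in the text."""
    findings = []
    for term, alternative in GENDER_CODED_TERMS.items():
        # Match whole words only, ignoring case.
        if re.search(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE):
            findings.append((term, alternative))
    return findings

advert = "We need a coding ninja with an aggressive approach to deadlines."
for term, alternative in review_job_description(advert):
    print(f"Consider replacing '{term}' with '{alternative}'.")
```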
However, a fundamental limitation of AI is that it can only work with the data it is fed, and that data may be inherently biased, particularly if it is based on historical experience or a small sample size. In 2014, for example, Amazon developed a CV screening algorithm that inadvertently inherited biases from the company’s past hiring practices, with the result that it systematically favoured male candidates over female candidates.
Clearly, great care needs to be taken when developing AI to make sure that the ultimate product does not have inequality baked into it before it has even been used. Only 20% of AI developers are female, and even fewer developers are from ethnic minority backgrounds – it is easy to see how this lack of diversity of thought could have damaging repercussions for the final product that is developed.
One possible means of correcting this issue is to use synthetic data to aid the development of AI. Synthetic data is artificially generated data that mirrors the patterns in real data sets without identifying real individuals. It can be used to create additional examples of people from underrepresented groups when training AI models, so that the AI is exposed to a broader cross-section of society during development and testing, helping to reduce the biases that exist.
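The sketch below conveys the underlying idea in simplified form: new, synthetic records for an underrepresented group are generated by interpolating between real ones, so that the group carries more weight in training. Real-world pipelines use dedicated generators (such as SMOTE-style oversamplers or generative models); this pure-Python version, with invented feature vectors, is only meant to show the principle.

```python
# Simplified illustration of synthetic oversampling: extra records for an
# underrepresented group are created by interpolating between real ones.
import random

def synthesise(records: list[list[float]], n_new: int) -> list[list[float]]:
    """Create n_new synthetic records by blending random pairs of real ones."""
    synthetic = []
    for _ in range(n_new):
        a, b = random.sample(records, 2)   # pick two distinct real records
        t = random.random()                # interpolation weight in [0, 1)
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic

# Hypothetical feature vectors (e.g. years of experience, test score) for a
# group that is underrepresented in the training data.
minority_group = [[4.0, 71.0], [6.0, 80.0], [3.0, 65.0], [7.0, 88.0]]
augmented = minority_group + synthesise(minority_group, n_new=8)
print(f"{len(minority_group)} real records, {len(augmented)} after augmentation")
```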
As the use of AI becomes more widespread, it is also likely to have a number of implications for a business’s existing workplace. One positive impact is the prospect of productivity gains: repetitive tasks are prime candidates for automation, freeing up employees to complete tasks that are more bespoke in nature or require some degree of critical thinking and creativity.
In response to a question from the press about AI taking people’s jobs at the 2023 AI Safety Summit, prime minister Rishi Sunak said that the technology would act as a “co-pilot” to employees. Relieving employees of the burden of repetitive, monotonous tasks should enable them to focus their attention on other, more interesting aspects of their jobs.
This has advantages for both employees and employers. Employees may well see an improvement in their wellbeing if more interesting work brings greater fulfilment, and they will be able to dedicate time to tasks they would not otherwise get to in a day, increasing productivity and profitability for the business.
The introduction and use of AI within businesses will, nevertheless, need to be carefully thought through, documented and clearly communicated. Given the ever-changing nature of issues in the EDI space, for example, words and phrases can quickly become unacceptable in everyday usage, and any AI that is used will need to be regularly updated to reflect these changes. A business that fails to be diligent about its use of AI risks serious reputational repercussions.
Introducing any new AI systems in a thoughtful manner is particularly important so that the implementation does not alienate a section of the workforce. As a by-product of increased implementation of AI technology, businesses may well make layoffs to make way for the new technology, a process which itself carries significant risk if not managed carefully.
In Australia, the use of AI within recruitment has also received scrutiny. In the 2021-22 financial year, the merit protection commissioner for the Australian Public Service and Parliamentary Service overturned 11 promotion decisions made by the government agency Services Australia, all within a single recruitment round.
The commissioner found that the decisions were based on a new automated system that tested candidates across a variety of AI assessments, including questionnaires, self-recorded video responses and psychometric testing. Because AI was the crux of the decision-making process, there was no human intervention during the promotion process, and deserving candidates were passed over for promotion.
Whilst there is no legislation in Australia specifically regulating the use of AI, the government is now exploring options for more formal regulation so that AI is used ethically. A recent study from KPMG and The University of Queensland, based on responses across 17 countries, found that three out of five people (61%) are either ambivalent about or unwilling to trust AI. The research shows that for society and organisations to use AI effectively, relevant regulations need to be enacted so that AI can be trusted.
According to a recent survey by the Institute for Economic Research, 13.3% of companies in Germany currently use AI-based tools in their operations and a further 10% plan to use such software in future. Until the planned EU regulation of AI is introduced, German companies should put any AI-based software they introduce through a test process built on three methodological foundations: transparency, participation and feedback. This method is currently being tested as part of the ‘Künstliche Intelligenz im Dienste der Diversität’ (‘artificial intelligence in the service of diversity’, or KIDD) funding project of the Federal Ministry of Labour and Social Affairs.
All in all, anxiety over the use of AI remains rather high in Germany, particularly in human resources (HR) and similar contexts. However, an implementation process that ensures transparency and actively involves the workforce can increase trust in the systems and reduce prejudice against the technology.
The use of AI in HR, including in recruitment and selection, is rapidly increasing in the Netherlands. Research by the Netherlands Institute for Human Rights shows that in the recruitment phase, the indirect use of algorithms is the norm, with employers frequently using social media or online HR platforms to share their vacancies or to actively search for candidates. In addition, 12% of employers use algorithms to select and assess candidates. According to the report, employers have limited awareness that using algorithms can lead to discrimination and exclusion, and they hardly ever check their systems for fairness.
Although regulation remains limited, there are several upcoming rules under Dutch law in which AI in the context of HR, and more specifically its impact on EDI, plays a role. In March 2023, for example, a bill monitoring equal opportunities in recruitment and selection was passed by the House of Representatives and is now before the Senate.
The bill requires employers and intermediaries to draw up a so-called ‘working method’ stating how they organise their recruitment and selection process and ensure that labour market discrimination is prevented. Among other things, employers and intermediaries that use an automated system for recruitment and selection must verify that using the system does not lead to labour market discrimination.
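The bill does not prescribe how such verification should be done. One widely used statistical starting point, sketched below with invented figures, is to compare selection rates between applicant groups; the 80% threshold is the US ‘four-fifths’ rule of thumb, used here purely as an illustration and with no status under the Dutch bill.

```python
# Illustrative adverse-impact check on an automated selection tool: compare
# each group's selection rate with that of the most-favoured group. The 0.8
# threshold is the US EEOC "four-fifths" rule of thumb, used here only as an
# example; the Dutch bill does not prescribe any particular test.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes from one recruitment round.
rates = {
    "group_a": selection_rate(selected=48, applicants=120),  # 40%
    "group_b": selection_rate(selected=25, applicants=100),  # 25%
}
reference = max(rates.values())  # rate of the most-favoured group
for group, rate in rates.items():
    ratio = rate / reference
    status = "review for discrimination" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}, {status}")
```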
Additionally, in 2020, the Recruitment Code of the Dutch Association for Personnel Management and Organisational Development was updated to address the development of AI. It provides, among other things, that employers may only use AI for recruitment purposes if the systems used are validated and transparent and their potential risks and shortcomings, such as discrimination, are clear.
Earlier this month, businesses, academics and other representatives from 28 countries at the forefront of AI development came together for the world’s first AI Safety Summit, held at Bletchley Park in England. The purpose of the summit was to discuss the risks that AI poses and how coordinated international action can combat those risks, enabling society to fully harness the potential of AI, for example in combating climate change or helping to treat health conditions.
The main theme coming out of the summit was the need for collaboration. It was acknowledged that many countries had developed high-level principles regarding the use of AI, but that it was time to take the step of transforming those high-level principles into a global set of best practice guidelines on the deployment, regulation and governance of AI.
This commitment towards collaboration culminated in the 28 countries reaching a historic agreement, named the Bletchley Declaration, establishing a shared understanding of the opportunities and risks posed by AI and the need for governments to work together to meet the most significant challenges.
On top of the signing of the Bletchley Declaration, two new AI safety institutes were created, in recognition that governments alone would not be able to regulate AI effectively at the speed required and that more help is needed from stakeholders across different industries. The challenge now is to establish greater diversity of thought within the safety institutes, which it is hoped will have a transformative impact on the quality of AI models from a diversity standpoint.
Importantly, the commitment towards the collaborative development of AI models will be long-lasting, with the summit at Bletchley Park being the first of a series of summits on the issue. France and South Korea will host further AI Safety Summits in 2024.
There can be little doubt that AI has the potential to revolutionise workplaces, enabling certain tasks to be done more quickly and efficiently and freeing all employees to focus on strategic, more interesting work. It could also become an important tool in businesses’ drive towards greater inclusivity, with one potential use case being AI acting as an aid for non-verbal members of the workforce or those who struggle to communicate.
Inevitably, though, some employees will be more sceptical about the introduction of AI. It is therefore crucial that businesses conduct a thorough investigation of the suitability of any new systems across the business and fully explain any changes and how they will affect the existing workforce. Taking these preliminary steps should reduce the likelihood of the introduction of AI causing friction between employees and employer, and ensure an easier transition towards using AI and the benefits it can bring from an EDI perspective.
On 21 November, Pinsent Masons will host the first in a series of events on AI in HR decision-making. Author, entrepreneur and advisor on generative AI Nina Schick will discuss the current landscape in the world of AI, followed by a Q&A with a panel of legal and data professionals.
Co-written by Sarah Klachin, Emma Lutwyche and Anthony Convery of Pinsent Masons.