Out-Law Analysis

Irish businesses must act now to navigate the impact of AI and the EU AI Act


As artificial intelligence (AI) adoption picks up pace, Irish businesses need to be aware of the areas in which AI is having an impact, the factors to take into consideration when using AI and how to prepare for the EU Artificial Intelligence Act (EU AI Act).

AI tools are transforming many aspects of business operations, from intellectual property and data protection to contract drafting and deal making, as well as financial services and employment. They are also affecting a number of sectors, such as the TMT sector, which continue to attract significant investment and play a key role in the Irish economy.

The EU AI Act, one of the world’s first comprehensive legal frameworks for AI, will enter into force in the coming weeks. It aims to foster innovation in trustworthy AI throughout the EU while ensuring respect for fundamental rights, safety and ethical principles. Irish businesses must now strike a balance between innovation and responsible use, ensuring that the benefits of AI are harnessed while safeguarding privacy and adhering to legal frameworks.

While the regime will be enforced on a “phased” basis, Irish businesses should start reviewing existing policies and systems to identify what AI they are already using or deploying, such as spam filters, and seek advice on the risk categories these systems fall under and the corresponding obligations they need to comply with. This also presents a good opportunity to review current business operations and practices against other existing legislation, such as data privacy and cyber security laws.

TMT and data protection

In Ireland, AI is increasingly shaping the technology, media and telecommunications (TMT) sector. A recent EY study found that TMT is at the forefront of AI adoption, with 61% of TMT CEOs across Europe, Asia-Pacific and the Americas viewing AI as a positive force capable of enhancing business efficiency and driving favourable outcomes. In Ireland’s TMT sector, for example, AI is increasingly deployed to enhance customer care through chatbots and other automated communication.

Ireland is home to a growing number of multinational tech giants, including Boston Scientific, Microsoft, Dell and Amazon, and has seen the establishment of specialised tech centres focussing on critical areas such as data analytics, cloud computing and big data. With the rise of AI and the proliferation of data centres, it is increasingly important for companies in the TMT sector to put robust compliance measures and cyber security protocols in place and to review their compliance with privacy regulations, including the General Data Protection Regulation (GDPR) and Irish data protection legislation.

Irish tech businesses should be particularly aware of the risks involved in implementing and using AI technologies, including unauthorised data access, digital security breaches, the spread of false information and algorithmic bias. Because of the broad definition of personal data under EU and national data protection legislation, the development and operation of AI technologies will often entail the handling and processing of personal data. Businesses must therefore comply with the fundamental data protection principles of lawfulness, fairness, transparency and security in data processing.

For instance, article 15 of the GDPR gives data subjects a right to access their personal data and to obtain information about how it has been processed, including whether it has been subjected to automated decision-making. Article 22 of the GDPR gives individuals the right not to be subject to decisions based solely on automated processing, such as profiling, where those decisions produce legal or similarly significant effects. AI systems are captured by these provisions.

The Court of Justice of the EU recently examined the scope of article 22 in a case involving the German credit reference agency SCHUFA and its automated credit scoring system, which predicts the likelihood that a person will meet their payment obligations. The court found that the automated calculation of such a score falls under article 22 where it plays a determining role in whether credit is granted, constitutes profiling, and therefore requires specific protective safeguards around the profiling process. The case underscores the delicate balance between AI innovation and legal compliance, emphasising the need for robust protections to safeguard individuals’ rights in an increasingly AI-driven landscape.

This aspect of regulation overlaps with parts of the incoming EU AI Act, which introduces data governance requirements for the training, validation and testing datasets used in high-risk AI systems. AI systems intended to evaluate a person’s creditworthiness or to establish their credit score are among those that could be regulated as high-risk AI systems, unless they are used to detect financial fraud. As a result, businesses that use, provide or deploy such systems may be subject to strict requirements and significant obligations.

Employment

AI has been recognised as a transformative and disruptive force in Ireland’s labour market in a report published jointly by the Department of Enterprise, Trade and Employment and the Department of Finance. While these technologies can contribute to higher productivity, they could also lead to the loss of certain jobs while generating and augmenting others. Overall, 63% of employment in Ireland is relatively highly exposed to AI, of which 30% falls into the ‘at risk’ category, such as administrative and secretarial occupations and sales and customer services roles that may be replaced by AI.

AI is also increasingly used for recruitment and screening purposes as well as for internal decision-making affecting existing employees. Such AI deployment could now fall under the category of high-risk AI. For example, if the AI is used for the purposes of placing targeted job advertisements or to evaluate candidates, it would be considered high-risk. AI systems used for the evaluation of performance and behaviour of existing employees or affecting employment contracts will also be categorised as high-risk. These tools will need to conform to certain governance requirements including risk management, data quality, transparency, human oversight and accuracy. Businesses deploying such systems will face obligations around registration, quality management, monitoring, record-keeping and incident reporting.

Intellectual property

As the use of AI has become more prevalent in the media and creative industries, for example for content generation, there has been a growing number of intellectual property (IP) infringement cases in which IP rights holders claim that the use of data by AI systems has infringed their IP rights. Because generative AI systems are trained on vast amounts of data, that training often involves the use of copyright-protected content. The critical question in such cases has been whether the training of generative AI tools constitutes copyright infringement. Courts have been grappling with this issue: Getty Images v Stability AI in the UK and New York Times v OpenAI and Microsoft in the US are two high-profile examples. Although both actions are still before the courts, the outcomes will likely affect IP rights holders globally as courts get to grips with the new technology.

Questions also arise in relation to ownership or authorship of IP rights where works or inventions are created using AI systems, as IP laws generally require human input in their creation. The UK Supreme Court recently upheld the lower courts’ decisions and refused Dr Stephen Thaler’s patent applications naming his AI system ‘DABUS’ as the inventor of a food container and a flashing light beacon. The judgment clarified that only humans can be inventors, so an AI cannot be named as an inventor to secure patent rights. Similar patent applications brought in other jurisdictions, including Australia and Germany, have also been refused.

While this issue has not yet come before the Irish courts, it is likely that some form of human modification or input will be required in order to acquire copyright protection in Ireland. In early 2023, the US Copyright Office wrestled with the question of how much human input is required for copyright protection. It granted copyright protection to the author in the written text and arrangement of images in the comic book ‘Zarya of the Dawn’ but denied such protection to the images themselves as these had been generated by Midjourney, a generative AI system, and thus had been produced by a non-human.

Although this is a developing area of law, prudent businesses should adopt a robust AI governance framework and ensure a sufficient level of human oversight when creating works or inventions with the assistance of AI.

Finally, AI may also have a role to play in monitoring and combatting IP infringement. For example, AI has been deployed by large online retailers to combat fake reviews used by fraudsters to trick consumers into buying counterfeit goods, and to monitor IP portfolios and fraudulent behaviour.

Contracts involving AI solutions

As AI increasingly features in automated processes and systems, and with the introduction of stringent compliance obligations under the EU AI Act, businesses should impose controls on how AI technology is used internally as well as by suppliers and clients.

Businesses that purchase AI solutions or use suppliers that deploy AI systems are advised to review their terms and conditions, corporate policies and contractual agreements. Where suppliers use AI systems, contracts should define the AI being used, introduce controls, seek warranties about the supplier’s use of AI, impose risk monitoring obligations and clearly allocate liability, so that operational risk arising from the use of AI systems can be assessed and controlled.

Corporate M&A

Appropriate use of AI in corporate mergers and acquisitions (M&A) can result in time and cost savings for the parties. In a recent study conducted by Accenture, 64% of M&A executives globally said they believe that generative AI will revolutionise the deal process and expect it to generate excess returns.

Primarily, AI tools can accelerate target identification and evaluation as well as streamline the due diligence process and business integration post-completion. For due diligence particularly, AI can offer greater efficiency to a buyer in analysing company data at great speed and scale. It can also assist in identifying and locating relevant data from the sell side and categorising and preparing data for the due diligence exercise.

However, it is imperative that both buyers and sellers ensure the AI tool’s transparency and verify any output through their own risk assessment, as the AI could uncover previously overlooked points or mischaracterise potential red flags. As well as the EU AI Act, existing obligations under data protection and security legislation must be considered when deploying any AI models or systems in corporate transactions.

Financial services

Financial services is another sector benefitting from AI. AI’s ability to process vast amounts of data opens up possibilities for real-time analytics and decision-making. AI is already being used by financial services institutions for the purpose of anti-money laundering checks, credit and risk assessments of consumers and fraud prevention. For example, AI, in the form of machine learning, is being used for credit scoring purposes, analysing a range of personal information to predict the probability of a borrower defaulting on a loan.
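
By way of illustration only, the short Python sketch below shows the kind of machine learning model that might sit behind such a credit scoring exercise: a simple logistic regression estimating the probability of default from a handful of borrower attributes. The features, figures and scikit-learn pipeline are assumptions chosen purely for demonstration and do not reflect any particular institution’s system.

    # Illustrative sketch only: a simple probability-of-default model of the
    # kind described above. Features and figures are invented for demonstration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical training data: [annual income (EUR 000s), existing debt
    # (EUR 000s), missed payments in last 24 months]; 1 = defaulted, 0 = repaid.
    X = np.array([
        [25, 30, 4], [80, 10, 0], [40, 25, 2], [120, 5, 0],
        [30, 40, 5], [60, 15, 1], [35, 35, 3], [95, 20, 0],
    ])
    y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X, y)

    # Estimated probability of default for a new applicant (illustrative only).
    applicant = np.array([[45, 28, 2]])
    print(f"Probability of default: {model.predict_proba(applicant)[0, 1]:.2f}")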

Credit scoring systems and related products are classified as high-risk AI under the new AI legal framework, giving individuals the right to a detailed explanation of how the AI system influenced the decision-making process and of the key factors behind the decision. A careful balance will need to be struck between the right of affected persons to request such information and companies’ need to safeguard their trade secrets and IP in the credit scoring algorithm.
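
To illustrate the kind of ‘key factors’ summary that such transparency obligations might call for, the hedged sketch below approximates each feature’s contribution to an automated credit decision using a simple linear model. The model, feature names and applicant figures are invented for demonstration and are not drawn from any real system or prescribed by the legislation.

    # Illustrative sketch only: surfacing the "key factors" behind an automated
    # credit decision. Model, features and figures are invented for demonstration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    feature_names = ["income", "existing_debt", "missed_payments"]
    X = np.array([[25, 30, 4], [80, 10, 0], [40, 25, 2], [120, 5, 0]])
    y = np.array([1, 0, 1, 0])

    model = LogisticRegression().fit(X, y)

    # For a linear model, each feature's contribution to the decision can be
    # approximated as coefficient x (applicant value - training mean).
    applicant = np.array([45, 28, 2])
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    for name, value in sorted(zip(feature_names, contributions),
                              key=lambda pair: -abs(pair[1])):
        print(f"{name}: {value:+.2f}")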

Due to the strict regulatory regime applied to the financial services sector and sensitive data involved, the implementation of AI systems and tools requires close attention to existing regulatory obligations including those concerning data and consumer protection, third-party liability, and algorithmic trading. The prevention of discrimination based on biased data is especially important in the financial services sector, in order to avoid the AI inadvertently making decisions based on protected characteristics.

In addition to the new requirements set out under the EU AI Act, financial services institutions must ensure that the AI systems and tools they use comply with existing legislation and regulation, including obligations under the Digital Operational Resilience Act, which applies from 17 January 2025, and the proposed EU Financial Data Access Regulation.

Opportunities and challenges

It is clear from the above that AI is not just a fleeting trend but a transformative force reshaping various sectors in Ireland from TMT to financial services. The imminent entry into force of the EU AI Act underscores the need for businesses to navigate the delicate balance between harnessing AI’s benefits and adhering to legal frameworks.

AI’s emergence presents both opportunities and challenges. It holds the potential to boost efficiency and drive favourable business outcomes, but it also raises complex issues concerning data protection, intellectual property and employment rights. Businesses must be proactive in identifying the AI tools they are using or deploying, understanding the risk categories these systems fall under, and ensuring compliance with corresponding obligations.

While the adoption of AI is not without its challenges, careful planning, thorough risk assessment, and strict adherence to regulatory requirements can enable Irish businesses to successfully navigate this new terrain. The journey towards AI integration may be complex, but the potential rewards make it a journey worth taking.

Co-written by Laura Finn and Isabel Humburg of Pinsent Masons. 
