
Out-Law Analysis

Businesses urged to review AI policies as Ireland prepares to implement EU AI Act


Following the formal adoption of the EU Artificial Intelligence Act (EU AI Act), the Irish government has launched a public consultation on the national implementation of the new legislation.

Irish businesses should review their existing policies and create an inventory of any AI systems they are using or deploying in preparation for the incoming EU AI Act.

The EU AI Act, dubbed the world’s first comprehensive AI law, is set to come into force in the EU in the coming weeks. Ireland is one of several EU member states to have kick-started the national process of implementing the new EU-wide regime.

Ireland’s Department of Enterprise, Trade and Employment has recently launched a public consultation to inform the country’s approach to applying the new EU AI Act, with a specific focus on the configuration of the national competent authorities that will enforce the new legislation. The consultation runs until 16 July. Some existing regulatory bodies, including the newly established broadcasting and online media regulator Coimisiún na Meán, are expected to take on some of the regulatory functions under the EU AI Act.

The new EU AI Act will come into force 20 days after its publication in the Official Journal of the EU, which is expected to be around 15 July. However, most of its provisions will not take effect until two years after that date, and some obligations will only apply 36 months after entry into force.

In addition to the EU AI Act, Ireland’s public sector is already subject to the government’s guidance on the responsible and ethical use of AI. The guidance sets out that all AI tools used by the Irish public service should comply with seven requirements: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.

Despite the phased implementation of the EU AI Act, Irish businesses which use or deploy any AI systems or general-purpose AI models should start preparing risk assessments for each such technology to identify its level of risk and to seek advice on the associated obligations under the new EU AI Act. Technical documentation and data governance policies should also be prepared and reviewed in order to gear up for compliance with the legislation. Engaging in prohibited AI practices could lead to a fine of up to €35 million or 7% of a business’s worldwide annual turnover under the new regime, while providing incorrect or misleading information could incur fines of up to €7.5 million or 1% of annual turnover.
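
By way of illustration only, the sketch below shows, in Python, one way a business might structure an entry in such an AI inventory. The class and field names are hypothetical, and the risk tiers anticipate the Act’s four-category “pyramid” described later in this article; none of this structure is prescribed by the EU AI Act itself.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class RiskTier(Enum):
    """The EU AI Act's four risk tiers (see the pyramid described below)."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strictest compliance obligations
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # free use permitted


@dataclass
class AISystemRecord:
    """One entry in an organisation's AI inventory (illustrative fields only)."""
    name: str
    business_use: str
    is_gpai_model: bool = False
    risk_tier: Optional[RiskTier] = None  # set after a legal risk assessment
    technical_docs: list[str] = field(default_factory=list)
    data_governance_reviewed: bool = False


inventory = [
    # A recruitment screening tool would likely warrant a high-risk review.
    AISystemRecord("CV screening assistant", "shortlisting job applicants",
                   risk_tier=RiskTier.HIGH),
    AISystemRecord("Email spam filter", "filtering inbound email",
                   risk_tier=RiskTier.MINIMAL),
]

# Flag systems whose risk assessment is still outstanding.
for record in inventory:
    if record.risk_tier is None:
        print(f"{record.name}: risk assessment outstanding")
```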

The transformational power of AI for Ireland

Although the new AI rules may increase compliance requirements, businesses should embrace this new technology, as it can benefit a wide range of business functions. In its 2024 Regulatory and Supervisory Outlook Report, the Central Bank of Ireland ranked AI amongst the technologies with the greatest transformational potential. A separate study conducted recently by Implement Consulting Group on behalf of Google Ireland found that, if fully adopted, generative AI could boost the Irish economy by €45 billion over the next decade.

The main business benefits AI can provide include:

  • enhanced efficiency, by automating tasks, improving existing processes and minimising costs;
  • new revenue streams, by enabling the development of products and services that were previously unattainable; and
  • competitive advantage, by leveraging AI technologies and exploring novel tools to stay at the forefront of industry advancements.

AI under the EU AI Act

The EU AI Act distinguishes between two AI concepts: “AI systems” and “general-purpose AI (GPAI) models”.

An “AI system” is defined as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

This definition focuses on the autonomy of an AI system and is intentionally kept broad to encompass future AI advancements. It captures a vast array of AI technologies and subcategories, including machine learning, such as spam filters, e-discovery or predictive text; deep learning, such as autonomous driving or image classification; and the now widely debated generative AI (GenAI):

  • Machine learning (ML), which is already heavily used by companies in automated processes and refers to an AI model that is trained to learn from, and make predictions based on, input data without being explicitly programmed;
  • Deep learning, which is a subset of ML that uses neural networks to mimic the learning process of the human brain in order to be able to process more complex patterns; and
  • GenAI, which is a type of ML trained to find patterns in large data sets and generate original content matching those patterns.

Tools like ChatGPT or Gemini are forms of GenAI that use large language models to generate output.
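
For readers less familiar with these terms, the toy example below, a minimal sketch using the open-source scikit-learn library with invented training data, illustrates the defining feature of machine learning: the spam filter is never given explicit filtering rules, but learns them from labelled examples.

```python
# Toy spam filter: the model learns from labelled examples rather than
# being explicitly programmed with filtering rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",        # spam
    "claim your free reward",      # spam
    "meeting moved to 3pm",        # not spam
    "please review the contract",  # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()  # turns text into word-count features
features = vectoriser.fit_transform(messages)

model = MultinomialNB()         # a simple probabilistic classifier
model.fit(features, labels)

prediction = model.predict(vectoriser.transform(["free prize waiting"]))
print("spam" if prediction[0] == 1 else "not spam")
```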

Risk-based system of regulation

The EU AI Act regulates AI systems by applying a risk-based approach and imposing varying degrees of obligations within a “pyramid” structure. There are four different “risk” categories, with some forms of AI being completely prohibited.

Unacceptable risk

At the top of the pyramid are AI systems that pose an unacceptable risk to human rights or the livelihoods and safety of individuals. Such systems are banned outright. This category includes biometric categorisation systems that use sensitive characteristics such as political convictions, religious beliefs, race or sexual orientation; facial recognition databases (subject to limited exceptions for law enforcement purposes); social scoring; and AI systems that manipulate human behaviour.

High risk

The second category comprises high-risk AI systems, which attract the strictest requirements under the EU AI Act. Providers and deployers of these systems must comply with heightened regulatory obligations, including initial risk assessments, human oversight and transparency requirements. The high-risk designation aims to protect individuals against the risk of physical harm, for example from vehicles, and against the risk of discrimination due to bias, such as in recruitment or education. Examples of high-risk AI systems include those intended for use in critical infrastructure, education, and recruitment or employee selection processes.

Limited risk

Limited-risk AI systems attract minimal compliance requirements that focus mainly on transparency obligations, so that users are made aware that they are interacting with an AI. Chatbots serve as an example of limited-risk AI systems.

Minimal risk

The free use of minimal-risk AI systems is permitted, such as the use of AI-enabled video games or spam filters.
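
The four tiers described above can be condensed into a simple lookup, as in the illustrative Python sketch below, which merely restates this article’s summaries and is not a substitute for legal analysis of any individual system.

```python
# Headline consequence for each tier of the EU AI Act's risk pyramid,
# condensed from the descriptions above (illustrative, not legal advice).
RISK_PYRAMID = {
    "unacceptable": "banned outright",
    "high": "strictest obligations: risk assessments, human oversight, transparency",
    "limited": "mainly transparency obligations, e.g. disclosing the use of AI",
    "minimal": "free use permitted",
}


def headline_obligation(tier: str) -> str:
    """Look up the headline consequence for a given risk tier."""
    return RISK_PYRAMID.get(tier, "unknown tier: seek advice")


print(headline_obligation("high"))
```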

GPAI models

An AI system often incorporates one or more AI models. In such instances, obligations may apply concurrently to the AI system and to the GPAI model it incorporates, affecting different stakeholders in the AI value chain to varying degrees.

A GPAI model is defined as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.

This concept was added later in the legislative process to address the evolution of large language models which often form the basis of popular GenAI systems, such as ChatGPT.

GPAI models are divided into “normal” models and models posing “systemic risk”. Models with systemic risk face stringent obligations, including model evaluation, systemic risk mitigation, incident reporting and cybersecurity protection.

Providers of all GPAI models must maintain technical documentation and make integration information available to downstream providers. They must also respect EU copyright law and provide a summary of the content used for training. Some open-source AI models are exempt from these rules unless they pose a “systemic risk”.

Testing and developing AI products or services

The EU AI Act sets up coordinated “regulatory sandboxes”. These sandboxes provide businesses with a controlled environment to test, develop, train and experiment with AI products and services under the supervision of national regulators and in real-world conditions.

These sandboxes serve several purposes, including research and development, learning and compliance, and innovation. They foster AI innovation by establishing a controlled experimentation and testing environment during the development and pre-marketing phase. They also facilitate both company and regulatory learning about new technologies and products, thus contributing to greater regulatory compliance. In general, sandboxes play a crucial role in supporting innovation and the development of safe AI models and systems.

Access to these sandboxes will be free of charge for SMEs and start-ups, with certain exceptions, while SMEs established in the EU will have prioritised access. Businesses using the sandboxes remain liable for damage to third parties arising from the experimentation. However, they will not be subject to administrative fines as long as they follow, in good faith, the guidance provided by the national competent authority. Under Article 57 of the EU AI Act, member states must set up at least one regulatory sandbox within 24 months of the legislation entering into force.

In Ireland, the National Standards Authority of Ireland is tasked with monitoring the progress of the EU pilot project on sandboxes. Lessons learned from this pilot project will inform the development of national sandboxes for AI. Additionally, Enterprise Ireland is investigating the potential for establishing a regulatory sandbox in the areas of fintech and edtech.

Implementation timeline

The EU AI Act sets out various implementation deadlines according to risk categories and types of AI. The main milestone dates, illustrated in the sketch after this list, include:

  • The prohibition on AI systems falling into the “unacceptable risk” category will take effect six months after the EU AI Act enters into force;
  • The EU AI Office must publish codes of practice on the proper application of the EU AI Act within nine months;
  • Obligations around GPAI models will take effect after 12 months;
  • A national notifying authority and market surveillance authority must be designated by each member state within 12 months;
  • At least one operational regulatory sandbox must be established in each member state within 24 months; and
  • Obligations for high-risk systems will commence after 24 months, subject to certain exceptions that only apply after 36 months.
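
To see how these offsets translate into calendar dates, the short sketch below works forward from an assumed entry-into-force date of 1 August 2024. That date is purely illustrative, since the actual date depends on publication in the Official Journal; the sketch uses the third-party dateutil library.

```python
# Milestone dates computed from an assumed entry-into-force date.
from datetime import date
from dateutil.relativedelta import relativedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumption for illustration only

milestones = {
    "prohibitions on unacceptable-risk AI apply": relativedelta(months=6),
    "EU AI Office codes of practice due": relativedelta(months=9),
    "GPAI obligations apply; national authorities designated": relativedelta(months=12),
    "regulatory sandboxes operational; most high-risk obligations": relativedelta(months=24),
    "remaining high-risk obligations apply": relativedelta(months=36),
}

for description, offset in milestones.items():
    print(f"{ENTRY_INTO_FORCE + offset:%d %B %Y}: {description}")
```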

The EU AI Act is a significant development that will impact businesses across the EU, including Ireland. In order to prepare for the new legislation, Irish businesses should review their organisation’s AI governance structures and policies and familiarise themselves with the key provisions of the EU AI Act outlined above.

Co-written by Isabel Humburg and Hannah McLoughlin of Pinsent Masons.
