
Out-Law Analysis | Reading time: 7 min.

Healthcare providers need AI strategy as EU AI Act requirements loom


Developing an AI strategy can help healthcare providers meet legislative requirements arising under the EU AI Act.

An effective AI strategy will help healthcare providers address the data security and privacy challenges that can arise from AI use, as well as the risk of bias. It will also help them build patients’ trust in the technology and, in turn, enable better diagnosis of diseases, treatment planning, and personalised care.

As Pinsent Masons experts explain below, AI strategies can be developed to reflect EU AI Act obligations as well as existing regulations and best practices in place around Europe.

Healthcare and high-risk AI

A first step for healthcare providers seeking to develop or use AI in the EU should be to determine the extent of their exposure to the EU AI Act, which sets out a risk-based system of regulation under which some uses of AI are entirely prohibited and strict requirements apply to ‘high-risk’ AI.

Pinsent Masons has developed a guide to help organisations understand whether the AI systems they develop or use constitute ‘high-risk’ AI. Where this is the case, organisations will face obligations around registration, quality management, monitoring, record-keeping, and incident reporting. They will also need to ensure that the AI systems themselves comply with requirements around data use, transparency, human oversight, and accuracy, among other things.

Most uses of AI in healthcare are likely to constitute ‘high-risk’ AI.

It has also been long established that software – including AI – can constitute a medical device for the purposes of regulation. In the EU, medical devices are strictly regulated under existing medical devices rules. The EU AI Act affirms that AI systems that are subject to conformity assessments under the medical devices regime will be considered high-risk AI systems for the purposes of the EU AI Act.

Transparency and fairness

Transparency requirements under Article 13 of the EU AI Act are focused on ensuring that those using AI systems receive sufficient information – and instructions – from those developing such systems, so that they can “interpret a system’s output and use it appropriately”.

While some healthcare providers may develop their own AI systems in-house, it is likely that many will procure them from specialist developers. Healthcare providers will therefore want to understand what information they can – and should – expect from AI developers they procure from.

Article 50 of the EU AI Act sets out further transparency requirements that are aimed at ensuring people are informed when they are interacting directly with an AI system and made aware about certain AI-generated outputs.

Annabelle Richard of Pinsent Masons in Paris said: “In France, healthcare professionals are already subject to a transparency obligation vis-à-vis their patients when using a medical device that operates with a machine learning AI system. Patients need to be informed of the device’s use of AI as part of a diagnostic, treatment, or preventive procedure. The healthcare professional is then under the obligation to inform each patient of the output of the device. The scope of this obligation is narrower than the one provided by the AI Act but will continue to apply.”

The transparency requirements in the EU AI Act sit alongside other rules that promote the development of AI systems that adhere to principles of fairness and respect for human rights. In this regard, Article 27 of the Act introduces a requirement for a fundamental rights impact assessment to be carried out prior to the deployment of a high-risk AI system.

Special rules around data

Healthcare AI relies heavily on patient data, which is sensitive and subject to strict conditions on processing – including rules outlined in the GDPR in the EU. The EU AI Act goes further than the GDPR, however, in laying out additional restrictions around health data processing.

Article 10 of the Act requires data used in the training, validation and testing of high-risk AI to be subject to data governance and management practices appropriate for the intended purpose of those systems. Those practices include steps to anticipate and, separately, detect, prevent and mitigate potential biases that are likely to affect individuals’ health and safety, have a negative impact on fundamental rights, or lead to discrimination.

The rules provide, however, that health data or other forms of sensitive personal data can only be processed where strictly necessary for the purpose of ensuring bias detection and correction – healthcare providers will generally be expected to use either synthetic or anonymised data for this purpose.
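By way of illustration only – the Act does not prescribe any particular technique – a data governance process for bias detection might include a simple disparity check across demographic subgroups, run on synthetic or anonymised records as Article 10 envisages. The subgroup labels, the false-negative metric and the tolerance threshold in the minimal Python sketch below are all assumptions made for the example, not anything mandated by the Act:

```python
from collections import defaultdict

def subgroup_false_negative_rates(records):
    """Compute the false-negative rate per demographic subgroup from
    (subgroup, ground_truth, model_prediction) triples."""
    fn = defaultdict(int)   # positive cases the model missed, per group
    pos = defaultdict(int)  # total positive cases, per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            pos[group] += 1
            if y_pred == 0:
                fn[group] += 1
    return {g: fn[g] / pos[g] for g in pos if pos[g]}

# Hypothetical synthetic records: (subgroup, ground truth, model output).
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

rates = subgroup_false_negative_rates(records)
print(rates)  # e.g. {'group_a': 0.33..., 'group_b': 0.67...}

# Flag subgroups whose false-negative rate exceeds the best-performing
# subgroup by more than a chosen tolerance -- candidates for mitigation.
TOLERANCE = 0.2  # illustrative value; would be set via risk assessment
best = min(rates.values())
flagged = {g: r for g, r in rates.items() if r - best > TOLERANCE}
print(flagged)  # here, group_b would be flagged for review
```

In practice a provider would run such checks as one documented step within a wider quality management process, alongside measures to prevent and mitigate any disparities the checks surface.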

Jeroen Schouten of Pinsent Masons in Amsterdam said the Dutch government has a focus on improving access to and availability of health data, having published a national vision and strategy on the health information system and a vision and strategy on secondary use of health data in 2023.

“Health data is to be made available in a standardised, accessible, and usable format,” Schouten said. “Patient files will become healthcare-institution agnostic to prevent issues with non-interoperability of electronic patient records between institutions. The processing and use of health data will be restructured in such a way that it becomes more readily available for primary health care and for secondary use and general prevention purposes.”

“Health data records are to be linked to the specific individual first rather than to a health care institution. Each person should have one interconnected record containing a holistic view of all their health data rather than a fragmented, institution-based record. That record can also contain health data obtained through wearables and other informal sources of health data. All healthcare providers creating health data records are connected to a common technical infrastructure with national coverage,” he said.

An enabler for health data use in the EU will be the establishment of a new European health data space (EHDS), which is to be provided for in EU law. That framework envisages the use of health data for, among other things and subject to safeguards, the purposes of training AI.

Human oversight and bias

Under the EU AI Act, high-risk AI systems must be designed and developed in such a way that they can be effectively overseen by people when they are in use. The Act is not prescriptive about the specific oversight measures that must be put in place – only that the measures are commensurate with the risks, level of autonomy and context of use of the high-risk AI system.

A stated purpose of the human oversight requirements is to address the risk that people come to over-rely on AI systems’ outputs when making later decisions – including those pertaining to healthcare – and thereby perpetuate any bias embedded within those outputs.
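One way – among many, and purely as a sketch – to operationalise proportionate oversight is to gate a system’s outputs on model confidence, routing uncertain cases to a clinician rather than acting on them automatically. The threshold, data structure and routing labels below are illustrative assumptions, not measures prescribed by the Act:

```python
from dataclasses import dataclass

# Illustrative threshold; in practice it would be set and periodically
# revisited as part of the provider's risk management process.
REVIEW_THRESHOLD = 0.85

@dataclass
class Prediction:
    patient_id: str
    finding: str
    confidence: float  # model's self-reported probability, 0.0-1.0

def triage(pred: Prediction) -> str:
    """Route a model output: accept high-confidence findings only after
    logging them for audit; send everything else to a human reviewer."""
    if pred.confidence >= REVIEW_THRESHOLD:
        return "auto-accept (logged for audit)"
    return "refer to clinician for human review"

for p in [Prediction("p-001", "no anomaly", 0.97),
          Prediction("p-002", "possible lesion", 0.62)]:
    print(p.patient_id, "->", triage(p))
```

Whatever mechanism is chosen, the Act expects it to be commensurate with the system’s risk, autonomy and context of use, and documented accordingly.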

Cerys Wyn Davies of Pinsent Masons in Birmingham said: “AI models can inherit biases from training data, leading to discriminatory outcomes. In healthcare, biased predictions run the risk of propagating stereotypes or other social prejudices against vulnerable groups, which could exacerbate disparities in access to and quality of healthcare. The Act aims to address this by encouraging developers to document bias mitigation strategies.”

“The UK government has already taken significant action to overcome disparities in the performance of medical devices. For example, the Medicines and Healthcare products Regulatory Agency now requests that approval applications for new medical devices describe how they will address bias. Further, NHS guidance has been updated to reflect that pulse oximeters can be less accurate when used on patients with darker skin tones,” she said.

Skills, funding and other support

Limited resources and the costs of compliance can pose a challenge for healthcare providers in developing an effective AI strategy and meeting the requirements of the EU AI Act. However, there are initiatives in some EU member states designed to support businesses.

Frankfurt-based Volker Balda of Pinsent Masons said, for example, that an AI action plan set out by the Federal Ministry of Education and Research in Germany last year deals with AI in healthcare.

“The plan emphasises the promotion of research and talent, aiming to advance AI potential in the healthcare sector through funding and support for young AI scientists,” Balda said. “The plan also addresses the need to counteract demographic changes and the shortage of specialists, and highlights the importance of revising data infrastructure to strengthen the availability of biomedical data for exploitation by AI.”

“The AI action plan also includes various funding initiatives to support the development of AI solutions for specific medical issues and to address ethical, legal, and social aspects of AI in healthcare. These initiatives are part of Germany’s broader strategy to become a leading hub for AI development and application,” he said.

Like other countries, Spain has developed a national AI strategy. Tatiana Fernández de Casadevante Aguirreche of Pinsent Masons in Madrid said further Spanish government initiatives are also focused on promoting AI use in healthcare specifically – including through the creation of so-called AI observatories.

“In addition, Spanish universities, hospitals and/or research institutes and tech companies are actively involved in the development of AI-based healthcare solutions, focusing on areas such as AI-assisted diagnostic systems – such as through images and algorithms – as well as predictive analysis of patient data, prosthetic devices and optimisation of clinical processes,” Fernández de Casadevante Aguirreche said. “There is also a strong interest in further developing AI tools to help patients manage chronic diseases and improve the management of hospital resources.”

The development of new guidance and standards is expected to support EU AI Act compliance in future. Ireland’s National Standards Authority is already playing an important role in this regard, according to Dublin-based Dorian Rees of Pinsent Masons.

“Ireland aims to be at the forefront of AI standardisation, contributing to the development of international AI standards and ensuring that Irish expertise is influential in shaping global AI regulations,” Rees said. “This leadership can help position Ireland as a hub for ethical and innovative AI solutions. The National Standards Authority has already been providing guidance and resources to ensure businesses can navigate the new regulatory landscape effectively, while Ireland’s Data Protection Commission is also expected to play a crucial role in overseeing compliance and enforcing the Act's provisions, ensuring that AI technologies used in healthcare protect patient data and uphold high standards of accuracy and fairness.”

“The Irish government is committed to supporting the healthcare industry through grants, subsidies, and legal workshops to ease the financial and administrative burden of compliance. These efforts are designed to help SMEs and startups, which are integral to Ireland’s economy, adapt to the new regulations without stifling innovation,” he added.
