Guest Column | March 4, 2024

6 Things Medtech Companies Should Know About The EU's AI Act

By Josefine Sommer, Eva von Mühlenen, and George Herring, Sidley Austin


With the agreed text of the European Union’s long-awaited Artificial Intelligence Act (AI Act) now available (subject to minor linguistic and numbering tweaks), the message to medtech companies is clear: they will need to begin their journey toward compliance by understanding their AI systems, determining their role under the AI Act, and identifying the specific obligations and implementation strategies that apply to them.

In this article, we answer six key questions arising from the AI Act for medtech companies whose AI systems are governed by the Act, whether those systems are themselves medical devices or are incorporated into their devices (in-scope devices).

  1. What devices does the AI Act apply to?
  2. What is the territorial scope of the AI Act?
  3. What rules apply to in-scope devices?
  4. What do providers of in-scope devices need to consider at the development stage?
  5. When does the AI Act apply?
  6. What are the risks of non-compliance for medtech companies under the AI Act?

These (and many more) questions will be front of mind for compliance and regulatory professionals in the medtech industry as they grapple with the implications of the AI Act for their business.

1. What Devices Does The AI Act Apply To?

In-scope devices that undergo a third-party conformity assessment under the Medical Device Regulation 2017/745 (MDR) or the In Vitro Diagnostic Medical Devices Regulation 2017/746 (IVDR) must comply with the AI Act’s requirements for high-risk AI systems. In practice, this means virtually all AI systems used in medical devices will be within scope because of classification Rule 11 of the MDR.

The AI Act defines AI as machine-based systems, designed to operate with varying levels of autonomy, that may exhibit “adaptiveness after deployment” and the ability to “infer” outputs including “predictions, content, recommendations, or decisions.” The definition was refined through the legislative process both to clearly distinguish AI systems from other software, which falls outside the scope of the AI Act, and to align closely with the OECD definition of AI. AI systems put into service for the sole purpose of scientific research and development are excluded from the scope of the AI Act.

Medtech companies will have to meet the AI Act’s requirements primarily in their role as “providers” (manufacturers or developers) of AI systems but may also be subject to the rules covering “deployers” to the extent they use AI systems. The rules for deployers focus on using AI systems in accordance with their intended purpose. Providers of in-scope devices will be subject to both the AI Act and the MDR or IVDR.

2. What Is The Territorial Scope Of The AI Act?

Providers of in-scope devices – irrespective of where they are established – are covered by the AI Act when the device is placed on the EU market, put into service in the EU, or if the output produced by the AI system is used in the EU.

For example, this “extraterritorial” effect means that the use of an AI-based clinical decision support device provided by a company located in the United States would fall within the scope of the AI Act when used for decision-making on patients in the EU.
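The three alternative jurisdictional triggers can be restated as a simple disjunction. The following is an illustrative sketch in Python; the function and parameter names are ours, not the Act’s:

    def in_territorial_scope(placed_on_eu_market: bool,
                             put_into_service_in_eu: bool,
                             output_used_in_eu: bool) -> bool:
        # Any single trigger suffices, irrespective of where the provider is established.
        return placed_on_eu_market or put_into_service_in_eu or output_used_in_eu

    # The U.S. clinical decision support example above: only the output is used in the EU.
    assert in_territorial_scope(False, False, True)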

3. What Rules Apply To In-Scope Devices?

The AI Act takes a risk-based approach, with varying levels of requirements applying to different AI systems depending on their use. Many of the requirements covering in-scope devices will sound familiar to medtech companies that have already brought devices into compliance with the MDR or IVDR. However, some of the AI Act’s requirements for in-scope devices are distinct from those in the MDR and IVDR, and medtech companies should undertake a comprehensive gap assessment to identify areas of potential non-compliance.

To assist industry and to avoid duplication and additional administrative burden, the AI Act permits providers to streamline the implementation of requirements where overlaps arise with the MDR/IVDR. Overlapping elements include, for example:

  • Ensuring that the in-scope device has undergone an AI Act conformity assessment, demonstrating compliance with the requirements for high-risk AI systems. In practice, such AI Act conformity assessment may be done simultaneously with the in-scope device’s MDR/IVDR conformity assessment, provided that the notified body is AI Act-accredited.
  • Putting a quality management system (QMS) in place documenting the strategies, policies, and procedures of the company covering matters including, among others, regulatory compliance, risk management, the implementation of a post-market monitoring system, document retention, compliance with applicable technical specifications, and quality control. In practice, these elements should be integrated into the provider’s existing QMS.
  • Preparing instructions for use (IFU), detailing the intended purpose of the in-scope device or, where it is distinct, of the high-risk AI system used in conjunction with the device. This must include the level of accuracy of the AI system, including the metrics against which it has been tested and validated. In practice, these elements of the AI system’s IFU are likely to be included in the IFU of the in-scope device.

Providers should be aware that although the processes needed to meet these requirements can be integrated into the equivalent processes required under the MDR/IVDR, the substance of the requirements themselves is different and demands a distinct set of expertise and know-how.

Responsibility for ensuring compliance with the requirements for high-risk AI systems under the AI Act will place considerable additional demands on the skills of affected in-house compliance teams, who will soon be tasked with designing a single streamlined compliance process for products captured by both the AI Act and MDR/IVDR.

4. What Do Providers Of In-Scope Devices Need To Consider At The Development Stage?

The AI Act introduces several new obligations affecting in-scope devices that are specific to the use of AI, many of which should be considered at the development stage. These include:

  • Training, validation, and testing data of in-scope AI systems are subject to new quality rules.
  • AI systems must be capable of automatically recording their decision-making, with such records to be kept for at least six months (see the sketch after this list). This requirement applies in addition to document retention rules under the MDR/IVDR.
  • In-scope devices also will need to provide effective transparency and enable human oversight. The intention of the AI Act is that human oversight will prevent or minimize risks to health and safety through the intervention of natural persons when the AI system is in use. In general, this means AI systems must be provided to users in such a way that natural persons are able to understand the limitations of the device and correctly interpret its output.
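To make the record-keeping point concrete, the following is a minimal sketch in Python of automatic event logging with a retention check. The field names and retention logic are illustrative assumptions; the AI Act does not prescribe a particular log format:

    import json
    from datetime import datetime, timedelta, timezone

    MIN_RETENTION = timedelta(days=183)  # at least six months under the AI Act

    def log_inference(log_file, model_version, input_ref, output):
        """Append an automatically generated record of one AI system output."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "input_ref": input_ref,  # a reference to the input, not the data itself
            "output": output,        # e.g., a prediction or recommendation
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(record) + "\n")

    def purge_expired(records):
        """Drop records only once the minimum retention period has elapsed.
        Longer MDR/IVDR document retention rules may still require keeping them."""
        cutoff = datetime.now(timezone.utc) - MIN_RETENTION
        return [r for r in records
                if datetime.fromisoformat(r["timestamp"]) >= cutoff]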

5. When Does The AI Act Apply?

Pending final adoption by the European Parliament in the coming weeks, the AI Act is expected to become EU law later this year. Once its provisions apply, the new regime will be felt horizontally across all sectors using AI systems. Among these, the medtech industry will have to grapple with new requirements for in-scope devices, which will apply in different ways throughout the medtech supply chain. Once it is determined which rules apply, understanding when they apply and establishing a clear timeline to compliance will be critical.

The primary set of rules affecting medtech companies will apply after a three-year transitional period, but medtech companies with in-scope devices should consider taking action now to ensure they implement efficient routes to compliance in good time. For example, they must ensure that, once available, they engage a notified body accredited as a conformity assessment body for the purposes of the AI Act. The accreditation of notified bodies will in turn depend on the development of harmonized standards against which accreditations and conformity assessments can be carried out. The European Commission has already requested its standardization bodies to develop harmonized standards to support the implementation of the AI Act.

Different elements of the AI Act will apply on a staggered timeline.

Implementation support will be provided to industry. For example, the European Commission is obliged to produce detailed guidance on the relationship between the AI Act and the MDR and IVDR, and the newly established AI Office will produce codes of practice within nine months of the AI Act’s entry into force.

6. What Are The Risks Of Non-Compliance For Medtech Companies Under The AI Act?

The AI Act contains a tiered system of fines aimed at incentivizing compliance. The maximum penalty, reserved for engaging in prohibited AI practices, is the greater of (i) a fine of up to 35 million euros or (ii) 7% of worldwide annual turnover.

Breaches of certain other provisions, including the obligations imposed on providers, will attract fines of (i) up to 15 million euros or (ii) 3% of worldwide annual turnover, whichever is greater. Enforcement of the AI Act will fall to the member states, which also will be represented on a new European Artificial Intelligence Board tasked with issuing opinions, recommendations, and advice on the implementation of the AI Act.
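To illustrate the “whichever is greater” mechanic with a hypothetical worked example in Python (the turnover figure is invented; the caps and percentages are those stated above):

    def max_fine(worldwide_turnover_eur, flat_cap_eur, turnover_pct):
        # The upper bound is the greater of a flat cap or a share of turnover.
        return max(flat_cap_eur, worldwide_turnover_eur * turnover_pct)

    turnover = 600_000_000  # hypothetical worldwide annual turnover in euros
    print(max_fine(turnover, 35_000_000, 0.07))  # 42,000,000.0: 7% exceeds the cap
    print(max_fine(turnover, 15_000_000, 0.03))  # 18,000,000.0: 3% exceeds the cap

For smaller companies the flat cap sets the binding maximum; for larger ones the turnover percentage dominates.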

A new body, the AI Office, will support market surveillance and enforcement at the member state level. The European Commission decision of January 24, 2024, establishing the European AI Office entered into force on February 21, 2024. The AI Office will support the implementation of the rules for high-risk AI systems, such as those used in in-scope devices, and is expected to do so without duplicating the work of sector-specific bodies.

Furthermore, the AI Act introduces a mechanism whereby natural or legal persons may lodge AI-related complaints with market surveillance authorities. This provision could significantly affect medtech companies, as it opens the door for competitors or other stakeholders to file complaints against AI systems that they believe do not comply with the AI Act’s stringent requirements.

Conclusion

The AI Act is geared toward streamlining requirements within existing legislation and establishing a framework that accommodates the particular characteristics of AI systems, such as continuous learning capabilities. As AI technologies evolve, particularly those AI systems that adapt and learn from new data post-deployment, guidelines will be needed to support medtech companies’ alignment with regulatory standards.

As further clarity emerges over time, medtech companies should be encouraged that the Commission will be keen to avoid a repetition of the transitional issues experienced with the MDR and IVDR, particularly for this flagship piece of legislation, which is intended to foster innovation. Given the new requirements introduced by the AI Act, affected medtech companies should consider the questions posed above, determine whether their devices fall within the scope of the AI Act, and, if so, proactively build a robust AI governance framework to meet these challenges.

About The Authors:

Josefine Sommer is a partner in Sidley Austin’s Global Life Sciences practice based in Brussels. She assists clients in regulatory, compliance, and enforcement matters. She counsels medical device, pharmaceutical, and biotech companies on EU regulatory compliance, including in clinical trials, authorizations, and regulatory authority interactions. She additionally advises clients on EU environmental law, including chemicals legislation impacting medical devices, pharma, and biotech, as well as consumer products. Sommer handles GMP and quality management system (QMS) matters, and also represents companies in regulatory enforcement actions.

Eva von Mühlenen is a senior advisor and head of Sidley Austin’s Life Sciences Group in Switzerland. She advises on complex legal and regulatory matters across the life sciences spectrum, drawing on her broad experience and knowledge of Swiss regulations, as well as the administrative procedures involved in manufacturing, authorizing, marketing, and advertising of medicinal products and medical devices. In addition, she advises on matters concerning digital health, including the application of artificial intelligence and machine learning software in healthcare. She also advises on digital and data strategies, including data protection, privacy, and cybersecurity issues.

George Herring is an associate in Sidley’s Global Life Sciences practice, based in the firm’s London office. He advises a range of clients within the life sciences industry, including medicinal product, medical device, and food companies, on the interpretation of the EU- and UK-applicable legal framework, regulatory compliance, market access strategy, and optimization of IP regulatory rights. Herring also has commercial experience, working in-house at a major biotechnology company, advising on data privacy and transparency matters alongside contracting arrangements with healthcare professionals and third-party vendors.