Guest Column | July 9, 2024

Understanding The Potential Of AI Med Devices Amid Regulatory Challenges

By Timothy Bubb, technical director, IMed Consultancy


The healthcare landscape has embarked on a new era with the advent of digital health technologies. From wearable sensors that monitor vital signs to AI-powered diagnostic tools, the range of innovations is vast and promises to revolutionize healthcare delivery.

Regulatory bodies such as the FDA and MHRA are making strides in defining and regulating AI/ML-enabled medical devices, with a focus on ensuring safety and efficacy. AI and ML offer immense potential to transform healthcare delivery, enabling early disease detection, personalized treatment approaches, and remote patient monitoring, to name a few applications. However, navigating the evolving regulatory landscape, shaped by new regulations like the EU AI Act, poses challenges, especially as digital health solutions can blur the lines between medical devices and non-medical tools. Current and forthcoming initiatives, such as the FDA’s predetermined change control plan and the U.K.'s regulatory sandbox for software medical devices incorporating AI, aim to give developers a clearer and more predictable runway to regulatory compliance in this rapidly evolving space.

Embracing AI-Powered Advancements

As the industry embraces these advancements, regulatory bodies around the world are grappling with the complexities of evaluating and approving medical devices that neither conform to traditional paradigms nor have a physical presence in the conventional sense. AI holds immense promise for healthcare delivery by facilitating early disease detection, enabling personalized treatment approaches, and augmenting remote consultations through telehealth solutions. AI-powered algorithms can analyze vast troves of health data to identify disease biomarkers, predict disease trajectories, and tailor interventions to individual patients.

While concerns about potential harm and bias are weighed when evaluating where AI/ML can be employed and with what degree of human oversight, more and more AI/ML tools are being developed in the healthcare sector to manage vast amounts of data and interpret it quickly and accurately. Whether that data takes the form of text, video, or imagery, AI/ML can save hours of manual analysis and cross-checking and suggest interpretations that would otherwise take human reviewers years to complete. AI/ML can rapidly analyze radiology images, histological data, posture, eye movement, speech speed, pitch and sound, and a whole range of other inputs.

AI and ML have the potential to transform several areas of healthcare:

  - Diagnostic imaging: AI and ML algorithms can analyze medical images such as X-rays, MRI scans, CT scans, and ultrasounds to identify features or structures present in the image and assist in diagnosing various conditions.
  - Remote patient monitoring: AI-powered devices can continuously collect data on vital signs, activity levels, and other metrics, enabling early detection and timely intervention.
  - Personalized medicine: AI/ML algorithms can analyze patient data, including genetic information, medical history, and lifestyle factors, to tailor treatment plans.
  - Clinical decision support systems (CDSS): AI/ML can examine patient data to aid diagnosis, treatment planning, and decision-making.
  - Wearable health devices: devices equipped with AI/ML can monitor health parameters such as heart rate, blood pressure, sleep patterns, and physical activity.
  - Robotic surgery: AI-powered robots could assist surgeons in minimally invasive procedures by enhancing precision, dexterity, and control.
  - Predictive analytics for healthcare management: AI and ML models can analyze large volumes of data, including electronic health records (EHRs), insurance claims data, and operational metrics, to identify patterns, trends, and risk factors, ultimately improving healthcare management.

These application areas demonstrate the versatility and potential impact of AI and ML in medical device development and healthcare delivery, spanning diagnosis and treatment through to patient monitoring and management. In underfunded areas of medical research, this could even prove life-changing by helping detect comorbidities and environmental or genetic factors that place particular individuals at higher risk of disease. Going beyond early detection, it could become possible to warn people of an estimated risk years before a disease begins to manifest, allowing preventive or protective changes in lifestyle.

Digital Health Products Vs. Digital Medical Devices

The delineation between digital health products and digital medical devices is not always simple, and it is crucial to understand the regulatory nuances. Digital health encompasses a spectrum of technologies, ranging from non-medical devices designed to monitor well-being to medical devices tailored for specific medical purposes. The mindfulness apps, sleep tracking software, and countless other apps flooding the internet, for example, are usually not medical devices. However, a change in a single word of a marketing claim can be the difference between a product remaining a digital health product and being regulated as a medical device.

A consensus, but not exhaustive, definition for “Software as a Medical Device” is provided by the International Medical Device Regulators Forum (IMDRF), which describes it as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device."

In its June 2023 Roadmap, the MHRA confirmed it is developing guidance to help identify Software as a Medical Device (SaMD), differentiating it from the wide and confusing range of other tools such as well-being and lifestyle software products, IVD software, and companion diagnostics.1

Understanding Digital Health Regulations

Regulators are thus striving to keep pace with technological advancements while addressing concerns about data security, potential bias, and the safety impacts of poorly performing clinical software tools. This underscores the need for robust laws that support proportionate regulation of AI across a range of sectors, including healthcare.

The EU has published the final text of the AI Act, which will regulate AI systems across multiple industries, including medical devices. It specifies one of the core objectives for healthcare AI regulation: “in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk.”2

As a result of wider definitions of high-risk AI uses, certain digital health products that are not considered medical devices may come under CE marking requirements for the first time through the AI Act’s assessment of high-risk AI systems. The legislation specifically calls out certain digital health technologies as high risk, for example, AI systems used for emergency healthcare patient triaging and systems used to evaluate eligibility for certain healthcare services. These are classified as high-risk “since they make decisions in very critical situations for the life and health of persons and their property.”3 Such digital health products will therefore now require notified body assessment and CE marking as AI systems, based on different criteria than the existing requirements for medical devices.

In Europe, medical device software incorporating AI or ML currently falls under Class IIa at a minimum and requires formal regulatory assessment. The forthcoming EU AI Act is poised to add a further layer of regulatory scrutiny for AI-based medical devices.

In the U.S., the FDA has significant experience in successfully regulating AI/ML-enabled devices and has gone so far as to compile a publicly available list of such AI-enabled tools with FDA marketing clearance. Among these, radiology accounts for the largest number of submissions and shows the steadiest increase in AI/ML-enabled device submissions. Interestingly, the algorithmic models involved are increasing in complexity, with more featuring deep learning.4

The FDA currently applies its “benefit-risk” framework and requires devices to conform to basic principles: demonstrated sensitivity and specificity for devices used for diagnostic purposes, validation of the intended purpose and stakeholder requirements against specifications, and development practices that ensure repeatability, reliability, and performance. The FDA also has recognized the need for some AI/ML systems to be adaptively retrained on new or context-specific data, so it has introduced processes that allow certain software changes to be pre-agreed between the manufacturer and FDA and then deployed without further regulatory assessment.5 This is a significant milestone in regulatory innovation, as the traditional assessment methods still used in EU medical device assessments can struggle to deliver similarly effective and proportionate regulation.
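To make the sensitivity and specificity principle concrete, the following sketch computes both metrics from a confusion matrix. This is illustrative only: the counts are invented example numbers, not data from any FDA submission, and a real diagnostic validation would report confidence intervals and predefined acceptance criteria as well.

```python
# Illustrative sketch: sensitivity and specificity, two of the basic
# performance metrics regulators expect for diagnostic devices.
# All counts below are hypothetical example numbers.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: proportion of actual positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: proportion of actual negatives correctly ruled out."""
    return tn / (tn + fp)

# Hypothetical validation results for an AI diagnostic algorithm:
tp, fn = 92, 8    # 100 diseased cases: 92 flagged, 8 missed
tn, fp = 180, 20  # 200 healthy cases: 180 cleared, 20 false alarms

print(f"Sensitivity: {sensitivity(tp, fn):.2%}")  # 92.00%
print(f"Specificity: {specificity(tn, fp):.2%}")  # 90.00%
```

The trade-off between the two is the crux of the benefit-risk question: lowering a detection threshold raises sensitivity (fewer missed cases) at the cost of specificity (more false alarms), and which side of that trade-off is acceptable depends on the clinical consequences of each error type.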

In the U.K., a new regulatory sandbox, piloted from May 2024, has been designed to provide a safe space in which regulators and developers can trial innovative healthcare AI products before they are deployed, helping to confirm the clinical efficacy and safety of AI systems.6 To keep pace with evolving ML methods and applications, the MHRA also will develop a system based more on guidance than regulation in the U.K., allowing for more frequent updates to account for the speed of innovation. Alongside the FDA and Health Canada, the MHRA has outlined 10 guiding principles to inform the development of good machine learning practice (GMLP) for safe, effective, high-quality medical devices that use AI/ML.7

In 2023, the MHRA also updated the Software and AI as a Medical Device Change Programme to ensure future regulatory requirements for software and AI are clear and patients are protected. The Change Programme specifically builds on the intention to make the U.K. a globally recognized home of responsible innovation for medical device software by achieving safety assurances, defining clear guidance and processes for manufacturers, and liaising with key partners such as the National Institute for Health and Care Excellence (NICE) and NHS England and also with international regulators through the International Medical Device Regulators Forum (IMDRF). In a bid to address bias and inequalities, the MHRA also confirms that it recognizes that SaMD and AIaMD must perform across all populations within the intended use of the device and serve the needs of diverse communities.

As the digital health landscape continues to evolve, collaborating with regulatory experts and embracing multidisciplinary approaches are crucial for navigating these challenges and ensuring compliance. Given the heightened complexity of AI-based medical devices and digital health products, it is imperative to discern whether AI/ML functionality is integral to the product or serves as a supplementary component, and how that decision affects fundamental product architecture, software algorithm design, and regulatory evidence generation strategy. Partnering with professionals who possess in-depth knowledge of regional regulations and emerging trends in digital health can expedite market entry and ensure compliance with evolving standards and requirements. By embracing collaboration and expertise, innovators can navigate regulatory challenges and pave the way for transformative digital health solutions.

References:

  1. Gov.uk, Software and AI as a Medical Device Change Programme – Roadmap, June 2023
  2. EUR-Lex, Proposal for a Regulation laying down harmonised rules on artificial intelligence, (27)
  3. EUR-Lex, Proposal for a Regulation laying down harmonised rules on artificial intelligence, (37)
  4. FDA, Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices
  5. FDA, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions, Draft Guidance for Industry and Food and Drug Administration Staff, April 2023
  6. Gov.uk, MHRA launches AI Airlock to address challenges for regulating medical devices that use Artificial Intelligence, May 9, 2024
  7. Gov.uk, Good Machine Learning Practice for Medical Device Development: Guiding Principles

About The Author:

With more than 10 years’ experience in QA/RA roles, Timothy Bubb, technical director at IMed Consultancy, has breadth and depth of knowledge across the regulatory, engineering, clinical, design and development, and quality assurance disciplines. Tim has a passion for empowering innovation in medical devices and brings insight and pragmatism to projects bringing complex lifesaving and life-enhancing products to market.