Guest Column | November 1, 2024

AI-Enabled Medical Device Manufacturers: Are You Prepared for Evolving FDA Oversight?

By Brett Mason, Troutman Pepper


With the rise of artificial intelligence (AI)-enabled medical devices, the FDA is soliciting comments on AI safety and effectiveness considerations in advance of its upcoming Digital Health Advisory Committee meeting. The meeting will specifically address FDA concerns related to total product life cycle considerations for generative AI-enabled devices, including premarket performance evaluation, risk management, and post-market performance monitoring.

As the FDA continues to develop and solidify its policies, guidance, and regulations on AI, medical device manufacturers should prepare to navigate new legal challenges, as well as evolving safety and compliance risks.

The Current Landscape Of AI-Enabled Medical Devices

At present, the FDA has authorized the marketing and sale of approximately 950 AI-enabled medical devices, and that number will only continue to rise. These are devices that use AI, including data analytics and machine learning (ML) algorithms, to enhance the diagnosis and treatment of patients.

These AI-enabled medical devices are transforming medicine and range from algorithms that interpret diagnostic imaging to implanted devices that automatically adjust a patient's treatment based on ongoing data monitoring.

While these devices can offer significant benefits to patients, FDA regulators remain concerned about potential risk factors such as limited training data, bias, gaps in performance metrics, and end-user error.

Transparency And Evolving Risk Throughout The Product Life Cycle

As early as 2021, the FDA, in coordination with Health Canada and the United Kingdom's Medicines and Healthcare products Regulatory Agency (MHRA), outlined guiding principles for good machine learning practice (GMLP) and emphasized the importance of transparency. These considerations have only become more important with the rise of generative AI.

For example, warnings for users of AI-enabled devices (most often, diagnosing and/or prescribing physicians) inevitably must account for more and different variables than those for traditional medical devices. It is not just a matter of alerting users to physical and operational risks but also of explaining the complex interplay between the AI algorithm and the data set used to train it. The FDA is concerned that end users may not understand AI's limitations and how its capabilities are affected by the individual patient, how the device is used, and what data the device can access.

One area of regulatory concern is that an AI-enabled medical device may have limitations in data characterization, meaning certain patient populations are not appropriate candidates for the device. For example, a device developed using only data from adults would not be appropriate for pediatric patients. However, it may not always be simple for manufacturers to identify such data gaps if certain patient populations are inadvertently underrepresented during device development. If the gap remains undetected, it might not come to light until after commercialization.
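To make the data-gap problem concrete, consider the following minimal sketch of a coverage check run over development data. It is illustrative only: the record fields, the age bands, and the 10% threshold are assumptions made for this example, not FDA criteria or any manufacturer's actual method.

```python
from collections import Counter

# Hypothetical development data; real records would come from the
# device's actual training set and carry many more fields.
training_records = [
    {"age": 54}, {"age": 61}, {"age": 47}, {"age": 72},
    {"age": 58}, {"age": 66}, {"age": 69}, {"age": 51},
]

def age_band(age):
    """Bucket ages into coarse bands (illustrative cut points)."""
    if age < 18:
        return "pediatric"
    return "adult" if age < 65 else "elderly"

def coverage_gaps(records, expected_bands, min_share=0.10):
    """Report expected subgroups that are missing or fall below
    min_share of the data. Both expected_bands and min_share are
    assumed inputs for illustration, not regulatory criteria."""
    counts = Counter(age_band(r["age"]) for r in records)
    total = len(records)
    return {
        band: counts.get(band, 0) / total
        for band in sorted(expected_bands)
        if counts.get(band, 0) / total < min_share
    }

gaps = coverage_gaps(training_records, {"pediatric", "adult", "elderly"})
if gaps:
    # Here: {'pediatric': 0.0} -- a candidate labeling limitation.
    print("Underrepresented subgroups:", gaps)
```

A check of this kind surfaces a missing population only when someone thought to list it as expected, which is precisely why inadvertent gaps can survive until after commercialization.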

Even more challenging, these AI systems can be designed to learn continuously over time as the device absorbs new data from patients. Manufacturers may therefore find it harder to anticipate or test for risks that only emerge down the road, as the AI digests more data or encounters unanticipated scenarios.
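One way a manufacturer might operationalize vigilance over a continuously learning device is rolling post-market performance monitoring. The sketch below is a simplified illustration, not an FDA-prescribed method; the window size, the alert threshold, and the simulated drift are all assumed values.

```python
import random
from collections import deque

random.seed(0)

class PerformanceMonitor:
    """Rolling rate of agreement between device outputs and
    later-confirmed outcomes, with an alert on sustained degradation.
    The window size and threshold are illustrative assumptions."""

    def __init__(self, window=100, alert_below=0.85):
        self.results = deque(maxlen=window)  # 1 = agreed, 0 = disagreed
        self.alert_below = alert_below

    def record(self, agreed):
        self.results.append(1 if agreed else 0)

    def degraded(self):
        # Only alert once a full window of post-market cases is in hand.
        if len(self.results) < self.results.maxlen:
            return False
        return sum(self.results) / len(self.results) < self.alert_below

# Simulate drift: early cases agree 95% of the time, later ones only
# 70%, as the deployed model encounters data unlike its training set.
monitor = PerformanceMonitor()
for case_number in range(400):
    p_agree = 0.95 if case_number < 200 else 0.70
    monitor.record(random.random() < p_agree)
    if monitor.degraded():
        print(f"Alert at case {case_number}: agreement fell below 85%")
        break
```

The design point is that the alert fires from post-market data alone, without waiting for a premarket test suite to be rerun, which is what makes this kind of monitoring a natural complement to the continuously updated warnings discussed below.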

In light of these potential risks, the FDA will likely be looking to manufacturers to continuously update warnings throughout the product’s life cycle. With post-market monitoring as one area of focus for the FDA’s upcoming committee meeting, AI-enabled medical device manufacturers should prepare for the possibility of more stringent FDA post-market monitoring requirements.

Balancing AI With Clinical Judgment

The medical device landscape is also evolving because manufacturers are stepping into a risk area previously occupied by medical providers alone. Before AI-enabled medical devices, the manufacturer might supply providers with labeling that sets out risks and instructions, and sometimes even training, but the provider ultimately made the final medical decisions.

Now, AI-enabled devices can offer real-time outputs that help inform the medical provider's decisions. A recent study found that “ChatGPT-4 alone demonstrated higher performance” in diagnosing several cases than two physician groups, including one group that used ChatGPT-4 as a resource. While these outputs can be incredibly beneficial and may prove more accurate than human judgment in some settings, the FDA prioritizes full transparency to medical providers about the reliability of these outputs.

One GMLP for risk mitigation is to present confidence intervals with such outputs. For example, the AI-enabled device might report a predicted risk of infection during heart transplant surgery along with a 95% confidence interval around that estimate. The interval helps the surgical team gauge how reliable the AI's recommendation is.
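One simplified way such a qualifier could be computed is from the spread of an ensemble's predictions, as in the sketch below. This is an assumption-laden illustration, not how any particular device works: the per-model risk values are hypothetical, and real devices may quantify uncertainty quite differently.

```python
import statistics

def risk_with_interval(ensemble_risks, z=1.96):
    """Summarize one patient's ensemble risk predictions as a point
    estimate plus an approximate 95% interval (mean +/- z * standard
    error). A simplified stand-in for however a real device would
    quantify uncertainty."""
    n = len(ensemble_risks)
    mean = statistics.fmean(ensemble_risks)
    sem = statistics.stdev(ensemble_risks) / n ** 0.5
    low, high = max(0.0, mean - z * sem), min(1.0, mean + z * sem)
    return mean, (low, high)

# Hypothetical per-model estimates of infection risk for one patient.
risks = [0.62, 0.58, 0.71, 0.66, 0.60, 0.64, 0.69, 0.57]
mean, (low, high) = risk_with_interval(risks)
print(f"Predicted infection risk {mean:.0%} (95% CI {low:.0%}-{high:.0%})")
```

A wide interval signals to the surgical team that the models disagree and the output deserves more scrutiny, while a narrow interval conveys that the estimate is comparatively stable.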

Risk management is another area of consideration for the upcoming advisory committee meeting, making it likely that the FDA will consider the significance of these types of qualifiers to mitigate risk for the patient. However, it remains to be seen where, ultimately, responsibility and liability will be allocated between the manufacturer and its end users.

Conclusion

Given the evolving nature of AI, the FDA continues to weigh the risks of and safeguards for AI-enabled devices. As this regulatory landscape develops, medical device manufacturers should monitor FDA guidance and AI advancements to ensure that their AI-enabled medical devices safely and effectively optimize patient outcomes, consistent with regulatory requirements.

About The Author:

Brett Mason, a partner in Troutman Pepper's Atlanta office, is a trial attorney and litigator who defends clients facing complex tort litigation. Her approach bridges the gap between the law and science.