The FDA Regulatory Landscape For AI In Medical Devices
By Michele Buenafe, Jake Harper, and Andrew Gray
In recent years, the digitalization of the healthcare industry has accelerated to meet demand for smarter devices and robotics, wearable technology, AI-based data analysis, and enhanced platforms and simulations, among other advances. This digitalization has driven increased interest in incorporating artificial intelligence (AI) and machine learning (ML) technologies into medical devices.
Over the last decade, the FDA has reviewed and authorized a growing number of devices with AI/ML functionality across many different therapeutic categories using its 510(k) clearance, de novo, and premarket approval (PMA) processes — and it anticipates this trend will continue. In addition, AI and ML technologies may be used to support the investigation, development, and/or production of medical devices and other FDA-regulated products. When used for medical or other healthcare-related purposes, these technologies are likely subject to FDA regulations, policies, and guidance.
In Part 1 of this article series, we discuss the existing FDA programs and recently issued guidance impacting AI/ML technologies intended for use in healthcare, as well as what to expect from the FDA’s FY2023 priority list. In Part 2, we will examine the reimbursement framework for AI/ML and some challenges ahead for the medical device industry.
Existing FDA Programs Impacting AI/ML Technologies
FDA’s regulation and oversight of AI/ML software continues to grow, as evidenced by the online list compiled and maintained by FDA’s Center for Devices and Radiological Health (CDRH) of medical devices utilizing AI/ML technologies that CDRH has cleared or approved. That list currently includes more than 500 devices, the vast majority of which were cleared via the 510(k) process, along with a few de novo submissions and PMA applications. In terms of FDA review branch, the significant majority fall under radiology, followed by cardiovascular, hematology, and neurology.
However, FDA/CDRH’s regulatory priorities for AI/ML technologies extend beyond premarket review and are led by CDRH’s Digital Health Center of Excellence. Launched on Sept. 22, 2020, the Digital Health Center of Excellence’s main purpose is to foster responsible and high-quality digital health innovation. Its three main goals are to develop and issue guidance documents, increase the number and expertise of the digital health staff, and develop the Software Precertification Pilot Program.
On Jan. 12, 2021, the FDA released its first Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, which details a multipronged approach to advance FDA oversight of AI/ML-based medical software. The action plan responds to stakeholder feedback the agency received on its April 2019 discussion paper, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning–Based Software as a Medical Device. The five-point SaMD Action Plan outlines the following:
- further develop the proposed regulatory framework, including through draft guidance on a predetermined change control plan;
- encourage harmonized good machine learning practices to evaluate and improve ML algorithms;
- foster a patient-centered approach, including device transparency for users;
- support regulatory science; and
- advance real-world performance monitoring pilots.
In October 2021, the FDA, Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA) jointly identified 10 guiding principles that can inform the development of good machine learning practice for medical devices and help promote the safe, effective, and high-quality use of AI and ML.
Recently Issued Guidance Documents Affecting AI/ML
CDRH remains active in promulgating guidance documents impacting AI/ML technologies, including the following recently issued guidances.
Clinical Decision Support Software — Final Guidance (Sept. 28, 2022): This long-awaited final guidance represents a significant and more conservative shift from the prior draft guidance issued in September 2019 (described in our prior LawFlash) and could present challenges for developers of AI/ML technologies. This guidance document describes the FDA’s interpretation of the statutory exemption for clinical decision support (CDS) software functions under the Federal Food, Drug, and Cosmetic Act. Software that meets the four criteria set forth in the statute would be exempt from FDA’s medical device regulatory requirements. FDA’s interpretation of these four criteria, as described in the final guidance, will make it more challenging for software developers to fit their software products (including AI/ML software) within the scope of the CDS exemption. Further, unlike the prior draft guidance, the final guidance does not include any proposed enforcement discretion policy for software that does not fully meet all four statutory criteria.
Computer Software Assurance for Production and Quality System Software — Draft Guidance (Sept. 13, 2022): This new draft guidance provides recommendations for “computer software assurance” for software and automated systems used for medical device production or quality. The guidance describes computer software assurance as a risk-based approach to establish confidence in the automation used for production or quality systems and to identify where additional rigor may be appropriate. The guidance further describes various methods and testing activities to establish computer software assurance and ensure compliance with the Quality System Regulation and other regulatory requirements.
Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions — Draft Guidance (April 8, 2022): This new draft guidance would replace the 2014 final guidance, Content of Premarket Submissions for Management of Cybersecurity in Medical Devices. It is FDA’s second attempt at a new draft; a prior draft guidance issued in 2018 received significant criticism. The new draft includes changes to align with use of a Secure Product Development Framework, removal of the risk tiers from the prior draft, replacement of the Cybersecurity Bill of Materials with a Software Bill of Materials, additional clarification regarding premarket submission document requests throughout the draft guidance, and the addition of Investigational Device Exemptions to its scope. The guidance also makes clear that cybersecurity is part of FDA’s Quality System Regulation (QSR) design control requirements.
Digital Health Technologies (DHTs) for Remote Data Acquisition in Clinical Investigations — Draft Guidance (Jan. 21, 2022): This draft guidance describes considerations when using DHTs in clinical investigations and applies to all types of clinical investigations (whether for a drug, biologic, or device product) that use a digital health technology for remote data acquisition.
Assessing the Credibility of Computational Modeling and Simulation in Medical Device Submissions — Draft Guidance (Dec. 23, 2021): This new draft guidance sets forth a proposed nine-step process to assess the credibility of computational modeling and simulation (CM&S) used to support a medical device premarket submission. CM&S can be used in a variety of ways in medical device regulatory submissions, such as to support in silico device testing, as a device development tool, or within the device itself as software as a medical device (SaMD) or software in a medical device (SiMD).
What’s In Store For 2023?
Toward the end of 2022, the CDRH published its annual list of guidance documents that it intends to publish in FY2023 (known as the A-list and B-list, with A-list including items of higher priority). The guidance document priorities from this list that are most likely to impact AI/ML technologies are listed below:
- Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions (final)
- Content of Premarket Submissions for Device Software Functions (final)
- Transition Plan for Medical Devices That Fall Within Enforcement Policies Issued During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency (final)
- Transition Plan for Medical Devices Issued Emergency Use Authorizations (EUAs) During the Coronavirus Disease 2019 (COVID-19) Public Health Emergency (final)
- Marketing Submission Recommendations for A Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions (draft)
Changes are also on the horizon for AI/ML-enabled devices marketed for pandemic-related uses under an Emergency Use Authorization (EUA) or one of FDA’s many COVID-related guidance documents describing enforcement policies. As noted above, CDRH’s A-list includes finalization of its previously issued draft guidance documents on transition plans for such devices (discussed in our prior LawFlash). Under the draft guidance documents, FDA had proposed a three-phase, 180-day transition period for devices covered by either an EUA or a COVID-related enforcement policy. The final guidance documents are expected to be issued this year.
We will explore the reimbursement landscape for these technologies in our next article. After all, what is the benefit of developing them if they cannot be incorporated into medical devices and sold? How will these AI/ML technologies find their way into day-to-day use in the healthcare industry?
About The Authors:
Michele L. Buenafe is a partner at Morgan Lewis’ Washington, D.C., office. She counsels clients on FDA compliance and enforcement matters related to medical devices, combination products, and digital health technologies, such as software as a medical device (SaMD), telemedicine systems, clinical decision support software, wearable devices, artificial intelligence systems, and mobile medical apps. Buenafe also advises on U.S. Drug Enforcement Administration and state regulatory issues for controlled substances and medical products, including both drugs and devices. She serves as the leader of the firm’s digital health team. You can reach her at email@example.com.
Jacob J. Harper is a partner at Morgan Lewis’ Washington, D.C., office. He advises stakeholders on an array of healthcare regulatory, transactional, and litigation matters. His practice focuses on compliance, fraud and abuse, and reimbursement matters; self-disclosures to and negotiations with the Office of Inspector General and Centers for Medicare & Medicaid Services; internal investigations; provider mergers and acquisitions; and appeals before the Provider Reimbursement Review Board, Office of Medicare Hearings and Appeals, and the Medicare Appeals Council. You can reach him at firstname.lastname@example.org.
Andrew J. Gray IV is a partner at Morgan Lewis’ Silicon Valley office. Serving as the leader of the firm’s semiconductor practice and as a member of the firm’s fintech and technology industry teams, he concentrates his practice on IP litigation and prosecution and on strategic IP counseling. Gray advises both established companies and startups on AI, machine learning, blockchain, cryptocurrency, computer, and internet law issues; financing and transactional matters that involve technology firms; and the sale and licensing of technology. He represents clients in patent, trademark, copyright, and trade secret cases before state and federal trial and appellate courts throughout the U.S. You can reach him at email@example.com.