Guest Column | December 19, 2023

How To Operate In The AI Compliance Vacuum

By Brigid Bondoc, Wendy Chow, and Melissa Crespo, Morrison Foerster


The use of artificial intelligence (AI) in healthcare has grown significantly in recent years. The global market estimate for AI in healthcare was over $15 billion in 2022, and that market is expected to grow exponentially. Global demand grew 167% from 2019 to 2021, accelerated by the COVID-19 pandemic and driven in part by the need for rapid disease diagnostics and a shortage of healthcare workers. Additionally, as healthcare spending continues to rise to unprecedented levels, AI is increasingly viewed as a tool to reduce cost inefficiencies in the U.S. healthcare system.

AI is implemented in a wide range of healthcare applications, including disease detection and diagnosis, patient monitoring, drug and vaccine development, and even surgery assistance, and it is transforming medical imaging. AI has also led to vast improvements in healthcare administration, where it is used for medical billing and coding and even for provider and patient scheduling.

The rapid incorporation of AI into healthcare systems raises significant legal compliance concerns for both technology developers and users. These compliance concerns are magnified in light of the rapidly changing AI regulation landscape, including the recently issued Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “EO”). While there are many compliance considerations for the use of AI in healthcare, this article focuses on implications from:

  • state practice of medicine laws,
  • the FDA’s regulation of medical devices,
  • the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule, and
  • the EO.

AI Tool Outputs And Interactions: Do They Qualify As The “Practice Of Medicine”?

AI tools used in the healthcare context can implicate state laws governing the practice of medicine. These laws vary by state and broadly cover several areas of potential enforcement and liability. Three areas of such potential liability are (a) criminal liability for the unlicensed practice of medicine, (b) tort liability for personal injury and medical malpractice claims, and (c) contractual and other liability for violation of corporate practice of medicine laws.

In most states, the “practice of medicine” is defined very broadly and could cover many interactions between individual users and an AI tool that addresses healthcare questions and issues.

For example, California Business and Professions Code (BPC) § 2052 broadly provides that an individual who “practices or attempts to practice, or advertises or holds himself or herself out as practicing, any system or mode of treating the sick or afflicted in this state, or diagnoses, treats, operates for, or prescribes for any ailment, blemish, deformity, disease, disfigurement, disorder, injury, or other physical or mental condition of any person” must be licensed to practice medicine.1

Similarly, under the New York Education Law, “practice of medicine” is defined as: “diagnosing, treating, operating or prescribing for any human disease, pain, injury, deformity or physical condition.”2 New York case law provides some guidance regarding the factors that constitute the practice of medicine but, ultimately, also leaves the term open for broad interpretation.3

Whether specific outputs of and interactions with an AI tool qualify as the practice of medicine will depend on the exact nature of the output, as well as the applicable state law. Actions that constitute “diagnoses” in California may not necessarily constitute “diagnoses” in New York. Additionally, depending on the scope and purpose of the tool, the wide and varied types of individualized outputs and interactions that could be generated by an AI tool could make a case-by-case analysis of each output extremely difficult.

Criminal Liability and Personal Injury Claims

Criminal charges for the unauthorized practice of medicine are serious and extend to persons who aid or abet or conspire with others to practice without a license. Because criminal liability extends to direct and indirect acts, developers and users of AI tools must ensure that the tool does not practice a licensed profession and that it does not encourage other individuals or entities to practice a profession for which they are not licensed.

Injured patients ultimately could attempt to apply state tort and personal injury doctrines to healthcare AI developers, as well as the users of the tools, based on a variety of theories that involve the practice of medicine. For instance, an end user could claim that a personal injury resulted from interactions with a healthcare AI tool, styled as either a products liability or a malpractice suit.

Contractual Liability And Corporate Practice Of Medicine

Many states have laws that prohibit a corporation from practicing medicine, except in certain limited circumstances. In many states, a contract made in violation of state laws or regulations, such as a corporate practice of medicine prohibition, is void.

If an entity is deemed to be practicing medicine through its use or sale of a healthcare AI tool, it also could be held to be violating applicable corporate practice of medicine laws simply because it is not a licensed individual or professional corporation. In that instance, commercial contracts with customers and vendors that involve the licensing or sale of the tool, or services provided using the tool, could be considered unenforceable.

Is It Regulated As A “Medical Device”?

FDA regulates many software functions in the healthcare space as medical devices, and the medical device industry is regulated extensively by governmental authorities. The regulations are complex and subject to rapid change and varying interpretations, which may even affect whether an AI software function falls within FDA’s jurisdiction.

Currently, FDA and other U.S. government agencies regulate numerous elements of the medical device business at various stages, including the following:

  • product design and development;
  • preclinical and clinical testing and trials;
  • product safety;
  • establishment registration and product listing;
  • labeling and storage;
  • marketing, manufacturing, sales, and distribution;
  • premarket clearance or approval;
  • servicing and post-market surveillance;
  • advertising and promotion; and
  • recalls and field-safety corrective actions.

Here’s a preview of when FDA can treat a software function as a medical device and when it cannot. We emphasize the term “preview” here because jurisdictional determinations are very fact-specific, warranting a careful examination by an experienced practitioner.

  • The Federal Food, Drug, and Cosmetic Act (FDCA): FDA’s authority over medical devices originates from the FDCA, which defines “device” as:

an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including any component, part, or accessory, which is:
. . .
(2) intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease, in man or other animals, or
. . .
which does not achieve its primary intended purposes through chemical action within or on the body of man . . . and which is not dependent upon being metabolized for the achievement of its primary intended purposes.
The term “device” does not include software functions excluded pursuant to section 520(o).

The statute does not define many of the terms used in the section 520(o) exclusion criteria for non-device clinical decision support (CDS) software, and the last of those criteria appears to involve a subjective determination about whether a healthcare provider can “independently review” the bases for the recommendations made by the CDS tool.

  • FDA Guidance: FDA has issued three draft guidance documents interpreting these criteria for non-device CDS software since 2017. Most recently, on Sept. 28, 2022, FDA released the final version of its Clinical Decision Support Software Guidance (the CDS guidance), which has been criticized for significantly narrowing FDA’s interpretation of the CDS exemption criteria and for removing the risk-based enforcement discretion policy that FDA previewed in prior drafts.

Ignorance of the Regulations Is No Excuse

The FDA does not require a company to have intended a violation: there is strict liability for regulatory missteps. In other words, marketing an AI tool that meets the FDCA’s definition of “device” without authorization from FDA can result in enforcement action by the FDA, including the following:

  • untitled letters, warning letters, fines, injunctions, consent decrees, and civil penalties;
  • customer notifications or repair, replacement, refunds, recall, detention, or seizure of marketed products;
  • operating restrictions, partial suspension, or total shutdown of production;
  • refusal of or delay in granting requests for 510(k) clearance or premarket approval of new or modified products;
  • withdrawal of 510(k) clearances or PMA approvals that have already been granted;
  • refusal to grant export approval for marketed products; or
  • criminal prosecution.

HIPAA and AI

Given the complex patchwork of privacy laws in the United States, entities in the healthcare space seeking to incorporate AI tools into their workflows, as well as AI providers, must evaluate their obligations under these laws and take steps to ensure compliance.

HIPAA, and more specifically the HIPAA Privacy Rule, imposes several data protection obligations that regulate how entities regulated by HIPAA (covered entities and business associates4) use and disclose protected health information (PHI), including in connection with AI. HIPAA also affects how AI developers may gain access to the patient information necessary to develop AI tools.5

Covered Entity Permitted Uses and Disclosures of PHI in Connection with AI

Given the rapid proliferation of AI, many covered entities may be eager to incorporate AI into their workflows and treatment activities. Before doing so, covered entities should develop policies and procedures to evaluate potential use cases that will involve processing PHI under HIPAA and to ensure that AI providers who will process PHI are engaged in a HIPAA-compliant manner.

The HIPAA Privacy Rule permits a covered entity to use and disclose PHI without a patient authorization for enumerated purposes, including the treatment, payment, or healthcare operations of a covered entity and certain public policy purposes. A covered entity seeking to use AI will need to evaluate whether the use case falls within a permitted purpose under HIPAA and then ensure that any such uses are in line with HIPAA’s minimum necessary requirements, which require covered entities to evaluate their practices and utilize safeguards as needed to limit unnecessary or inappropriate access to and disclosure of PHI.
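
One way to operationalize the minimum necessary standard before PHI flows to an AI vendor is purpose-based field filtering. The sketch below is illustrative only and is not drawn from the article or from HIPAA itself: the record structure, field names, and allow-lists are hypothetical, and an actual policy would be defined by the covered entity’s privacy office for each use case and vendor.

```python
# Illustrative sketch: purpose-based filtering of a patient record so that only
# the fields needed for a given permitted purpose are shared with an AI tool.
# All field names and allow-lists below are hypothetical.

PERMITTED_FIELDS = {
    "billing": {"patient_id", "dates_of_service", "procedure_codes", "payer"},
    "scheduling": {"patient_id", "provider_id", "preferred_times"},
}

def minimum_necessary(record: dict, purpose: str) -> dict:
    """Return only the fields allowed for the stated purpose."""
    allowed = PERMITTED_FIELDS.get(purpose)
    if allowed is None:
        raise ValueError(f"No minimum-necessary policy defined for purpose: {purpose}")
    return {field: value for field, value in record.items() if field in allowed}

# Example: only billing-related fields would be passed to a billing AI tool.
full_record = {
    "patient_id": "12345",
    "name": "Jane Doe",
    "diagnosis_code": "I10",
    "dates_of_service": ["2023-11-02"],
    "procedure_codes": ["99213"],
    "payer": "ExamplePlan",
}
print(minimum_necessary(full_record, "billing"))
```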

Additionally, assuming the AI tool is provided by a third party that will have access to the PHI, the covered entity will need to enter into a HIPAA-compliant business associate agreement, just as it would with any other vendor that handles PHI on its behalf.

Business Associate and Other Third-Party Considerations

Inherent in AI development is the need to access a vast amount of data. While an AI developer that is providing services to a covered entity may have access to PHI as a business associate, there are a few considerations that factor into how such AI business associates can use PHI.

Business Associate Agreements

A business associate may only use and disclose PHI as permitted under its business associate agreement and in compliance with HIPAA. Accordingly, a business associate may only use or disclose PHI in certain circumstances, such as to provide services to its covered entity (or upstream business associate) customer (provided that such use and disclosure would be permissible under HIPAA if done by the covered entity), as required by law, for data aggregation6 services, or for the business associate’s own proper management and administration.

HIPAA clearly permits a business associate to use or disclose PHI to provide AI services to a covered entity, provided that the covered entity itself is permitted to engage in such uses or disclosures of PHI (e.g., AI used to process PHI for the treatment, payment, or healthcare operations of the covered entity). Other common uses of data in an AI context, however, such as further development or improvement of an AI tool, will require additional analysis showing that such use is permitted under HIPAA. That inquiry will be fact-specific and will depend on the covered entity’s and business associate’s risk tolerance and interpretation of certain less-clear areas of HIPAA.

De-identification

Alternatively, an AI developer seeking to use PHI for its own purposes may consider whether de‑identified PHI would suffice. HIPAA permits a covered entity to de-identify PHI in accordance with certain requirements, and once such information is de-identified, the information is no longer regulated by HIPAA. Additionally, a covered entity may engage a business associate to de-identify PHI, provided that such de-identification falls within a permitted purpose under HIPAA, and then may permit the business associate to use such de-identified data. However, de-identification, particularly of the type of unstructured data necessary for healthcare AI training, can be complex and often decreases the utility of such data, and most AI developers do not find this to be a viable option.
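
To illustrate what de-identification involves in practice, the sketch below follows the general shape of the Safe Harbor method under 45 C.F.R. § 164.514(b)(2), which requires removing 18 categories of identifiers and having no actual knowledge that the remaining information could identify the individual. This is a simplified, assumption-laden example: the field names are hypothetical, and real de-identification must also address free text, all date elements other than year, ages over 89, geographic data, and the other enumerated identifier categories.

```python
# Illustrative sketch of Safe Harbor-style de-identification of a structured
# record. Field names are hypothetical; unstructured/free-text data, which is
# common in healthcare AI training sets, requires far more complex handling.

IDENTIFIER_FIELDS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id", "account_number",
    "device_serial_number", "ip_address", "photo_url",
}

def safe_harbor_strip(record: dict) -> dict:
    """Drop hypothetical identifier fields; generalize date of birth to year only."""
    deidentified = {k: v for k, v in record.items() if k not in IDENTIFIER_FIELDS}
    if "date_of_birth" in deidentified:
        deidentified["birth_year"] = deidentified.pop("date_of_birth")[:4]
    return deidentified

record = {
    "name": "Jane Doe",
    "medical_record_number": "MRN-0042",
    "date_of_birth": "1968-03-14",
    "diagnosis_code": "E11.9",
    "lab_results": {"hba1c": 7.2},
}
print(safe_harbor_strip(record))
```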

EO on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

The EO tasks the U.S. Department of Health and Human Services (HHS) and collaborating agencies with developing policies, frameworks, and regulatory actions to address issues related to AI technologies and healthcare. The EO takes a broad-brush approach to the sometimes-conflicting goals of protecting individuals and patients from the potential harms of AI technology in healthcare and encouraging development and innovation, opening the door to comprehensive regulation that could impose onerous disclosure and monitoring requirements on healthcare technology developers at all stages of the development process. Relevant entities will need to follow how HHS navigates the developing patchwork of AI-related federal guidance to create a comprehensive policy for the healthcare industry and then take steps to comply.

Key Takeaways

Given the untested waters in this area, users and developers of AI tools should carefully consider usage and compliance policies that address the concerns raised by practice of medicine laws, FDA regulation, and privacy requirements, including the following:

  • Implement usage policies that identify whether a healthcare AI tool could be considered to provide outputs that constitute the practice of medicine.
  • Do not assume that the purveyor of the cool new software tool has done its homework and sought the appropriate regulatory authorizations — there are plenty of well-meaning folks who are marketing without understanding the regulatory requirements.
  • If a software tool uses data, ask questions about that data and get a better sense of whether the patient population on which the software was trained is similar to the intended patient population.
  • Providers must remain responsible for care decisions and remember that CDS tools are there to aid, and not replace, the practice of medicine.
  • HIPAA-covered entities should develop policies and procedures to evaluate AI use cases and ensure compliance with HIPAA.

About The Authors:

Brigid Bondoc is a partner in Morrison Foerster’s FDA & Healthcare Regulatory and Compliance group, where she advises clients on pre- and post-market issues.



Wendy Chow is of counsel with Morrison Foerster’s FDA & Healthcare Regulatory and Compliance group, where she advises clients on a wide range of healthcare compliance and transactional issues.




Melissa Crespo is a partner in Morrison Foerster’s Privacy & Data Security group, where she helps clients navigate compliance and data security matters, with a special focus on HIPAA.
  1. Cal. Bus. & Prof. Code § 2052.
  2. N.Y. Educ. Law § 6521.
  3. Andrew Carothers, M.D., P.C. v. Progressive Ins. Co., 42 Misc. 3d 30, 33.
  4. Additionally, if HIPAA doesn’t apply, organizations will need to consider their obligations under state privacy laws, including recently enacted health information-specific privacy laws, like Washington’s My Health My Data Act.
  5. Entities subject to HIPAA will also need to consider the HIPAA Security Rule in connection with evaluating and implementing security for AI tools, as well as any breach notification obligations that may arise in connection with a breach of PHI involving AI.
  6. Data aggregation means, with respect to protected health information created or received by a business associate in its capacity as the business associate of a covered entity, the combining of such protected health information by the business associate with the protected health information received by the business associate in its capacity as a business associate of another covered entity, to permit data analyses that relate to the healthcare operations of the respective covered entities. 45 C.F.R. § 164.501.