Guest Column | January 10, 2024

Product Liability Considerations For AI-Enabled Medtech

By Elizabeth Chiarello, Laura Craig, and Anna Boardman, Sidley Austin LLP

Consumers have been bringing product liability claims for over 100 years. Since the first suit was filed in the early 1900s, product liability law has evolved, and will continue to evolve, to address alleged harms caused by everything from simple products to technologically complex devices. In this article, we explore how and under what circumstances a medtech manufacturer potentially could be held liable under traditional product liability theories for AI-enabled products, as well as possible available defenses. We also discuss the challenges that arise because AI can change a product after it leaves the manufacturer’s facility and because AI may be unreliable outside its intended use case. Lastly, this article proposes several strategies that companies can employ to help reduce the risk of a product liability suit based on AI-enabled products.

Product Liability Framework Applied To AI

As a general matter, there are three common types of product liability claims: (1) manufacturing defect, (2) design defect, and (3) failure to warn. Each of these scenarios is premised upon a product that leaves the manufacturer’s facility with the defect in place — either in the product or in the warning.1 These theories fit neatly for products that remain unchanged from the moment they leave the manufacturer’s facility, such as consumer goods sold at retail. But products containing AI can change as the consumer uses them or as intermediate manufacturers modify them. This presents several unique challenges.

First, there is a gating question that must be answered for most AI-enabled products in the context of a product liability suit: is the AI-related product even a product at all?2 Courts historically have viewed software as a “service” that is not subject to product liability causes of action; however, this approach may be evolving to reflect that most products today contain software or are composed entirely of software.3

Second, the evolving nature of products containing AI means that lawsuits could turn on novel theories involving causation or a res ipsa-type theory (i.e., unexplained defect).4 Consider the following examples:

  • Health-Focused Mobile Applications. Manufacturer A builds a health screening application that uses AI to model risk factors for a particular consumer based on inputs from the entire user base. Manufacturer A does not warn consumers that this data may be inaccurate depending on the input from other consumers or the volume of available data. The application alerts a consumer that they may be at risk for a disease, which prompts the consumer to undergo costly medical testing. The consumer sues, claiming that the manufacturer’s product, which was ultimately wholly incorrect and caused them to undergo costly and unnecessary medical testing, was defectively designed and that the manufacturer failed to warn consumers of the product’s intended use and limitations. For the defective design claim, the consumer would have to show that there was an alternative design that would have avoided the false positive. A court would assess the design under (i) the “consumer expectation” standard, meaning whether a reasonable consumer would expect that an AI tool could generate a false positive, and (ii) the “risk-utility” standard, meaning whether the risks of the current product are higher than those of an alternative design. It may be difficult for a consumer to satisfy either standard: given current knowledge of AI’s limitations, there is a strong argument that a consumer would expect an AI-based tool to generate some false positives, and an alternative design may be riskier than the design of the tool as sold. Further, how difficult it is for a consumer to satisfy these standards may change over time as the accuracy of AI tools improves.
  • Algorithm-Powered Medical Devices. Manufacturer B creates automated dosing pumps that rely on AI analysis of data from past use to maintain the appropriate dosage of medications. These devices operate in a complex system that, for many users, involves a smartphone application and a continuous monitor. If the AI-enabled device dispensed an overdose, it could result in a product liability claim for design defect or failure to warn. Defending this type of claim involves myriad issues, including the consumer’s own prior conduct that led the device to overdose the medication, as well as other technical causes of an overdose related to the mechanics of the pump, the smartphone, the app, and the monitor.
  • Health Q&A Tool. Manufacturer C creates an application for a healthcare company that allows consumers to ask a “bot” basic health questions. The application is powered by AI that was trained by human nurses. The AI application also has the capability to search medical databases and the internet to identify answers. What happens if the AI application inadvertently retrieves out-of-date information and provides it to a consumer? Does the manufacturer have an obligation to audit the application’s sources and ensure that it is relying on the most up-to-date information? A consumer could bring claims alleging that the manufacturer had a duty to ensure the application relied only on the best and most accurate information. Defending this type of claim would require creative thinking about what the standard of care is in a novel space.

Reducing Risk And Increasing Successful Defenses In Litigation

It is important to consider the potential litigation risks associated with AI-enabled products. Reducing risk requires early planning and mitigation, which can help set a manufacturer up for greater success if litigation ultimately arises. Consider the following three actions.

1. Be Thorough And Clear With Stated Warnings

While manufacturers have long adjusted to creating extensive warnings (one need only listen to a television commercial for a medication, with its long list of potential side effects), careful consideration of the far-reaching effects of products that evolve and adapt over time will be critical. This will require further analysis of how a reasonable consumer may use the product and what that consumer believes the product’s intended use is. For example, in the context of insulin pump litigation, at least one court stated, on summary judgment, that a manufacturer must consider that users could be in a deficient cognitive state due to a hypoglycemic event.5 Manufacturers would be well advised to think about how their consumers could actually use the product, including situations that may deviate from the “intended” user or use case.

2. Assess The Risks And Reasonable Alternative Designs

Manufacturers should consider whether alternative designs are safer. To show that a design is not defective, a manufacturer must establish that “the foreseeable risks of harm posed by the product could [not] have been reduced or avoided by the adoption of a reasonable alternative design by the seller or other distributor[.]”6 In practice, this requires the manufacturer to show that there were no other designs of the algorithm, or of the algorithm combined with other component parts, that could have prevented the harm. During the product development stage, a manufacturer should consider the risks that an evolving algorithm could cause and assess whether any reasonable alternative designs would reduce those risks. If feasible, having a third-party expert opine that there are no reasonably safer designs would be even more helpful. It would also be valuable at this stage to consider what technology is available in the market and the design standards employed by competitors. For some products, the technology to reduce the risk may not exist, and the manufacturer may need to rely on warnings and other consumer education to ensure the safe use of its product.

3. Monitor Real-World Use Of Your Products In The Market

If a product is going to evolve over time, it may require a program for ongoing monitoring once it is in the hands of the consumer. If the product is connected via Wi-Fi, Bluetooth, the cloud, or otherwise to the manufacturer’s central hub, consider whether updates are required to ensure the product is not deemed “defective.” For instance, consider a medical test performed for a very specific condition (“Condition X”), the results of which are imported to an AI platform for diagnosis or to determine who needs further treatment. Now assume that, by importing so many results to the AI platform, the system can determine whether a patient has markers of Condition Y. Is the platform now defective because it does not screen for Condition Y? Considering these questions at the outset, as a company prepares to launch an AI-enabled product, may help it defend against product liability claims in the future.

Conclusion

As products continue to evolve and become more complex, manufacturers will need to continue to adapt their strategies for mitigating the risk of product liability suits. These challenges, while presented by new technology, are ultimately common in the marketplace, which is ever changing, growing, and adapting.

References/Notes

  1. Restatement (Third) of Torts: Prod. Liab. § 2 (1998) (“A product is defective when, at the time of sale or distribution, it contains a manufacturing defect, is defective in design, or is defective because of inadequate instructions or warnings.” (emphasis added)).
  2. In re Soc. Media Adolescent Addiction/Pers. Inj. Prod. Liab. Litig., No. 4:22-MD-03047-YGR, 2023 WL 7524912, at *20 (N.D. Cal. Nov. 14, 2023) (analyzing, as a threshold question, based on “functionalities of the alleged products” whether social media platforms are “products”).
  3. Brookes v. Lyft Inc., No. 50-2019-CA-004782-XXXX-MB, 2022 WL 19799628, at *3 (Fla. Cir. Ct. Sept. 30, 2022) (finding ridesharing application Lyft a “product” for purposes of a product liability suit under Florida law).
  4. Even in strict liability suits, many courts allow a plaintiff to show “defect” using circumstantial evidence. See Restatement (Third) of Torts: Prod. Liab. § 3 (1998) (“It may be inferred that the harm sustained by the plaintiff was caused by a product defect existing at the time of sale or distribution, without proof of a specific defect, when the incident that harmed the plaintiff: (a) was of a kind that ordinarily occurs as a result of product defect; and (b) was not, in the particular case, solely the result of causes other than product defect existing at the time of sale or distribution.”).
  5. Dalton v. Animas Corp., 913 F. Supp. 2d 370, 376 (W.D. Ky. 2012).
  6. Restatement (Third) of Torts: Prod. Liab. § 2 (1998).

About The Authors:

Elizabeth Chiarello is a partner at Sidley Austin LLP focusing on the defense of companies in complex litigation involving medical devices and pharmaceutical products, including class action, mass tort, toxic tort, false advertising, and products liability disputes. She also advises clients on risk management, analysis, and prevention, and conducts pre-litigation assessments and due diligence. She was recognized by Law Bulletin’s Leading Lawyers as one of the Leading & Emerging Women Lawyers in Class Action/Mass Tort Defense Law and Products Liability Defense Law.

Laura Craig is a senior managing associate at Sidley Austin LLP focusing on the defense of companies in regulatory litigation, products liability, and other complex disputes.

Anna Boardman is an associate at Sidley Austin LLP focusing on complex commercial litigation in federal court and client representation in government investigations and enforcement actions.