Guest Column | May 1, 2025

AI In Medical Devices: Meeting The Regulatory Challenge Around The World, Part 1

By Marcelo Trevino, independent expert


Artificial intelligence (AI) and machine learning (ML) are redefining healthcare, from enabling earlier diagnoses and personalized treatments to streamlining hospital operations and accelerating medical research. But as these technologies evolve at unprecedented speed, regulators around the world are racing to adapt their frameworks to ensure AI-powered medical devices are safe, transparent, and equitable.

This is the first article in a two-part series that breaks down how medical devices are entering the AI era, and what regulators are doing to keep up.

In Part 1, I will explore the global regulatory landscape, examining how leading authorities in the U.S., EU, U.K., Canada, China, Brazil, Australia, and South Korea are approaching AI in medical devices. From risk classifications and explainability requirements to sandbox initiatives and post-market surveillance, this section offers a region-by-region overview of the evolving rules that are shaping AI innovation worldwide.

Part 2 will dive into global harmonization efforts, emerging international standards, cross-cutting regulatory themes, and practical actions companies can take to prepare for compliance and long-term success in this new frontier.

Whether you’re a regulatory leader or medtech innovator looking to stay ahead, this two-part article is designed to give you the clarity, insight, and strategy needed to thrive in the age of AI.

Regulatory Challenges With AI/ML In Medical Devices

Regulating AI in healthcare isn’t easy. Traditional frameworks were built for static devices that, once approved, don’t change. But AI learns and adapts post-approval, making existing models inadequate. Major challenges include:

  • Bias: Incomplete or unrepresentative training data sets can result in AI systems that perform poorly for certain patient populations. This can reinforce health disparities and undermine trust in medical technologies. Regulators are increasingly requiring companies to demonstrate how they address demographic representativeness, fairness, and bias mitigation (one simple subgroup check is sketched after this list).
  • Responsibility: Determining liability when something goes wrong with an AI system is a major regulatory challenge. If a misdiagnosis occurs, who is accountable – the software developer, the manufacturer, or the clinician? Clear documentation of roles, human oversight mechanisms, and contractual obligations are critical in managing this evolving landscape.
  • Transparency: Many advanced AI models, especially deep learning systems, are difficult to interpret. When clinicians and patients cannot understand how a system arrived at its recommendation, it becomes harder to trust or act on those outputs. Developers must design explainable AI that includes rationales, confidence scores, and pathways for user education.
  • Cybersecurity: Because AI-enabled devices rely on large volumes of data and often operate via connected networks, they present a heightened risk of cyberattacks. These threats can compromise patient data, disrupt healthcare services, or alter device performance. Companies must adopt a security-by-design approach with regular updates, penetration testing, and incident response protocols.
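
To make the bias challenge concrete, the sketch below shows one simple way a team might compare a binary classifier’s sensitivity and specificity across demographic subgroups rather than reporting only aggregate accuracy. The labels, predictions, and age bands are hypothetical placeholders, not data from any real device.

```python
import numpy as np

def subgroup_performance(y_true, y_pred, groups):
    """Per-subgroup sensitivity and specificity for a binary classifier.

    y_true, y_pred: arrays of 0/1 labels and predictions.
    groups: array of subgroup labels (e.g., age band, sex) for each case.
    """
    results = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        tp = np.sum((t == 1) & (p == 1))
        fn = np.sum((t == 1) & (p == 0))
        tn = np.sum((t == 0) & (p == 0))
        fp = np.sum((t == 0) & (p == 1))
        results[g] = {
            "n": int(mask.sum()),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        }
    return results

# Hypothetical validation labels and predictions, grouped by age band.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["<65", "<65", "<65", "<65", "<65", "<65",
                   "65+", "65+", "65+", "65+", "65+", "65+"])

for group, metrics in subgroup_performance(y_true, y_pred, groups).items():
    print(group, metrics)
```

Reporting metrics this way makes performance gaps between populations visible early, which is exactly the kind of evidence regulators increasingly expect in submissions.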

To keep pace, regulators around the world are crafting new frameworks. The sections below outline how different countries are tackling these issues.

United States: FDA’s Total Product Lifecycle Approach

The U.S. FDA has taken a proactive and flexible approach to regulating AI/ML-enabled medical devices through its Total Product Lifecycle (TPLC) framework. This model acknowledges that AI systems are not static and must be continuously monitored, evaluated, and updated throughout their use.

A cornerstone of the TPLC approach is the predetermined change control plan (PCCP). The PCCP allows manufacturers to anticipate future modifications and gain preauthorization to make those changes without requiring a new submission every time the AI is retrained or improved. The PCCP consists of two critical components:

  • SaMD Pre-Specifications (SPS): These are the specific types of changes the manufacturer intends to make over time, such as adjusting the algorithm based on new data sets or extending the system’s scope. The SPS defines the boundaries of acceptable changes and the intent behind them.
  • Algorithm Change Protocol (ACP): This outlines the methods and processes the company will use to implement and control the changes described in the SPS. The ACP must include validation strategies, performance monitoring thresholds, risk mitigation steps, and documentation practices to ensure the modified AI system remains safe and effective.

Together, SPS and ACP offer a controlled mechanism for iterative improvement, balancing the flexibility of AI development with regulatory oversight.
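
Neither the SPS nor the ACP has a prescribed file format, but some teams find it helpful to capture the agreed change boundaries and acceptance criteria in a machine-checkable form so every proposed retraining can be screened against them. The sketch below is a hypothetical illustration of that idea; the field names, thresholds, and the simple check are assumptions for illustration, not an FDA template.

```python
from dataclasses import dataclass

@dataclass
class ChangeEnvelope:
    """Hypothetical encoding of SPS boundaries and ACP acceptance criteria."""
    allow_retraining_on_new_data: bool = True
    allow_architecture_change: bool = False      # outside the SPS -> new submission
    allow_intended_use_change: bool = False      # outside the SPS -> new submission
    min_sensitivity: float = 0.90                # ACP acceptance threshold
    min_specificity: float = 0.85                # ACP acceptance threshold

def change_is_within_pccp(envelope, change):
    """Return (allowed, reasons) for a proposed model update.

    `change` describes the update, e.g.
    {"architecture_changed": False, "intended_use_changed": False,
     "sensitivity": 0.93, "specificity": 0.88}
    """
    reasons = []
    if change["architecture_changed"] and not envelope.allow_architecture_change:
        reasons.append("architecture change is outside the agreed SPS")
    if change["intended_use_changed"] and not envelope.allow_intended_use_change:
        reasons.append("intended-use change is outside the agreed SPS")
    if change["sensitivity"] < envelope.min_sensitivity:
        reasons.append("sensitivity below ACP acceptance threshold")
    if change["specificity"] < envelope.min_specificity:
        reasons.append("specificity below ACP acceptance threshold")
    return (len(reasons) == 0, reasons)

proposed = {"architecture_changed": False, "intended_use_changed": False,
            "sensitivity": 0.93, "specificity": 0.88}
print(change_is_within_pccp(ChangeEnvelope(), proposed))
```

A check like this does not replace the regulatory documents themselves, but it keeps engineering updates aligned with what was preauthorized.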

To support these practices, the FDA, along with Health Canada and the U.K.’s MHRA, has introduced good machine learning practice (GMLP). These guiding principles are designed to help manufacturers build robust and trustworthy AI/ML systems throughout development and deployment. GMLPs include:

  • Multidisciplinary Expertise: Development teams should include clinicians, software engineers, data scientists, and regulatory professionals to ensure balanced decision-making and effective risk management.
  • Data Quality and Representativeness: Training data should reflect the diversity of the intended patient population. Companies are expected to document data sources, clean and annotate data sets appropriately, and justify inclusion/exclusion criteria.
  • Model Transparency and Explainability: AI systems should provide interpretable outputs, confidence scores, and rationale for recommendations. Explainability should be embedded in both user interfaces and technical documentation.
  • Performance Evaluation and Monitoring: GMLPs call for continuous post-market monitoring of AI performance, including mechanisms for detecting drift and adverse trends (a simple drift check is sketched after this list).
  • Human-Centered Design: Systems must include user override features, clearly communicate uncertainty, and be designed to enhance, not replace, clinical judgment.
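
One common way to operationalize the monitoring principle above is to track drift in the distribution of model inputs or output scores between the validation baseline and live use. The sketch below uses the population stability index (PSI); the ten-bin setup and the 0.2 alert threshold are conventional rules of thumb rather than a regulatory requirement, and the score distributions are simulated.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline score distribution and a live one.

    Values above roughly 0.2 are commonly treated as meaningful drift
    that should trigger investigation.
    """
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)

    # Convert to proportions, avoiding divide-by-zero with a small floor.
    base_pct = np.clip(base_counts / base_counts.sum(), 1e-6, None)
    curr_pct = np.clip(curr_counts / curr_counts.sum(), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)    # scores seen at validation
live_scores = rng.beta(2.6, 4.2, size=5000)    # scores seen in deployment

psi = population_stability_index(baseline_scores, live_scores)
print(f"PSI = {psi:.3f}", "-> investigate drift" if psi > 0.2 else "-> stable")
```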

FDA’s approach emphasizes collaboration, risk management, and post-market vigilance, recognizing that regulatory agility is essential for safe innovation in AI-powered healthcare.

European Union: The EU Artificial Intelligence Act (EU AI Act)

The EU AI Act is the world’s first broad, binding AI legislation. Medical devices using AI are typically classified as high-risk, subjecting them to stringent requirements on top of the existing Medical Device Regulation (MDR) and In Vitro Diagnostics Regulation (IVDR). Key requirements include:

  • Ongoing risk management throughout the AI life cycle: Companies must have a continuous risk management plan that evolves alongside the product. This includes identifying new risks from real-world use and documenting how these risks are detected and mitigated.
  • Use of high-quality, unbiased training data: Data used for training must be representative of the intended patient population. This means sourcing diverse data sets, documenting inclusion and exclusion criteria, and addressing any gaps or biases.
  • Full traceability of system decisions: Every decision made by an AI system must be traceable. This includes logging the input data, version of the algorithm, and output provided. Companies should build infrastructure for automated logging and reporting (a minimal example is sketched after this list).
  • Meaningful human oversight at all times: AI must be designed to work alongside human professionals. Users must be able to override the system, understand when it’s uncertain, and receive clear guidance on how to respond.
  • Strong cybersecurity and data protection: Companies must design AI systems with security in mind, applying layered protections and monitoring for threats. Incident response protocols must be tested and documented.
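
As one minimal illustration of the traceability requirement, the sketch below appends a JSON record for each inference that ties together a hash of the input, the algorithm version, and the output. The field names and the simple file-based log are assumptions for illustration; a production system would normally use centralized, tamper-evident logging.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log_path, model_version, input_payload, output, confidence):
    """Append one traceability record per AI recommendation (JSON Lines)."""
    serialized_input = json.dumps(input_payload, sort_keys=True)
    record = {
        "timestamp_utc": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash rather than store the raw input if it contains patient data.
        "input_sha256": hashlib.sha256(serialized_input.encode()).hexdigest(),
        "output": output,
        "confidence": confidence,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

print(log_decision(
    "decision_log.jsonl",
    model_version="2.3.1",
    input_payload={"age": 67, "troponin": 0.32, "ecg_flag": 1},
    output="elevated risk",
    confidence=0.87,
))
```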

While the EU AI Act remains the world’s most ambitious AI legislation, the European Commission is already signaling potential revisions. Under growing pressure from industry and amid shifting geopolitical dynamics, Brussels has launched a new strategy aimed at simplifying compliance and reducing administrative burden. The commission is actively seeking industry input on areas where regulatory uncertainty may hinder adoption and has not ruled out amending parts of the AI Act. However, this shift has sparked debate, with civil society urging that “simplification” must not lead to weakened protections or a rollback of core principles.

Canada: AIDA And Evolving Guidance

Canada’s Artificial Intelligence and Data Act (AIDA) is still in draft form, but Health Canada already expects developers of AI/ML-based medical devices to meet high standards. Expectations include:

  • Submitting PCCPs as part of market applications: Companies should include a PCCP in their submissions, detailing how the AI is expected to evolve. This proactive transparency streamlines regulatory reviews and ensures continuous compliance.
  • Following GMLPs to ensure quality and transparency: Companies are encouraged to integrate GMLPs into their internal procedures. These include designing systems that can explain decisions, undergo human review, and manage updates responsibly.
  • Providing clinical proof that the AI works as intended: Health Canada expects evidence that the AI performs reliably in clinical settings. This might include retrospective studies, real-world testing, or comparisons to standard care.
  • Outlining post-market surveillance plans: Surveillance plans should monitor how the AI performs over time and identify emerging risks. Companies must define performance metrics, reporting timelines, and mechanisms for user feedback.

United Kingdom: The AI Airlock Initiative

The MHRA’s AI Airlock is a regulatory sandbox for AI as a Medical Device (AIaMD). It allows developers to test their products in a controlled environment and gather real-world evidence. Goals include:

  • Studying how adaptive AI behaves in clinical settings: Developers can observe how their AI models perform in real-world NHS environments, identify system limitations, and gather data to refine algorithms.
  • Informing future regulations: Insights gathered from AI Airlock participants are used by MHRA to improve regulatory tools and guidelines. This collaborative model bridges the gap between innovation and compliance.
  • Encouraging safe, transparent innovation: By providing a space to evaluate AI systems before full market entry, the Airlock enables safer, faster iteration without compromising patient safety.

Manufacturers participating in the AI Airlock must submit detailed plans outlining their intended learning objectives, risk mitigation strategies, and engagement protocols with healthcare professionals.

China And Hong Kong: Structured Frameworks With International Harmonization Goals

China’s NMPA and Hong Kong’s Department of Health have introduced specific AI frameworks aligned with international best practices. Expectations include:

  • Algorithm transparency, especially for black-box systems: Companies must provide explainable AI logic for clinical users and regulators when decision pathways are not immediately apparent. Developers are encouraged to include confidence indicators and summaries of how conclusions are reached (a simple example is sketched after this list).
  • Life cycle risk management and version tracking: The NMPA requires comprehensive life cycle management of software. This includes predefined change protocols, documentation for all updates, and risk analyses to support continued safety and effectiveness.
  • Post-market surveillance and cybersecurity planning: Companies must submit plans detailing how they will monitor performance, respond to adverse events, and secure systems from data breaches. Regulatory submissions should include penetration testing, audit logging, and incident response protocols.
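
To illustrate the transparency expectation, the sketch below shows how a simple logistic-regression-style model can surface a probability alongside per-feature contributions that a clinician can inspect; genuinely black-box models usually require post-hoc tools such as SHAP or LIME instead. The feature names and weights here are hypothetical.

```python
import math

# Hypothetical logistic-regression weights for a triage model.
WEIGHTS = {"age_over_65": 0.8, "troponin_elevated": 1.6, "abnormal_ecg": 1.1}
BIAS = -2.0

def explain_prediction(features):
    """Return probability plus each feature's additive contribution (log-odds)."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return {
        "probability": round(probability, 3),
        "contributions": contributions,   # shown to the clinician as the rationale
    }

print(explain_prediction({"age_over_65": 1, "troponin_elevated": 1, "abnormal_ecg": 0}))
```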

Companies commercializing in China and/or Hong Kong must localize their documentation and engage early with regulatory authorities to clarify expectations and reduce approval time.

Brazil: Proposed Legislation For High-Risk AI Systems

Brazil is drafting legislation that applies across industries. It focuses on high-risk AI systems, including medical devices. Expectations include:

  • Risk classification for AI products: Developers must assess the intended use and criticality of their AI, mapping it to a risk level. High-risk classifications will trigger stricter regulatory scrutiny.
  • Impact assessments for high-risk systems: Companies should prepare detailed assessments describing how AI could impact health outcomes, privacy, or rights. These reports must be evidence-based and updated regularly.
  • Requirements for fairness, explainability, and traceability: Companies will need to demonstrate how the AI is trained, validated, and monitored for fairness. Documentation should be clear and auditable, with logs of all key decisions and changes.

Australia: Adapting Existing Software Regulations To AI

Australia’s TGA applies its software as a medical device (SaMD) guidelines to AI systems. It expects manufacturers to explain:

  • What their AI is designed to do: Companies must clearly define the AI’s clinical purpose and boundaries of use. This should be reflected in all regulatory filings and product labeling.
  • How it was trained and tested: Data sources, model architecture, validation methods, and test performance must be disclosed. The TGA expects clear justification of generalizability across different populations.
  • How they’ll manage algorithm drift or bias over time: Manufacturers need robust change control processes. Plans should include how retraining is triggered, how updates are tested, and how clinicians are notified.

South Korea: Tailored Approach For Adaptive AI Models

South Korea’s MFDS has issued some of the most detailed AI guidance to date. Key provisions include:

  • Retraining on new data is allowed if the model architecture remains unchanged: Developers must document what types of changes are permissible without triggering full reapproval. This includes clearly defined retraining parameters and expected outputs.
  • Clinical validation is required for novel or high-risk systems: For new or high-impact applications, companies must present clinical data that meets Korean standards. Local trials or bridging studies may be needed.
  • Changes must be tracked using robust version control: Each software update must be documented with its rationale, test results, and impact assessment. Companies must ensure traceability between training data, algorithm versions, and clinical outputs.
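
As a minimal illustration of that version-control expectation, the sketch below builds a release manifest that pins a model version to content hashes of its training data and weights, so any clinical output can later be traced back to exactly what produced it. The file names and manifest fields are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of_file(path):
    """Content hash used to pin a file to a specific release."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_release_manifest(version, training_data_path, weights_path, notes):
    manifest = {
        "model_version": version,
        "training_data_sha256": sha256_of_file(training_data_path),
        "weights_sha256": sha256_of_file(weights_path),
        "change_rationale": notes,
    }
    Path(f"release_{version}.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Hypothetical artifacts for a retrained model release.
Path("train_v4.csv").write_text("id,label\n1,0\n2,1\n")
Path("weights_v4.bin").write_bytes(b"\x00\x01\x02")
print(build_release_manifest("4.0.0", "train_v4.csv", "weights_v4.bin",
                             "retrained on new data; architecture unchanged"))
```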

Conclusion

Global regulators are taking bold yet thoughtful steps to shape the future of AI in healthcare. From the FDA’s Total Product Lifecycle model and the EU’s evolving AI Act to Canada’s draft AIDA framework and South Korea’s detailed validation protocols, the message is clear: compliance must evolve to match the adaptive nature of AI technologies.

There is no one-size-fits-all approach; each region has its own interpretation of risk, trust, transparency, and innovation. Companies aiming for global market access need more than a strong algorithm; they need an agile regulatory strategy that can flex across borders while maintaining the highest standards of safety and ethics.

In Part 2, I will move beyond regional overviews and explore how international bodies like the IMDRF, ISO, and NIST are driving global alignment. I’ll also examine the common themes emerging across markets, like the need for life cycle oversight, explainability, and cybersecurity, and offer practical guidance on how medical device organizations can stay compliant, competitive, and future-ready. AI in healthcare is already here. And for those who understand the rules of the road, the opportunities are limitless.

About The Author:

Marcelo Trevino has more than 25 years of experience in global regulatory affairs, quality, and compliance, serving in senior leadership roles while managing a variety of medical devices: surgical heart valves, patient monitoring devices, insulin pump therapies, surgical instruments, orthopedics, medical imaging/surgical navigation, in vitro diagnostic devices, and medical device sterilization and disinfection products. He has extensive knowledge of medical device management systems and medical device regulations worldwide (ISO 13485:2016, ISO 14971:2019, EU MDR/IVDR, MDSAP). He holds a BS in industrial and systems engineering and an MBA in supply chain management from the W.P. Carey School of Business at Arizona State University. Trevino is also a certified Medical Device Master Auditor and Master Auditor in Quality Management Systems by Exemplar Global. He has experience working on Lean Six Sigma Projects and many quality/regulatory affairs initiatives in the U.S. and around the world, including third-party auditing through Notified Bodies, supplier audits, risk management, process validation, and remediation. He can be reached at marcelotrevino@outlook.com or on LinkedIn.