Ensuring Privacy & Security In Smart Medical Devices
By Ana Fernandes, Radhika Bogahapitiya, and Sharad Patel, PA Consulting
Pressures on healthcare systems are set to worsen as aging populations require round-the-clock care and chronic disease rises sharply. Technology-led smart solutions are becoming a lifeline, balancing cost reduction and improved efficiency with better patient outcomes. We found that by 2030, the global market for at-home care will be worth $390.4 billion, an increase of $70 billion from today.
To this end, smart medical devices and at-home care have gathered significant momentum in recent years, as they allow healthcare providers to remotely monitor patients’ health and provide higher-quality, personalized care while benefiting from improved margins. The opportunity is exciting, but integrating these devices into healthcare systems is not without challenges, with patient trust and regulatory compliance high on the agenda. In this article, we explore how organizations can build privacy, security, and ethics by design into medical devices.
Many technology components make up smart medical devices. Embedding security, privacy, and ethics across the entire life cycle of the device, from ideation to end of use, presents multiple challenges:
- A globally divergent and polarized set of medical device, privacy, security, and AI regulations raises operational, financial, and compliance issues. For example, global players need to navigate regional regulations such as the EU's General Data Protection Regulation (GDPR), local or state-level regulations such as the California Consumer Privacy Act, and sectoral requirements such as the Health Insurance Portability and Accountability Act and Section 524B of the Federal Food, Drug, and Cosmetic Act in the U.S.
- A multifaceted security landscape, in which threat actors continue to target and exploit device vulnerabilities with increasing intensity. This naturally raises questions around confidentiality (is data encrypted during transmission?), integrity (is a device reading as reliable as an in-person visit?), and availability (what happens when a data transmission fails?).
- Ethical issues with the use of AI to enhance device and analytics capabilities, raising concerns around bias, stigmatization, and discrimination through non-transparent disclosure of sensitive data to manufacturers, healthcare professionals, or insurers.
Increasingly, we have seen organizations struggling to get ahead of these challenges due to:
- A lack of fit-for-purpose, enterprise-wide privacy, security, or AI frameworks that can be implemented in the development process.
- Organizational silos preventing stakeholders from collaborating to deliver trusted outcomes.
- A lack of understanding of what needs to be done at each stage of the device's life cycle.
Successfully navigating these challenges creates opportunities for smart medical device manufacturers to stand out from the crowd while delivering better patient outcomes, building trust across the value chain, and reducing the strain on healthcare systems.
So, what can be done in practice?
- Co-create and embed a trust by design framework that strikes a balance between low-level, prescriptive requirements and high-level, principle-based business requirements, by:
- bringing together functions such as product management, R&D, software development, and regulatory/quality to co-create and co-own requirements, reflecting their perspectives;
- balancing globally applicable requirements with flexibility for local needs, while maintaining traceability across regulations, case law, and industry standards;
- embedding governance, review, and assurance across the life cycle of devices and surrounding components to enforce use of the framework and validate how product teams meet key requirements.
- Set up integrated teams that break down organizational silos to deliver trusted security, privacy, and ethical outcomes by:
- mapping stakeholder objectives (e.g., marketing teams looking to comply with consent requirements while using preferences to market personalized content) and validating perspectives to drive engagement from the outset;
- challenging the composition of product delivery teams to include not only engineers but also regulatory, quality, risk, security, and privacy practitioners; and
- creating training and easy-to-use guidance or operating procedures that resonate with stakeholders and speak to real-life scenarios to enable engagement. Approaches such as prompt cards embedded across the product development life cycle often support a mindset shift.
- Use well-designed processes and governance, risk, and compliance (GRC) tooling to simplify the compliance activities that need to be delivered by product teams, by:
- standardizing compliance assessments across similar disciplines, such as security and privacy;
- integrating GRC tools managed by different functions to enable cross-pollination of compliance assessments/responses and to reduce duplication (e.g., responses provided to a security assessment used for privacy assessments);
- embedding smart workflows and logic so that assessments are dynamic, showing or hiding questions based on earlier answers (a simple sketch of this kind of conditional logic follows this list); and
- using reports and dashboards so that risk remediation/control efforts can be prioritized.
- Leverage key stages of the product life cycle to enable privacy, security, and ethics:
- Concept generation – Include objectives around these disciplines as part of business cases to further iterate over time;
- Proof of concept – Validate minimum control requirements early on, using the trust by design framework, so that feasibility is considered up front;
- Integrated development – Consistently engage with experts to seek advice, validate the design of controls, and review compensating controls;
- Testing and scale-up – Dedicate effort to privacy, security, and ethics controls testing so that the operating effectiveness of key controls is validated before go-live or scale-up;
- Post-market surveillance – Embed processes to identify event trends, prioritize and implement incremental updates to remediate vulnerabilities, and improve product security/privacy features through product increments; and
- End of life – Give patients control over deleting their data, for example through privacy trust centers that are widely used across the technology and consumer industries, before devices are returned and/or supporting applications are decommissioned.
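To make the idea of dynamic, logic-driven assessments more concrete, the short Python sketch below shows one way conditional questioning and answer reuse could work. It is a minimal illustration only, assuming a simple in-memory questionnaire: the question identifiers and wording, the show_if rules, and the seed_privacy_assessment helper are hypothetical and are not drawn from any particular GRC platform.

from dataclasses import dataclass
from typing import Callable, Dict, Optional

Answers = Dict[str, str]

@dataclass
class Question:
    qid: str
    text: str
    discipline: str  # e.g., "security" or "privacy"
    show_if: Optional[Callable[[Answers], bool]] = None  # None means always ask

# Illustrative questions; a real assessment library would be far larger.
QUESTIONS = [
    Question("data_collected", "Does the device collect patient data?", "privacy"),
    Question("data_in_transit", "Is data transmitted off the device?", "security",
             show_if=lambda a: a.get("data_collected") == "yes"),
    Question("encryption", "Is data encrypted in transit?", "security",
             show_if=lambda a: a.get("data_in_transit") == "yes"),
    Question("retention", "How long is patient data retained?", "privacy",
             show_if=lambda a: a.get("data_collected") == "yes"),
]

def run_assessment(answer_source: Callable[[Question], str]) -> Answers:
    # Ask only the questions whose show_if condition is met by earlier answers.
    answers: Answers = {}
    for q in QUESTIONS:
        if q.show_if is None or q.show_if(answers):
            answers[q.qid] = answer_source(q)
    return answers

def seed_privacy_assessment(security_answers: Answers) -> Answers:
    # Cross-pollination: reuse overlapping answers so product teams are not asked twice.
    shared = {"data_collected", "data_in_transit", "encryption"}
    return {qid: val for qid, val in security_answers.items() if qid in shared}

if __name__ == "__main__":
    # Simulated responses from a product team; a GRC tool would collect these via a form.
    canned = {"data_collected": "yes", "data_in_transit": "yes",
              "encryption": "yes", "retention": "10 years"}
    completed = run_assessment(lambda q: canned[q.qid])
    print("Completed assessment:", completed)
    print("Seed for privacy assessment:", seed_privacy_assessment(completed))

In practice, this kind of conditional logic would typically live inside a GRC tool's workflow engine rather than in custom code; the sketch simply shows how earlier answers can drive which questions appear and how responses can be reused across security and privacy assessments.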
There are many considerations when building and maintaining trust in smart medical devices. For the most part, organizations operating in this space will continue to see rapid evolution in technology trends and user needs that increasingly drive requirements around security, privacy, and ethics. To date, manufacturers have proactively published security white papers and explained security controls through mechanisms such as the MDS2 form, a template for manufacturers to describe the key security controls of their devices. Going forward, these will need to extend to privacy and ethical considerations, as it will soon become an operating imperative for smart medical device manufacturers to overtly demonstrate compliance and trust to patients, healthcare providers, and governments.
About The Authors:
Ana Fernandes is an experienced subject matter expert at PA Consulting on data protection, leveraging her European Parliament and legal backgrounds to help clients achieve compliance in this area. Her experience includes leading large teams to implement GDPR and privacy programs, supporting clients through regulatory, operational, and reputational challenges (including emerging and new risks) across geographies. Prior to becoming a consultant, she was a policy advisor at the European Parliament in Brussels working on the Data Protection Package (including the GDPR). Connect with her on LinkedIn.
Radhika Bogahapitiya is a data protection expert at PA Consulting who has worked with organizations in the medtech and pharmaceutical sectors for over a decade to embed privacy and security practices. His experience includes leading multidisciplinary teams working with product, regulatory, and quality experts to embed privacy, ethics, and security controls into key processes, as well as working with product teams to seek practical solutions to remediate critical risks. Connect with him on LinkedIn.
Sharad Patel leads PA Consulting's Data Privacy and Cyber Security team in the healthcare, life sciences, and medtech sector. He has over 20 years’ experience in helping organizations design, build, and embed the right security and privacy controls in their business operations, helping them with regulatory compliance and building trust with patients, HCPs, and wider stakeholders. He is CIPP/E, CIPM, FIP and CISM certified, along with being an ISO 27001 Implementation Practitioner. Connect with him on LinkedIn.