By Matthew Cavanagh and Madeleine Friel, Design Science
Have you read part one of this series? It discusses the “white hats” — programmers and engineers who have lifted the hood on medical devices to understand their underlying mechanics.
In response to growing concerns about the cybersecurity of networked medical devices, the FDA recently updated its guidelines outlining how manufacturers can ensure their networked devices are safe and market ready[i]. Though these guidelines are a step in the right direction, they remain open to manufacturer interpretation and are not legally mandated by the FDA. Given the growing value of protected health information (PHI) and the increasing likelihood that medical devices will contain PHI, manufacturers must be proactive in securing their networked devices.
These guidelines lay out terms for “effective medical device security management” and point out that “medical device security is a shared responsibility between stakeholders.” The FDA’s definition of “stakeholders” includes manufacturers, patients, health care facilities, and health care providers — which indicates that, in a cybersecurity-related adverse event, the manufacturer is not the sole responsible party. This differs from human factors device validation, where adverse events related to the device are considered failures of the device itself and, therefore, the responsibility of the manufacturer.
Furthermore, the guidelines recommend that manufacturers carefully consider the necessity of networking a device at all. If the manufacturer determines that networking the device is necessary, then it should take measures to ensure the device is accessible to trusted users only, and that there are features in place to detect, respond to, and recover from a security breach. The guidelines also state that the overall usability of the device must be considered, and if the security measures impinge on the usability and effectiveness of the device, then these measures must be reconsidered. This implies that the networking capabilities of a device are secondary in importance to a device’s functionality.
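The guidance’s core principles — restrict access to trusted users, then detect, respond to, and recover from a breach — can be sketched in miniature. The following is purely an illustration under assumed names (the allowlist, thresholds, and credential check are all hypothetical), not an implementation any guideline prescribes:

```python
import time
from collections import defaultdict

# Hypothetical sketch of the guidance principles: allow only trusted
# users, detect repeated failed logins, and respond with a lockout.
TRUSTED_USERS = {"clinician_01": "hashed-credential"}  # assumed allowlist
MAX_FAILURES = 3          # failures tolerated before lockout
LOCKOUT_SECONDS = 300     # how long a locked account stays locked

failed_attempts = defaultdict(list)  # user -> timestamps of failed logins

def authenticate(user: str, credential: str) -> bool:
    """Grant access to trusted users only; detect and respond to brute force."""
    now = time.time()
    # Respond: refuse access while the account is in a lockout window.
    recent = [t for t in failed_attempts[user] if now - t < LOCKOUT_SECONDS]
    failed_attempts[user] = recent
    if len(recent) >= MAX_FAILURES:
        return False
    # Trusted-user check (a real device would verify a salted credential hash).
    if TRUSTED_USERS.get(user) == credential:
        failed_attempts[user].clear()  # recover: reset state on success
        return True
    # Detect: record the failed attempt for audit and future lockout.
    failed_attempts[user].append(now)
    return False
```

A real device would pair this kind of gatekeeping with audit logging and a recovery path, which is where the usability trade-off the guidelines mention becomes concrete: every added check is also a step a clinician must clear in an emergency.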
It’s Not An Issue Until It’s A Problem
Despite the increase in public attention to the cybersecurity of networked devices, it is not surprising that the current guidelines are vague and legally nonbinding: although security experts have expressed serious concern regarding medical device cybersecurity, no adverse events have actually been publicized so far — aside from some spectacular incidents on television shows like Homeland and CSI.
The lack of actual adverse events has not stopped security experts and researchers from continuing to highlight the issue. Recently, researchers were able to kill a simulated human by turning off its pacemaker[ii]. It took undergraduates from the University of South Alabama just a few hours to hack into and disable the pacemaker. The director of the simulations program at the university said, "It's not just a pacemaker, we could do it with an insulin pump, a number of things that would cause life-threatening injuries or death."
This instance is one of many where researchers have exposed the vulnerabilities of networked devices, such as pacemakers or insulin pumps. Experts warn that it would not be difficult to gain control of a hospital network and then use it to disable the entire hospital's insulin pumps[iii]. This is not just the FDA's problem; it’s also a concern for the Department of Homeland Security[iv].
On the other hand, there are plenty of examples of patients exploiting weak medical device security to innovate or customize their devices. In part one[v] of this series, we discussed these “white hats,” or benevolent hackers, and how they have put their hacking skills and programming abilities to use. It takes manufacturers a long time to develop and validate a device, and even longer to update it in parallel with advances in technology. The white hats’ consistent complaint is that this pace of innovation is too slow; rather than wait for manufacturers to catch up, they customize their devices on their own.
The FDA is aware of this limitation and factors the pace of validating software into its calculations on cybersecurity. Adding networking validation would slow down innovation, the agency explains[vi], and, since there have not yet been any adverse events — however highly anticipated they may be — it would be wasteful to add to the overall validation process at this time, adding more cost than benefit in the long run.
Except, It IS A Problem
Based on the evidence, it is hard to disagree with the FDA. Why spend additional time and money to address something that has not actually been an issue, thus far? This question demonstrates one reasonable obstacle to implementing legally mandated network validation. However, remotely accessing and disabling medical devices is not the only way that vulnerable networked devices may cause harm: Less spectacular risks lurk in the shadows.
Networked medical devices can send and receive data — data that often qualifies as PHI and should be handled accordingly. As hackers recently demonstrated, even a user’s FitBit can tell a hacker a significant amount about its wearer, from where they walk to how much they sleep[vii]. Additionally, FitBit-like devices are now being used to generate large bodies of data on motor disorders such as Parkinson’s disease[viii]. That data would be even more valuable to a thief and, without adequate security precautions, just as straightforward to steal.
While this is one concern, it is not the primary form of PHI theft. Most commonly, PHI theft occurs to gain access to patient billing, so that a hacker can bill for services under another person's name[ix]. Currently, this information is obtained via medical records, but we anticipate that stationary medical devices — with their less-than-secure place within hospital networks — may be next in line.
This presents the possibility of significant harm to patients. For example, if a patient is billed fraudulently and the security breach goes undetected, the patient may be unable to afford the most appropriate and effective treatment. The legal costs of untangling such an event may further prevent the patient from continuing to seek effective healthcare.
Whose Responsibility Is Medical Device Cybersecurity?
Currently, maintaining the security of PHI is generally the responsibility of patients and healthcare providers. This arrangement is logical, as patients and healthcare providers have the greatest and most direct access to PHI. But, as medical devices become more networked and more integrated into our everyday lives, it becomes less clear whose responsibility it is to protect PHI.
Having no clear locus of responsibility makes the issue harder to address effectively. Currently, PHI theft is generally committed with some knowledge or complicity on the patient’s part, and it is treated as a misuse of privileges by people close to the patient rather than as a true cybersecurity attack[x]. The question of who is the primary protector of PHI is still evolving, and adding networked devices to the mix only complicates it further.
As discussed above, FDA guidelines currently state that maintaining medical device cybersecurity is a shared responsibility. But, if there is an adverse event, who is to blame? Recent incidents of hackers stealing pictures and other personal information from celebrities’ smartphones raised the same important question: Who is responsible for protecting user information from security breaches? Although the hacked celebrity devices represent a similar situation, the consequences of a PHI security breach are arguably more serious. And, with networked devices, there will be more opportunities for hackers to gain access to PHI without a patient’s knowledge or complicity.
Networking medical devices adds a new dimension to the already unclear, unsolved problem of who is responsible for protecting PHI. The hacking of pacemakers and infusion pumps seems a dramatic and imminent threat, but not yet one substantial enough to prompt definitive action. Meanwhile, the real threat of PHI theft demands legislative change, even as the struggle to define ultimate responsibility continues to delay it.
Manufacturers need to take device cybersecurity into their own hands, a process that starts with a thorough risk assessment of the networked aspect of their device, using the same careful approach that is used for human factors risk assessments. Manufacturers can also determine what PHI is available on their device, remove all nonessential patient identifiers, and encrypt any other possible identifiers.
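The de-identification step described above — drop nonessential identifiers, then obscure the ones the device must keep — might look something like the following sketch. All field names and the key here are hypothetical, and a production device would use managed keys and proper encryption rather than the simple keyed hashing shown:

```python
import hmac
import hashlib

# Illustrative only: strip nonessential identifiers from a record and
# pseudonymize the identifiers a networked device must still transmit.
ESSENTIAL_FIELDS = {"device_id", "glucose_mg_dl", "timestamp"}  # assumed schema
IDENTIFIER_FIELDS = {"device_id"}           # identifiers kept but obscured
SECRET_KEY = b"device-specific-secret"      # hypothetical; keep in secure storage

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_for_transmission(record: dict) -> dict:
    """Drop nonessential fields, then tokenize any remaining identifiers."""
    cleaned = {k: v for k, v in record.items() if k in ESSENTIAL_FIELDS}
    for field in IDENTIFIER_FIELDS & cleaned.keys():
        cleaned[field] = pseudonymize(str(cleaned[field]))
    return cleaned

record = {
    "patient_name": "Jane Doe",          # nonessential identifier: removed
    "device_id": "PUMP-0042",            # needed for telemetry: tokenized
    "glucose_mg_dl": 112,
    "timestamp": "2015-11-03T09:15:00Z",
}
safe = prepare_for_transmission(record)
```

The design point is that the clinical payload (readings, timestamps) survives intact while nothing in the transmitted record directly names the patient — the same risk-assessment mindset manufacturers already apply to human factors.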
Networked medical devices will only become more integral to healthcare, and we can anticipate a future where physicians have direct access to their patients’ networked devices. When that happens, it will be even more critical to take responsibility for what PHI is made available on devices.
About The Authors
Matt Cavanagh specializes in ethnographic research, with experience managing data analysis, research synthesis, and deliverable creation. Recent projects involve graphical user interfaces, product labeling, surgical instrumentation, injection devices, implant devices, and robotic surgical systems.
At Design Science, Madeleine Friel contributes to all aspects of usability research, including protocol creation, study moderation, data analysis, and report writing. She also participates in ethnographic research, supporting data analysis and deliverable creation. Recent project work involves surgical sealant processes, injection devices, infusion systems, nasal inhalation devices, and web-portal design.