Guest Column | December 3, 2018

Dos And Don'ts Of Assessing Usage Risk

By Natalie Abts, MedStar Health


Risk analysis is a staple of medical product development, but not all risk analyses are created equal. Assessing risks related to human factors requires a specific type of analysis, one focused on user interaction, that thoroughly examines the design and how use error may contribute to unsafe conditions. Though a variety of methods are available to assess usage risk, a Failure Modes and Effects Analysis of usage (uFMEA) is one of the most commonly employed, widely accepted, and reliable tools.

Effectively conducting a uFMEA not only facilitates correct identification of existing usage risk, but also drives the design updates needed to reduce or eliminate that risk. Correctly analyzing usage risk is also a crucial step in meeting the FDA’s human factors requirements. Employing a few key strategies can optimize the process and lessen the burden of risk identification.

Do: Conduct The Right Type Of Risk Analysis

A common mistake before assessment of usage risk begins is failure to select the proper tool. This tends to happen most often if companies have not yet produced a device that must meet human factors requirements and/or do not have specialized human factors expertise within their organization.

It might be tempting to kill two birds with one stone by substituting another type of risk analysis (e.g., a design or engineering risk analysis) for a usage-related one. The problem with this approach is that risk documents not specifically designed to assess user interaction inevitably overlook many potential usage issues. A thorough uFMEA captures potential failures for every individual usage task, while other types of risk analysis may capture only select use issues or identify “use error” as a single, non-specific risk.

Don’t: Conduct A uFMEA Too Late

Another key issue is when the uFMEA is conducted. Too often, developers focus only on other types of risk and do not consider usage-related risk early enough. This can be a symptom of putting human factors activities on the back burner until it is time to meet the FDA’s minimum requirements. By that point, it is often too late to make anything beyond superficial design changes, and superficial changes are unlikely to mitigate residual risk.

A proper uFMEA should be initiated earlier, when prototypes are first developed, at which point it can function as a living document that can be adjusted if unanticipated risk is discovered through ongoing analysis, clinical trial data, or user testing. This approach facilitates detection of gaps in both safety and usability that could lead to unexpected outcomes in the validation stage if not previously identified.

Do: Emphasize Details In Your Task Analysis

Once you initiate a uFMEA, the first step is to identify all tasks performed with the device. A common pitfall at this stage is grouping tasks into broad categories rather than drilling down to individual details. For example, when the analysis is created later in the development process, it is common to use a product’s instructional materials as a guide for the uFMEA. Instructions for use, though, are not necessarily written in a manner that translates well to a risk assessment, and may combine multiple tasks into a smaller number of procedural steps in an effort to shorten the document or simplify instructions for the user.

An instruction for a pre-filled syringe that reads, “After removing the needle cover, pinch the skin and insert the needle at a 90-degree angle,” is essentially composed of three individual tasks with different failure modes, some of which may be safety-critical and some of which may not. Combining these tasks in the uFMEA makes it easier to overlook risks that apply only to a portion of the instruction, which could lead not only to omitted information, but also to misidentification of critical tasks.
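To make the decomposition concrete, here is a minimal sketch of how that single instruction might be split into three separate uFMEA entries, each carrying its own failure modes. The task names, failure modes, and criticality designation are illustrative assumptions for the example, not the output of an actual analysis.

```python
from dataclasses import dataclass, field

@dataclass
class TaskEntry:
    """One row of a uFMEA: a single user task and its potential failure modes."""
    task: str
    failure_modes: list = field(default_factory=list)
    safety_critical: bool = False

# Hypothetical decomposition of the combined syringe instruction into
# three individual tasks, each analyzed separately.
syringe_tasks = [
    TaskEntry(
        task="Remove the needle cover",
        failure_modes=["Cover not removed", "Needle contaminated during removal"],
    ),
    TaskEntry(
        task="Pinch the skin",
        failure_modes=["Skin not pinched", "Wrong injection site selected"],
    ),
    TaskEntry(
        task="Insert the needle at a 90-degree angle",
        failure_modes=["Needle inserted at a shallow angle", "Incomplete insertion"],
        safety_critical=True,  # illustrative designation only
    ),
]

for entry in syringe_tasks:
    print(entry.task, "->", entry.failure_modes)
```

Listing the tasks separately forces each failure mode to be assessed on its own, rather than being hidden inside a combined procedural step.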

Don’t: Overlook The Simple Failures

Identifying how each task can go wrong (i.e., its failure modes) is critical to understanding the potential for use error. One way uFMEAs can be deficient is by overestimating intuitiveness and dismissing simple-seeming failures as improbable.

One of the most basic examples is maintaining aseptic technique through hand-washing, device disinfection, or discarding materials with compromised sterility. These may seem like tasks outside the scope of device design, but some devices use multiple components that require practices which aren’t necessarily part of standard aseptic procedures (e.g., swabbing certain components with alcohol wipes or following specialized cleaning steps). Assuming that these failures do not require assessment, because presumably no provider would ever make them, means failing to identify risk mitigations that may improve compliance.

It also is critical to consider the role of the system in use error. In a real-world environment, particularly a clinical one, users experience time pressure, distraction, high workloads, and a variety of other environmental and system-related factors that make it more difficult to perform tasks correctly, even with a well-designed device. Developers should be doing everything they can to reduce the user’s mental workload, making even the simplest tasks easier to perform.

Don’t: Overestimate Failure Effects

While the more common pitfall in assessing risk severity is a tendency to underestimate failure effects (i.e., the consequences of a failure mode), it also is problematic to go overboard when determining failure consequences. If multiple use failures or device failures need to occur to result in a severe consequence, it is unlikely that the effects are within the reasonable bounds of what can be designed out of the system.

For example, imagine that a nurse fails to address an alarm on a patient’s wound pump. Theoretically, this could lead to serious patient harm (or death) if left unattended long enough. Realistically, though, the pump prevents this from happening by alarming repeatedly if the problem is not fixed, providing visual and audio cues that something is wrong, and preventing the user from dismissing the alarm if the problem is not addressed. For a patient to suffer severe consequences resulting from an unaddressed alarm, serious negligence would have to occur, repeatedly, over a long period of time. Thus, failure to address the alarm would likely be associated with low-severity outcomes, rather than high-severity ones.
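One way to sanity-check that reasoning is to rough out the arithmetic. The sketch below uses made-up probabilities and assumes the lapses are independent; it is purely illustrative, but it shows how quickly the likelihood of the severe outcome collapses when several safeguards would all have to fail.

```python
from math import prod

# Hypothetical per-event probabilities that each safeguard fails to catch
# the problem: the initial alarm is missed, every repeat alarm is ignored,
# and the visual cues on the pump go unnoticed. Values are illustrative only.
lapse_probabilities = {
    "initial alarm missed": 0.05,
    "repeat alarms ignored over the full interval": 0.01,
    "on-screen and indicator-light cues unnoticed": 0.02,
}

# Treating the lapses as independent, severe harm requires all of them to occur.
p_severe_outcome = prod(lapse_probabilities.values())
print(f"Estimated chance of the severe outcome: {p_severe_outcome:.6f}")  # 0.000010
```

The specific numbers do not matter; the point is that a consequence requiring a long chain of independent lapses should not drive the severity rating for a single failure mode.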

Risk overestimation can lead to inaccurate identification of critical, high-risk tasks, and can make a device appear far more dangerous than it is.

Do: Identify Both Use-Related And Design-Related Causes

Identifying potential failure causes is crucial to understanding how they can be addressed. It is important to remember that the term “use error” does not imply that the user is to blame. Many use errors are influenced by design deficiencies, and blaming the user entirely for these issues fails to encourage design optimization.

For example, if a nurse enters an incorrect medication volume into an infusion pump, use issues could be contributing factors (e.g., the user forgot the value they intended to enter, misread the medication order, or failed to double-check the value before starting the infusion). However, design issues must be considered, as well:

  • Is the functionality to enter the values intuitive?
  • Are the numbers on the keypad appropriately spaced to mitigate erroneous button presses?
  • Is the user prompted to confirm the values before starting the infusion?
  • Are there appropriate limits in place to prevent the user from entering values that are unreasonably high or unreasonably low?

These all are examples of design issues that could be contributing factors to the failure. Designs can remain unsafe if these possibilities are ignored and blame is instead placed entirely on the user.
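As a simplified illustration of the last two questions on that list, the sketch below checks an entered volume against hypothetical soft and hard limits and requires confirmation before the infusion can start. The limit values, function names, and messages are assumptions made for the example, not taken from any real pump.

```python
# Hypothetical dose limits for one medication, in mL. Real pumps draw these
# from a drug library; the numbers here are purely illustrative.
HARD_MIN_ML, HARD_MAX_ML = 0.5, 500.0   # entry rejected outside this range
SOFT_MIN_ML, SOFT_MAX_ML = 1.0, 250.0   # entry allowed, but flagged for review

def validate_volume(volume_ml: float) -> str:
    """Return 'reject', 'warn', or 'ok' for an entered infusion volume."""
    if not (HARD_MIN_ML <= volume_ml <= HARD_MAX_ML):
        return "reject"   # hard limit: the pump will not accept the value
    if not (SOFT_MIN_ML <= volume_ml <= SOFT_MAX_ML):
        return "warn"     # soft limit: require an explicit override
    return "ok"

def start_infusion(volume_ml: float, confirmed: bool) -> bool:
    """Start only if the value passes the limits and the user has re-confirmed it."""
    status = validate_volume(volume_ml)
    if status == "reject":
        print(f"{volume_ml} mL is outside the allowed range; re-enter the value.")
        return False
    if status == "warn":
        print(f"{volume_ml} mL exceeds the usual range; override required.")
        return False
    if not confirmed:
        print(f"Confirm {volume_ml} mL before starting.")
        return False
    print(f"Infusion started at {volume_ml} mL.")
    return True

start_infusion(1000.0, confirmed=True)   # rejected by the hard limit
start_infusion(50.0, confirmed=False)    # blocked until the user confirms
start_infusion(50.0, confirmed=True)     # proceeds
```

In this sketch, the hard limit acts as a design-level control, while the confirmation prompt is the kind of secondary safeguard discussed in the next section.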

Do: Identify Secondary And Tertiary Controls

Part of completing a thorough uFMEA is identifying controls that are in place to prevent failure modes from occurring, then examining how they can be improved or supplemented by additional controls. The most important controls are design-related mitigations that eliminate or reduce the probability for error.

It is important to identify gaps in design controls and, when tasks are identified as high-risk, to first look at design updates as the primary measures for risk reduction. Knowing that error cannot always be designed completely out of a system, secondary and tertiary measures also can be taken. These include adding alerts or confirmation screens, designing intuitive labeling, and developing robust and usable instructional materials and training programs. Thoroughly assessing risk measures requires identification of controls related to each of these parts of the system, and including them in the uFMEA can help to identify gaps in design-based mitigation.
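One way to keep that hierarchy visible in a uFMEA worksheet is to record each control with its level and flag high-risk tasks whose only mitigations are informational. The level names, example tasks, and flagging rule below are illustrative assumptions, not a prescribed scheme.

```python
from enum import IntEnum

class ControlLevel(IntEnum):
    """Illustrative ordering: lower values are stronger mitigations."""
    DESIGN = 1        # error eliminated or made improbable by the design itself
    SAFEGUARD = 2     # alerts, interlocks, confirmation screens
    INFORMATION = 3   # labeling, instructions for use, training

# Hypothetical high-risk tasks with their recorded controls.
task_controls = {
    "Program infusion volume": [
        ("Dose limits block out-of-range entries", ControlLevel.DESIGN),
        ("Confirmation screen before start", ControlLevel.SAFEGUARD),
        ("Quick-reference card on dose entry", ControlLevel.INFORMATION),
    ],
    "Swab connector before attaching tubing": [
        ("Step highlighted in the IFU", ControlLevel.INFORMATION),
    ],
}

# Flag tasks that rely only on information-level controls, since those are
# the weakest mitigations in this illustrative scheme.
for task, controls in task_controls.items():
    strongest = min(level for _, level in controls)
    if strongest >= ControlLevel.INFORMATION:
        print(f"Gap: '{task}' has no design- or safeguard-level control.")
```

Recording controls this way makes it easy to see where a high-risk task is propped up only by labeling or training and where a design-based mitigation is still missing.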

Risk assessment is perhaps one of the most complicated processes in device design, but also one of the most critical. uFMEA is a valuable component of this process that incorporates usage risk into the overall risk management plan. By utilizing the tool correctly, we can create safe and usable devices that also meet FDA standards.

About The Author

Natalie Abts is the Senior Program Manager for the Usability Services division of the National Center for Human Factors in Healthcare. She manages the technical and quality aspects of usability projects conducted both for the medical device industry and within MedStar Health. Natalie has specialized experience in planning and executing both formative stage usability evaluations and validation studies for medical devices and combination products on the FDA approval pathway. She also leads an initiative to incorporate usability testing into the medical device procurement process in the MedStar Health system, and is active in delivering educational presentations to the medical device industry and other special interest groups. Natalie holds a master’s degree in industrial engineering, with a focus on human factors and ergonomics, from the University of Wisconsin, where she was mentored by Dr. Ben-Tzion Karsh.