By Natalie Abts, Genentech
When the long haul of medical device development is over and a device has been cleared for use by the FDA, it can be tempting to forget about human factors until it’s time for your next product validation. But device companies that take this approach are missing out on the opportunity to maximize device usability in a way that goes beyond the FDA requirements for human factors.
Because the requirements focus heavily on usability issues related to safety, it can be tempting to settle for the minimum and assign less weight to usability problems that won't cause harm. The problems with this approach are twofold. First, it is not always possible to catch every unanticipated use error during validation. Second, usability problems that are not directly related to safety risks can still affect purchasing decisions and device acceptance. Focusing only on anticipated safety risks might get you past the FDA, but treating usability primarily as an FDA hurdle to clear is shortsighted, and it can have unanticipated effects. So how can focusing only on the minimum requirements affect post-validation success?
Adverse Events Still Occur With Validated Devices
Validation studies aren't perfect. Although a robust validation study is intended to evaluate all high-risk errors, there is always the potential for unanticipated use errors to emerge once the device is on the market. Use errors are inevitable (even if many are low-risk), and poor device design can be a contributing factor in adverse events.
If you complete a human factors validation study without the occurrence of serious safety errors, you may conclude that the device is safe. However, we must remember that serious safety errors can be rare events, and the 15-user requirement for validation testing may not capture all possibilities. Extending the reach of usability evaluations beyond minimum requirements increases opportunities to find these errors, and to mitigate them through design.
Hospitals Are Interested In Usable Devices
There are many factors involved in device purchasing decisions, and usability has emerged as a relevant consideration. Hospitals are not just looking for low-cost options, but need devices that can be used efficiently and do not lead to a frustrated staff. Thus, it is not sufficient to simply pass the FDA’s expectations for mitigating risk through design; manufacturers should be thinking about taking usability beyond the standard requirements if they want to remain competitive.
Users are becoming more savvy and will not simply accept a device because it has been cleared by the FDA. Healthcare providers are looking outside of the standard device fairs and manufacturer demos for information. They are consulting with colleagues, doing online research, and thinking critically about device characteristics that would make their jobs easier.
In addition to device end users, the decision-makers for hospital device purchasing are becoming more savvy about usability. Although it remains an emerging practice, several major hospital systems in the United States and Canada are using formal usability evaluations as a factor in device purchase decision-making. For manufacturers selling their devices in a competitive market, those study results could make the difference in a tough purchasing decision.
Case Study: Infusion Pumps
To demonstrate a post-market scenario, let's look at a real case study of a hospital system's effort to purchase a new infusion pump (details omitted to protect the device manufacturers' identities). The hospital narrowed its options to two candidate pumps. Human factors engineers performed comparative usability testing with device end users to determine which candidate was likely the safest and most usable option. Performance data and subjective feedback were obtained through critical and essential task scenarios performed in a simulated setting.
User performance showed that one pump had several flaws with the potential to cause safety-related errors. First, the interface failed to prompt users to prime the pump before beginning an infusion, a vital task that could cause serious patient harm if omitted. Second, the design encouraged manual data entry, which bypasses key safety checks that prevent incorrect programming and could lead to delivery of an inaccurate medication dose. Finally, the pump was designed exclusively for use by healthcare providers, even though providers could realistically send patients with certain conditions home with the pump to continue treatment outside the hospital.
Although the FDA may have concluded that this device's risk-mitigation strategies were reasonable, the pump presented a higher probability of safety-related errors than its competitor. The safer device mitigated these problems by prompting the user to prime the pump before beginning an infusion, making it easier and more intuitive to use the pre-programmed drug library, and offering a home-use version of the pump specifically designed for patient users. The results of the usability evaluation were a key factor in the hospital system's decision to purchase the device its human factors team concluded was the safer design.
Designing For Post-Validation Success
The key to reducing the probability of usability issues after your device has been cleared is to think about usability earlier, incorporate more formal evaluations, and look beyond high-risk tasks. The value of iterative testing shouldn't be underestimated: it gives you more time to experiment with design changes and try out multiple options.
Small formative studies are relatively inexpensive and require few users, but they can yield valuable data. It is ideal to discover usability issues early, when the trade-offs for making design changes are relatively small, so designs can be updated before the product nears finalization. If issues are discovered too late, low-risk (but still potentially important) design problems are more likely to remain unaddressed.
Device developers should also focus on expanding the scope of user needs and viewing them through the lens of a systems approach. If devices are developed without thinking about the work system in which they will be incorporated, implementation may not be effective. One systems approach, the Systems Engineering Initiative for Patient Safety (SEIPS) model, defines the system to include people, tools and technology, tasks, the physical environment, and organizational conditions. If a change is made to one system component (e.g., introducing a new device), the other system components will be affected.
For instance, introducing a new device that is radically different from the one it is replacing could result in workflow changes, reduced efficiency in completing tasks, device interoperability problems, and other related issues. Failure to account for the work system components in the design stages may not lead to safety-critical consequences, but it can have a profound impact on productivity and satisfaction.
Beware Of Subjective Feedback
Subjective feedback is another form of data that can be underestimated or misunderstood. Early-stage interviews and observations, along with subjective questioning incorporated into usability studies at all stages, can yield valuable information. Be careful, though, not to use market research or user experience data as a substitute for proper usability testing. Although the lines between these methods are blurry, market research and user experience evaluations tend to focus on personal preference and user satisfaction rather than on analysis of actual use.
The infusion pump purchasing case described above shows how user satisfaction data can be misleading. A large majority of the users who participated in the study indicated a preference for the device that was less safe but boasted a sleeker, more aesthetically pleasing design. This was true even of participants who were observed struggling to operate the device successfully. Many of these users completed the task scenarios without realizing they had performed unsafe acts or had overlooked potential safety issues with the device. Although the standard approach for a purchasing decision is to gather subjective feedback after users are exposed to devices through marketing presentations or device fairs, that approach would have been misleading in this case, and valuable performance data would never have been collected.
This example demonstrates the cautious approach that should be taken with feedback that is not based on performance or insight into realistic use. There are many reasons why users might indicate satisfaction with a device, and making design decisions based on that data could be a mistake. Users can be swayed by aesthetic appeal, indicate satisfaction to mask their struggles with task completion, or simply rate satisfaction highly because they are indifferent. To generate more useful information, satisfaction data should be tailored toward how the device might promote safe and successful task completion, fit into current workflow, affect standards of practice, and function within the overall work system. Focusing on satisfaction ratings and aesthetic appeal may give you a false sense of how accepted your device will be in the real world.
Device developers need to think beyond a structured validation study when considering device usability. Data not directly related to risk — which might be less scrutinized by the FDA — may appear less important, but a decision to focus only on minimum safety requirements could backfire once you reach the market. Thinking beyond the risk requirements greatly increases the likelihood of product success, and will help developers take their products to the next level.