Final FDA Human Factors Guidance: 10 Updates That Affect Your Validation
By Natalie Abts, National Center for Human Factors in Healthcare
Recently, we saw the long-awaited release of the FDA’s final human factors guidance, Applying Human Factors and Usability Engineering to Medical Devices. The document underscores the critical importance of incorporating a robust human factors approach throughout the design and development process, resulting in devices that are safe and usable for the intended user population in the intended use environment. Drawing from our experience as consultants helping medical device manufacturers navigate the 510(k) and PMA submission processes, we’ve highlighted 10 critical updates concerning device validation:
1. What’s My Interface Again?
A key update to the definition of the device user interface (UI) directly affects task selection for validation testing (referred to in the draft guidance as “summative testing”). Here, the UI is clearly defined as “including all points of interaction between the product and the user(s) including elements such as displays, controls, packaging, product labels, etc.”
This is an incredibly important definition for the industry, as it clarifies the expectation that testing should not be limited strictly to user interaction with the screen and physical controls on the device: tasks involving the device accessories, instructional materials, and training must also be considered and tested. If you fail to define your UI comprehensively from the outset, you run the risk of missing opportunities to mitigate use errors, and you could face an uphill battle with your submission.
2. Critical Task Selection Remains Critical
Correctly identifying your critical tasks is the backbone of designing a robust, safety-focused study and increasing your chances of a successful validation. There are different ways to analyze risk, but the new guidance makes one thing very clear: The most important element of your risk analysis is the severity of the potential harm that could result from use errors. Thus, even if a use error is rare, any task with potential high-severity outcomes needs to be taken seriously and included in formative and summative stage evaluations. In general, the FDA’s approach to risk provides clearer guidance on how to identify critical use errors before validation.
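To make the point concrete, here is a minimal sketch of how severity-driven critical task selection might be formalized in a risk file. The task names, severity scale, and threshold below are illustrative assumptions, not values prescribed by the guidance:

```python
# Minimal sketch of severity-driven critical task identification.
# The severity scale, threshold, and example scenarios are illustrative
# assumptions, not values taken from the FDA guidance.

from dataclasses import dataclass


@dataclass
class UseScenario:
    task: str            # user task on the device interface
    potential_harm: str  # worst-case harm if the use error occurs
    severity: int        # 1 = negligible ... 5 = catastrophic (assumed scale)
    probability: float   # estimated likelihood of the use error


CRITICAL_SEVERITY_THRESHOLD = 4  # assumed cutoff for "serious harm"


def critical_tasks(scenarios: list[UseScenario]) -> list[str]:
    """Return tasks whose worst-case harm meets the severity threshold.

    Probability is deliberately ignored: a rare use error with
    high-severity consequences still makes the task critical.
    """
    return sorted({s.task for s in scenarios
                   if s.severity >= CRITICAL_SEVERITY_THRESHOLD})


if __name__ == "__main__":
    scenarios = [
        UseScenario("Set infusion rate", "Overdose", severity=5, probability=0.001),
        UseScenario("Prime tubing", "Air embolism", severity=5, probability=0.0005),
        UseScenario("Adjust screen brightness", "None", severity=1, probability=0.2),
    ]
    print(critical_tasks(scenarios))  # ['Prime tubing', 'Set infusion rate']
```

The key design choice mirrors the guidance’s emphasis: probability plays no role in deciding which tasks are critical, only the severity of the potential harm does.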
3. Formative Is More Informative
Formative evaluations and their relationship to validation testing are strongly emphasized, and the guidance states that when use errors requiring device changes are discovered during validation testing, the validation test essentially becomes a formative test. This appears to indicate that, even though formative testing is not an official requirement for a successful submission, omitting human factors during the formative stages of device design may place your submission under more scrutiny.
Even with many options available in the formative stage, the methods you choose should meet the goal of uncovering use issues so the device design can be updated before validation testing, at a stage when (to quote the guidance) design flaws can be addressed “more easily and less expensively than they could be later in the design process.”
4. Training On Incorporating Training
The final guidance expands upon the FDA’s expectations for incorporating training into validation testing. Although the draft guidance briefly discussed training expectations, the information was ambiguous enough that making informed test-design decisions was difficult. The final guidance emphasizes keeping training realistic in terms of content, format, and method of delivery, meaning training should not be altered to focus on specific usability issues or to omit training content perceived as unrelated to the device itself.
Another important specification is the acceptable length of the training decay period, with examples ranging from one hour to several days between training and validation testing. The important thing to keep in mind when making this decision is that, even if a one-hour decay period might be acceptable, that does not mean we should aim for the lowest possible standard in every validation. Testing conducted under the best possible circumstances does not prove the device will be safe and usable in other, more realistic use scenarios.
5. Utilizing Useful Users
The final guidance is more specific about selecting user groups for testing. It can be difficult to know how to distinguish between user groups, or to determine which characteristics make the difference between selecting one user group or two. The final guidance provides further instruction on delineating user groups based on performance of varying tasks, differences in expected user knowledge base, and other characteristics that could affect device interaction.
This clarification could have a substantial impact on devices intended for patient users, as patient limitations should be better represented and could be significant enough to warrant multiple patient user groups. It is important to carefully consider and correctly define your user groups, because those decisions will have significant implications for project timelines and costs.
6. Useful Instructions For Use
Although device designers often want to believe that their users will quickly absorb training and will diligently consult help materials during device operation, we know that work ‘as imagined’ is quite different from work ‘as performed.’ The FDA has responded to this disparity by incorporating stricter guidance on how instructional materials are to be used during validation, indicating that it is unrealistic to allow users to study instructional materials during the decay period, outside of certain home-use cases.
Allowing clinical users access to the materials before a validation study would likely result in a positive performance bias, considering that participants may be more inclined to review materials if they know they will be evaluated when they return (however unlikely they may be to do this in a real-world use situation). The FDA has also taken a stronger stance on making materials available to participants during validation testing without instructing the participants in the materials’ proper use.
7. Knowing What Users Know
Validation often tends to focus on task performance, but not all safety-critical tasks are appropriate to evaluate through device interaction. A new section in the validation discussion provides direction on how to assess knowledge of contraindications, warnings, rare device errors, and fault states that are not feasible to test through scenario performance. We don’t know whether the FDA will weigh incorrect answers to knowledge questions as heavily as it weighs incorrect or unsafe task performance in its assessment of safety, so it may be appropriate to follow up with users on any critical incorrect answers during the post-evaluation debrief.
8. Qualitative Analysis Is King
We often observe that companies familiar only with clinical trials focus on statistical analysis and various ways of quantifying usability data. However, experienced medical human factors consultants know that a simpler presentation of error data, along with an emphasis on subjective feedback and participants’ perspectives on their failures, is key to a successful submission. Unlike the draft guidance, the final guidance makes a point of discussing the importance of qualitative data.
Readers will want to check out the newly added Appendix C, which provides detailed examples of what information should be gathered from a debrief and how use errors should be analyzed, and even suggests a format for displaying validation data.
9. Modified Guidance Introduces Evaluation For Modified Devices
For the first time, we are seeing information on how to approach validating a modified device. The final guidance makes it clear that it is acceptable to focus assessments only on the modified device’s new elements, rather than conduct a complete evaluation. Device developers can save a lot of time by focusing risk assessment, formative activities, and validation efforts on new features. The concepts for modified testing also apply to re-validation. If safety risks are discovered during validation and additional modifications are needed post-hoc, it is acceptable to re-validate only those aspects of the device that were affected by the modifications.
10. Final Report… Finally
Finally, the FDA has made several updates to the expected outline and content of the human factors report. By moving the conclusion to the very front of the document, the FDA gives insight into how reviewers will look at the full package: they primarily want a summary of your efforts and an explanation of how your validation has demonstrated the device to be safe.
Much of the rest of the report structure sticks closely to the draft guidance outline, except when it comes to the topic of risk. Gone is the User Task Selection, Characterization and Prioritization section of the draft, replaced by two sections that cover the topic in greater detail (Analysis of Hazards and Risks, and Description and Categorization of Critical Tasks). The emphasis on risk throughout the guidance makes it clear that the FDA expects a common thread throughout your submission, and any missteps early on could derail a successful validation.
Although other updates have been made, these key changes stand out as some of the most important issues affecting validation efforts. While industry professionals may already have recognized many of these changes as unofficial requirements, the updates take a more definitive stance on expectations that were previously only assumed. Although the expectations are more refined, these updates do not downplay the need for device manufacturers to have their human factors activities led by human factors professionals. Strictly following the text in the guidance won’t guarantee a successful submission, but these updates provide a better place to start.
About The Author
Natalie is the Program Manager for the Usability Services division of the National Center for Human Factors in Healthcare. She oversees the technical and quality aspects of usability projects conducted both for the medical device industry and within MedStar Health. She is involved in all aspects of planning and executing usability tests, and also leads an initiative to incorporate usability testing in medical device procurement. She has a special interest in ensuring that safe and effective products are brought to market through successful FDA submission.
Natalie holds a master’s degree in industrial engineering, with a focus on human factors and ergonomics, from the University of Wisconsin, where she was mentored by Dr. Ben-Tzion Karsh. Some of her previous work involved research on primary care redesign for the aging population and implementation of process improvement efforts in the ambulatory care setting.
Natalie can be contacted through www.medicalhumanfactors.net.