By Naomi Cherne and Patricia Anderson, Core Human Factors, Inc.
Human factors validation testing aims to produce results that represent real-life users, uses, and use environments. As such, care is taken that the tested scenarios simulate real life. Some aspects of real-life use are straightforward to translate to simulated use — for example, if your product will be used in a brightly-lit clinical setting, then turn on the lights during testing. Other use cases can get a bit more complicated.
When To Train Study Participants
One topic that often generates confusion — and sometimes heated debate — is whether study participants should be trained on a product’s use prior to testing. Trained participants may be more likely than untrained participants to use the product as intended during testing. However, because testing is intended to simulate real-life product use, training may not always be appropriate.
You may expect that product users will be trained on its proper use — for example by their institution, if they are healthcare providers (HCPs), or by HCPs, if they are patients or caregivers. You may even have considered such expectations about training when constructing your risk analysis. But, what do you really know about the training that your users may, or may not, receive in real life? Have you developed a training program that all users are guaranteed to complete prior to using your product? Is there a method of monitoring or enforcing this training after the product is on the market? Have you interviewed a representative range of users, and do you feel confident that you understand the minimum level of training that will be completed?
In our experience, a small study can be designed to use over-the-phone interviews to specifically determine whether training is conducted for a given product. Such studies can examine how the training is conducted, when it occurs, who does the training, and who, specifically, is or is not trained.
Some medical devices and combination products include training as part of the product; the training is considered a risk mitigation, and real-world users of such devices are expected to undergo a well-characterized training program. An example of such a product is a home dialysis system, where users complete a multiple-week training program and receive a certification.
For other medical device products, training may not be well-characterized, or its consistency in practice may not be guaranteed. An example of such a product is home injection devices, where training may not always be provided (for example, if a user has injection experience with other devices) and, when provided, may vary widely in terms of the amount of time, detail, and practice covered in the training.
If your product does not have a well-characterized training program, or if consistent training cannot be guaranteed, it may not be appropriate to include training as part of simulated-use testing, even if training has been considered as risk mitigation. Instead, testing untrained representative users is a good option for a conservative, yet appropriate, simulation.
Defining Your Training
Training traditionally involves some combination of training materials, such as a trainer device or an instruction pamphlet, and an instructor to teach the user. However, a broader definition of training includes mandatory use of any part of the product, including all labeling and packaging. Under this definition, a one-on-one session in which a nurse instructs a participant by reading through the instructions for use (IFU), demonstrates use of the product, and has the participant demonstrate proper use is a type of training, just as mandatory use of the instructions is a type of training.
While defining the detail and scope of your training, also consider the best way to simulate real-world training in HF studies. The less structured the training, the more difficult it may be to argue that training in your HF studies represents real-world scenarios. For example, if your training mandates that a representative from your company must give a defined training presentation to every physician prior to first use, it may be logical to have a representative instruct HF study participants. However, if your training involves nurses talking to patients before sending them home with a product, there is no way to predict the variability of that training.
If your training is on the less-defined end of the spectrum, one option to manage this in HF studies is to include a trained and an untrained arm for each user group. This balances the unrealistically consistent training that some participants receive against participants who receive no training at all.
Another point to consider is that anything used as training in an HF validation study should be part of the product’s labeling and submission to FDA. While videos are a popular and useful method of training, if there is no plan to put that video in the hands of every user (e.g., by including a DVD in the product’s packaging), it should not be part of the HF validation training.
Memory Decay In Simulated-Use Studies
If you have decided that training is appropriate for your simulated-use study, you must next consider the fallibility of human memory.
In its guidance Applying Human Factors and Usability Engineering to Medical Devices, FDA points out that memory decays over time, and therefore information recalled at point of use may not be accurate or complete. The agency recommends that training should not be immediately followed by testing, and states that, in some cases, an hour-long gap would be acceptable. Further, FDA explains, a longer gap of one or multiple days may be more appropriate to simulate training decay as a source of use-related risk.
Likewise, in its guidance Human Factors Studies and Related Clinical Study Considerations in Combination Product Design and Development, FDA notes that a gap in time between training and product use may result in reduced retention of trained information. The agency recommends that a product’s simulated-use HF studies simulate the impact of this memory decay if the product’s use-related risk analysis implicates such decay as a source of use errors. An example is provided implementing a gap of several hours or days between training and testing, and the guidance states the selected duration should be justified in the study protocol.
Additionally, the FDA-recognized standard ANSI/AAMI/IEC 62366 calls out training decay as relevant to safe and effective use of a product, and necessary to incorporate into simulated-use testing. In the real world, the gap between training and testing may be long or variable — on the order of weeks, months, or years. Simulating a product’s long, worst-case training decay may not be practicable for a variety of reasons, so how long is long enough?
Common practice is to simulate this gap (“decay period”) between training and testing in a study using a shorter duration: hours or days. One rationale for using a shorter decay period comes from Ebbinghaus’ famous forgetting curve, which predicts that the bulk of forgetting happens fast, with diminishing rate of loss as time passes.
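The intuition behind the forgetting curve can be sketched numerically. A minimal illustration, assuming the common exponential form of Ebbinghaus’ curve (R = e^(-t/S)) and a purely illustrative memory-stability constant of 24 hours — neither value is product-specific or drawn from any validation data:

```python
import math

def retention(t_hours, stability_hours=24.0):
    """Exponential forgetting curve R = e^(-t/S).

    t_hours: time elapsed since training, in hours.
    stability_hours: illustrative memory-stability constant
    (an assumption for this sketch, not an empirical value).
    """
    return math.exp(-t_hours / stability_hours)

# Estimated retention at a few candidate decay periods
for label, hours in [("1 hour", 1), ("overnight (~16 h)", 16),
                     ("1 day", 24), ("3 days", 72), ("1 week", 168)]:
    print(f"{label:>18}: {retention(hours):.0%} retained")
```

Under these assumed parameters, more is forgotten in the first day than in the following six days combined, which is the usual rationale for why a decay period of hours or days can stand in for a real-world gap of weeks or longer.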
It is helpful (and specifically recommended in the draft guidance for combination products) to provide FDA reviewers with a rationale for the decay period selected for a particular product’s HF validation study. Consider how time and experience may impact training and use for real-world users.
Based on such considerations, you may decide that a shorter or longer decay in simulated-use testing will better represent the utility of training. So, what is “short,” and what is “long,” in simulated use? And, given the forgetting curve, does this matter?
We often recommend decay periods that span overnight when products are not expected to be particularly memorable or forgettable (based on a consideration process like that described above). For products that are expected to be more memorable, those considerations can be presented as a rationale for same-day training and testing. For products that are expected to be more forgettable, those expectations can drive a rationale for a multiple-day decay period.
While FDA’s various guidances and recognized standards have been nonspecific about decay period durations, our experience indicates that reviewers may request that a decay period’s variability be minimized (i.e., use one duration, rather than a range of durations). Although we believe such variability will not necessarily impact study results, it is always good experimental practice to control variables that are not being intentionally manipulated by the study design in order to reduce ‘noise’ in the results. Complying with such requests to the extent practicable should not hurt your study results.
Participants in simulated-use human factors validation testing should only be trained when the provided training accurately represents a real-world user’s experience. When training is not expected to be consistent or guaranteed, alternatives include providing the minimum expected level of training or testing untrained users. If you are unable to provide a compelling rationale for the inclusion of trained participants, FDA may request that you eliminate training from your human factors validation protocol. FDA may also request that any trained participants have a consistent decay period between training and testing. Consider the aspects of your product’s users, uses, and use environments that may impact real-life retention of trained information when choosing a decay period and presenting your rationale to FDA.
About The Authors
Naomi Cherne is a Director at Core Human Factors, Inc. She received a Ph.D. from the Department of Psychology at the University of California, Los Angeles, where she studied how people develop habits and how they overcome bad habits. She has also researched visual attention in healthy adults and memory in neurodegenerative patient populations, and has used functional and structural neuroimaging to study the interactions of memory systems in the brain. Before joining Core, she spent four years performing forensic human factors analysis and safety-focused usability testing on products and environments ranging from roadways to factory equipment to children’s products.
Patricia Anderson is a Research Associate at Core Human Factors, Inc. She holds an MSE in Bioengineering from the University of Pennsylvania, as well as a B.S. in Biology from Roanoke College. Her interests in biology, engineering, and anthropology have driven her work at Core, where she strives to learn how medical devices and combination products fit into the lives of end users, and to understand how the user interface can be best optimized for usability, safety, and effectiveness. Since joining Core, she has become a member of AAMI and has a firm understanding of standards and regulatory requirements for human factors work on medical devices and combination products.