Guest Column | June 30, 2016

How To Effectively Evaluate Instructional Materials and Labeling Before Development Is Complete

By Natalie Abts, Genentech

Although instructional materials are a critical component of any medical device package, development and evaluation of these items too often are left until the end of the product development cycle. Whether the concern lies with development costs, tight timelines, or simply poor prioritization, this delay can contribute to poor product usability and become a major barrier in the FDA approval process, as reviewers will expect a thoughtful design based on human factors principles.

I’m referring not only to the standard instructions for use (IFU), but also to quick guides, packaging and labeling, training materials, and even the training program itself. These components all need optimization to help users operate the device efficiently and effectively.

So, how do we ensure that the instructional materials are going to facilitate safe and correct product use and not introduce the potential for error? Ideally, we want to take the same approach that should be taken for device design: Incorporate human factors evaluations early and often. However, understanding the best way to do this is not always simple. I’ve provided here some suggested approaches for the evaluation of different instructional components that will maximize useful feedback.

Labeling Comprehension

Labeling comprehension studies with a device’s target end users are a great way to evaluate printed materials, such as IFUs, package inserts, and quick guides. The term “labeling comprehension” does not refer just to product labeling, but can be generalized to all instructional components. A variety of evaluation methods can be used, but always with a goal of assessing the utility and understandability of the content.

A common approach to labeling comprehension is to instruct participants to read through parts of the materials while providing feedback or first impressions. This initial feedback can give insight into how new users may interpret content, which information is most salient, and what questions might arise as they learn about the device. Then, targeted knowledge questions can be asked to assess understanding. A useful strategy is to include questions related to anticipated points of confusion. For example, if an IFU includes a medical term that may confuse lay users, it might be valuable to ask participants to define that term.

Remember, you want to avoid basing questions on memorization of non-critical facts; focus instead on the bigger-picture information a user is expected to absorb through reading. Allowing users to reference the materials when they do not know an answer immediately also keeps the focus on assessing the materials, rather than on testing the participant’s memory.
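When responses are tallied systematically, patterns across participants become easier to spot. Below is a minimal sketch of one way a study team might record and summarize comprehension answers; the prompts, field names, and scoring categories are illustrative assumptions, not a prescribed format.

```python
# Hypothetical tally of labeling comprehension responses; all prompts,
# field names, and categories are illustrative, not from a real study.
from dataclasses import dataclass


@dataclass
class KnowledgeQuestion:
    prompt: str                   # e.g., defining a potentially confusing term
    critical: bool                # tied to a critical task, warning, or contraindication?
    correct: bool = False         # participant ultimately answered correctly
    referenced_ifu: bool = False  # had to look the answer up in the materials


def summarize(responses):
    """Separate 'knew it' from 'could find it', and flag failures on
    critical questions for root-cause follow-up."""
    return {
        "correct_from_memory": sum(r.correct and not r.referenced_ifu for r in responses),
        "correct_after_lookup": sum(r.correct and r.referenced_ifu for r in responses),
        "missed_critical": [r.prompt for r in responses if r.critical and not r.correct],
    }


if __name__ == "__main__":
    session = [
        KnowledgeQuestion("Define 'subcutaneous' as used in step 3",
                          critical=True, correct=True, referenced_ifu=True),
        KnowledgeQuestion("When should the device not be used?",
                          critical=True, correct=False),
    ]
    print(summarize(session))
```

Distinguishing answers known from memory from answers found by referencing the materials keeps the emphasis on whether the labeling supports the user, rather than on recall.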

Scope Of The Study

Let’s imagine you plan to conduct a comprehension study and you have a 400-page user guide to assess. Trying to evaluate a document of this magnitude can lead to participant burnout if you expect them to review the entire guide. Studies that are too long or repetitive can result in poor feedback, so efficiency and variety throughout the study are important.

How do you decide which information is most important to assess? One way is to focus on instructions for essential tasks; ensuring this material is understandable will matter when real-world users rely on your IFU for task completion. Another major consideration should be critical tasks. Given the FDA’s emphasis on the assessment of critical tasks, it is vital to evaluate any sections of an IFU related to the performance of tasks that could result in high-severity outcomes. Misunderstanding of warnings, cautions, and contraindications often is considered critical; therefore, the sections of your IFU that describe them should be included in labeling comprehension testing.

As overly lengthy IFUs can present navigational difficulties, it also can be useful to have users locate sections of the IFU where they would expect to find specific information. You can then observe how long it may take a user to locate information, and gather firsthand observations of any participant frustrations with navigation.
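To make those observations concrete, a moderator can capture time-to-locate with something as simple as the following sketch; the section names, timing mechanics, and give-up threshold are all assumptions for illustration.

```python
# Hypothetical moderator-side timer for IFU navigation tasks
# ("find the section on X"); section names and the give-up
# threshold are assumptions for this sketch.
import time
from typing import Optional


def time_lookup(section: str, give_up_after_s: float = 120.0) -> Optional[float]:
    input(f"Task: locate '{section}' -- press Enter to start")
    start = time.monotonic()
    input("Press Enter when the participant finds it (or gives up)")
    elapsed = time.monotonic() - start
    # Treat anything over the threshold as a failure to locate (None),
    # and record the participant's frustrations separately.
    return elapsed if elapsed <= give_up_after_s else None


if __name__ == "__main__":
    sections = ["priming instructions", "contraindications", "error code table"]
    print({s: time_lookup(s) for s in sections})
```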

Performance Evaluation

Although knowledge questions can help you garner valuable feedback, assessing performance while participants are guided by the materials is a vital component of evaluating IFU utility. During formative evaluation, this can be accomplished in various ways, such as having participants complete tasks with no previous training, using only the instructional materials, or using a labeling comprehension exercise as familiarization in place of training.

The second strategy often is useful for complex devices for which training is needed, but use of the device without some previous knowledge is unlikely or unsafe. This is a useful strategy both for instructional material evaluation and for device evaluation, as it allows you to gather usage data more closely related to intuitiveness of device design, rather than recent information transfer from training.
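Logging each task outcome alongside its familiarization condition makes such comparisons straightforward. The sketch below shows one hypothetical way to structure that log; the outcome labels and condition names are my own shorthand, not a regulatory taxonomy.

```python
# Hypothetical log of formative task outcomes, tagged with the
# familiarization condition so IFU-only performance can be compared with
# comprehension-as-familiarization. Labels are illustrative shorthand.
from enum import Enum


class Outcome(Enum):
    SUCCESS = "success"
    CLOSE_CALL = "close call"
    USE_ERROR = "use error"


def use_error_rate(log, condition):
    """Share of tasks under the given condition that ended in a use error."""
    outcomes = [outcome for _, cond, outcome in log if cond == condition]
    return sum(o is Outcome.USE_ERROR for o in outcomes) / len(outcomes)


if __name__ == "__main__":
    # (task, condition, outcome) for one hypothetical formative session
    log = [
        ("prime pump", "ifu_only", Outcome.USE_ERROR),
        ("prime pump", "comprehension_first", Outcome.SUCCESS),
        ("set rate", "ifu_only", Outcome.SUCCESS),
        ("set rate", "comprehension_first", Outcome.CLOSE_CALL),
    ]
    print(f"IFU-only use-error rate: {use_error_rate(log, 'ifu_only'):.0%}")
```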

A Quick Note Regarding Quick Guides

Many device developers choose to incorporate a quick guide as part of their labeling package. Although the evaluation of a quick guide is not inherently different from that of a larger IFU, the importance of evaluation often is greater, because end users tend to favor a quick guide over a longer manual when both are available.

Creating a useful quick guide can be difficult because, in essence, it omits ancillary information and gives the user only the minimum needed for a task. The labeling comprehension strategies described above are highly effective for quick guides as well, particularly assessment of task performance with no other resources provided. Targeted questions about the utility of the guide, and about what information participants feel is missing, can provide great insight, and can be especially valuable if users evaluate the quick guide and the full IFU side by side.

Device Labels And Packaging

Although it is ideal to assess device labels and packaging early in development, these components tend to be the least likely to be available in prototype form during formative stages. Labels and packaging are less likely than the IFU to cause problems during validation, but they should not be regarded as an afterthought. Let’s say you have a syringe with unclear dosage labels, and participants in the validation study administer an incorrect volume. This type of error likely is critical, and may necessitate revalidation because it was preventable. You don’t want to be in that position, especially when the problem could have been detected easily via knowledge questions or performance evaluation in formative stages.

Much like other instructional material evaluations, knowledge questions for packaging and labeling should focus on the information most important for users to understand. During task performance, have multiple packaged items available and allow users to choose the components they need for each task; if evaluating a device like a syringe, offer varying sizes or volumes to choose from.
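One simple way to capture such selection tasks is to score each pick against the expected component, as in this hypothetical sketch; the component catalog, labels, and task names are invented for illustration.

```python
# Hypothetical component-selection check: several packaged items are on
# the table and the participant must pick the right one for each task.
# The catalog, labels, and task names are invented for illustration.
available = {
    "syringe_1mL": "1 mL syringe, gray label",
    "syringe_3mL": "3 mL syringe, blue label",
    "vial_adapter": "vial adapter, sterile pack",
}

tasks = [
    ("draw a 0.5 mL dose", "syringe_1mL"),
    ("reconstitute the vial", "vial_adapter"),
]


def score_selection(task, expected, chosen):
    """Score one pick; a wrong pick often maps to a dosing error, so it is
    flagged for root-cause follow-up on label salience, not just tallied."""
    if chosen != expected:
        print(f"MISPICK on '{task}': chose {available[chosen]}, "
              f"expected {available[expected]}")
        return False
    return True


if __name__ == "__main__":
    # Observed picks for one hypothetical participant
    picks = {"draw a 0.5 mL dose": "syringe_3mL",
             "reconstitute the vial": "vial_adapter"}
    correct = [score_selection(t, exp, picks[t]) for t, exp in tasks]
    print(f"{sum(correct)}/{len(correct)} correct selections")
```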

Training

A training program can be more complex and difficult to evaluate than written materials. Although training also can be evaluated as part of a formative study, it can be difficult to determine the root cause of performance errors and whether they should be attributed to the training, instructional materials, or device design. You can always take the approach required of a validation study and follow up with users after task performance to obtain their perspective (more details on this in the FDA’s final human factors guidance), though you need to be cautious of users blaming the training for their own poor performance.

You also can obtain subjective feedback from users after they complete a training session, but be careful with this approach, as well. Study participants often want to be agreeable, and positive feedback could give a false impression that the program has been optimized. Providing feedback on a device is one thing, but when participants feel like they are criticizing the person who is conducting training, they can be much more reluctant to express negativity.

One approach to consider when trying to optimize feedback is incorporating users outside the target group, who may be naïve to the specifics of medical practice (if the target end users are healthcare providers) or of the medical conditions requiring use of the device (if the target end users are patients). Although it may seem antithetical to involve participants who would not use the device in the real world, this practice can isolate the evaluation of the training program: these participants likely will have the training as their only source of knowledge about device use, and will not rely on prior knowledge or learned intuition when performing tasks or providing feedback.

Conclusion

A comprehensive labeling evaluation is vital to ensuring a usable device, and should not be treated as an afterthought. However, instructional materials, no matter how well designed, should never be used to compensate for subpar device design. Incorporating labeling comprehension evaluations earlier in the product design lifecycle yields more robust feedback across multiple iterations, maximizing usability and contributing to a stronger FDA submission and overall device package.

About The Author

Natalie is the Program Manager for the Usability Services division of the National Center for Human Factors in Healthcare. She oversees the technical and quality aspects of usability projects conducted both for the medical device industry and within MedStar Health. She is involved in all aspects of planning and executing usability tests, and also leads an initiative to incorporate usability testing in medical device procurement. She has a special interest in ensuring that safe and effective products are brought to market through successful FDA submissions.

Natalie holds a master’s degree in industrial engineering, with a focus on human factors and ergonomics, from the University of Wisconsin, where she was mentored by Dr. Ben-Tzion Karsh. Some of her previous work involved research on primary care redesign for the aging population and implementation of process improvement efforts in the ambulatory care setting.

Natalie can be contacted through http://www.medicalhumanfactors.net.