Guest Column | May 6, 2019

Understanding Costs And Risks For Human Factors Engineering (HFE) Usability Studies – Part 2: Outsourcing HFE Usability Testing


By Charles L. Mauro, CHFP, Chris Morley, and Kim Dalton, Mauro Usability Science

Part 1 of this series presented factors that medical device developers should consider when executing HFE studies internally. It detailed how to overcome obstacles while acknowledging that running such studies in-house, according to accepted professional practice, is more complex and costly than most device development teams realize.

On the other hand, retaining an independent HFE testing agency can seem like an expensive and time-consuming process, one further complicated by industry dynamics. Specifically, over the past decade, a flood of small firms and digital agencies has entered the usability testing space, many of which consist of a single individual who retains freelance moderation staff to carry out data capture and related study tasks.

This influx of usability testing offerings forces device development teams to carefully evaluate both the actual expertise and formal education of such firms bidding on critical HFE testing programs. The entry of new low-cost and low-experience providers has pressured independent agencies to dramatically reduce their professional fees. This further complicates bid review and testing firm selection for device development teams, as pricing across HFE study bids often varies by over 50 percent.

Fortunately, there are well-understood and time-honored methods for managing the cost of HFE studies conducted by independent testing agencies. This article examines how to navigate the complex problem of optimizing data quality while controlling for study costs and time constraints when employing independent HFE testing agencies.

Prioritize Data Quality, Even at Additional Cost

The goal of a device development team should be to obtain the highest quality data possible, given budgetary and time constraints. High-quality data is a non-negotiable requirement for HFE testing because device development teams need it to make critical design decisions or even consider submission for FDA review and approval. Thus, teams must evaluate whether a given HFE testing agency will deliver high-quality data at the price proffered in a formal bid. Such evaluation involves two questions: whether the independent agency is capable of delivering quality data, and whether the agency’s proposed study design is based on a robust understanding of HFE best practices.

Can the HFE Testing Agency Deliver Quality Data?

Twelve key variables impact a research team’s ability to deliver high-quality data; these variables should form the basis for verifying a prospective HFE testing agency’s ability to deliver such data (i.e., device development teams should ask the agency to provide all of this information clearly).

Notably, requests for proposals (RFPs) rarely ask testing agencies to provide all of the information listed below:

  1. Professional history of the agency in terms of years in business and specific experience with HFE usability studies for medical devices, including background on full-time and freelance staff that will be working on the program.
  2. Confirmation that the Principal Investigator (PI) is either a certified Human Factors Engineering professional or highly experienced in HFE research and study design, with extensive experience in medical device testing processes and procedures.
  3. Overall educational and professional experience of the staff designing the study, as well as the staff executing moderation and data collection sessions. Many small HFE testing agencies assign relatively low-paid, inexperienced staff to moderation and data collection, which frequently results in low-quality data.
  4. Provision of a signed statement that all staff members working on the program have been fully briefed on, and understand, all FDA and other federal, state, and local guidelines for testing with human subjects.
  5. Guarantee that all data capture systems adhere to critical data security standard operating procedures (SOPs) and legal standards.
  6. Provision of a statement that the agency has in place formal SOPs covering all aspects of study design, respondent recruiting, respondent safety, and data security and management procedures.
  7. A short statement explaining project-specific training that will be provided to the moderation and data capture team prior to execution of the study sequence.
  8. Prior experience of agency staff with the specific product or device being tested, as well as their level of understanding of anticipated critical task errors and/or IFU design issues.
  9. Detailed description of the HFE testing agency’s pilot study schedule and provision of video recordings of all pilot sessions to be approved by the development team.
  10. Confirmation of all necessary professional liability insurance policies and related procedures for managing risks to respondents.
  11. A detailed description of the direct experience the testing agency has with both the respondent recruiting agency and testing lab to be utilized.
  12. Confirmation of all confidentiality and security procedures related to non-disclosure of device designs or related materials.

What Study Aspects Should Not Be Compromised to Reduce Costs or Accelerate Testing Schedules?

Device development teams should understand the costs that are absolutely necessary to maximize data quality. Here, we discuss in detail aspects of HFE usability studies that should not be compromised in the interest of reducing costs:

  • Recruiting Representative Users of the Device: As stated in Part 1 of this series, as well as in FDA guidance on applying human factors and usability engineering to medical devices, it is critical that respondents testing the device are as similar to the intended device users as possible, ensuring that observed behaviors are representative of real-world user behaviors. Furthermore, devices with multiple user groups should be tested by each user group to give the device development team a comprehensive understanding of the types of errors/issues that may occur with different user groups. Accordingly, the costs required for screening and recruiting representative users from each relevant user group often are necessary to maximize the quality and generalizability of study data.

Factors that can increase the complexity and cost of recruitment include:

  • Medical Conditions — Patients with specific medical conditions or parents/caregivers of children with specific medical conditions may be more difficult to recruit in specific geographic regions. For example, it is often easier to recruit patients with high LDL cholesterol than patients with osteoporosis, simply because high LDL cholesterol is more prevalent in most geographic regions. Recruiting patients with harder-to-find medical conditions may sometimes require recruitment and data collection at multiple study sites to meet sample size quotas.
  • Specific Injection Experience: Often, it is appropriate to test a device with injection-naïve and injection-experienced participant subgroups. Depending on the test device and drug indication, it may be difficult to obtain necessary sample sizes of such subgroups in certain geographic regions.
  • Expertise/Training: Recruiting representative users with specific expertise/training often will require higher incentives and greater flexibility in terms of scheduling sessions, as there may only be a few individuals with the desired expertise/training within a geographic region.
  • Availability/Willingness to Participate: Related to the expertise/training factor above, certain user groups may be difficult to recruit either because they are unable to fit participation into their busy schedules, or they are hesitant to take time away from their primary career responsibilities to participate in a research study. This often is an issue with busy medical doctors with very specific areas of practice. Higher incentives typically are necessary to recruit from such user groups.
  • Education Level/Health Literacy: Often, and especially in studies examining comprehension of instructions for use (IFU), packaging, or related labeling, it is appropriate to recruit representative users from multiple education and health literacy levels to determine whether information necessary for drug administration is properly understood by all possible users. Recruitment complexity increases when seeking a specific distribution of education level/health literacy within already difficult-to-recruit user groups.
  • Past Participation — When multiple studies for a single device are run within a short period of time, it is necessary to ensure that recruited participants have not participated in prior studies examining the same device, as repeat participation will confound their data due to unintended learning effects. This is particularly an issue when representative device users are from a difficult-to-recruit user group. In some cases, to find participants who have not interacted with the device in prior studies, it may be necessary to recruit and collect data at a new study site.
  • Employing Appropriate and Reliable Technology for Desired Data Capture Methods: Depending on the study objectives, a given study may call for use of standard technology (e.g., only video recording of sessions for post-session review) or more advanced, high-performance technologies that require specialized expert consulting (e.g., use of 3D tracking software/hardware for tracking of device movement during critical task performance). In all cases, the technology must provide accurate and reliable data capture over the course of all data capture sessions. Accordingly, device development teams should understand the costs necessary for HFE testing firms to utilize reliable technology and/or develop specialized expert software/hardware for their studies.

Examples of factors related to complexity of technology for advanced data capture methods include:

  • Production of Platform for Force Measurement: When measuring force, it is critical to simulate the device manipulation of interest as accurately as possible. It often is necessary to develop a force platform based on the test device’s exact dimensions, allowing force strip sensors to be placed in a way that permits participants to perform the manipulations of interest naturally and without interference from the sensors. Input from both expert modelmakers and engineers may be required to ensure the platform is fabricated so the force sensors fit properly for accurate calibration/measurement, and the sensors do not interfere with users’ natural manipulation behavior.
  • Fitting Device for 3D Tracking — 3D tracking requires fitting the test device with a tracking sensor. In many cases, a small, non-obtrusive ring may be fitted on the body of the test device to hold the tracking sensor in place. However, this too can require input from expert modelmakers and engineers to ensure the ring fits the device correctly, holds the tracking sensor in place throughout the manipulation of interest, and does not interfere with users’ natural manipulation behavior.
  • Integrating Data Streams: A critical aspect of collecting multiple data streams simultaneously is integrating the data in such a way that researchers are able to draw meaningful and interesting conclusions. This requires time-synching all data streams accurately and visualizing the data to facilitate analysis (a minimal illustration of this kind of time-alignment appears after this list). In some cases, expert consultants may need to ensure that data is integrated appropriately.
  • Allowing Ample Time for the HFE Testing Firm to Execute Key Study Tasks: Certain tasks, such as study design and protocol development, production of a specialized data capture platform, data collection, data analysis, and report generation, should not be rushed in an effort to reduce the study’s total cost. As emphasized above, data quality should be prioritized during study planning and cost management.
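
To make the data-integration point above concrete, the sketch below shows one common way to time-align two independently recorded streams, such as eye-tracking samples and force-sensor readings, by matching each sample to the nearest timestamp in the other stream. It is a minimal illustration only; the column names, sampling rates, and tolerance value are hypothetical assumptions, not a description of any particular testing firm’s pipeline.

```python
# Minimal sketch: aligning two hypothetical data streams (eye-tracking and
# force readings) on a shared timeline via nearest-timestamp matching.
# Column names, sampling rates, and the 20 ms tolerance are illustrative assumptions.
import pandas as pd

# Hypothetical eye-tracking samples (~60 Hz), timestamps in seconds
eye = pd.DataFrame({
    "t": [0.000, 0.017, 0.033, 0.050, 0.067],
    "fixation_zone": ["IFU_step1", "IFU_step1", "device_cap",
                      "device_cap", "injection_site"],
})

# Hypothetical force-sensor readings (~100 Hz), timestamps in seconds
force = pd.DataFrame({
    "t": [0.000, 0.010, 0.020, 0.030, 0.040, 0.050, 0.060],
    "force_n": [0.0, 0.4, 1.1, 2.3, 3.0, 3.2, 2.8],
})

# Both frames must be sorted by the time key before as-of merging.
eye = eye.sort_values("t")
force = force.sort_values("t")

# Attach the nearest force reading (within 20 ms) to each eye-tracking sample.
merged = pd.merge_asof(eye, force, on="t", direction="nearest", tolerance=0.020)
print(merged)
```

In practice, the streams would first be referenced to a common clock or sync marker rather than assumed to start at the same instant, but the same nearest-timestamp approach applies once that alignment is established.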

Opportunities for Cost Optimization When Retaining an Independent HFE Testing Firm

  • Select Independent Variables Wisely to Ensure Efficient Study Design While Still Meeting Study Objectives: The number of independent variables, such as device concepts or instructions for use (IFU) to be tested, directly affects the number of respondents, length of data collection sessions, and facility rental, as well as professional fees associated with moderators running data collection sessions and researchers analyzing and reporting data. Therefore, consider carefully how many device concepts, IFUs, design variations, etc., you plan to test to effectively manage study cost (a rough worked example of this arithmetic follows this list).
  • Minimize Sample Size by Only Testing with Least Competent User Profile: Although it is important to sample respondents from each possible user group, in early formative testing, it may be satisfactory to reduce the scope of sampling to the least competent users (LCUs). LCUs are representative device users most likely to exhibit difficulty administering medication with the device, possibly due to lack of training or cognitive or physical limitations. To minimize sample size in early formative testing, one may consider only recruiting LCUs, or recruiting mostly LCUs plus smaller samples of the other user groups. As medical devices should be designed to accommodate the widest practical range of possible users, LCUs can demonstrate the worst-case use scenario, allowing determination of the device system improvements necessary for LCUs to use the device successfully.
  • Minimize Travel Required by the Device Development Team or HFE Testing Agency: Remote video conferencing software has improved in terms of reliability and resolution to the point where moderators can easily set up cameras within a testing room and live-stream data collection sessions to any location in the world. This makes it unnecessary for the HFE testing agency and device development team to travel to a common location to execute and observe data collection sessions. Notably, for live-streaming to work well, the testing facility must have high-speed, reliable internet, and camera angles must be pilot-tested thoroughly prior to official data collection to ensure all critical drug administration tasks are viewable by team members observing remotely.
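
As a rough illustration of how these design choices compound, the sketch below tallies respondents and session hours for a hypothetical between-subjects formative study and for a reduced, LCU-focused alternative. All of the numbers (participants per cell, session length, user-group counts) are assumptions chosen only to show the arithmetic, not recommended values.

```python
# Rough, illustrative arithmetic: how independent variables and user groups
# drive sample size and facility time. All inputs are hypothetical assumptions.

def study_footprint(device_concepts, ifu_versions, user_groups,
                    participants_per_cell, session_hours):
    """Return (total participants, total session hours) for a between-subjects design."""
    cells = device_concepts * ifu_versions * user_groups
    participants = cells * participants_per_cell
    return participants, participants * session_hours

# Full design: 2 device concepts x 2 IFU versions x 3 user groups
full = study_footprint(device_concepts=2, ifu_versions=2, user_groups=3,
                       participants_per_cell=8, session_hours=1.5)

# Reduced early-formative design: 1 concept x 2 IFU versions, LCU group only
reduced = study_footprint(device_concepts=1, ifu_versions=2, user_groups=1,
                          participants_per_cell=8, session_hours=1.5)

print(f"Full design: {full[0]} participants, {full[1]:.0f} session hours")
print(f"LCU-only early formative: {reduced[0]} participants, {reduced[1]:.0f} session hours")
```

A within-subjects design, in which each respondent sees more than one concept or IFU, can reduce these totals further, but it lengthens sessions and can introduce learning effects; an experienced testing firm should help the team weigh that trade-off against the study objectives.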

When to Pay More for Specific Expertise

As noted previously, device development teams often grapple with the question of which independent HFE testing firm to contract for a study. Testing firms’ proposals can vary in cost, study design, and available expertise, and the quandary of when to pay more is highly dependent on the objectives of the study and the device’s current stage of development:

  • Formative vs. Summative Study: Utilization of advanced methodologies is more appropriate at the formative study stage than the summative study stage. Insights gained from such advanced methodologies may still inform the device design at the formative stage, whereas summative testing ultimately is intended to demonstrate that representative users are able to perform all critical tasks successfully with the final device design. This is not to say that summative testing should be left to an inexperienced firm, as the study design, data collection, and analysis must be executed correctly for the summative study report to support FDA review and approval.
  • Depth of Insights Desired from Testing — Advanced testing methodologies provide more in-depth insights into representative users’ experience with the device and related labeling. For example, utilizing high-performance eye-tracking in a formative study assessing drug administration performance and comprehension of instructions for use (IFU) will allow for more detailed root cause user error analysis, as data is collected regarding exactly what participants are viewing during all aspects of task performance. Similarly, force and electromyography (EMG) can provide greater insights into the physical exertion required by users to perform critical device tasks, versus moderator observations or participant self-reporting. Device development teams must weigh whether the higher cost of employing advanced methodologies in formative testing is justifiable in terms of whether the more robust insights produced will lead to better design decisions and/or decreased need for subsequent formative studies.
  • Some Research Questions Can Only Be Answered Using Advanced Methodologies — Specific research questions, either posed by the FDA or internal device development teams, may require advanced methodologies to answer. Examples of such questions may include how the shape and size of a device affect manipulation in 3D space (3D tracking is well-suited for this question), how higher drug viscosities affect users’ ability to expel the drug from a device (force and EMG are well-suited for this question), or how different IFU concepts affect information uptake and subsequent task performance (eye-tracking and micro-facial expression analysis are well-suited for this question).

Evaluating HFE Testing Firm Bids for Cost and Data Quality

Evaluating HFE testing firm bids can be complex and often involves a fair amount of subjective decision-making by the device development team. Teams can follow these guidelines to effectively evaluate final bids and select a qualified HFE testing firm:

  1. Develop a short list of preferred testing firms that vary in their expertise-versus-cost trade-offs.
  2. Write clear, concise RFPs that include the 12 variables (discussed above) that impact a research team’s ability to deliver high-quality data, as well as details of the preferred study design, user profiles, and final deliverables.
  3. If you aim to address an unusual problem in your study, solicit each testing firm’s opinion on how to design and execute the study. Use this information to draft a workable RFP.
  4. If a reliable testing firm submits a confusing bid, request clarification and give the firm an opportunity to update its bid. Firms that have demonstrated successful performance in the past should be allowed opportunity for revision and/or negotiation to fully meet the device development team’s needs.
  5. If a firm submits a bid with additional testing services or methods not included in your RFP, ask the firm to explain why it is recommending other approaches, and request that the firm separate the costs for additional services from costs explicitly related to your RFP.
  6. Have each device development team member review all testing firm bids independently before group analysis, and have each team member draft a short statement supporting his/her selection and submit the statement for review during the group analysis session.
  7. Be wary of testing firms that claim directly relevant experience with a particular product from working with a competitor, and express a willingness to share that experience with your team. Such exchange of information exposes both you and the testing firm to legal liability for sharing trade secrets and/or intellectual property.
  8. Never select a testing firm based on cost alone. The top criterion for selection of an independent HFE testing firm should be its ability to deliver quality data on time and as proposed in its bid.
  9. If you want to use a specific firm, but its costs seem too high, reach out to the firm to determine whether there is opportunity for it to reduce its cost structure. Most firms will be happy to work with you on this issue.
  10. Always request a best-and-final bid from prospective firms, making it clear that you appreciate their help in considering cost reduction.
  11. Always provide losing firms feedback regarding why they were not selected for a project. This greatly assists such firms in improving their bids for you in the future.

Summary

Obtaining the highest quality data possible, given budgetary and time constraints, should be the goal of device development teams retaining independent HFE testing firms for studies leading up to FDA submission. Opportunities do exist for reducing costs; however, device development teams should select the testing firm they believe can deliver high-quality data suitable for design decisions and FDA submission based on the guidelines described above.

About The Authors

Charles L. Mauro, CHFP, is Founder and President of MAURO Usability Science, a consulting firm specializing in advanced HFE research and optimization of complex medical devices and related technology. He is Chairman of the Design Protection Section for the Industrial Designers Society of America and is the IDSA liaison to the US Patent and Trademark Office. He has lectured on product design and HFE research at MIT, Stanford University, UPenn, and other leading human factors research and engineering programs. Over his 47-year career, Mr. Mauro has managed more than 4,000 HFE research and development projects. He holds a BS with distinction in Industrial Design from The Los Angeles Art Center College of Design and a master’s degree in Ergonomics and Biomechanics from New York University. At NYU, he was appointed NIOSH Research Fellow at the RUSK Institute of Rehabilitation Medicine. 

Chris Morley is a Senior Human Factors Engineer at MAURO Usability Science, where he has managed several complex medical device HFE research studies for top-tier device development teams and is skilled in the planning and execution of FDA-focused formative and summative studies. Mr. Morley holds a master’s degree in Experimental Psychology from Old Dominion University and specializes in experimental design, statistical analysis, and human factors psychology.

Kim Dalton is a Research Associate at MAURO Usability Science, specializing in various data-collection methods, including robust online surveys, high-performance eye-tracking, galvanic skin response (GSR), and micro-facial expression analysis for evaluation of medical devices. Ms. Dalton holds a BA in Neuroscience from the University of Colorado.