By Natalie Abts, Genentech
[Editor Note: The author serves as chair of the usability workgroup for the Xcertia Guidelines.]
Mobile apps have become a go-to platform for both patients and healthcare providers to manage medical care. Applying human factors and usability principles to the development of such apps is critical — both to promote adoption and retention, and to ensure safe and usable mobile health (mHealth).
However, unlike medical devices and combination products, most mHealth does not have to meet the U.S. Food and Drug Administration’s (FDA) requirements for application of human factors and usability engineering. Thus, the processes used to design and assess mHealth are widely variable and developer-dependent.
Xcertia, an mHealth collaborative, is taking steps to address this issue with the release of a set of guidelines that can help developers adhere to standardized usability processes, even when an app is not subject to regulatory requirements. To develop the guidelines’ content, a workgroup of industry and subject matter experts followed a structured approach of research, content generation, and iteration that resulted in the draft guidelines currently available on Xcertia’s website.
The usability guidelines aim to serve a variety of purposes, one of which is to streamline the development process by providing a consolidated reference document assembled from multiple resources. The benefit is further increased when the usability guidelines are combined with four other topic areas — privacy, security, content, and operability — to encompass the overall Xcertia Guidelines (as discussed in a previous MDO article). Making this information accessible in one document provides a one-stop shop for development considerations vetted by a variety of industry leaders. The workgroup composition and development process are described in more detail below.
Usability emerged as a distinct topic of interest for the overall guidelines in mid-2018. Though previously combined with the operability section, usability eventually was split into a separate section, and thus a separate working group. An initial step was to recruit new members, representing stakeholders in the usability process, to team with workgroup veterans, ensuring that a variety of viewpoints were represented. The resulting group included usability practitioners, app developers, clinical experts, academics from the human factors community, and others with specific mHealth knowledge or interest.
Because most of the previously generated content focused on operability, the workgroup was charged with developing the usability guidelines from scratch. Generating a completely new set of guidelines required substantial background research and resource gathering, so the workgroup engaged a team of students enrolled in the Human Systems Engineering and User Experience graduate programs at Arizona State University to assist with the process. The programs — which provide education in human factors, usability, interface design, and technical writing — made the students uniquely qualified and well-positioned to assist industry experts on this project.
The development process began with establishing a definition that outlines the usability guidelines' goals and purpose:
The Usability Guidelines assess how a mobile health app is designed to be safe and easy to use by incorporating five key quality aspects of usability: learnability, efficiency, memorability, prevention of errors, and user satisfaction. Apps designed based on sound usability principles will be optimized for use by the specified users within the specified use environments.
This definition includes several important points. First, it establishes that safety and ease of use both are critical considerations. Second, it references a common usability definition established by industry veteran Jakob Nielsen to outline key components of usability. Finally, it provides justification for a robust usability process and cites key considerations for usable technology: designing specifically for the end users and use environments. All subsequent guideline content was developed in consideration of this goal statement.
Research and Development
The student team initiated research by conducting a literature review to compile a list of key sources from secondary usability literature relevant to general usability principles, app usability, and mHealth. These sources included standard textbooks, existing standards and guidelines documents, and recent peer-reviewed journal articles. From there, the students prioritized relevant information and compiled it into a working document for review by the workgroup. This document was combined with information generated by other workgroup members from known and commonly utilized sources. The workgroup then sorted information into high-level topics, determining which should be included in the guidelines.
As the workgroup identified each high-level topic, team members worked iteratively on generating the content considered high-priority for inclusion. Team members participated in biweekly telephone calls to review progress made between meetings, resolve disagreements about the generated content, and finalize the guidelines for publication.
Generation of Guidelines and Performance Requirements
From the information gathered through research and development, 10 high-level guidelines were selected for inclusion.
For each usability guideline, an introductory statement was developed to provide a general definition and, in some cases, a brief justification for inclusion. The justification statement’s intent was to promote topic understanding among an audience less familiar with usability concepts, and to establish that referenced principles are based on research and data, rather than just workgroup members’ opinions. After each guideline was chosen, a set of more specific performance requirements was generated to provide the reader with more detail on how to apply the guideline principle.
Guideline topic selection resulted in some guidelines being broadly applicable to any product evaluation — for example, following established visual design principles, creating products accessible by a wide range of users, and providing help resources — and others being more specific. The team felt it was important not to overlook elements of good design that are relevant beyond mHealth.
The team also aimed to cover common functionality that could be expected to confuse users if poorly designed. It was particularly critical to ensure that features potentially associated with risk were included. The Notifications, Alerts & Alarms guideline is a good demonstration of including risk considerations. Though many medical and nonmedical apps utilize some type of notification system for reminders or informational nudges, certain mHealth may need to utilize higher-risk alerts and alarms to relay information that, if missed, could cause harm to a patient.
Imagine, for example, that a diabetic patient is using an app to continuously monitor their blood glucose. If the app detects a high blood glucose level and fails to adequately inform the user, or presents the information in a manner in which it could be misinterpreted, it poses a risk to the patient’s health.
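To make the risk-tiering idea concrete, here is a minimal sketch of how an alert policy like the one described might be structured, so that a high-risk reading is escalated to a persistent alarm rather than an easily missed passive notification. The threshold values and names below are invented for illustration only; they are not from the Xcertia guidelines and are not clinical guidance.

```python
from dataclasses import dataclass

# Hypothetical thresholds in mg/dL -- real limits must come from clinical guidance.
HIGH_GLUCOSE = 180
CRITICAL_GLUCOSE = 250

@dataclass
class Alert:
    level: str          # "info", "warning", or "alarm"
    message: str
    requires_ack: bool  # alarms persist until the user acknowledges them

def classify_reading(mg_dl: float) -> Alert:
    """Map a glucose reading to an alert whose salience matches its risk."""
    if mg_dl >= CRITICAL_GLUCOSE:
        return Alert("alarm", f"Critical high glucose: {mg_dl} mg/dL", True)
    if mg_dl >= HIGH_GLUCOSE:
        return Alert("warning", f"High glucose: {mg_dl} mg/dL", True)
    return Alert("info", f"Glucose in range: {mg_dl} mg/dL", False)
```

The design point is the one the guideline makes: the presentation of a notification (its persistence, salience, and acknowledgment requirement) should scale with the consequence of it being missed.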
Finally, the Ongoing App Evaluation guideline was one of the highest prioritized for inclusion. This guideline is unique because, by following the performance requirements, developers can assess the app’s compliance with the performance requirements of other guidelines. The guideline provides guidance on incorporating usability activities covering the entire development process, from conducting user research in the early stages to performing validation testing as a final step. This promotes a comprehensive approach to usability, which is a key theme throughout the usability guidelines.
Challenges and Insights
Though development of the usability guidelines presented some challenges both unique and expected, the workgroup ultimately succeeded in its goal. Division of the usability and operability guidelines resulted in a blank slate for generating usability content, presenting one of the initiative’s early challenges. However, student help enabled the group to more quickly sort through information, identify the most relevant content, and develop meaningful guidelines.
Additionally, while the workgroup benefited greatly from the students’ assistance, the students also reaped substantial benefit from the collaborative process in the form of opportunity: to receive feedback from industry veterans, to participate in group discussions, and to learn how to produce quality work in their fields of interest.
One of the most polarizing challenges in the development process was establishing the appropriate level of detail for the performance requirements. A principal goal of the guidelines was to create a set of standards translatable across varying user types, operating systems, and app categories. Thus, providing too much specific detail could result in a document with many performance requirements applicable only to very specific situations. Conversely, writing requirements that were too general would limit their usefulness to the audience.
The team struck a balance by providing guidance on adherence to high-level principles, but crafted examples for applying the concepts to specific situations. For example, creating apps that are adaptable to a variety of users is a key component of usability, and was demonstrated through requirements relating to support of varying input modalities and maintaining functionality between portrait and landscape modes.
Detailed requirements were also provided when the team deemed it appropriate. This is demonstrated in the Readability performance requirement, which dictates minimum default sizes for paragraph text. Had this requirement taken the form of a vague recommendation, it would have been less meaningful to the reader.
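A requirement like this is attractive precisely because it is checkable. As a rough sketch of what automated verification of such a rule could look like, the snippet below screens an app’s default text styles against a minimum paragraph size. The 16-point minimum is an assumed placeholder, not the value specified in the Xcertia guidelines.

```python
# Assumed minimum default size for paragraph text; the actual guideline
# value may differ. Sizes are taken here as point values.
MIN_PARAGRAPH_PT = 16

def meets_readability_minimum(styles: dict) -> bool:
    """Return True if every paragraph-level style meets the minimum size.

    `styles` maps style names (e.g. "paragraph.body") to default sizes.
    Only styles whose names start with "paragraph" are checked.
    """
    return all(
        size >= MIN_PARAGRAPH_PT
        for name, size in styles.items()
        if name.startswith("paragraph")
    )
```

A check like this could run in a design-system lint step, turning a written performance requirement into a gate that catches regressions before release.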
Finally, an important factor in the Xcertia Guidelines' success is that the target audience(s) can easily read, understand, and apply them — a goal that involves several considerations. The use of a universally understandable voice was critical. Removing jargon and using lay language was a priority to make the usability guidelines translatable for different audiences. The team achieved this by using accessible human factors terminology like “redundancy” and “user-centered design,” but avoiding less understandable wording.
Delivering an understandable document also involved consideration of how the usability section would fit into the overall Xcertia effort. Using consistent terminology, removing repeated information, and reconciling content that could be in conflict were all important considerations. The leaders of each workgroup convened to provide input on consistency between sections.
The draft set of usability guidelines resulted from an iterative process of research, development, and discussion, with guidelines covering topics critical to the development of usable mHealth. This draft version of the Xcertia Guidelines, including all five topic sections, was released in February 2019. The development process for the usability section was presented to other stakeholders in the usability community at the Human Factors and Ergonomics in Healthcare Symposium in March 2019 (an accompanying paper, used as a reference for this article, will be available in the conference proceedings), where preliminary feedback suggested a high level of interest in the guidelines.
After a public comment period (which closed May 15, 2019), workgroups will reconvene to incorporate feedback and publish a new version of the guidelines. The workgroup hopes the outcome of this project will be a widely adopted set of guidelines that benefits all stakeholders in the mHealth industry.
About The Author
Natalie Abts is the Head of Human Factors Engineering at Genentech, where she manages a team of engineers conducting human factors assessments for drug delivery devices. Before joining Genentech, Natalie spent seven years consulting for medical device companies, advising on human factors considerations for medical products. Natalie has specialized experience in planning and executing both formative-stage usability evaluations and validation studies for medical devices and combination products on the FDA approval pathway. Natalie holds a master’s degree in industrial engineering, with a focus on human factors and ergonomics, from the University of Wisconsin, where she was mentored by Dr. Ben-Tzion Karsh.