By Jason Song, SureMed Technologies, Inc.
Device functionality testing is an essential element of any medical device or drug delivery device development process. It is a core part of design verification, demonstrating that the developed device meets the design input requirements. Testing performed during development as part of design verification serves to (1) demonstrate that the developed manufacturing process can reliably produce good products that meet the established specifications and (2) demonstrate with confidence that the device design will consistently meet those specifications. Of course, design verification involves more than just device functionality tests; there are other elements, such as bioburden, biocompatibility, and pharmacopeia compliance, which we will not address here. Device functionality testing can be carried out either as an attribute (pass/fail) test or as a variable test, whereby the results are analyzed statistically. In this article, we'll discuss the difference between variable and attribute testing, when to use which, and strategic approaches to selecting one over the other.
To start, for certain medical devices, the functionality test requirements, and whether those tests are to be performed as attribute or variable tests, are predetermined by applicable standards and guidance documents. For example, drug delivery devices require testing to demonstrate that the delivered dose meets specification, and in that case the use of variable testing is predetermined by the ISO 11608-1 standard. In many other cases, the device functionality tests to be performed, and whether they are executed as variable or attribute tests, are left to the manufacturer to decide.
Identifying and designing functionality tests depends very much on the device's design and intended use, which is beyond the scope of this article. Each functionality test should be designed to examine the device's intended functionality. While there may be situations where one test can examine several aspects of a design, it is generally ill-advised to try to test multiple aspects of the device simultaneously. In some situations, though, the device's design and performance characteristics make testing multiple functions together unavoidable. For example, consider a bipolar electrosurgical sealer and cutter, where a vessel is sealed using electrical current and then cut with a blade. The two actions of the device are interdependent and cannot be tested separately. While there is room for further discussion about strategies for designing optimal test setups and approaches, here we will focus on attribute vs. variable testing.
What Defines A Test As An Attribute Or Variable Test?
There sometimes seems to be confusion about what makes a test an attribute test vs. a variable test. Generating numerical data does not, by itself, make a test a variable test. What makes a test a variable test is how the test engineer analyzes the generated data. If the data is analyzed statistically to generate a Cpk or K-value (if following ISO 16269-6) that determines whether the test passes or fails, then the test is a variable test. If, on the other hand, the test engineer takes each raw numerical result and compares it directly to the device specification to determine whether it is within specification (pass) or out of specification (fail), then the test is an attribute test. Note that in a variable test, the established device specifications are still applied to the raw data when calculating the Cpk or K-value. Another way of looking at an attribute test is as a binary test: each result is checked against the established specification to determine whether it is within specification (yes/pass) or out of specification (no/fail).
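To make the distinction concrete, the sketch below applies both readings to the same raw data. The measurements and spec limits are hypothetical, purely for illustration, not a validated analysis: the attribute view simply checks every unit against the specification, while the variable view computes a Cpk from the same numbers.

```python
import statistics

def cpk(data, lsl, usl):
    """Process capability index: distance from the mean to the nearer
    spec limit, in units of three sample standard deviations."""
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sd)

def attribute_pass(data, lsl, usl):
    """Attribute (binary) reading: every unit must be within spec."""
    return all(lsl <= x <= usl for x in data)

# Hypothetical pull-off force measurements (N); spec limits 2.0 to 10.0 N
forces = [5.1, 5.4, 4.9, 5.2, 5.0, 5.3, 4.8, 5.1, 5.2, 5.0]
print(attribute_pass(forces, 2.0, 10.0))   # attribute view: True
print(round(cpk(forces, 2.0, 10.0), 2))    # variable view: 5.66
```

In practice the variable acceptance criterion would compare the computed Cpk (or K-value per ISO 16269-6) against a predefined limit tied to the risk level, rather than simply reporting it.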
When To Use Attribute Or Variable Testing
Now that we have clarified the difference between an attribute and a variable test, the next question is when we should use one over the other. To answer that, we should look at the advantages and disadvantages of each.
Advantages And Disadvantages Of Attribute Tests
The main advantage of an attribute test is that it is straightforward: either the result is within specification, or it is not. But that same straightforwardness makes the result one-dimensional. It offers no additional insight into device performance or the manufacturing process that produced the device, such as how close the sample devices performed to the target specification or how consistently they performed.
If the attribute test was developed with variable data in mind, as we will discuss later in the article, this next-level statistical insight can still be gained through secondary statistical analysis. This is what makes attribute testing attractive in some instances. Unlike variable testing, where the pass/fail determination and the next-level insights are bound together in one statistical result, an attribute test (1) settles whether the device meets specification and (2) leaves the door open for secondary or future statistical analysis for additional insight. I will discuss this further under the strategic approach to testing.
While attribute testing is simple and straightforward, it does have drawbacks. Oftentimes, data from design verification, process validation/process performance qualification (PPQ), and other GMP tests are used to further support the overall manufacturing and quality control approaches, such as justifying why certain tests are part of release testing and others are demonstrated to be well controlled through design verification and PPQ results. In such a case, looking at results from a pass/fail standpoint provides a more limited picture than statistical analysis, where the level of consistency of the process and result is demonstrated (i.e., a high Cpk result).
In addition, attribute testing increases the required sample size. Unlike variable testing, attribute testing sample size is determined by risk level and the associated confidence and reliability requirements. For example, for a test associated with a medium risk level, the sample size can be n=59, which demonstrates 95% reliability at 95% confidence. This is higher than the typical n=30 sample size for variable tests. For tests associated with a higher risk level, or requiring higher confidence or reliability, the sample size increases accordingly. This can be a challenge for devices that are complex to make or expensive, or where the number of available devices is limited.
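The n=59 figure comes from the zero-failure "success-run" relationship between confidence and reliability, n = ln(1 − C)/ln(R). A small sketch of that standard formula (illustrative; actual sample sizes should come from your sampling plan rationale):

```python
import math

def success_run_n(confidence, reliability):
    """Zero-failure attribute sample size: smallest n such that passing
    all n units demonstrates the given reliability at the given
    confidence level (n = ln(1 - C) / ln(R), rounded up)."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

print(success_run_n(0.95, 0.95))  # 59: the medium-risk example above
print(success_run_n(0.95, 0.99))  # 299: higher reliability demand
```

Note how quickly the sample size grows as the reliability requirement tightens, which is exactly the burden described above for expensive or scarce devices.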
Furthermore, one must also be conscious of the difference between sample size calculations for design verification, PV, or PPQ and those for routine release testing. For routine release testing, the sample size is based on the acceptable quality level (AQL) (ANSI/ASQ Z1.4), which is determined by the size of the batch. AQL is used where process control and stability have already been established, as is the case for a commercial batch. For design verification and PPQ, where the test is meant to establish or demonstrate that the process is under control and stable, the rejectable quality level (RQL) is used to establish the test sample size. RQL is sometimes also referred to as the lot tolerance percent defective (LTPD) and is defined by a confidence level and β value (e.g., RQL0.05).
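An RQL/LTPD plan caps the consumer's risk: the probability β of accepting a lot whose true defect rate equals the RQL. A minimal sketch under a binomial model (illustrative only; real plans should be taken from the applicable standard's tables):

```python
import math

def accept_prob(n, c, p):
    """Probability of accepting a lot (at most c defects found in a
    sample of n) when the true defect rate is p (binomial model)."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(c + 1))

def rql_sample_size(rql, beta, c=0):
    """Smallest n such that a lot at the RQL defect rate is accepted
    with probability no greater than beta (consumer's risk)."""
    n = c + 1
    while accept_prob(n, c, rql) > beta:
        n += 1
    return n

print(rql_sample_size(0.05, 0.05))        # c=0 plan: n=59
print(rql_sample_size(0.05, 0.05, c=1))   # allow one failure: n=93
```

The zero-failure RQL0.05 plan at a 5% RQL lands on the same n=59 as the 95%/95% success-run calculation, which is why those two framings are often used interchangeably; allowing a failure buys robustness at the cost of a larger sample.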
Advantages And Disadvantages Of Variable Testing
So, what are the advantages and disadvantages of variable testing? As mentioned above, attribute testing provides a straightforward, simple determination of whether the test passes (within specification) or fails, but one drawback is the sample size required. For variable testing, the sample size is essentially fixed (typically n=30). Considering the number of tests necessary in design verification and PPQ, the sample size requirement for variable testing is considerably lower than for attribute testing.
While the variable testing sample size is constant, variable testing is still tied to risk level. The confidence interval remains a consideration: tests associated with higher-risk functionalities require a higher confidence interval, and a higher confidence interval in turn requires a higher Cpk or K-value acceptance criterion.
One major challenge with variable testing is that the test results need to be normally distributed, or capable of being transformed to normality. As such, variable testing requires a thorough understanding of the device and the parts under test. Certain devices, such as those with elastomer parts, do not lend themselves well to variable testing because of their inherent design and material variability. A classic example is the needle shield on a drug delivery syringe. The needle shield is an elastomer cap fitted onto the syringe, either as a plug or as a cap tightly wrapped around the tip of the syringe. Because the elastomer's elasticity varies from batch to batch, driven by the clay composition and differences between batches of clay and other raw materials used in making the elastomer, and because the flame-formed glass syringe hub varies as well, the pull-off force varies from batch to batch. Sometimes the pull-off force data comes out normal; other times it is not normal and cannot be transformed to normality. In such a case, it is not advisable to perform the test as a variable test. Similarly, the break-loose and glide forces, which measure the start and sustained injection force of a syringe, are not advised to be performed as variable tests.
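A normality check is therefore a prerequisite before committing to variable analysis. The sketch below uses synthetic data standing in for pull-off force measurements (SciPy assumed available) and applies the Shapiro-Wilk test to a well-behaved batch and to a bimodal one of the kind that mixed elastomer lots can produce:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
# Synthetic stand-ins for pull-off force data (N) from two batches
batch_a = rng.normal(loc=5.0, scale=0.3, size=30)       # single population
batch_b = np.concatenate([rng.normal(4.3, 0.2, 15),     # two elastomer lots,
                          rng.normal(6.0, 0.2, 15)])    # giving bimodal data

for name, data in (("batch A", batch_a), ("batch B", batch_b)):
    _, p = stats.shapiro(data)
    verdict = "no evidence against normality" if p > 0.05 else "not normal"
    print(f"{name}: Shapiro-Wilk p = {p:.4f} -> {verdict}")
```

When the test rejects normality and no transformation restores it, the data should be judged against the specification as an attribute result instead.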
Another example that does not lend itself well to variable testing is surgical devices’ performance on tissue. Take a harmonic surgical scalpel, for example. The sealing time and maximum burst pressure resulting from the seal are heavily dependent on the source location, size, and connecting tissue of the vessel being used for the test. In such a case, variable testing is not suitable.
To consider using variable testing, one needs a thorough understanding of how the device will behave under the test. This can come from past experience with similar or related devices whose results are translatable and have consistently been shown to be normal. Or it can come from thorough early characterization and understanding of component variabilities and their impact on the test. If prior knowledge is lacking, a thorough characterization should cover a recommended three non-consecutive batches produced at different times and/or sites, along with an engineering assessment of the device materials and any variability they may introduce into the test results.
When possible, variable testing is a good choice: it demonstrates not only that the device meets the functionality requirement but also that the manufacturing process is under control and can produce consistent product. It also shows how close the produced devices perform to the specification limits relative to their natural variability, as well as the likelihood of any product falling outside the specification limits.
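That last point can be quantified: under a normality assumption, a Cpk maps to a worst-case out-of-specification fraction. A quick sketch (SciPy assumed) of why commonly used acceptance limits such as Cpk ≥ 1.33 imply very low defect rates:

```python
from scipy.stats import norm

def max_oos_fraction(cpk_value):
    """Worst-case out-of-spec fraction implied by a Cpk, assuming a
    normal process (counts both tails, so it is an upper bound)."""
    return 2 * norm.sf(3 * cpk_value)

for c in (1.00, 1.33, 1.67):
    print(f"Cpk {c:.2f}: at most "
          f"{max_oos_fraction(c) * 1e6:,.1f} ppm out of spec")
```

The factor of 2 is the conservative case where both specification limits matter; when the process mean sits far from one limit, the true fraction is closer to half this bound.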
Attribute And Variable Testing Strategy
The pros and cons of attribute and variable tests can help programs from a risk reduction standpoint. Many programs choose attribute testing not just for its simplicity and straightforwardness but also as a de-risking approach for design verification and PPQ. Attribute testing inherently requires a greater sample size than variable testing; this helps in situations where the required sample size for attribute testing cannot be met due to unexpected circumstances such as errors and losses. An attribute test can be converted to a variable test as long as the raw test data from each sample is a numerical output rather than a binary pass/fail outcome: the numerical raw data can be analyzed statistically to generate the Cpk or K-value used for variable testing. So, by selecting attribute testing, if samples are lost during testing and the required attribute sample size can no longer be met, the test can easily be converted to a variable test, either through a deviation or as a planned option within the protocol itself.
Here, it should be noted that not all deviations are bad. A deviation, at its heart, documents a change to a study plan or protocol. Some deviations are good or neutral, and some are bad. A good or neutral deviation is one that corrects an oversight or, in this case, documents a different approach to analyzing the test results. In the case mentioned above, instead of analyzing the raw test data from a binary pass/fail standpoint, the raw data are analyzed for normality and then statistically (e.g., Cpk).
For this approach to work, the strategy must branch back into test method development and validation. Test methods should always be developed to produce numerical results, where possible, rather than binary pass/fail raw results. Test methods that produce numerical raw data should be validated following ICH Q2(R1) or through gage repeatability and reproducibility (gage R&R), type 1 gage study, measurement system analysis (MSA), and the like. Validating in this way allows the validated method and its generated results to be used from either a variable or an attribute standpoint.
The exception is tests that are purely attribute tests, which only determine if a function occurs or not. In such a case, the method is qualified using attribute agreement, whereby operators or analysts are presented with a mixture of good and bad parts or devices and need to correctly distinguish between the good and the bad parts/devices. Such a test is a pure attribute test and, since the result is just a yes/no binary result, it cannot be used for anything but attribute testing.
The difference between an attribute test and a variable test is how the raw test data is analyzed. If the raw data is analyzed from a binary pass (meets specification) or fail (does not meet specification) standpoint, then the test is an attribute test. If the test is analyzed statistically, looking at normality and calculating a Cpk or K-value and comparing that value to an acceptance criterion, then it is a variable test. Attribute and variable tests both have their pros and cons. Attribute tests tend to require greater sample sizes than variable tests. Variable testing requires the result to be normal or able to be transformed to normality. From a design verification and PPQ testing strategy standpoint, selecting attribute testing allows for a more straightforward data analysis and provides risk reduction in that the test can always be changed to variable testing through either a deviation or as a planned option within the protocol.
If you would like to discuss this topic further or have more specific questions related to attribute or variable testing, please feel free to contact me directly. If you have any other topics or questions you would like to see covered or discussed, please drop a line in the comments section or reach out to me directly at email@example.com.
About The Author:
Jason Song, P.E., is chief technology officer of SureMed Technologies, Inc., a company he co-founded in 2018 that develops holistic novel technologies, products, and services that balance the needs of patients, industry, and the healthcare ecosystem. Previously, he held various technical and leadership positions at Amgen, Eli Lilly, GE Healthcare, Motorola, and Novo Nordisk in a range of areas, including injectable and inhaled drug delivery device development, fill and finish, packaging and assembly automation, biochips, battery development, and establishing new production sites. He holds BS and MS degrees in mechanical engineering, an MS in automation manufacturing, and an MBA.