Guest Column | July 19, 2021

Retro-Validation: 6 Factors To Resolve Validation Deficiencies

By Jason Song, SureMed Technologies, Inc.


Whether you are in the medical device or pharmaceutical sector, chances are that at some point in your career you or your team will discover validation deficiencies: an incomplete process validation, an incorrectly performed shipping validation, or a method the team believed was validated but that turns out to be unvalidated or validated in a non-compliant way. When this type of situation arises, the team's initial reaction is often to look for justifications or rationales: to question whether the standard was misinterpreted, to search for a standard or guidance saying the method does not need validation, to rationalize that what was done is fine and return the team to the safe zone, or some combination of these. Inevitably, in many of these situations, the team will eventually conclude that there truly is a gap, and one that must be remedied.

At this point the question becomes how, especially considering the impact the remedy will have on all the data generated in the past and the decisions and justifications made based on those data. Will the new method validation invalidate those data? Do we need to repeat those tests to generate new data sets? We may not have samples from those past batches, or any remaining samples may be so old that they have passed their expiration dates. The situation gets more complicated when product is on the market or has been released, or when regulatory submissions have been made based on those data. So, what do you do in this situation?

This type of situation is more common than most realize, and the way to remedy or shore up these gaps is through retrospective validation (retro-validation, as it is commonly called). Retro-validation performs the validation now while retrospectively revalidating the data that was generated in the past. The following are key steps to a successful retro-validation.

1. Keep Setup & Parameters The Same As The Prior Validation

To illustrate how this is accomplished, we will use method validation as an example, as it is by far the most common reason for retro-validation. To perform retro-validation, we must treat the situation like a new method development while approaching the method design and setup like an investigation. We approach retro-validation from these two positions because of the sensitivity and broad implications the retro-validation may have: if it fails, the validity of the prior data or work is called into question, and all decisions and activities based on those data or results must be reexamined. As such, while retro-validation may appear simple and straightforward on the surface, it in fact requires someone who is very familiar with the method, process, or specific validation activity in question and who can approach the situation carefully and surgically.

First, let's talk about what a retrospective validation is. At a 5,000-foot view, a retro-validation is, in essence, a re-execution of the method validation with the method parameters, setup, and everything else maintained exactly as before. For example, for a device functionality test that uses a tensile tester such as an Instron or Zwick, a test fixture, and a test program, the crosshead speed, the parameters of the test fixture, and the way it is set up should all be the same as they were. Keeping everything the same allows the retro-validation to retrospectively cover all prior generated data, hence the name. This of course does not mean the retro-validation should repeat the deficiencies of the past validation; instead, you would follow applicable ISO or ICH Q2 requirements and shore up the validation gaps. In the case of medical devices, the method validation is likely performed either as a measurement system analysis, such as a Gage R&R or Type 1 gage study, or as an attribute agreement test.

2. Assess Risk And Mitigate As Needed

While the high-level approach to a retro-validation may sound straightforward, it is strongly advised to first evaluate the method for risks before using the standard approach. If risks are identified, risk mitigation or an adjusted approach needs to be developed, and the retro-validation plan will in turn need to factor in that mitigation. In such a case, the retro-validation and risk mitigation need to be navigated with more care and finesse.

So, the first step in starting a retro-validation project is to assess both the method itself and the past data. Both assessments must be done to fully evaluate whether there are risks to the retro-validated method and to identify those risks so that an appropriate plan and approach can be formulated.

In assessing the method and method setup, what you are looking for are method design issues or fixturing setups that may introduce additional variables or noise into the method and consequently produce results that would cause the method validation to fail. In medical devices and combination products, methods fall into two general categories: attribute and variable data test methods. For variable data test methods, method validation is often done using a Gage R&R. For pharmaceutical products, the F-test may also be used for method validation, depending on whether the method follows ICH Q2. Whichever the case, additional variables or noise introduced by the test setup or method design may result in a failed method validation.
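To make the Gage R&R approach concrete, here is a minimal sketch of a crossed Gage R&R computed via two-way ANOVA. The function name, synthetic data, and thresholds are illustrative assumptions, not drawn from any specific study or the author's own tooling.

```python
import numpy as np

def gage_rr(data):
    """Crossed Gage R&R via two-way ANOVA.

    data: array of shape (parts, operators, replicates).
    Returns %GRR, the gauge variation as a percentage of total study
    variation (AIAG guidance treats < 10% as acceptable, > 30% as not).
    """
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    op_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    # Sums of squares for the two-way crossed design
    ss_part = o * r * np.sum((part_means - grand) ** 2)
    ss_op = p * r * np.sum((op_means - grand) ** 2)
    ss_cell = r * np.sum((cell_means - grand) ** 2)
    ss_int = ss_cell - ss_part - ss_op
    ss_rep = np.sum((data - grand) ** 2) - ss_cell

    # Mean squares
    ms_part = ss_part / (p - 1)
    ms_op = ss_op / (o - 1)
    ms_int = ss_int / ((p - 1) * (o - 1))
    ms_rep = ss_rep / (p * o * (r - 1))

    # Variance components (negative estimates clipped to zero)
    var_rep = ms_rep                               # repeatability
    var_int = max((ms_int - ms_rep) / r, 0.0)      # operator x part
    var_op = max((ms_op - ms_int) / (p * r), 0.0)  # reproducibility
    var_part = max((ms_part - ms_int) / (o * r), 0.0)

    var_grr = var_rep + var_op + var_int
    return 100.0 * np.sqrt(var_grr / (var_grr + var_part))

# Hypothetical study: 10 parts x 3 operators x 2 replicates
rng = np.random.default_rng(0)
parts = rng.normal(0.0, 5.0, 10)      # large part-to-part variation
ops = rng.normal(0.0, 0.2, 3)         # small operator effect
noise = rng.normal(0.0, 0.5, (10, 3, 2))
study = parts[:, None, None] + ops[None, :, None] + noise
print(f"%GRR = {gage_rr(study):.1f}%")
```

A setup that injects extra noise inflates the gauge variance components relative to the part-to-part variance, which is exactly how a setup deficiency turns into a failed validation.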

3. Avoid Setup Deficiencies

Sometimes, method setup deficiencies, such as fixturing that does not control or limit the way the device is engaged or tested, will result in the test capturing confounding variables beyond the attribute being tested. Someone experienced with the method or its development can often spot this by examining the method setup and fixturing more closely. Take, for example, a test method examining the trigger pull force of a surgical device while the device is loosely attached to a fixture. The device trigger may at times be aligned axially with the trigger force measurement apparatus, while at other times it may be seated at a slight angle to it, so that the force measured by the test setup is only a vector component of the true force. In such a case, performing a retro-validation using the existing method setup would not be advisable, as the test setup is not under control and at times captures a combination of confounded variables rather than the attribute being tested.
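To see why the misalignment matters, note that an axial load cell reads only the component of force along its own axis, so a device seated at an angle contributes F·cos(θ) instead of F. A brief sketch with hypothetical numbers:

```python
import math

def measured_force(true_force_n, misalignment_deg):
    """An axial load cell reads only the force component along its
    axis; a device seated at angle theta contributes F * cos(theta)."""
    return true_force_n * math.cos(math.radians(misalignment_deg))

true_f = 10.0  # hypothetical trigger pull force, in newtons
for angle_deg in (0, 5, 10, 15):
    under_read_pct = 100.0 * (1.0 - measured_force(true_f, angle_deg) / true_f)
    print(f"{angle_deg:>2} deg off-axis -> {under_read_pct:.2f}% under-reading")
```

Because the seating angle varies from run to run, this bias shows up not as a constant offset but as extra run-to-run variation, which is precisely the noise a Gage R&R will attribute to the measurement system.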

4. Conduct A Comparative Data Analysis

In other situations, the method setup and fixturing may appear to be OK. In such cases, due diligence in analyzing the past data should still be performed to eliminate any hidden surprises. To perform the data analysis, examine the past data, evaluate the differences between each prior data set, and determine whether the variances are wide or within acceptable levels; in the latter case, retro-validation using the exact same setup and method parameters would be reasonable. In addition to looking at the variance between data sets generated over time by different analysts, on different days, and on different batches, an SPC (statistical process control) analysis using either the Westgard or Nelson rules is highly recommended to ensure there is no trending over time in the data. If trending is found, an investigation into its cause is highly recommended to ensure it is not caused by the test method and that any post-investigation correction will not impact the test method.
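As one way to automate such a trending check, here is a minimal sketch of two of the Nelson rules: rule 2 (a long run of points on one side of the mean) and rule 3 (a steady increase or decrease). The data sets and run lengths are illustrative assumptions only.

```python
import numpy as np

def nelson_rule_2(x, run=9):
    """Nelson rule 2: flag each index ending a run of `run` or more
    consecutive points on the same side of the mean."""
    side = np.sign(np.asarray(x, dtype=float) - np.mean(x))
    hits, count, prev = [], 0, 0.0
    for i, s in enumerate(side):
        count = count + 1 if (s != 0 and s == prev) else 1
        prev = s
        if s != 0 and count >= run:
            hits.append(i)
    return hits

def nelson_rule_3(x, run=6):
    """Nelson rule 3: flag each index ending a run of `run` or more
    steadily increasing (or steadily decreasing) points."""
    d = np.sign(np.diff(np.asarray(x, dtype=float)))
    hits, count = [], 1
    for i in range(len(d)):
        count = count + 1 if (i > 0 and d[i] != 0 and d[i] == d[i - 1]) else 1
        if d[i] != 0 and count >= run - 1:
            hits.append(i + 1)
    return hits

# Hypothetical assay results over time: a stable set and a drifting set
stable = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9]
drifting = [10.0 + 0.05 * i for i in range(12)]
print("stable  :", nelson_rule_2(stable), nelson_rule_3(stable))
print("drifting:", nelson_rule_2(drifting), nelson_rule_3(drifting))
```

An empty result for both rules supports proceeding with the same setup and parameters; any flagged index marks the point in the historical record where the investigation should focus.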

5. Evaluate Impacts Of Method Adjustments

If the assessment of the method and method setup, as well as the data analysis, found no issues or risks with the existing method and setup, and the only problem is that the method either wasn't validated or the method validation was deficient in how it was planned and executed, the retro-validation becomes very straightforward. But if the method needs to be adjusted for one reason or another, care should be taken in adjusting the method and performing the retro-validation.

In adjusting or modifying the method or its setup/fixture, it is important to evaluate the adjustment not just from a method development standpoint but also in terms of the extent of the change and how much it differs from the prior method. It is also important to evaluate how the change will impact past generated data; changes to the method will always call past data into question, so one should evaluate how large that impact is. For example, in the case of a break loose and glide force test method, if the change is to update the test fixture so that the syringe placed inside it no longer wobbles but is held firmly, allowing a true axial compression force measurement, then the change is an improvement to the method and one can justify why it will not impact past generated data. If, on the other hand, the method is a container closure integrity test (CCIT) method that is dependent on the fixturing, such as vacuum decay or high voltage leak detection, and the change is to the syringe or autoinjector holding fixture, then depending on the change, it may impact the acceptance criteria for the CCIT method and, thus, past generated results.

6. Determine If A Comparability Study Is Needed

When a method change is determined not to impact past data sets, one must still evaluate whether a comparability study is warranted. If the justification rests on several assumptions, it may be worthwhile to perform a comparability study between the past and adjusted test methods to demonstrate with data that there are no differences. In the previous CCIT example, a cross-comparability study may help justify the differences between fixtures. Whether you need to perform a comparability study and, if so, what type should be assessed on a case-by-case basis.
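One common statistical framing for such a comparability study is an equivalence test. The sketch below uses two one-sided t-tests (TOST) to assess whether two fixtures' mean readings agree within a pre-specified margin; the data, sample sizes, and margin are hypothetical assumptions for illustration.

```python
import numpy as np
from scipy import stats

def tost_equivalence(old, new, margin):
    """Two one-sided t-tests (TOST): returns the larger of the two
    one-sided p-values. A result below alpha (e.g., 0.05) supports
    the claim that the mean difference lies within +/- margin."""
    old = np.asarray(old, dtype=float)
    new = np.asarray(new, dtype=float)
    n1, n2 = len(old), len(new)
    diff = new.mean() - old.mean()
    # Pooled variance and standard error of the mean difference
    sp2 = ((n1 - 1) * old.var(ddof=1) + (n2 - 1) * new.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    df = n1 + n2 - 2
    p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
    p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
    return max(p_lower, p_upper)

# Hypothetical data: 30 force readings (N) on each fixture version
rng = np.random.default_rng(1)
old_fixture = rng.normal(10.0, 0.2, 30)
new_fixture = rng.normal(10.0, 0.2, 30)
p = tost_equivalence(old_fixture, new_fixture, margin=0.5)
print(f"TOST p-value = {p:.4f}")
```

Unlike a standard t-test, where a non-significant result merely fails to show a difference, TOST places the burden of proof on demonstrating similarity, which is the claim a comparability study actually needs to support. The equivalence margin must be justified from product requirements before the study is run.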

If the method change is determined to impact past data, one must immediately assess the situation and determine whether, despite the invalid past data, the product still conforms with release criteria through other data, indirect evidence, or another approach. One such approach may be justifying a switch from the variable test method to an attribute test method, but like all other approaches at this stage, it must be backed by strong justification. If you would like to discuss or explore these other approaches, please feel free to contact us directly. In all the retro-validation cases we have seen and handled, this is by far the rarest outcome.

While we have focused above on test method examples, retro-validation also applies to other validation activities, such as process validation and shipping validation.

Conclusion

Retro-validation is just as it sounds: it is retrospectively validating a process or method that should have been validated in the past but was not, or was validated in a deficient way. While retro-validation may seem straightforward on the surface, it should always be treated with careful consideration. Risk assessment should be part of retro-validation planning, and if adjustment to the method or process is necessary, justifications and/or comparability studies are often called for to provide rationale and assurance.

If you have any other topics or questions you would like me to cover or discuss, please feel free to drop a line in the comments section or reach out to me directly at jsong@suremedtech.com.

About The Author:

Jason Song, P.E., is Chief Technology Officer of SureMed Technologies, Inc., a company he co-founded in 2018 that develops holistic, novel technologies, products, and services that balance the needs of patients, industry, and the healthcare ecosystem. Previously, he held various technical and leadership positions at Amgen, Eli Lilly, GE Healthcare, Motorola, and Novo Nordisk in a range of areas, including injectable and inhaled drug delivery device development, fill and finish, packaging and assembly automation, biochips, battery development, and establishing new production sites. Jason holds BS and MS degrees in mechanical engineering, an MS in automation manufacturing, and an MBA.