By Bob Marshall, Chief Editor, Med Device Online
At most medical device companies, mere utterance of the word "recall" is akin to mentioning the Dark Lord's name in the Harry Potter series of novels: it's simply not done, unless one wishes to invite misfortune upon oneself. In Harry Potter's world, the Dark Lord (Voldemort) is referred to as "He-Who-Must-Not-Be-Named," a moniker easily co-opted by recall-averse medical device companies into "That-Which-Must-Not-Be-Said."
Recalls often result in bad press, lost revenue, frustrated or angry customers, and increased scrutiny by the FDA. However, it was a recall that created my opportunity to join a medical device company and enter the industry 25 years ago. A second recall at that same company caused me to run screaming from the building a few years later.
A Recall Opens The Door
The first seven years of my career were spent in the electric utility and rail transportation industries. I was an engineer involved in quality, safety, availability, reliability, and risk assessment. These tools were mature and regularly applied in the power and transportation industries in the early 1990s, but not so in the medical device industry. Recall that it was not until the Safe Medical Devices Act of 1990 that the FDA was authorized to add design controls to the current Good Manufacturing Practice (cGMP) requirements for medical devices. Prior to that, as long as products were manufactured in accordance with the drawings and procedures, and the production area was clean, everything was fine. But everything wasn't fine. A significant number of field issues were the result of poor design, not a lack of good manufacturing practice.
A medical device manufacturer had enacted a recall for their device, and root cause analysis led them, in part, to realize they did not have proper discipline in their design process. Since their CEO made it very clear he wanted no more recalls, they set out to build a function separate from quality assurance, regulatory affairs, and engineering – but connected to them all. The group became known as Product Assurance, and they cast their nets to hire engineers with experience in applying reliability, safety, and risk assessment principles to the design process. Their sincere desire to improve and the opportunity to build a better design process caught me.
Satiating The Appetite For Regulatory Risk
Over the course of a quarter century in the industry, I have come to the conclusion that the regulatory risk appetite within a medical device company oscillates like a pendulum. The company will take greater and greater regulatory risks until the pendulum slams into one of its stops – the recall. (Warning letters, and especially consent decrees, can have the same effect on the pendulum.)
When the pendulum reverses direction, regulatory risk decreases, and processes and systems become increasingly conservative. This will continue until sales decrease, stock value drops, the company desires to grow faster, or marketing promises a new device will be released in time for the trade show (which is only three months away). These business needs will cause the leadership team to become more aggressive and to take greater regulatory risks, driving the pendulum back toward That-Which-Must-Not-Be-Said.
I am by no means advocating that recalls are good. However, if one occurs, the organization has two choices: it can act surprised, point fingers at team members, and even dismiss a few select members of the team as sacrificial lambs. Or, it can learn from the experience and make changes.
See Bob. See Bob Run. Run Bob Run.
I worked with a company whose device had a particularly difficult time completing its design verification testing. The device was not revolutionary in nature; it was a next-generation product with a few enhanced features. Among these features was some new electro-mechanical technology: a variable-speed pump had been replaced by a constant-speed pump and a regulating valve, with the intention of increasing the pump motor's reliability while reducing noise.
However, during verification, designers encountered difficulty getting the device to perform in a stable manner at the lower end of its operating range. Additionally, some devices failed to function after simulated storage testing at temperature and humidity limits.
After numerous design changes and successful regression testing, design verification was considered complete and plans were made for a 90-unit pilot production run. These devices would be built to the final design specifications and run in overnight, after which the design validation protocol would be performed on 30 randomly selected samples. If all went well, the entire lot would be shipped to customers. I should mention that shipping these units — before the fiscal quarter's quickly approaching end — had been communicated as VERY important to hitting the revenue numbers and maintaining the company's stock value. So, with that in mind, 90 units were put through the production process, powered up, and placed in the run-in room overnight for their 16-hour cycle.
The next morning, we arrived at the run-in room to find 41 of the units alarming and non-functional. Their error codes indicated they experienced critical system faults during the night. Bear in mind, these devices were not life-supporting, but they were life-sustaining. Our risk management activities had assessed the risk of a hard device failure as unlikely, but potentially catastrophic. Under pressure of the company’s desire for revenue, I was told to create an experiment demonstrating that it would be safe to ship the 49 units that had not failed during run-in. Interestingly, after power was removed and restored to the 41 units that had failed the overnight test, they all restarted and ran normally.
I collaborated with our engineers and we decided to repeat the run-in and test cycle on all 90 units. Our expectation was that if 49 units were truly "good" and 41 units were truly "bad," the result would be identical. The run-in cycle was repeated a second night and, the following morning, 37 units were alarming and non-functional. The bigger problem was that many of these 37 failed units had not failed the night before, and many of the units that failed the first night ran successfully through the second night. Clearly, an intermittent failure mechanism we did not yet understand was at work across the population of units.
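The logic of that expectation — that a fixed set of "bad" units should fail identically on every run-in, while an intermittent fault produces only partial overlap between the two nights' failure sets — can be sketched with a small simulation. The per-night failure probability and the independence assumption below are illustrative choices of mine, not measurements from the actual units:

```python
import random

random.seed(0)

UNITS = 90
P_FAIL = 0.44  # roughly 41/90 and 37/90, the two observed failure rates


def run_in(units=UNITS, p_fail=P_FAIL):
    """Simulate one overnight run-in cycle.

    Under an intermittent-fault model, each unit fails independently
    with probability p_fail on any given night, regardless of its
    history. Returns the set of unit indices that failed.
    """
    return {u for u in range(units) if random.random() < p_fail}


night1 = run_in()
night2 = run_in()

# With fixed "bad" units, night2 would equal night1 exactly.
# With an intermittent fault, the two failure sets overlap only
# partially -- some night-1 failures pass night 2, and vice versa.
overlap = night1 & night2
print(len(night1), len(night2), len(overlap))
```

Under this model, getting the same failure set twice in a row is astronomically unlikely, which is why the mismatched second-night results pointed so strongly at an intermittent mechanism rather than a stable good/bad split.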
At this point, I refused to sign off on the engineering change notice, or to release any of the units for shipment. My decision was greeted with anger, and the change notice and release documentation were signed off at a higher level within the organization.
Thus, the 53 units that made it successfully through the second night’s run-in, even though a significant number of them had failed the night before, were shipped out to customers. Can you guess the fate of those 53 units just a few weeks later, following numerous customer complaints that they were failing during operation? They had to be…how should I relay this? Don’t speak the word! That-Which-Must-Not-Be-Said.
Has anyone else lived through a pressure-packed nightmare like this? Tell us about it in the comments below.