Michael Barbella, Managing Editor | 03.07.17
It was a legendary flub.
Even now, more than three decades after its demise, the memory of New Coke still leaves a bad taste in our mouths. It purportedly ranks among the world’s worst business blunders, despite the fact that the widely panned beverage ultimately became one of history’s most fortunate and instructive failures.
The story of New Coke is a familiar tale, having been rehashed ad nauseam in countless textbooks, magazine articles, marketing case studies, and boardroom folklore. The narrative reads like the plot of a modern-day Aesop’s fable: Seasoned beverage company feels threatened by a younger rival, so it completely transforms itself to boost its popularity—only to eventually realize the change was unnecessary.
There is much more to the story, of course, but the moral is pointedly clear: Never mess with a classic.
While the creation of New Coke was certainly a team effort, its failure has long been blamed on poor market research, although conspiracy theorists have accused the Coca-Cola Company of deliberately orchestrating the slipup to generate publicity for its aging brand. Coca-Cola denied staging the gaffe, but the hypothesis nevertheless gained traction from New Coke’s solid sales and the company’s soaring stock price upon reintroducing the classic formula. “In May [1985], Coke sales shot up a sparkling 8 percent over the same month in 1984, double the normal growth rate,” TIME reported. “Some of the increase included sales of old Coke still on store shelves, but most of it was the new drink.”
Such robust sales, however, contradict the marketing misstep theory.
Though it was never truly embraced by the soda-drinking public, the degree of consumer backlash against New Coke has been greatly exaggerated over the years. Legend tells of anxiety headaches among classic Coke devotees and of delivery drivers assaulted while stocking the new formula in the spring of 1985. At its peak, Coca-Cola reportedly received 1,500 daily complaints about the change, with resistance coming mostly from the South, around the company’s Atlanta, Ga., home. It was there that consumers felt most passionate about the reformulation and most betrayed by Coca-Cola management.
Ironically, though, that same part of the country gave New Coke high scores in blind taste tests the previous year. In an effort to ensure a successful product launch, Coca-Cola spent $4 million on market research, interviewing nearly 200,000 consumers in every major U.S. market about its new soda recipe. New Coke beat arch-rival Pepsi by as much as six to eight percentage points, and cola drinkers chose it over the classic formula by 55 percent to 45 percent in blind taste tests, according to a 1992 Marketing Research article. Classic Coke loyalists preferred the new drink 53 percent to 47 percent, and in taste tests where the sodas were identified only as “new Coke” and “old Coke,” cola lovers preferred the new formula over the old one by 61 percent to 39 percent, the magazine reported.
Coca-Cola clearly did its homework before revamping its beloved beverage, yet the product still fizzled. And whereas various factors likely contributed to New Coke’s misfire, many marketing experts believe the company’s testing process ultimately helped doom the new formula.
Coca-Cola began investigating public enthusiasm for a new soft drink in 1982. In addition to taste tests, the company conducted in-depth interviews with 2,000 consumers about a new ingredient for both Coke and Pepsi, and asked for their reactions to the change (Would they try the new drink? Would they switch brands? Would they be upset about the change?). Coca-Cola also used focus groups, a favorite marketing tool.
Those focus groups revealed an important trend that should have raised concerns for the company, scholars contend. Although individual taste test results showed the majority of consumers preferred the New Coke recipe and welcomed the change, a small group of passionate consumers (from the South, no doubt) felt alienated and betrayed by the new formula. When that vocal minority was placed in focus groups with the accepting majority, the opposing faction negatively influenced the response of the group as a whole.
Coca-Cola knew about the focus group dichotomy but still moved forward with its reformulation plans, assuming the alienation felt by a faithful customer minority would eventually fade. It was a logical conclusion based on the company’s market research and testing data, neither of which accounted for the focus group influence.
Coca-Cola’s epic stumble illustrates the key role testing plays in a product’s success. It also shows the importance of understanding and applying the most suitable testing methods for any given product.
Such prowess is particularly crucial in healthcare, where product design or operational flaws can potentially harm patients. Consequently, companies specializing in medical device testing must be schooled in both technique and application, capable of expanding existing test methods and developing new ones to accommodate the newer materials, smaller parts, and additional electronics being incorporated into modern healthcare equipment.
“Every testing [firm] can follow the standards—we can all read the book and follow the guidelines,” said John Bolinder, vice president of marketing and communications for Nelson Laboratories LLC, a Salt Lake City, Utah-based provider of full lifecycle microbiology testing services for the medical device, pharmaceutical, tissue, and natural products industries. “Have you ever made a cake? If I were to make a chocolate gourmet cake, for example, and my wife were to make the same cake with the same recipe, I guarantee you it would come out different. Testing standards are no different. The standards don’t always tell you how to do the test...they may tell you what you need to do, but in a lot of cases you have to have enough experience to know how to apply that knowledge to get to good, solid test results and interpretation of data. Without that expertise and experience, you have to be really careful. Knowledge is the key.”
And that key can easily unlock the regulatory ramparts associated with the product development process. Despite the existence of globally harmonized standards, testing requirements still vary between countries; nearly all nations, for example, require that mobile and wireless technology-embedded medical devices be certified, but the specifications deviate between borders. Some countries accept foreign test reports as part of the application process, while others insist the testing be completed by a local laboratory (usually a federal facility or a lab closely affiliated with the government). The United States is more of a hybrid—some regulatory agencies, like the Federal Communications Commission (FCC), will only accept test results from U.S.-recognized labs, whereas others willingly honor overseas outcomes.
Similarly, there are globally harmonized standards for devices with Bluetooth technology, but country-specific requirements for RFID-powered equipment. In the latter case, a wireless transmitter used in Mexico might not be allowed in China.
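That patchwork of acceptance rules is easy to mishandle without a structured view of it. The sketch below models the situation as a simple policy table; the country entries and policies are hypothetical placeholders for illustration, not statements of any nation's actual requirements:

```python
from enum import Enum

class ReportPolicy(Enum):
    ACCEPTS_FOREIGN = "accepts foreign test reports"
    LOCAL_LAB_ONLY = "requires testing by a local laboratory"
    HYBRID = "varies by regulatory agency"

# Hypothetical policy table for illustration only -- real rules
# differ by device type, technology, and agency.
POLICIES = {
    "Country A": ReportPolicy.ACCEPTS_FOREIGN,
    "Country B": ReportPolicy.LOCAL_LAB_ONLY,
    "United States": ReportPolicy.HYBRID,  # e.g., the FCC accepts only U.S.-recognized labs
}

def plan_testing(country: str) -> str:
    """Return a short description of where testing must be performed."""
    policy = POLICIES.get(country)
    if policy is None:
        return f"{country}: policy unknown -- consult local regulations"
    return f"{country}: {policy.value}"

if __name__ == "__main__":
    for country in POLICIES:
        print(plan_testing(country))
```

In practice such a table would be keyed by device technology and agency as well as country, which is precisely why manufacturers lean on testing partners who track these rules.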
“As the [medical device] industry is becoming more global, clients that sell to multiple markets want testing that will be accepted by international regulatory agencies in Europe, Asia, and North America. Our clients want to use one test that each of the international regulatory agencies asks for but each international regulatory agency will ask for tests with different requirements,” noted Kenneth Eddington, product manager, chemistry/post-market services for NAMSA, a Northwood, Ohio-based provider of regulatory, laboratory, clinical, and compliance services to medical device and healthcare manufacturers. “For example, a chemistry test may have different testing requirements in Japan than in other countries. The various international regulatory agencies call the test the same name but requirements for the test may vary. As more companies become global, they want to conduct testing that is acceptable to all global regulatory agencies, and that can be a bit of a challenge.”
Other challenges have arisen in recent years with the advent of mobile health (mHealth) and the Internet of Things (IoT). The consumerization of healthcare is increasing device complexity and rewriting the rules of doctor-patient engagement, prompting oversight from both the FCC and U.S. Food and Drug Administration (FDA). Mobile medical devices (cell phones, tablets, etc.) using Medical Body Area Networks (MBAN) must now be licensed by the FCC and approved by the FDA, and functional testing has grown to include WiFi or cloud data storage—concepts foreign to the healthcare sector only a decade ago.
The MBAN protocol requires significantly less power than Bluetooth, ZigBee, or WiFi, and works on a regulated frequency band. Design verification and test strategies for medical devices using this technology are similar to those already implemented for numerous mass-market consumer products using multiple antennas and protocols in the 2.4 GHz spectrum.
More complicated verification procedures are required for wearables. Industry Canada (IC), the regulatory body for wireless product certification in the Great White North, recently revised exemption limits and requirements for specific absorption rate (SAR) testing. As a result, some products and technologies that were previously exempt from SAR testing will now require such verification in order to obtain IC certification. SAR testing is used to quantify the rate or amount of radiofrequency energy absorbed by the human body; the corresponding limits or thresholds are measured in W/kg. Although American and Canadian SAR standards are different, the FCC provides guidance on testing requirements. Product separation from the body, device output power, and operation frequency are the main determinants of SAR testing.
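The three determinants named above (separation from the body, output power, and operating frequency) are exactly the inputs to the FCC's SAR test-exclusion threshold. The sketch below assumes the 1-g exclusion formula published in FCC KDB 447498 (time-averaged power in mW divided by separation in mm, scaled by the square root of frequency in GHz, compared against 3.0 within its stated validity range); confirm the current formula and limits against the KDB itself before relying on this:

```python
import math

def sar_exclusion_metric(power_mw: float, separation_mm: float, freq_ghz: float) -> float:
    """Exclusion metric in the assumed form from FCC KDB 447498:
    (max time-averaged power [mW] / min separation [mm]) * sqrt(f [GHz]).
    Stated validity: separation <= 50 mm, frequency 0.1-6 GHz."""
    if not (0.1 <= freq_ghz <= 6.0) or not (0 < separation_mm <= 50):
        raise ValueError("outside the range where the exclusion formula applies")
    return (power_mw / separation_mm) * math.sqrt(freq_ghz)

def needs_sar_testing(power_mw: float, separation_mm: float, freq_ghz: float,
                      threshold: float = 3.0) -> bool:
    """True if the device exceeds the assumed 1-g SAR test-exclusion threshold."""
    return sar_exclusion_metric(power_mw, separation_mm, freq_ghz) > threshold

# A hypothetical 10 mW, 2.45 GHz wearable worn 5 mm from the body:
# metric = (10 / 5) * sqrt(2.45), marginally above 3.0, so SAR testing is needed.
```

The formula makes the article's point concrete: halving the separation distance or doubling the output power doubles the metric, pushing a previously exempt device into SAR testing.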
“The breadth of IoT testing can be quite complex,” explained Jim McGovern, validation manager for DDL Inc., a provider of package, product, and materials testing for the medical device industry. The company operates facilities in Eden Prairie, Minn., and Fountain Valley, Calif.
“The devices need to comply with standards and be compatible with other systems,” he continued. “The testing must address security issues such as data encryption, data protection, identity and authentication, and secure data storage. Beyond these, there is also the need for evaluation of the back end of the IoT environment. Does the device deliver what it was designed to deliver?”
Testing Justifications
The medtech industry is quite the paradoxical sector.
While it spawns its fair share of revolutionary technology, the industry itself is slow to evolve. Case in point: Reimbursement is still spotty for healthcare providers promoting patient wellness programs, even though the Patient Protection and Affordable Care Act established clear mechanisms to mandate coverage for certain efforts (genetic testing, for example).
The industry has also dragged its feet on Big Data, mobile medical apps, and wireless device technology.
It’s not surprising, then, that FDA officials took the better part of two decades to update ISO 10993, the most widely used international standard for assessing the biocompatibility of medical devices and materials and determining the appropriate steps for a biological evaluation. The testing required by ISO 10993 depends on the type of product or material and its intended use, as well as the nature and duration of contact between the medical device and the body. Evaluating the biological effects of exposure to a medical device or material can involve such testing as cytotoxicity, sensitization, irritation or intracutaneous reactivity, systemic toxicity, sub-chronic toxicity, genotoxicity, implantation, and haemocompatibility.
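ISO 10993-1 drives that test selection through a matrix of body-contact category and contact duration (category A: limited, 24 hours or less; B: prolonged, more than 24 hours to 30 days; C: permanent, more than 30 days) against recommended biological endpoints. The abbreviated lookup below is an illustrative sketch only, not a reproduction of the standard's full tables:

```python
# Contact duration categories per ISO 10993-1:
#   A <= 24 h, B = 24 h to 30 d, C > 30 d.
# Abbreviated, illustrative endpoint matrix -- the standard's actual
# tables are larger and should be consulted directly.
BASELINE = ["cytotoxicity", "sensitization", "irritation or intracutaneous reactivity"]

ENDPOINTS = {
    ("surface: intact skin", "A"): BASELINE,
    ("implant: tissue/bone", "A"): BASELINE + ["acute systemic toxicity"],
    ("implant: tissue/bone", "C"): BASELINE + [
        "acute systemic toxicity", "sub-chronic toxicity",
        "genotoxicity", "implantation",
    ],
}

def recommended_endpoints(contact: str, duration: str) -> list:
    """Look up endpoints to consider for a contact category and duration."""
    try:
        return ENDPOINTS[(contact, duration)]
    except KeyError:
        raise KeyError("category not in this abbreviated sketch -- see ISO 10993-1")
```

The lookup illustrates why a skin-contact wearable and a permanent implant face very different biocompatibility workloads even under the same standard.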
Issued in mid-June last year, the FDA’s final guidance on ISO 10993-1 reflects a shift from rote biocompatibility testing to a more thoughtful biological safety evaluation conducted within a risk management process. This more deliberative approach can involve chemical characterization, which uses analytical chemistry to identify and quantify the chemicals extracted from a device and evaluate the toxicological risk associated with exposure.
“Although the FDA made this an official guidance document in June 2016, it has been in draft format since 2013 and many of its premises are based on the outdated Blue Book Memorandum #G95-1, which came out in 1995,” said Don Tumminelli, manager of validation and testing for HIGHPOWER Validation Testing & Lab Services Inc., a Rochester, N.Y.-based provider of verification, validation, and testing services for the medical device industry. “The bottom line for medical device manufacturers is that the FDA wants to see an increase in biocompatibility testing of new devices, which may include new composite materials, 3D printing, etc. Device manufacturers should be familiar with this guidance and the ISO 10993-1 standard and be prepared for more scrutiny from the FDA on these types of tests.”
The FDA clearly outlines its expectations in its newest guidance for ISO 10993, providing details on risk-based biocompatibility approaches, chemical assessments, and biocompatibility test article preparations. The agency’s recommendations for risk-based biocompatibility evaluation are drawn from elements of ISO 10993-1:2009.
“There were a lot of changes in the [ISO 10993] guidance. The biggest change, though, is the FDA finally recognized that toxicology and chemistry is okay, and is requesting a risk-based approach to the evaluation,” Nelson Labs’ Bolinder said. “Those are the key changes that caused industry to rethink how they approach [biocompatibility] testing. Previously it was just ‘check off a box and go on your merry way.’ Now, FDA wants you to stop and think about why you picked the test that you picked, justify those tests—why they are clinically relevant to that device—and from there, get the tests completed. It may not be sufficient to do just the traditional animal and in-vitro assessments; you may have to go one step further with chemistry and then apply your toxicological review and understand how going one step further with chemistry might change completely how you see a material or you see a finished product where the animal or bench study would not have picked it up. Then you have to determine the level of safety you are willing to accept. The approach is a change in mindset for the FDA—the risk-based approach with written justification for both test selection and interpretation of data. That’s a big deal.”
But not the only big (testing) deal to come from the agency last year. In early February, the FDA updated its guidance on human factors requirements, releasing two drafts and one final document in an effort to improve medtech usability. The documents detail ways manufacturers can incorporate human factors (HF) engineering into medical device development to both increase patient and user safety and minimize the potential for user error. They also clarify expectations for HF validation testing in premarket submissions.
The guidance advises medtech manufacturers to focus specifically on the user interface, which includes elements such as displays, controls, packaging, product labels, and instructions for use. The document also clarifies some previously vague key terms like “critical task,” defined in the new standards as “a user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user...” This definition clarifies the tasks considered critical to device development that must be included in the risk analysis process and subsequent HF validation testing.
Another significant language change in the guidance involves device safety and efficacy. Medtech manufacturers must now be able to deem their products safe and effective for use rather than “adequately” safe and effective, implying that data gathered during HF testing may have to meet higher standards of rigor and defensibility than previously required. The tweak in wording could prove difficult for companies used to proving “substantial equivalence,” in which a product merely has to be proven as safe and effective as other similar products already approved and sold on the market.
Perhaps one of the most important clarifications in the new human factors guidance is the specification of products that need HF data to support their premarket applications. A companion document released by the FDA provides manufacturers with a list of 16 high-priority devices that need either an HF report and data, or a detailed explanation for the lack of such data. The list includes devices like ablation machines, artificial pancreas systems, automated external defibrillators, implanted infusion pumps, insulin delivery systems, and ventilators. Devices not included on the list may still need HF data if “analysis of risk indicates that users performing tasks incorrectly or failing to perform tasks could result in serious harm.”
“In the last couple of years, regulatory bodies have been requiring clearer [testing] data. Years ago, manufacturers could send in their raw data showing the results of the tests they did,” Eddington said. “The FDA and other regulatory bodies now don’t want just raw data, they want manufacturers to interpret what they are doing, explain why they are doing it, and convince them the testing they conducted is safe. They want the rationale behind the tests being performed.”
Producing such justification can be challenging for medtech manufacturers without the resources or expertise to perform the required verification for their products. Partnering with contract testing service providers can fill that knowledge gap and help companies meet higher regulatory expectations for rigor and defensibility in validation testing, and broaden their design risk mitigation strategies.
The increasing complexity of medical device development is driving the need for more rigorous, reliable, and reproducible testing. Tackling test method validation early in the product development process can help manufacturers avoid costly delays in commercializing their devices.
“One of the things we recommend is designing for X-ray [inspection],” affirmed Gil Zweig, CEO of Glenbrook Technologies Inc., an X-ray imaging technology developer in Randolph, N.J. “If, for example, you’re making an injection-molded catheter hub with lumens, you would want to be able to see inside that catheter hub for the possible presence of blood-collecting voids. There are certain materials you can use to make the hub more radiopaque. It’s better to know that kind of information at the beginning of the product development process.”
Certainly, designing for testing is an important consideration when partnering with a contract testing service provider. Perhaps more critical, though, is the desire to establish a long-term relationship for the sake of patients.
“It is important to have a partner that takes the time to understand the customer’s unique needs, interests, and operational strategies,” observed Christopher Scott, vice president of medical device testing at Eurofins Medical Device Testing, an international laboratory services company which provides a broad range of testing capabilities to the medical device industry. “Companies will benefit from aligning themselves with a testing laboratory that is committed to maintaining a long-term relationship, not simply conducting a transactional test. When a true partnership is established between manufacturer and testing lab, both parties share the desire to celebrate the successful commercialization of a safe and effective medical device.”
- Risk assessment: Materials, processing of materials, and manufacturing methods should be included in this evaluation.
- Risk identification: Manufacturers should account for potential risks, including chemical toxicity, physical device characteristics, and processing parameters that could pose risks to patients and users.
- Available risk information: Manufacturers should determine whether existing information (literature, standards, clinical, and pre-clinical data) can adequately identify and mitigate risks posed by their devices in order to avoid additional testing.
- Submission: Manufacturers should submit their risk assessments at the beginning of the biocompatibility sections of their applications to the Center for Devices and Radiological Health. Clear connections between identified biocompatibility risks and data available to mitigate those risks should be made in these submissions. In addition, all biocompatibility testing and evaluation methods used to mitigate risks should be well-documented.
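The risk-based workflow outlined above can be sketched as a simple test-selection helper. The contact-duration categories, endpoint lists, and the `RiskAssessment` class below are illustrative placeholders only, not the actual ISO 10993-1 evaluation matrix, which must be consulted directly for any real submission:

```python
# Illustrative sketch of risk-based biocompatibility test selection.
# The endpoint mapping below is a simplified placeholder, NOT the
# actual ISO 10993-1 evaluation matrix.

from dataclasses import dataclass, field

# Baseline endpoints generally considered for patient-contacting devices.
BASELINE = ["cytotoxicity", "sensitization", "irritation"]

# Hypothetical additional endpoints keyed by contact-duration category.
EXTRA_BY_DURATION = {
    "limited":   [],                                   # <= 24 hours
    "prolonged": ["systemic toxicity"],                # 24 hours to 30 days
    "permanent": ["systemic toxicity", "genotoxicity",
                  "implantation"],                     # > 30 days
}

@dataclass
class RiskAssessment:
    device: str
    contact_duration: str                  # "limited" | "prolonged" | "permanent"
    existing_data: set = field(default_factory=set)   # endpoints already covered

    def endpoints_needed(self):
        """Endpoints still to be evaluated, minus those mitigated by
        existing information (literature, clinical, pre-clinical data)."""
        required = BASELINE + EXTRA_BY_DURATION[self.contact_duration]
        return [e for e in required if e not in self.existing_data]

ra = RiskAssessment("catheter hub", "prolonged",
                    existing_data={"cytotoxicity"})
print(ra.endpoints_needed())
# -> ['sensitization', 'irritation', 'systemic toxicity']
```

The point of the sketch is the guidance’s core idea: existing data can mitigate an identified risk and remove a test from the plan, but every endpoint dropped must be documented in the submission.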
“There were a lot of changes in the [ISO 10993] guidance. The biggest change, though, is the FDA finally recognized that toxicology and chemistry is okay, and is requesting a risk-based approach to the evaluation,” Nelson Labs’ Bolinder said. “Those are the key changes that caused industry to rethink how they approach [biocompatibility] testing. Previously it was just ‘check off a box and go on your merry way.’ Now, FDA wants you to stop and think about why you picked the test that you picked, justify those tests—why they are clinically relevant to that device—and from there, get the tests completed. It may not be sufficient to do just the traditional animal and in-vitro assessments; you may have to go one step further with chemistry and then apply your toxicological review and understand how going one step further with chemistry might change completely how you see a material or you see a finished product where the animal or bench study would not have picked it up. Then you have to determine the level of safety you are willing to accept. The approach is a change in mindset for the FDA—the risk-based approach with written justification for both test selection and interpretation of data. That’s a big deal.”
But not the only big (testing) deal to come from the agency last year. In early February, the FDA updated its guidance on human factors requirements, releasing two drafts and one final document in an effort to improve medtech usability. The standards detail ways manufacturers can incorporate human factors (HF) engineering into medical device development to both increase patient and user safety, and minimize the potential for user error. They also clarify expectations for HF validation testing in premarket submissions.
The guidance advises medtech manufacturers to focus specifically on the user interface, which includes elements such as displays, controls, packaging, product labels, and instructions for use. The document also clarifies some previously vague key terms like “critical task,” defined in the new standards as “a user task which, if performed incorrectly or not performed at all, would or could cause serious harm to the patient or user...” This definition clarifies the tasks considered critical to device development that must be included in the risk analysis process and subsequent HF validation testing.
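The critical-task screen described above amounts to a filter over the device’s use tasks: any task whose incorrect or omitted performance could cause serious harm must enter the risk analysis and HF validation testing. A minimal sketch, with invented task names and an assumed three-level harm scale:

```python
# Hedged sketch of critical-task screening per the guidance's definition:
# a task whose incorrect or omitted performance "would or could cause
# serious harm" is critical. Task names and the harm scale are examples.

from dataclasses import dataclass

@dataclass
class UseTask:
    name: str
    harm_if_failed: str   # "none" | "minor" | "serious"

def critical_tasks(tasks):
    """Return tasks that must be carried into risk analysis and
    subsequent HF validation testing."""
    return [t.name for t in tasks if t.harm_if_failed == "serious"]

tasks = [
    UseTask("open outer packaging", "none"),
    UseTask("program insulin bolus dose", "serious"),
    UseTask("replace battery door", "minor"),
    UseTask("confirm ablation energy setting", "serious"),
]
print(critical_tasks(tasks))
# -> ['program insulin bolus dose', 'confirm ablation energy setting']
```

In practice the severity judgment comes from the manufacturer’s risk analysis, not a fixed label; the sketch only shows how the definition partitions the task inventory.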
Another significant language change in the guidance involves device safety and efficacy. Medtech manufacturers must now be able to deem their products safe and effective for use rather than “adequately” safe and effective, implying that data gathered during HF testing may have to meet higher standards of rigor and defensibility than previously required. The tweak in wording could prove difficult for companies used to proving “substantial equivalence,” in which a product merely has to be proven as safe and effective as other similar products already approved and sold on the market.
Perhaps one of the most important clarifications in the new human factors guidance is the specification of products that need HF data to support their premarket applications. A companion document released by the FDA provides manufacturers with a list of 16 high-priority devices that need either an HF report and data, or a detailed explanation for the lack of such data. The list includes devices like ablation machines, artificial pancreas systems, automated external defibrillators, implanted infusion pumps, insulin delivery systems, and ventilators. Devices not included on the list may still need HF data if “analysis of risk indicates that users performing tasks incorrectly or failing to perform tasks could result in serious harm.”
“In the last couple of years, regulatory bodies have been requiring clearer [testing] data. Years ago, manufacturers could send in their raw data showing the results of the tests they did,” Eddington said. “The FDA and other regulatory bodies now don’t want just raw data, they want manufacturers to interpret what they are doing, explain why they are doing it, and convince them the testing they conducted is safe. They want the rationale behind the tests being performed.”
Producing such justification can be challenging for medtech manufacturers without the resources or expertise to perform the required verification for their products. Partnering with contract testing service providers can fill that knowledge gap and help companies meet higher regulatory expectations for rigor and defensibility in validation testing, and broaden their design risk mitigation strategies.
The increasing complexity of medical device development is driving the need for more rigorous, reliable, and reproducible testing. Tackling test method validation early in the product development process can help manufacturers avoid costly delays in commercializing their devices.
“One of the things we recommend is designing for X-ray [inspection],” affirmed Gil Zweig, CEO of Glenbrook Technologies Inc., an X-ray imaging technology developer in Randolph, N.J. “If, for example, you’re making an injection-molded catheter hub with lumens, you would want to be able to see inside that catheter hub for the possible presence of blood-collecting voids. There are certain materials you can use to make the hub more radiopaque. It’s better to know that kind of information at the beginning of the product development process.”
Certainly, designing for testing is an important attribute to consider when partnering with a contract testing service provider. Perhaps more critical, though, is the desire to establish a long-term relationship for the sake of patients.
“It is important to have a partner that takes the time to understand the customer’s unique needs, interests, and operational strategies,” observed Christopher Scott, vice president of medical device testing at Eurofins Medical Device Testing, an international laboratory services company that provides a broad range of testing capabilities to the medical device industry. “Companies will benefit from aligning themselves with a testing laboratory that is committed to maintaining a long-term relationship, not simply conducting a transactional test. When a true partnership is established between manufacturer and testing lab, both parties share the desire to celebrate the successful commercialization of a safe and effective medical device.”