Seth Goldenberg and Chris Knerr, Veeva MedTech | 04.04.24
There’s no shortage of hype or promise regarding artificial intelligence (AI) and machine learning (ML), and this is especially true in the medtech industry. As of October 2023, the U.S. Food and Drug Administration (FDA) had approved over 700 AI/ML-enabled medical devices.1 All of these approvals are for products labeled as software as a medical device (SaMD),2 which follow a distinct regulatory process from software in a medical device.
The clinical care benefits of AI in medtech are clear, but its operational benefits are far less established. An important first step toward an AI strategy that delivers value is understanding how the technology is applied in clinical and operational settings.
None of the 700 FDA-approved AI- and ML-enabled devices use generative AI or large language models (LLMs). Instead, SaMD devices rely primarily on ML, a technology that uses large-scale statistical prediction and pattern recognition algorithms to process data sets and identify trends. This pattern is evident in the prevalence of radiology SaMDs on the market: radiology has a vast corpus of images with relatively high consistency, which lends itself to the analytical pattern recognition at which ML excels.
On the clinical care side, AI and ML have the potential to transform the industry by deriving novel insights from the troves of data generated during healthcare delivery. Global healthcare technology leader Medtronic uses AI and ML to find polyps during colonoscopies, process surgical videos, and improve the accuracy of atrial fibrillation detection.
Even with AI’s success in clinical care, many medtech companies haven’t mastered leveraging it to improve internal processes. Device and diagnostic manufacturers often struggle to manage vast quantities of enterprise and third-party operational data across disparate systems.
Some software vendors position AI applications as cure-alls, but many organizations lack the foundation of structured data and documents needed to derive valuable insights from the technology. Amid these mixed messages, the following four questions can help determine AI and ML readiness and evaluate technologies.
1. How Can the Technology Fit Potential Use Cases?
Since each functional area has different opportunities and pain points, an internal assessment should distinguish between clinical and operational use cases. Teams that build patient-facing software are organized separately from the CIO’s office, with their own tech stacks, tools, and clinical use case priorities driven by therapeutic-area-specific R&D funnels. These use cases have a clear regulatory approval path, with a growing set of SaMDs already on the market.3

Operational use cases are usually broader. Despite medtech companies’ continued investments in data lakes, they frequently struggle with data availability, interoperability, and quality. It’s equally important to evaluate the risk level of different applications. For example, given the current state of LLMs, which are known to produce “hallucinations,”4 it’s not appropriate to use them to create regulated content.
More fundamental (and straightforward) AI and ML models focused on pattern recognition can deliver a reasonable risk balance for document classification and metadata creation, where an error amounts to a false positive or a false negative. These models, trained on high-quality, consistent data, can often produce results at more than a 95% confidence level. We can then choose sampling and verification protocols appropriate to the task’s risk level.
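To make this concrete, here is a minimal sketch of such a classify-then-verify workflow, assuming Python and scikit-learn. The toy corpus, model choice, 95% threshold, and QC sampling rate are illustrative assumptions, not a prescribed configuration.

```python
# Minimal sketch: document classification with a confidence threshold
# and a sampling protocol for human verification. The corpus, model,
# and thresholds below are hypothetical.
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus; a real system would train on thousands of
# consistently labeled documents.
docs = [
    "complaint regarding device battery failure",
    "design history file revision for pump assembly",
    "patient adverse event narrative after implant",
    "routine preventive maintenance record",
]
labels = ["complaint", "dhf", "adverse_event", "maintenance"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(docs, labels)

CONFIDENCE_THRESHOLD = 0.95  # accept automatic classification above this
QC_SAMPLE_RATE = 0.05        # fraction of accepted items spot-checked

def classify(doc: str) -> tuple[str, float, str]:
    """Return (predicted label, confidence, routing action)."""
    probs = model.predict_proba([doc])[0]
    best = probs.argmax()
    label, confidence = model.classes_[best], probs[best]
    if confidence < CONFIDENCE_THRESHOLD:
        # An error here is a false positive or false negative, so
        # low-confidence items go to a person instead of being auto-filed.
        return label, confidence, "route_to_human_review"
    # Even accepted predictions are sampled for verification.
    action = "qc_sample" if random.random() < QC_SAMPLE_RATE else "accept"
    return label, confidence, action

print(classify("battery complaint received from field service"))
```

The design choice is that automation never silently absorbs uncertainty: low-confidence items go to a person, and even accepted items are sampled at a rate matched to the task’s risk.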
2. Do We Have High-Quality Data Available to Train AI Models?
AI requires data at significant volumes to obtain insights or automate tasks. Medtech companies typically have the necessary quantities of clinical data to train AI models in domains like radiology abnormality detection.

In other domains, like electrophysiology, data can be more complex or require synthesis from various sources to produce useful or novel outputs. This is compounded by statutory and provider requirements for handling patient data and protected health information (PHI). A clear strategy to clean, aggregate, harmonize, and deidentify clinical patient data, including data from multiple sources, is crucial for the downstream analytics and modeling that drive better clinical outcomes.
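As a rough illustration, a clean-harmonize-deidentify step might look like the following sketch. The field names, salt handling, and hashing approach are hypothetical; a real PHI pipeline must follow statutory requirements and a validated deidentification method.

```python
# Minimal sketch: harmonizing and deidentifying records from two
# sources into one schema. All field names are illustrative.
import hashlib

def harmonize(record: dict, source: str, salt: str) -> dict:
    # Replace the direct identifier with a salted one-way hash so records
    # can still be linked across sources without exposing identity.
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()
    return {
        "patient_token": token,
        "source": source,
        # Map source-specific field names onto one canonical schema.
        "heart_rate": record.get("hr", record.get("heart_rate")),
        "recorded_at": record.get("timestamp", record.get("recorded_at")),
    }

ehr_row = {"patient_id": "12345", "hr": 72, "timestamp": "2024-01-15T08:00"}
device_row = {"patient_id": "12345", "heart_rate": 75,
              "recorded_at": "2024-01-15T08:05"}

pool = [harmonize(ehr_row, "ehr", salt="rotate-me"),
        harmonize(device_row, "device", salt="rotate-me")]
# Both rows now share one patient_token and one schema, with no direct
# identifiers, ready for downstream analytics.
```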
A data strategy to centralize and govern enterprise operational data in a single source of truth provides a foundation for gaining value from AI applications. Such a strategy can help streamline processes, improve quality, and foster collaboration across siloed teams. Without a consistent data model and governance in place, companies will find it very challenging to leverage operational data.
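As a simple illustration, governance can be enforced at the point of entry: records are validated against a consistent schema and controlled vocabulary before they land in the single source of truth. The fields and vocabulary in this sketch are hypothetical examples of governance rules.

```python
# Minimal sketch: a consistent data model enforced on ingestion.
from dataclasses import dataclass

APPROVED_DOC_TYPES = {"complaint", "capa", "dhf", "regulatory_submission"}

@dataclass(frozen=True)
class GovernedRecord:
    record_id: str
    doc_type: str
    owning_team: str
    region: str

    def __post_init__(self):
        # Reject records that don't conform to the governed vocabulary,
        # rather than letting inconsistent values accumulate silently.
        if self.doc_type not in APPROVED_DOC_TYPES:
            raise ValueError(f"doc_type {self.doc_type!r} is not in the "
                             "controlled vocabulary")

rec = GovernedRecord("QMS-0001", "capa", "quality", "EMEA")  # passes
# GovernedRecord("QMS-0002", "misc", "quality", "EMEA")  # would raise
```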
After laying this groundwork, medtech companies can use AI to optimize tasks and workflows that are highly inefficient today. However, typical data variability across most organizations will be a barrier to getting there.
3. What Risks Does AI Introduce?
Adopting new technologies always carries risk, and AI is no exception. These risks are often poorly understood because of AI technology’s complexity and marketing hype. On the clinical side, AI poses new risks such as re-identifying individuals and increasing exposure to data breaches.

The ML component of AI “trains” on data sets, so it’s susceptible to inaccurate outputs caused by data sparsity or overrepresentation: a model amplifies any statistical bias inherent in the data set it trains on. Misrepresentative racial and gender findings caused by the lack of healthcare data on women and minority populations are one example of how this can challenge medtech. ML is generally good at pattern recognition where it has large volumes of data; it’s notoriously bad at understanding rare or “long tail” scenarios.
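One practical mitigation is to audit representation before any model is trained. The following sketch, with hypothetical column names and an arbitrary 10% flag threshold, shows the idea: surface the sparsity the model would otherwise amplify.

```python
# Minimal sketch: auditing a training set for under-representation.
from collections import Counter

def representation_report(records: list[dict], keys=("sex", "race")) -> None:
    total = len(records)
    for key in keys:
        counts = Counter(r[key] for r in records)
        for value, n in counts.most_common():
            share = n / total
            flag = "  <-- under-represented" if share < 0.10 else ""
            print(f"{key}={value}: {n}/{total} ({share:.1%}){flag}")

training_set = (
    [{"sex": "M", "race": "white"}] * 70
    + [{"sex": "F", "race": "white"}] * 22
    + [{"sex": "M", "race": "black"}] * 5
    + [{"sex": "F", "race": "black"}] * 3
)
representation_report(training_set)
# A model trained on this set sees few examples of some groups, so its
# error rates for those groups deserve separate evaluation.
```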
Proofs of concept (POCs) are a common, proven way to test AI value propositions and identify the most promising use cases to develop further. Industrializing these POCs into compliant, production-ready applications, however, requires a different approach and skill set, and this is the point in the process where scaling risks for operational AI appear.
Risk can also increase in complaint handling. AI can automate repetitive tasks to free up skilled clinical complaint staff for higher-value work, but an adverse event misclassified by the algorithm can be catastrophic.
This type of risk is material, given that rare adverse events fall into the long tail that AI and ML models have trouble classifying and predicting. The same data sparsity and overrepresentation risk arises if models train only on “critical” or “well-processed” documents; that kind of selection produces a training set unrepresentative of the actual data the model must process once implemented.
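This asymmetry suggests routing logic that errs heavily toward escalation. The sketch below is illustrative only; the thresholds are hypothetical, and any real triage model would need validation against the long tail it is meant to catch.

```python
# Minimal sketch: asymmetric routing for complaint triage. A missed
# adverse event is far costlier than an unnecessary human review, so
# the escalation threshold is deliberately very low (hypothetical).
def route_complaint(p_adverse_event: float) -> str:
    """Route a complaint given the model's adverse-event probability."""
    if p_adverse_event >= 0.01:
        # Even a 1% model-estimated chance of an adverse event goes
        # to clinical staff.
        return "escalate_to_clinical_staff"
    # Routine items are automated but still subject to QC sampling,
    # so long-tail misses can be detected over time.
    return "auto_process_with_qc_sampling"

for p in (0.90, 0.02, 0.001):
    print(p, "->", route_complaint(p))
```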
4. How Can We Use AI and ML to Spark Further Innovation?
The current state of AI is a starting point, not a magic wand. Medtech companies should be cautiously optimistic about incorporating AI into their strategy. We’re still in the early stages, and these areas will grow significantly.

The FDA has expressed its goal to provide the “least burdensome approach to support iterative improvement through modifications to an AI and ML-enabled device while continuing to provide a reasonable assurance of device safety and effectiveness.”
Together with Health Canada and the U.K.’s Medicines and Healthcare products Regulatory Agency (MHRA), the FDA has defined 10 guiding principles for good machine learning practice. The agencies have also developed plans to align regulatory processes with a streamlined change management approach for AI devices5 that allows software to evolve and “learn” within predefined boundaries, maximizing patient benefits while containing change control risks.
The guidance aims to foster engagement and collaboration, providing a starting point for medtech companies looking to use AI for clinical purposes. The industry is making positive progress with AI and ML, but more work lies ahead. The advancements AI and ML enable can benefit patients through earlier access to innovative technologies, more accurate diagnoses, and real-time device monitoring.
Establishing a Strategy for AI
The path to adopting AI and ML with appropriate, compliant controls can be complex. Start by establishing sound fundamentals: a technology platform strategy, trusted data governance, a fitting use case supported by technology, and clear business value propositions. These give medtech IT and operational leaders an excellent foundation for capturing the growth and cost efficiency AI can support, all while driving digital automation and improving teams’ effectiveness.

In the end, tapping into the potential of clinical care AI and operational AI will unlock significant value and innovation for medtech companies. Before jumping into the deep end, it’s critical to understand the use cases and risks associated with these applications. With answers to these four questions, medtech companies can adopt AI for new ways of working that drive operational excellence.
References
Seth Goldenberg is responsible for the strategic direction of Veeva MedTech. He works across strategy, sales, marketing, services, and product to ensure customer success across clinical, quality, regulatory, and commercial functions in the medical device and diagnostic industry. Seth has 20+ years of experience helping medtech companies navigate complex regulations and improve market access. He holds a doctorate in pharmacology from the University of Washington and a master’s in biomedical engineering from Drexel University.
Chris Knerr is responsible for helping Veeva MedTech’s chief information officer and IT leadership customers maximize the value of their digital transformation strategies and roadmaps. Chris is a seasoned medtech executive and industry leader, both as a practitioner and an entrepreneur, whose experience spans leading mega-programs at Johnson & Johnson, serving as chief digital officer for a private equity portfolio firm, and co-founding and leading an artificial intelligence/machine learning tech startup. Chris earned a BA in philosophy from Columbia University and an MBA from Cornell University, where he has appeared as a periodic guest lecturer on digital transformation, analytics strategy, and “real-world project management.”