The Promise and Perils of AI in Healthcare


The advent of AI chatbots such as ChatGPT, Bing, Bard and Jasper is intensifying the already keen focus on Artificial Intelligence (AI) and Machine Learning (ML) and their wide-ranging implications for every aspect of modern society.

That’s certainly true in healthcare, where the ramifications of AI-powered chatbots alone are profound. One could imagine clinicians using ChatGPT to document and synthesize interactions with patients, generating chart notes for delivery into an Electronic Medical Records (EMR) system; or as a tool to improve early diagnosis, scanning the entire corpus of medical research and historical cases to generate possible diagnoses.

The implications for AI and ML overall in healthcare are even more profound, because the amount of data in healthcare is not only massive but increasing at an exponential rate.

In the medical device space, the ability to process that data and identify anomalies has proven to be incredibly beneficial in radiology, which generates vast amounts of imaging data. Similarly, AI/ML could be used with infusion pumps, which also generate an enormous amount of data; algorithms could be used to determine optimal flow rates for certain patient populations, or to evaluate and improve error detection, or to predict potential occlusions.

In fact, any device that generates large amounts of data, such as a Continuous Positive Airway Pressure (CPAP) machine used to treat sleep apnea, is an ideal candidate for AI/ML technology.

In the pharmaceutical world, AI/ML offers benefits in multiple categories, starting with patient identification. That’s not as simple as saying, “Hey, this person has this particular diagnosis code, they might be a candidate for drug X.”

Consider patients with uncontrolled high cholesterol, who must meet several criteria before becoming eligible for a biologic treatment. Using EMR data, AI/ML could identify patients who meet those criteria, for example flagging all patients with six months of high LDL measurements who also failed statin treatment and have a family history of cardiovascular conditions. Moreover, that selection can improve over time as the algorithm learns from and adjusts to its past predictions.
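The rule-based flagging described above can be sketched in a few lines. This is a minimal illustration only: the record fields, the LDL threshold, and the eligibility rules are hypothetical stand-ins, not a real EMR schema or clinical criteria.

```python
from dataclasses import dataclass

# Hypothetical patient record; field names are illustrative, not a real EMR schema.
@dataclass
class Patient:
    ldl_readings_mg_dl: list[float]  # monthly LDL measurements, most recent last
    statin_failed: bool              # LDL stayed high despite statin therapy
    family_history_cvd: bool

def is_biologic_candidate(p: Patient, ldl_threshold: float = 160.0) -> bool:
    """Flag patients meeting all three illustrative criteria: six consecutive
    months of high LDL, failed statin treatment, and a family history of
    cardiovascular disease. The threshold value is an assumption."""
    recent = p.ldl_readings_mg_dl[-6:]
    six_months_high = len(recent) == 6 and all(r >= ldl_threshold for r in recent)
    return six_months_high and p.statin_failed and p.family_history_cvd
```

In practice, a learned model would replace or refine these hand-written rules, which is precisely where the adaptive behavior discussed below comes in.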

For example, immunotherapy can be very beneficial for patients with non-small cell lung cancer, but the therapy is effective for only a small subset of them. AI/ML can analyze genomic, molecular and imaging biomarkers to identify the specific signatures of patients who would benefit from immunotherapy. These analyses must be performed on large amounts of data, which makes them very challenging for clinicians to do manually. Algorithms that help clinicians identify the patient groups most likely to benefit from a particular immunotherapy would make therapy utilization more precise, delivering better outcomes for those patients.

Then there’s drug discovery, where the tech is being used to analyze drug candidates and identify possible research targets. Or diagnosis and monitoring, where AI/ML could recommend appropriate lab and imaging tests to assist with a diagnosis based on medical records and doctor notes, then help monitor the patient’s condition by analyzing data from home-based medical devices once they’ve returned home.

But the technology’s very nature poses a challenge when it comes to regulation. With a traditional medical device, the clearance process involves providing a set of inputs, and showing a consistent, expected set of outputs. However, ML algorithms are designed to learn as they go, adjusting on the fly not only the output, but the behavior of the algorithm itself. That means that the two main regulatory criteria – safety and efficacy – could also change (for better or worse) as the algorithm learns, raising the question: How do you characterize and understand those potential changes, which could impact the safety and efficacy of the actual software medical device?

For example, closed-loop insulin management systems employ an algorithm that uses the patient’s weight, blood glucose level, and food intake to calculate insulin dosage. Powered by AI/ML, these systems not only evaluate those data, but also adjust the calculation based on the individual patient’s data and, potentially, data from the broader patient population. If that evaluation reveals that previous recommendations resulted in, say, less than 90% time-in-range, the algorithm can continuously adjust to push that time-in-range higher.

So how do you make sure that this algorithm adjusts in a way that remains safe and effective, and doesn’t, for example, result in a higher probability of hypoglycemia or hyperglycemia?

Understanding the boundaries of those changes, the space in which the algorithm can evolve, is critical to ensuring safety and efficacy. In the closed-loop insulin management example, the guardrails would establish the range of insulin dose recommendations and blood glucose levels within which the algorithm would operate, preventing either from becoming too high or too low.
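The guardrail idea can be made concrete with a toy adjustment step: the algorithm may nudge its recommendation, but the output is always clamped to a predefined window. All numbers, the step size, and the adjustment rule here are hypothetical illustrations, not clinical guidance or any real system’s logic.

```python
# Toy sketch of a guardrailed dose-adjustment step. All values are
# illustrative assumptions, not clinical parameters.

DOSE_MIN_U, DOSE_MAX_U = 0.5, 10.0   # hypothetical guardrail bounds (insulin units)
TARGET_TIME_IN_RANGE = 0.90          # the 90% time-in-range goal from the text

def adjust_basal_dose(current_dose: float, time_in_range: float) -> float:
    """Nudge the dose when time-in-range falls short of target, but clamp
    the result so it can never leave the predefined guardrail window."""
    if time_in_range < TARGET_TIME_IN_RANGE:
        proposed = current_dose * 1.05   # small bounded step, never a jump
    else:
        proposed = current_dose          # at target: leave the dose unchanged
    # Guardrail: the learned adjustment can never escape the safe window.
    return min(max(proposed, DOSE_MIN_U), DOSE_MAX_U)
```

The key design point is that the clamp sits outside the adaptive logic: however the learning component evolves, its output is constrained to the space the regulator and manufacturer agreed on in advance.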

In April 2023, the FDA released a draft guidance to provide marketing submission recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions, and is welcoming comments from the industry until July 3, 2023.

A secure platform for compliant digital health solutions, like the BrightInsight Platform, can provide all the building blocks – the framework and capabilities – to support the deployment and execution of AI/ML algorithms. It’s designed to take in data from a variety of sources – devices, EMR systems and other medical records, for example – and run it through any algorithm. The platform runs in a controlled environment that ensures data security, privacy and regulatory compliance, enabling our customers to be assured of the boundaries within which their product will operate.

And because the BrightInsight Platform is modular, flexible and scalable, the algorithm can be easily and frequently updated as it learns and changes. The system also records an audit trail, to aid in understanding how the algorithm is evolving, and assessing whether further changes need to be made.

Biopharma and medical device companies looking to harness the power of AI/ML to improve patient outcomes need to treat the technology with thoughtfulness and care, establishing boundaries in advance to ensure that the evolution of their algorithms improves the true bottom line in healthcare: safe and effective care for patients.
