AI’s Transformative Impact on Healthcare in 2026
I have to admit to an instinctive gut reaction of skepticism when Kimberly Powell of NVIDIA on Monday said that “AI is a once-in-a-lifetime shift for the healthcare industry.” Intellectually, I am excited by AI and its possibilities, but I know well that healthcare’s slow-and-steady approach and rigid workflows can chew up technology gains, and I know the harsh realities that emerge when technology meets a highly regulated industry.
We have seen over the years multiple pronouncements that some new program, innovation or technology would change everything – remember the initial hoopla around electronic medical records, retail clinics in pharmacies, or health system apps? But by Wednesday at the 44th Annual J.P. Morgan Healthcare Conference, I admit that I have become a true believer. Why is that? For my answer, it’s the old lightbulb joke about the psychiatrist who was asked how long it would take to change a light bulb. His response – does the light bulb want to change?
The answer that has been evident in presentation after presentation at the conference is “yes, the industry wants to and is ready to change.” The combination of financial, political, and regulatory changes has made it clear that doing the same thing is not tenable in the long term, and certain industry leaders have made one big change that is different this time around – dramatically picking up the pace of change.
Sarah Murray, Chief Health AI Officer at UCSF Health, on a morning panel Wednesday with Nate Gross, M.D. (who leads OpenAI’s Health business) and others convinced me with this one statement:
“We’ve changed what a pilot looks like. We most recently deployed an inpatient stabilization pilot enterprise-wide. The idea was to turn it on across the entire enterprise and if it was causing harm we could turn it off. This is different from two years ago, when we would do a clinic-by-clinic pilot. Our queue has 90 tools in it right now, not including deploying [a secure, HIPAA compliant] ChatGPT Enterprise to everyone at UCSF.”
I have spent years warning against the healthcare industry’s “aviation disease,” as I called it, with pilot programs flying everywhere for long periods of time but very few of them landing successfully. UCSF’s willingness to do larger-scale, enterprise-wide pilots brings not only the likelihood of quicker implementation, but also the opportunity for real-time improvement and more willingness to “fail forward,” the Silicon Valley expression that suggests that businesses quickly figure out what works and what doesn’t, learn lessons rapidly, and try again without wasting time.
We have a large number of leading health systems that have adopted ambient listening at scale, which is just the tip of the iceberg. Many are now moving on to adjacent activities, such as pre-charting, prior authorizations, coding, clinical decision-making, and referrals/ordering. Companies like AIDoc (clinical AI) and Abridge (ambient listening) each have more than two hundred hospitals and health systems as clients. AI-driven revenue cycle management is accelerating and was discussed in yesterday’s blog post.
We also heard over the last three days of the conference a lot of discussion about whether humans will be replaced by AI. The answer is that it depends on what you have been doing. If you have been a biller or coder, then AI may be seen as a threat, but we have heard repeatedly that the best results come from collaboration between AI and humans. AI interpretation of imaging was thought to sound the death knell for radiologists, but two separate conference speakers agreed that the need for radiologists is growing – especially for those who have skills in working with AI. And that’s the key, as shared by Sarah Murray: “AI will not replace you, but you could be replaced by someone who uses AI.”
Can AI Increase TAM?
In addition to the pace of change, the opportunities across use cases left me salivating.
Dave Wessinger, CEO of PointClickCare — which provides the software platform used by many skilled nursing facilities (SNFs) in the U.S. — spoke about incorporating AI over the last two years into their core product: “We’ve done this [software work] for thirty years and added tremendous value. The value we will deliver in the next two years will be more than in the entire last thirty years.” Wessinger described how their AI products take the 150-page paper manual on how to screen patients for the crucial “yes or no” SNF admission decision – “do we want this patient, and will this patient be a risk or be profitable?” – and automate it, allowing for quick and effective screening that can drive better and higher occupancy, while de-risking SNF owners’ litigation exposure by flagging issues such as whether you are admitting a sex offender or a habitual plaintiff to your facility. AI also helps with staffing decisions and care processes, resulting in more effective care, more efficient staffing, and greater retention. The inclusion of AI may double or triple PointClickCare’s total addressable market (TAM), unlocking savings and revenue never before realized.
Eron Kelly, formerly CEO of Inovalon after stints at leading technology companies, is now helming ConcertAI. The company has over 2,000 clients and has transformed into an agentic AI and real-world data company with one of the largest oncology datasets. ConcertAI stressed the importance of an AI platform that can access, manage, and transform both first-party data the company develops and third-party data from other sources. The sharing, integration, and use of large datasets can fuel value creation. In another example, Virta’s CEO touted that they have the largest dataset of human biomarkers for metabolic disease.
Speaking to this point, Gene Woods, CEO of Advocate Health, contextualized large datasets for the conference audience. He shared that Advocate has 90 terabytes of data from its operations in six states and explained that this effectively means taking all the books ever written and multiplying them by three – that gets you to 90 terabytes. It’s a lot of data – and it was featured as one of Advocate’s strengths as it moved into a single systemwide instance of Epic, a trend we have been seeing more and more with health systems that wish to really be able to access their data, coordinate care, and improve patient care and access.
Multimodal Data – The Abundance Revolution
The other interesting opportunity is the ability to make meaning out of multimodal data. Multimodal data means data of different types and from different sources. Think about the difference between data from lab tests, imaging, genomic testing, immune system histories, electronic health records data, data from remote patient monitoring, prescription orders and fulfillment history, wearables and connected devices like digital scales, and even video, social media, or search histories. Multimodal data, if used correctly, can strengthen the physician’s inference as to what is happening with the patient and aid in spotting weak signals before a crisis erupts.
We have moved in recent years from a scarcity of data – what do we know about this patient, and why can’t we access it when we need it – to an abundance of data. That is also driving a move from low to high resolution: the ability to really see disease progression in detail at multiple levels of scale, connect the dots to the patient’s overall health, and tie together their diagnostic testing and treatment regimens with data previously thought unrelated, such as the environmental history of their home or neighborhood (for example, the reported clusters of Parkinson’s disease found around underground TCE plumes).
With such rich multimodal data, how do we make meaning out of it? The answer, of course, is to apply AI, while still complying with healthcare regulatory limitations for privacy and other concerns. One of those concerns is intellectual property rights, the negotiations over which can be complex. Add to that the need for a data sharing agreement, and negotiations can take months or years, creating impediments to realizing the benefits of AI when applied to data at scale. And if data is not at scale or current, it may not be helpful, as there may not be enough incidence in the population reflected in the dataset for an AI model to appropriately find patterns and produce useful insights. Models that don’t return relevant information or aren’t updated regularly quickly lose the trust and loyalty of their intended user population, who learn to look elsewhere for higher-quality or more relevant results. (That’s how Google won the search engine wars: by providing better search results more consistently, fitting the taste and judgment of its users.)
A Federated AI Learning Model to Address Privacy, Security and Data Rights Issues
An interesting approach to solving these complex data sharing and scale issues is CAIA, the Cancer AI Alliance platform pioneered by the Fred Hutch Cancer Center, Dana-Farber Cancer Institute, Memorial Sloan Kettering Cancer Center, and the Johns Hopkins Sidney Kimmel Cancer Center (www.canceralliance.ai). Oncology-focused, the initiative uses a federated AI learning approach in which researchers train AI models but patient data never leaves the cancer centers. They leverage a common data architecture without compromising security or privacy. Too often these days, in almost any domain of medicine, critical insights are siloed and hidden in vast disconnected datasets. But it would be a heavy lift to share those datasets with another party, or even to keep them updated and current if combined.
Under the federated deep learning model, each enterprise participant receives a global model and trains it within its own enterprise, using its own data locally. Getting from a generalized global model to a specialized model requires training with a large number of cases that sufficiently represent the clinical environment in which the model will be used. And that is hard within the scope of a single institution, no matter the quality of the institution.
Once the local enterprise has trained its local model for a couple of iterations, the local parameters (but not the private data set) are shared back to a centralized orchestration layer, an aggregator node that acts as a “reasoning machine.” It aggregates the parameters of the several local models to produce a new global model, which is then sent back to the participating institutions for additional rounds of training and aggregation until the desired accuracy and parameters for the global model are met. This approach means that each enterprise has its own model, while also benefitting from the other models that were trained on large, diverse data sets, thereby increasing the utility and accuracy of the global model. So, this approach has great upside but limited downside, as data remains private and secure, with clear intellectual property rights. It also makes it possible to exit a federated model if desired, as the withdrawing institution still would have its trained model with the benefits of the aggregated federated learning process, allowing the institution to continue its work independently.
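To make the round-trip concrete, here is a minimal sketch of federated averaging (FedAvg), the canonical algorithm behind this style of federated learning. The three “sites,” the toy linear model, and all numbers are hypothetical illustrations, not CAIA’s actual architecture; real deployments add secure aggregation, differential privacy, and far richer models.

```python
import numpy as np

def local_train(global_weights, X, y, lr=0.1, epochs=5):
    """One institution trains locally; raw patient data never leaves the site.
    Toy linear model fit by gradient descent on the local data."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w  # only the parameters are shared back, never X or y

def aggregate(local_weights, sample_counts):
    """Central aggregator node: weighted average of local parameters,
    so institutions with more cases contribute proportionally more."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the signal all sites partially observe

# Three hypothetical cancer centers, each holding a private local dataset
sites = []
for n in (200, 150, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    sites.append((X, y))

# Repeated rounds: broadcast global model, train locally, aggregate
global_w = np.zeros(2)
for _ in range(10):
    local_ws = [local_train(global_w, X, y) for X, y in sites]
    global_w = aggregate(local_ws, [len(y) for _, y in sites])

print(np.round(global_w, 2))  # converges toward the shared signal
```

The design choice to weight by sample count mirrors the original FedAvg formulation; in practice the number of local epochs and rounds trades off communication cost against how far each local model drifts from the global one.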
Clearly, this is very exciting as a modality to accelerate and leapfrog barriers in oncology research and treatment planning, as effectively trained models connect the dots across independent sources of data at various institutions, or even within a single institution. If a health system for example is on multiple platforms internally and integration would be expensive or time-consuming, this federated learning process may allow an effective workaround through the disparate training of more effective, specialized AI algorithms that then could be consolidated into a unified global model, while ongoing local post-training can keep models current and robust.
This type of work also allows for the combination and use of multimodal data to build, as a data object, a patient’s entire cancer journey, ultimately letting researchers, physicians, and care navigators better understand what patients with certain attributes may expect as they proceed in their personalized care journey. Right now, it is hard to provide more personalized predictions for patients, as the level of attribution of personalized data within an end-to-end journey that incorporates multimodal data (genomics, testing, physician visits, etc.) has been very limited. The goal is to move from “well, generally, patients with this type of cancer experience….” to “patients with the following characteristics that match yours, and who have preceded you on this health journey, found the following effects and results with….” That level of granularity (that “resolution” in the data) allows for much more effective patient communications and expectations and enhances trust and alignment.
Thyme Care, an oncology navigation company that has created a large, 1,400-provider oncology network, currently supports over 90,000 active cancer patients. It helps patients and health plans reduce their total cost of care while increasing quality and outcomes by addressing all aspects of patient needs through a personalized care plan, intelligent AI-driven monitoring with prioritization, and playbooked interventions that are deployed at the right time. With the average oncology patient incurring an expense of $75,000 per year, companies like Thyme Care can guarantee a specific reduction in expense by working on closing gaps, guiding interventions, and providing support to keep patients out of hospital emergency rooms or inpatient beds. Thyme Care uses AI for its orchestration layer, bringing together multimodal data to create analytics, predictions, and timely interventions. What are the other opportunities for a federated AI learning model outside of oncology?
The Next Frontier: Brain Diseases and Neuroscience
One area that was discussed broadly by conference participants was the increasing importance of neuroscience as a line of business for hospitals and health systems and the benefits of effective AI in prevention, identification of issues such as stroke or aneurysms, and improved treatments and outcomes.
Advocate highlighted neuroscience, noting that their system is doing about 25,000 neurosurgeries per year and is applying AI to imaging to reduce harm from strokes. The Cleveland Clinic is building a new 1-million-square-foot neurological institute hospital, because it believes that the next frontier in healthcare is brain disease. Multiple other leading health systems in Boston and Chicago also stressed the key role of neuroscience specialties in the future business model of health systems. This is all the more so because of the movement of first musculoskeletal/ortho and then cardiac procedures out of the hospital operating room and into ambulatory surgery centers.
Health systems need to develop a new driver of significant margin, and an industry sector that includes Alzheimer’s, LATE, stroke, Parkinson’s, ALS, migraine, and other brain diseases can be a fertile ground for innovation and margin growth through research, new life sciences driven treatments and AI insights and innovation.
Ambient Listening
Abridge CEO Dr. Shiv Rao said two things that caught my attention in his J.P. Morgan presentation. First, he said that there is a supply and demand mismatch in healthcare today. That is clearly the case on the demand side: patients are sicker today than in the past, yet they also are living longer and expecting more care. On the supply side, we’ve all heard the multiple statistics about the shortfall in primary care physicians, child and adult psychiatrists, anesthesiologists, and other specialties. We have not increased the supply of physicians, either through a much larger increase in residency and fellowship training positions or through adjusting compensation incentives to drive more would-be physicians to train in needed specialty areas.
So, Dr. Rao is correct that there is a supply and demand mismatch, with the prospect of AI (and specifically agentic and generative AI) potentially serving as the glue that will hold the healthcare system together for a while, allowing better support of existing physicians and increased productivity through reduced administrative burdens, while also potentially addressing patients’ need for education, guidance, and agency. Another helpful approach, voiced by Pete McCanna of Baylor Scott & White, is taking the traditional supply-driven approach in healthcare and making it into a demand-driven approach, where the healthcare system reorganizes around what its customer needs and wants, thereby using precious assets more effectively.
The other interesting characterization by Dr. Rao was that healthcare is about conversations. Conversations that occur between all parts of the healthcare system and those whom it serves carry important information and takeaways that need to be captured. Ambient listening is doing a good job of capturing, sorting, and transmitting information, while saving its users from some or most of the endless keyboarding, late-night charting and pre-charting, and care and feeding of the electronic health record. There was not a single dissatisfied comment from the podium about ambient listening in any of the conference presentations I attended. In fact, there was strong excitement about what’s next. According to Abridge, ambient listening companies will be tackling pre-charting, prior authorizations, patient-facing summaries, and adjacent activities. We also can expect to see a virtuous cycle in accuracy and insight in the underlying AI systems being used for ambient listening. This is because, per Abridge, the AI model can benefit from all user edits to its initial work product, learning how to frame information, strengthening connections between language elements, and expanding contextual capabilities.
I had to laugh, though, when during this wonderful presentation about AI, data, and language processing, Dr. Rao said that he has learned that when the anecdotes and the data don’t match, he should believe the anecdotes. Going back to the Anthropic approach (and our discussion of “taste” in yesterday’s blog post), this is about as good a story as I can think of to show the value of human judgment in discerning what feels right. And it underscores again the value of collaboration between AI and humans, rather than the replacement of humans.
