Assessing the Case for AI Regulations in Healthcare


The first day a Ventura (California)-based doctor used a new artificial intelligence (AI)–assisted tool to record patient conversations and update the electronic medical record was also the first day she made it home for dinner in a long time. The algorithm brought her to tears of relief.


That was the report Jesse Ehrenfeld, MD, president of the American Medical Association, heard in early January.

The healthcare industry is abuzz with anecdotes like this one. Doctors are finishing on time, seeing more patients, and spending more of each visit talking to patients — all thanks to AI. “Anything that allows us to turn our time and attention back to our patients is a gift,” Ehrenfeld told Medscape Medical News.

AI has the potential to do just that — to make medicine more efficient, affordable, accurate, and equitable. To a large degree, it’s already changing the practice of healthcare. As of October 2023, the US Food and Drug Administration (FDA) has authorized nearly 700 AI and machine learning–enabled medical devices. New companies continue to emerge, promising software that can revolutionize everything from billing and administration to diagnostics and drug discovery.

But no matter its potential, experts agree that AI can't have free rein. Without oversight, the benefits of AI in healthcare could easily be eclipsed by its harms. These algorithms — many of which have access to vast swaths of data and the ability to change and adapt on their own — must be kept in check. But who will build the necessary guardrails for this budding technology, and how those guardrails will be enforced — that's a question no one can answer yet.

The Risks: Medical Devices That Change

Currently, most of the algorithms approved by the FDA are “locked,” Lisa Dwyer, partner at King & Spalding and former senior policy advisor at the FDA, told Medscape Medical News. However, many upcoming algorithms are adaptive, adjusting their behavior as they continue to learn from new inputs.

“What do we do with FDA products that continue to change?” Dwyer posed. It’s a question that she got to ask FDA Commissioner Robert M. Califf directly in an interview in January.

In the interview, the Commissioner acknowledged that there are a lot of unknowns around adaptive AI, but that post-market assessment and reporting to the agency after deployment will be essential.

“However, that’s an enormous task and [requires] a lot of resources the FDA doesn’t necessarily have,” Dwyer said.

The Risks: Bias

AI is also as biased as the data used to train it. Policing algorithms that used historical arrest data to predict crime reinforced racial profiling. Google’s online ads showed high-paying job postings to men more often than to women. Computer-aided diagnosis systems have a lower accuracy for Black patients than for White patients.

“If we are not very intentional, two things will happen,” Ehrenfeld said. “One is we will make existing health inequities worse. And two, in certain circumstances, we will unintentionally and insidiously harm patients.”

To fend off dangerous bias, regulators will have to evaluate more than the algorithms themselves. They will have to consider how the AI is applied, “the settings and workflows [the AI] would be embedded in, and the people that would be affected,” according to Alison Callahan, PhD, a clinical data scientist on the Stanford Health Care Data Science team.

The team Callahan is part of simulates how different AI tools play out in specific healthcare systems. They test the efficacy of an algorithm in various use cases and look at outcomes for specific patient populations to see if an algorithm will benefit patients in the real world. We “firmly believe in the importance of a more holistic evaluation, not just the model but how it will be used…before it’s put into place,” Callahan said.

The Risks: Hacking and Surveillance

High-powered algorithms hungry for more data can be inherently at odds with patient security and privacy, according to Eric Sutherland, senior health economist and AI expert at the Organization for Economic Cooperation and Development.

AI runs on data, and more data mean more accurate algorithms — but it also means more risk for patients. Sutherland said that the massive datasets powering AI tools are a target for hackers. To best protect patients, regulations must govern how health data are stored and who has access to them.

Because of its ability to identify complex patterns, AI also has a unique capacity to infer information that a patient never intended to share. One algorithm can guess the location of your photos. AI-powered chatbots can guess your personal information from what you type in the chat. And based on tone of voice, AI can tell whether you will leave your partner. The technology’s ability to discern sensitive information risks the unauthorized sharing and surveillance of that information.

“There is a human right to privacy and a human right to benefit from science,” Sutherland said. The key question for regulating bodies is how to maximize the benefits of algorithms while minimizing harms to patient safety, he said.

The Risks: Accuracy and Liability

No existing test or treatment is perfect, and tools that utilize AI won’t be either. But what error rate are we willing to accept from an algorithm?

False positives waste healthcare resources, and false negatives can cost patient lives, Dwyer said. Regulations will need to set an acceptable error rate and establish ways to track an algorithm in case data get dirty (faulty) or algorithms go awry.

Sutherland said that regulators must also decide who bears the liability when errors happen. If an algorithm misdiagnoses a person, who is accountable for that error: the software developer, the health system that bought the AI, or the doctor who used it?

Uncharted Waters

In October 2023, President Biden issued an executive order for Safe, Secure, and Trustworthy AI. It called on developers to share their safety data and critical results with the US government, and on Congress to pass data privacy legislation.

“It’s an unbelievably dynamic technology,” Michelle Mello, professor of health policy and law at Stanford, California, said. “Which makes it tricky for Congress to sit down and make a law.” For regulations to be effective, they must be “very nimble,” she told Medscape Medical News.

Many existing regulations meant to protect patients will also apply to AI, said Anna Newsom, chief legal officer at Providence, a West Coast–based health system. “For example, a large language model may utilize protected health information, thereby implicating HIPAA.”

The FDA already evaluates any algorithms considered medical devices — those intended to treat, cure, prevent, mitigate, or diagnose human disease.

The agency has also explored different regulatory paradigms for vetting software-based medical devices. Between 2019 and 2022, the FDA piloted a precertification program that assessed organizations instead of individual products.

Precertified companies were eligible for a less cumbersome pre-market review. The downside is “you are relying solely on post-market surveillance” with this approach, Ehrenfeld said.

“From a pragmatic standpoint, the FDA could probably never hire enough reviewers to review every product,” Ehrenfeld added. As for post-market surveillance of every adaptive algorithm, “we simply do not have the infrastructure in the US to do that at scale. It does not exist,” he said.

The reality is that the FDA will need help.

Mello said AI oversight could follow the traditional regulatory model: Congress passes laws, and an agency is responsible for issuing rules for AI safety. Or, she said, AI could be treated like physician quality of care, which is largely left up to third-party organizations with a light touch from the government. A third option is something in between, where the government is involved but less heavily than in the first approach, said Mello.

Commissioner Califf and other experts agree that a public-private partnership will be the best solution. Califf said it would take a “community of entities” to assess algorithms and certify that they will do good and not harm before and after deployment.

But it’s not yet clear who those entities will be. A recent article published in JAMA suggested a nationwide network of health AI assurance labs to monitor AI. In this scenario, the government would fund certain centers of excellence to vet, certify, and keep tabs on algorithms used in healthcare.

Whatever the strategy, the United States is expected to introduce meaningful parts of the regulatory framework within the next 1-2 years. “I don’t think it will be a big statute,” Mello said. Some of the processes outlined in the executive order have 6-month and 1-year deadlines, so those will play out, and we will likely see some of these assurance labs up and running in the next couple of years, she said.

As for doctors, whether you’re excited or concerned about AI, “you’re not alone,” Ehrenfeld said. Recent American Medical Association data reported that 41% of surveyed physicians were equally excited and concerned. The Medscape Physicians and AI Report: 2023 found that 58% of physicians were not yet enthusiastic about AI in the medical workplace.

“There is so much possible. We want [AI] in healthcare,” Ehrenfeld said. “But it’s good to be cautious because patient lives are on the line.”

Donavyn Coffey is a Kentucky-based journalist reporting on healthcare, the environment, and anything that affects the way we eat. She has a master’s degree from NYU’s Arthur L. Carter Journalism Institute and a master’s in molecular nutrition from Aarhus University in Denmark. You can see more of her work in Wired, Teen Vogue, Scientific American, and elsewhere.
