Legal questions, liability and consent

Hospitals and health systems are incorporating artificial intelligence more broadly, and the challenges go beyond technology.

Hospitals should have multi-disciplinary teams weighing issues related to AI, says Kathleen Healy, an attorney with Robinson+Cole who focuses on healthcare issues.

Providers also face a landscape of evolving laws and regulations when it comes to AI, Healy says.

“One of the things that I think is just incredibly challenging about AI is you see this proliferation of laws and regulations that are starting to emerge,” Healy says.

In a conversation with Chief Healthcare Executive®, Healy offers perspective on some of the legal complexities involving AI in healthcare, and some of the issues hospital and healthcare leaders should be considering, including the standard of care, liability and patient consent.

Hospitals and health systems are increasingly using AI, particularly in business functions such as reviewing and processing claims, Healy notes.

But hospitals are also using AI in patient care, particularly in medical imaging. In a speech at the HIMSS Global Health Conference & Exhibition in March, Robert C. Garrett, CEO of Hackensack Meridian Health, predicted that AI offers the potential to improve the health of billions, especially in improving access to care.

Standard of care

Hospital leaders need to consider how they incorporate AI into patient care.

“I think one of the things they really need to be careful about is using AI as an enhancement, and not a substitution for medical judgment, and bearing in mind that they still are responsible for providing care at the standard of care,” Healy says.

And Healy adds, “I think they need to think about how the standard of care will likely evolve over time, because of AI.”

As AI tools become more widely used, Healy says there could eventually be a “reasonable machine standard of care, if you will.”

Legal experts have said the evolving standard of care could raise additional complications for health systems. Samuel Hodge, a law professor at Temple University, told Chief Healthcare Executive® in a 2022 interview that as AI is more available, hospitals could be held to higher standards if patients file lawsuits.

“Previously, in a malpractice case, the standard of care is the average physician in the locale where the physician practices,” Hodge said. “With AI technology, the duty of care may be elevated to a higher standard, and it may be then a national standard, because everybody is going to have access to the same equipment.”

If a physician uses an AI tool in patient care in some way, and the patient develops complications, Healy says determining liability is no simple matter.

Acknowledging some of the legal issues, Healy says factors could include questions such as, “Was the physician reasonably diligent in reviewing and applying his or her own medical judgment when that case was reviewed? Was the AI tool performing as it was required to perform under the contractual terms of use? Were there extenuating circumstances? Was there bias?”

The federal government is also moving to regulate the use of AI in healthcare, including issues of bias and racism. The U.S. Department of Health and Human Services recently adopted a rule aimed at preventing discrimination in healthcare, which contains provisions relating to AI. The rule requires providers to reduce the risk of discrimination relating to the use of AI and other tools used to support diagnosis.

Researchers have raised concerns about the need to remove racism from clinical algorithms, and have also found AI chatbots have produced responses reflecting racial bias.

When asked about the prospect of bias in AI, Healy says, “I think it is a real and legitimate concern.” And it’s one that health systems and hospitals must address, especially in light of federal regulations.

Questions of consent

Hospitals also must wrestle with another issue tied to the increasing use of AI: patient consent. Healy says the technology raises unsettled questions about when and how consent should be obtained.

For example, doctors are using AI technologies to record conversations with patients, which can quickly produce records of the patient visit. Healthcare leaders are embracing such tools to ease documentation and administrative burdens on their clinicians.

But as health systems adopt these tools, they should be thinking about consent. Are doctors getting consent from patients, Healy asks, and should they ask again if another clinician enters the room and joins the conversation? Healy says she spoke to someone at an academic health system that is considering asking for consent as soon as patients walk in the door, because AI is becoming so embedded in its workflows.

In terms of patient care, hospitals and other providers need to be thinking about getting the consent of patients if AI technology is being used to support a diagnosis. Consent requirements are likely to evolve, Healy says.

“We very well may see the patient consent requirements evolve to include a disclosure of the use of AI and the risks and benefits associated with it, including potentially a risk of deep fakes and inaccuracies,” Healy says.

Hospitals also must think about what they will do if they ask for consent to use AI tools to support a diagnosis, and the patient refuses, Healy says.

Surveys show Americans are still a bit leery of the use of AI in patient care. A KPMG survey of consumers found a majority of Americans have optimism about the use of generative AI to answer questions virtually or schedule appointments, but only a third of respondents (33%) said they were optimistic about AI leading to a better diagnosis.

What hospitals should be doing

Hospitals and health systems are going to have to understand the new legal landscape regarding AI.

“They need to develop policies and procedures to comply with the laws and regulations,” Healy says.

Hospitals should take an inventory of the AI tools they are using and make sure staff members are up to speed on the regulations that apply to those tools, she suggests. Health systems also will need to monitor those tools for any non-compliance.

“I do think they want to think about where AI, and how AI, is used in their organizations,” Healy says. “And I think they want to assemble a team to start to look at the legal issues as well as the operational issues. And I think they want to do that with real regularity, and just be sensitive to the potential positive and negative impact of AI.”

Health systems need to develop strong governance policies, Healy suggests, and form multi-disciplinary teams, drawn from different parts of the organization, to look at AI issues.

Hospital CEOs should identify "who should have eyes on this issue," Healy says.

“There’s so many different issues involved, and I think identifying those issues and then identifying point people to keep an eye on the issue would be something that I would suggest,” Healy says.
