AI Lawsuits Against Insurers Signal Wave of Health Litigation


The increasing use of artificial intelligence and advanced technology in health care is setting into motion a wave of lawsuits that attorneys expect to grow in the months ahead.

Insurers Humana, Cigna, and UnitedHealthcare are facing class actions from consumers and their estates for allegedly deploying advanced technology to deny claims.

The litigation comes amid increasing efforts from Congress and the Biden administration to develop a legal framework around the quickly expanding role of artificial intelligence in health care.

“AI is finding its way into every aspect of our lives. And regulators and legislators are trying to keep up with it, but not doing a great job of that,” said Ryan Clarkson, founder of Clarkson Law Firm, the public interest firm behind the suits against insurers.

Two of Clarkson’s lawsuits focus on technology used to make coverage decisions for beneficiaries in Medicare managed care plans.

These private Medicare Advantage plans are under scrutiny from lawmakers, regulators, and patients for using advanced approval—or “prior authorization”—to deny coverage that’s typically granted in fee-for-service Medicare. More than 30 House Democrats say the problem may have worsened with the plans’ increasing use of “AI or algorithmic software managed by firms” like naviHealth and CareCentrix “to assist in their coverage determinations.”

These AI-powered tools analyze data that helps insurers make coverage decisions. Insurers “have to” use advanced technology, said Michael Levinson, a Berger Singerman LLP partner who is also a licensed physician. “The volume of claims is tremendous.”

Eric Hausman, a spokesman for UnitedHealth Group, said in an email response that the company’s “naviHealth predict tool” isn’t used to make coverage determinations.

Coverage decisions are based on the terms of a member’s plan and coverage criteria from the Centers for Medicare & Medicaid Services, Hausman said.

“Adverse coverage decisions are made by physician medical directors and are consistent with Medicare coverage criteria for Medicare Advantage Plans,” Hausman said. “This lawsuit has no merit, and we will defend ourselves vigorously.”


A Similar Story

Clarkson has spoken with dozens of Medicare Advantage policyholders, whistleblowers, nonprofits, and public interest lawyers. All of them, he said, tell a similar story: Insurers are using AI and algorithms “to try and improve their bottom line under the guise of delivering better service to their policyholders.”

The firm has identified at least three additional insurers using advanced technology to prematurely deny claims, said Glenn Danas, a partner at Clarkson Law.

He also noted that doctors, diagnostic companies, and others have reached out about improper technology use, as they’re “very frustrated that they’re not able to deliver the sort of care they want to.”

“This is going to be part of the front end of a series of waves of lawsuits at the intersection of technology and all different aspects of our personal and business lives, whether it’s in the privacy context or whether it’s in insurance coverage or whether it’s in antitrust or consumer products,” Clarkson said. “I expect it to be at the fore for some time.”

President Joe Biden signed an executive order to establish AI standards in October 2023. For health, the order calls for “responsible use of AI in healthcare” and affordable drug development.

It also requires the Department of Health and Human Services to set up a safety program to take in reports on “harms or unsafe healthcare practices involving AI.”

The executive order is “pretty vague,” but sets out “reasonable goals” for the HHS, said Nicholson Price, a University of Michigan law professor and faculty affiliate at Harvard’s Project on Precision Medicine, Artificial Intelligence, and the Law.

The Food and Drug Administration is already “moving at a good clip,” Price said. The agency has established a Digital Health Center of Excellence, which works on ways to deal with technologies as they advance and become more integrated with health-care processes.

Likewise, the HHS’ Office of the National Coordinator for Health Information Technology issued a rule in December 2023 requiring more transparency around AI.

“Here’s this new technology. We need to figure out how it’s going to be used and how we’re going to regulate it and provide oversight,” Levinson said.

Medicare Advantage

Insurers’ use of advanced technology to analyze claims is “very frequent,” Levinson said.

Clarkson’s lawsuits accuse UnitedHealthcare and Humana of using AI to decline care for beneficiaries in Medicare Advantage plans. Prior authorization of coverage requests is designed to contain costs and curb spending on unnecessary care, problems that have dogged traditional Medicare for years.

While MA plans must follow coverage rules for traditional Medicare, they can also use additional “clinical criteria” when making a coverage decision. This can include coverage criteria created by private health-care management companies, the HHS Office of Inspector General said in a 2022 report.

Yet what qualifies as appropriate “clinical criteria” wasn’t entirely clear. The OIG report called on the CMS to issue clarifying guidance, including “specific examples of criteria that would be considered allowable and unallowable.”

The CMS responded by finalizing rules requiring Medicare Advantage plans in 2024 to “ensure that they are making medical necessity determinations based on the circumstances of the specific individual,” rather than by “using an algorithm or software that doesn’t account for an individual’s circumstances.”

The CMS rule adds that Medicare Advantage plan coverage denials “based on a medical necessity determination must be reviewed by a physician or other appropriate health care professional.”

But if “coverage criteria are not fully established” in Medicare regulations, the rule says MA plans “may create internal coverage criteria under specific circumstances” and “an MA plan is permitted to choose to use a product” to “assist in creating internal coverage criteria.”

‘Deeply Concerned’

Not everyone is confident Medicare Advantage plans will adhere to the new rules.

The 30-plus House Democrats want the CMS to issue additional guidance to “increase oversight of these tools used by MA plans,” their letter said.

In a letter to CMS Deputy Administrator Meena Seshamani, Ashley Thompson, senior vice president for public policy analysis and development at the American Hospital Association, said the organization is “deeply concerned” that MA plans will “apply their own coverage criteria that is more restrictive than Traditional Medicare, proliferating the very behavior that CMS sought to address in the final rule.” That could lead to “inappropriate denials of medically necessary care,” Thompson’s letter said.

Six provider groups representing Medicare beneficiaries and post-acute care providers expressed similar concerns in another letter to Seshamani.

The groups want the CMS to issue additional guidance that prohibits “algorithms or artificial intelligence from use in coverage denials” and limits “other uses of these tools until a systematic review of their use can be completed.”

America’s Health Insurance Plans spokesman David Allen said in an email statement that the group is “ready to work with the Administration on developing appropriate federal oversight to ensure the responsible use of AI while maintaining America’s leadership in health care advancements.”

The statement added that “any regulatory frameworks that require government reporting should build off existing industry standards and focus on high-risk applications.”

Litigious Future

Nevertheless, AI’s use in claims denials is “likely to get much more common in the near future,” Price said.

“We’ll see more lawsuits like this,” Levinson said. “More and more plaintiffs attorneys will likely be looking at other insurers to see whether there might be potential for litigation, until we get to a point where the technology is regulated and better understood.”

Geoffrey Lottenberg, a Berger Singerman LLP attorney, said AI’s use for claim denials should ultimately yield “case law that will interpret bad faith statute.”

If one of the Clarkson cases “goes the distance” and “defines the scope” of AI use in claim denials, that could spur states or the federal government to set standards, Lottenberg said.

