Health Harbor

AG in Texas Is Nation’s First to Bring Gen AI Enforcement Action in Health Care

The first state attorney general settlement with a company accused of deceptively marketing generative AI software serves as a warning that federal regulators aren’t the only cops willing to slap cuffs on the nascent technology.

The Sept. 18 Texas attorney general settlement with Irving, Texas-based Pieces Technologies is also “an important reminder that states do not need new legislation to regulate the use of AI,” Manatt Phelps & Phillips warned in a recent client alert.

Pieces’ software gobbles up health care worker notes, charts and other data. It then outputs a highly detailed patient summary at lightning speed—potentially saving doctors and nurses considerable time and effort.

But given the potential risks of AI to patients, health care applications of the technology are among the areas of highest concern for regulators. Texas AG Ken Paxton said AI companies “owe it to the public and their clients to be transparent about the risks, limitations and appropriate use.”

The settlement prohibits the company from misrepresentations about the accuracy, reliability or efficacy of its products. Among other terms, Pieces agreed to clearly and conspicuously disclose known harmful or potentially harmful uses of its products and disclose the data and models used to train its AI products.

In a statement, Pieces said the assurance of voluntary compliance (AVC) put forth by the Texas AG makes no mention of the safety of its products “nor is there evidence indicating that the public interest has ever been at risk.”

“Pieces vigorously denies any wrongdoing and believes strongly that it has accurately set forth its hallucination rate, which was the sole focus of the AVC.”

The hallucination rate refers to the share of generated content that is not grounded in real data but is instead produced by a machine learning model’s creative interpretation of its training data.
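As a purely illustrative sketch—not Pieces’ actual methodology, which the company describes as proprietary—one way a vendor might estimate such a rate is to have clinicians flag unsupported statements in an audited sample of generated summaries:

```python
# Illustrative only: a toy hallucination-rate calculation. This is NOT
# Pieces' risk classification system, just a generic audit approach.

def hallucination_rate(reviewed_summaries: list[dict]) -> float:
    """Fraction of generated statements that reviewers flagged as unsupported.

    Each reviewed summary is assumed to look like:
        {"statements": 45, "flagged_unsupported": 1}
    where clinicians compared every generated statement against the
    patient's chart and flagged those with no basis in the source data.
    """
    total = sum(s["statements"] for s in reviewed_summaries)
    flagged = sum(s["flagged_unsupported"] for s in reviewed_summaries)
    return flagged / total if total else 0.0

# Example: three audited summaries, 120 statements total, 2 unsupported.
sample = [
    {"statements": 45, "flagged_unsupported": 1},
    {"statements": 40, "flagged_unsupported": 0},
    {"statements": 35, "flagged_unsupported": 1},
]
print(f"hallucination rate: {hallucination_rate(sample):.2%}")  # 1.67%
```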

Pieces said there is no industry-wide classification system for generative AI hallucinations in inpatient clinical summarization. “Pieces is, in fact, a trailblazer in the development of a risk classification system, which took several years to build and is an effective way to monitor and ensure highly reliable and quality outcomes of its AI-generated working summaries.”

Interestingly, even as state and federal governments ponder new legislation and regulations to govern AI, Paxton brought his case against Pieces under provisions of the 1973 Texas Deceptive Trade Practices-Consumer Protection Act, noted Hogan Lovells senior associate Sophie Baum, who also co-wrote a client advisory.

“We see this trend continuing. Already a few state attorneys general—including Massachusetts—have issued reports or advisories on the use of AI,” Manatt Health partner Eric Gold told Law.com.

“These states seem to be focused, through the lens of current state law, on situations where AI may negatively affect consumers.”

Paxton did not bring action against any of the four hospitals using Pieces Technologies’ software. But he stressed that health care entities “must consider whether AI products are appropriate and train their employees accordingly.”

Indeed, said Alexandra Moylan, a shareholder at Baker Donelson, using third-party technology in high-risk settings like the delivery of health care “could certainly be a problem for hospitals and healthcare providers especially if associated with patient injury.”

Questions in a tort claim could swirl around factors such as the degree to which a hospital conducted initial and ongoing internal checks, with human oversight, on the technology’s outputs, Moylan said. And were the AI vendor’s instructions followed in deploying the technology?

“Entities that deploy AI systems, especially in higher risk areas like health care, should undertake efforts to conduct risk assessments of these technologies, implement policies and train employees on appropriate inputs and use prior to deploying them,” Baum said.

Companies that use AI systems and make claims regarding those systems without appropriate safeguards “are setting themselves up as potential targets for both litigation and regulatory scrutiny or enforcement,” Baum added.

Moylan, who also recently drafted a client note on the Texas case, said thorough and ongoing risk assessments and risk management policies will be particularly important to not only mitigate risk and potential liability, but also to comply with newer AI-specific statutes such as the Colorado Artificial Intelligence Act, which takes effect in February 2026.

The first comprehensive state law targeting the development and deployment of AI is aimed squarely at high-risk AI systems in areas such as health care, housing, education and legal services. In particular, the Colorado measure aims to crack down on “algorithmic discrimination” based on a protected characteristic such as race, religion or national origin.

To enjoy a presumption that it used reasonable care, a deployer of high-risk AI must satisfy a number of requirements, including establishing a risk management policy, conducting an analysis of risks and providing a description of the data the AI system takes as inputs and produces as outputs.
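The statute does not prescribe a format for that documentation. As a purely hypothetical sketch, a deployer might track those elements in a structured record; all field names below are illustrative, not statutory:

```python
# Hypothetical sketch of a deployer's risk-documentation record for a
# high-risk AI system. Field names are illustrative, not drawn from the
# Colorado Artificial Intelligence Act's text.
from dataclasses import dataclass, field

@dataclass
class HighRiskAIAssessment:
    system_name: str
    intended_use: str                   # e.g., drafting inpatient summaries
    risk_management_policy: str         # reference to the governing policy
    known_risks: list[str]              # documented analysis of risks
    input_data_description: str         # what data the system consumes
    output_data_description: str        # what the system produces
    human_oversight_steps: list[str] = field(default_factory=list)

assessment = HighRiskAIAssessment(
    system_name="ClinicalSummaryGen",  # hypothetical system
    intended_use="Draft working summaries for clinician review",
    risk_management_policy="AI-GOV-001 (aligned to a recognized framework)",
    known_risks=["hallucinated findings", "omitted critical results"],
    input_data_description="Clinician notes, charts, lab results",
    output_data_description="Draft patient summary text",
    human_oversight_steps=["Clinician sign-off before chart entry"],
)
```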

“Under certain state AI laws, obligations flow down to AI deployers, particularly with respect to transparency for end users,” Gold said. “In some cases, deployers must also implement risk management policies and conduct impact assessments concerning AI.”

Organizations that develop or deploy AI systems will need to consider whether the system might be classified as “high risk.” And both developers and deployers of high-risk AI systems will need to consider the similarities and differences between Colorado’s law and the EU AI Act to help ensure that their compliance programs are appropriately scoped, Baum said.

“Each AI user’s mitigation strategies should be informed by the nature, scope, context and purpose of the AI system,” said Amy MacDonald, an associate in Manatt’s privacy and data security practice.

“Disclosing to patients whenever they are interacting with an AI system is always a good idea.”

In addition, complying with an established AI risk management framework, such as the one published by the Commerce Department’s National Institute of Standards and Technology (NIST), is also a good strategy, MacDonald said.

“Compliance with this type of ‘gold standard’ AI framework is already included as a defense in some recent AI legislation,” MacDonald added.

