Dive Brief:

  • A healthcare generative artificial intelligence company has settled with the Texas attorney general over allegations it made false and misleading statements about its products’ accuracy.
  • Pieces Technologies, which offers AI documentation tools, developed and advertised metrics claiming low error rates for its products. Texas Attorney General Ken Paxton said those metrics were “likely inaccurate” and may have deceived the hospitals using the tools, according to a press release from the attorney general’s office last week. 
  • Pieces denies any wrongdoing and maintains that its error-rate metrics are accurate, the company said in a statement last Wednesday. In its own release, the company noted that it signed an assurance of voluntary compliance and that the agreement was not a financial settlement. Paxton called the settlement the first of its kind.

Dive Insight:

Dallas-based Pieces said its documentation products had a “critical hallucination rate” of less than 0.001% and a “severe hallucination rate” of less than one in 100,000, according to the settlement. (The two thresholds are numerically the same: 0.001% works out to one in 100,000.) Hallucinations are false or misleading outputs generated by AI models. 

But the attorney general argued the metrics were inaccurate and possibly misled the company’s hospital customers about the tools’ safety and accuracy. At least four major Texas hospitals use Pieces products, according to the attorney general’s press release. 

“AI companies offering products used in high-risk settings owe it to the public and to their clients to be transparent about their risks, limitations, and appropriate use. Anything short of that is irresponsible and unnecessarily puts Texans’ safety at risk,” Paxton said in a statement. “Hospitals and other healthcare entities must consider whether AI products are appropriate and train their employees accordingly.”

Pieces offers tools that generate summaries of patient care, draft progress notes within electronic health records and track barriers to discharge, among other products. 

In a statement, Pieces said the press release issued by the attorney general misrepresented the settlement, arguing that the order neither mentions the safety of its products nor offers evidence that the public interest was at risk.

The company added that there is no industrywide standard for classifying hallucination risk in AI clinical summaries, and that it took Pieces several years to build its own system. That system selects random clinical summaries for review, and an adversarial AI flags any that may contain a severe hallucination, using evidence from the medical record, a Pieces spokesperson told Healthcare Dive. Flagged summaries are referred to a physician, who reviews them, assesses any severe hallucinations, corrects them and provides comments on the changes.
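Pieces has not published implementation details, but the workflow the spokesperson describes (random sampling, an automated adversarial flag, then physician sign-off) follows a familiar audit pattern. The sketch below is a hypothetical illustration of that control flow; every name in it (Summary, adversarial_check, review_pipeline) and the substring-matching stand-in for the adversarial model are assumptions, not Pieces' actual system.

```python
import random
from dataclasses import dataclass


@dataclass
class Summary:
    """A generated clinical summary plus the record passages that support it."""
    summary_id: str
    text: str
    record_evidence: list[str]
    flagged: bool = False


def sample_for_audit(summaries, rate, seed=None):
    """Randomly select a fraction of summaries for automated review."""
    rng = random.Random(seed)
    k = max(1, int(len(summaries) * rate))
    return rng.sample(summaries, k)


def adversarial_check(summary):
    """Stand-in for the adversarial model: flag the summary if any of its
    sentences lacks a supporting passage in the medical record. A real
    checker would be a trained model, not substring matching."""
    claims = [c.strip() for c in summary.text.split(".") if c.strip()]
    return any(
        not any(claim.lower() in evidence.lower() for evidence in summary.record_evidence)
        for claim in claims
    )


def review_pipeline(summaries, audit_rate=1.0):
    """Sample summaries, flag suspect ones, and return the physician review queue."""
    queue = []
    for summary in sample_for_audit(summaries, audit_rate, seed=42):
        if adversarial_check(summary):
            summary.flagged = True
            queue.append(summary)  # referred to a physician for correction and comments
    return queue


if __name__ == "__main__":
    docs = [
        Summary("a1", "Patient afebrile. Started on amoxicillin.",
                ["patient afebrile overnight", "started on amoxicillin 500 mg"]),
        Summary("b2", "Patient discharged home.",
                ["patient remains admitted pending placement"]),
    ]
    for s in review_pipeline(docs):
        print(f"{s.summary_id}: flagged for physician review")
```

In production, the adversarial check would be a model scoring each claim against the record; the sketch only shows how sampled summaries flow from automated flagging into a human review queue.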

“Pieces strongly supports the need for additional oversight and regulation of clinical generative AI, and the company signed this [Assurance of Voluntary Compliance] as an opportunity to advance those conversations in good faith with the Texas [Office of the Attorney General],” the company wrote. 

AI is one of the most hyped emerging technologies in the healthcare sector, but the U.S. doesn’t yet have many concrete regulations for overseeing AI implementation in healthcare. Some experts and policymakers have raised concerns that a rapid rollout of AI tools could create errors or biases that worsen health inequities. 

The HHS is working on an AI task force that would develop a regulatory structure for healthcare AI. The agency also this summer reorganized its technology functions, placing oversight of AI under the newly renamed Assistant Secretary for Technology Policy and Office of the National Coordinator for Health Information Technology, or ASTP/ONC.

The Pieces settlement comes months after that reorganization. While the agreement carries no financial penalties, Pieces is required to disclose the definitions of its accuracy metrics, and the methods used to calculate them, whenever those metrics are used to advertise or market its tools. It must also notify current and future customers about any known harmful or potentially harmful uses of its products, inform its directors and employees about the order and submit to compliance monitoring.

The order will last for five years, but Pieces can request to rescind the settlement after one year at the earliest.

Texas’ investigation into Pieces included an interview and “extensive” written documentation of Pieces’ AI hallucination risk classification system, the reported metrics, supporting evidence and calculations, as well as information about the company, according to a Pieces spokesperson. The attorney general’s office did not respond to a request for comment on how it conducted its investigation. 
