HHS Artificial Intelligence Task Force Takes Shape

ORLANDO, Fla. — Details are emerging on a new HHS task force faced with a monumental undertaking: creating a regulatory structure to oversee the use of artificial intelligence in healthcare.

An executive order signed by President Joe Biden in October directed the HHS to create a comprehensive plan for assessing AI before it goes to market and for monitoring its performance and quality once the technology is in use. The order gave the task force 12 months from the start of its work to deliver.

That clock is ticking. And it’s a tall order: Though individual agencies have promulgated rules around AI in the past, regulators are faced with putting guardrails around ever more advanced models, including some that automatically learn and evolve in real time without human feedback.

The task force is thinking about how to create appropriate safety programs and strategies to manage those technologies, according to Greg Singleton, chief artificial intelligence officer at the HHS.

“The real-time learning environments — we’re going to have to come up with and develop some sort of assurance, monitoring, risk-management practices around — just to kind of put a buffer around those, so we’re comfortable with them. And a lot of good work is going into that,” Singleton said during a Wednesday panel on AI policy at the HIMSS conference in Orlando, Florida.

Task force makeup

The HHS’ task force is composed of senior members of the administration — mostly the heads of the agencies making up HHS: the CMS, the Food and Drug Administration, the Office of the National Coordinator, the National Institutes of Health and the Centers for Disease Control and Prevention, according to Singleton and another HHS source.

Beneath those leaders, the task force has specific working groups around core issues in AI: drugs and devices, research and discovery, critical infrastructure, biosecurity, public health, healthcare and human services, internal operations, and ethics and responsibility, the HHS AI head said.

Those working groups raise questions and bring suggestions to the task force’s senior leaders, who meet monthly and provide guidance on how to proceed.

The task force also plans to engage with the private sector to get its perspective on where the group should focus, Singleton said. In the absence of a comprehensive federal strategy, AI developers and their technology and hospital partners have been standing up governance groups to self-police their use of AI.

“I’m expecting we’ll have a series of listening sessions and engagement opportunities, perhaps some request for comment, a number of communications coming out,” Singleton said. “There will be more details to come.”

U.S. lags in regulations

Despite numerous bills that have bubbled up in Congress, the U.S. has taken a largely hands-off approach to AI oversight so far, even as the technology becomes increasingly pervasive in the public sphere and a number of industries, including healthcare.

Tech giants like Google and Microsoft are notching partnerships with major hospital systems to use their reams of patient data to train and deploy AI models. Use cases for the technology run the gamut from streamlining nurse handoffs between shifts to improving contouring for radiotherapy planning to nudging at-risk patients when they’re overdue for a checkup.

The U.S.’ inaction stands in sharp contrast to the European Union, which on Wednesday approved the AI Act, one of the first laws in the world to set comprehensive rules for AI. The AI Act sorts the technology into risk tiers ranging from “unacceptable” to “low hazard,” which determine how much scrutiny an AI application will face.

There are concerns the law — which also includes steep fines for noncompliance — will stifle AI progress in the bloc. 

Regulators and legislators in Washington have said they don’t want to inadvertently tamp down innovation in the private sector as the U.S. — the home of most of the heaviest hitters in AI development, like Microsoft, Alphabet and Meta — continues to jockey with China for AI supremacy.

Building on agency authorities

Still, federal agencies have taken some steps they could build on to oversee AI in the healthcare space, Singleton said.

The CMS is exploring how the authorities Congress has already granted it around the administration of patient benefits and reimbursements could be extended to regulate AI, Singleton said. There’s some precedent: The agency finalized a rule last spring prohibiting Medicare Advantage plans from using AI to make bulk determinations on coverage decisions.

Similarly, the HHS Office for Civil Rights is looking at how Section 1557 of the Affordable Care Act, which prohibits discrimination in the administration of care, could apply to AI governance, according to the HHS AI lead.

In addition, the ONC, which oversees U.S. health IT, finalized a rule late last year requiring government-certified electronic health records that use algorithms for clinical decision support to disclose how the AI is maintained and updated. The FDA also has a review process in place for medical devices using AI, and regulators have cleared almost 700 to date.

Approving AI-based medical devices is a far cry from approving models that constantly evolve, such as generative AI, Singleton said. Still, the technology isn’t entirely new to regulators.

“Those are just examples of how we’re approaching it, looking at what we have in our stable and where that aligns with our values,” Singleton said. “Is it the end result answer? Does it give the answer for what AI should be in 10 years? No.”
