How Healthcare Organizations Should Prepare For Generative AI


Generative artificial intelligence is a buzzy emerging technology in the healthcare sector, eliciting hopes that the tools could reduce administrative work and easily analyze health data.

More than half of surveyed healthcare executives reported plans to buy or implement the products within the next year, according to a Klas Research report published in December. But some experts and researchers have raised concerns about inaccuracy, data security and bias when generative AI is implemented too rapidly.


To use and scale the tools successfully, leaders will need to consider the full lifecycle of AI — from the idea stage through deployment and monitoring, according to Michael McCallen, managing director in Deloitte’s Health Care Strategy practice.

A recent survey by Deloitte found executives have some blind spots when it comes to integrating generative AI, prioritizing data considerations over workforce or consumer concerns. 

McCallen joined Healthcare Dive to discuss why an AI governance structure is key to scaling the technology and how healthcare organizations can prepare their workforces to use generative AI.

This interview has been edited for clarity and length. 

HEALTHCARE DIVE: What were some of your main takeaways from the recent Deloitte survey?

MICHAEL MCCALLEN: This survey really confirmed, I think, where the industry is in general, which is that it’s very much in the early innings — kind of a test and learn cycle. There are a lot of pilots and things that are going on. But in terms of scaling, there’s a lot of work to be done.

It also identified the opportunity that we’ve got to embed some of those things earlier in terms of mitigating bias, having really clear governance, understanding what the impacts on the workforce are going to be as we scale, etc. So that when you get there, those aren’t surprise gotchas, but more of a thoughtful plan on how you’ve got those under control and risk-mitigated.

Does it seem like a warning to you that healthcare executives aren’t thinking that through at this time? Or is it just that AI is at a really early stage?

I don’t know if I’d go red flag. I think it’s more of a caution. It hasn’t been thought through as fully as it needs to be for AI to be safe and impactful at scale. 

But I think there’s still time. And we’re certainly working with clients on embedding better governance and how to think about bias now, before they’re getting the broader impacts of generative AI at scale.

You mentioned in the report that having a governance structure is really important. What should an AI governance structure look like for a healthcare organization? How will having one prevent potential harm?

What we want to think through is, “How does the organization think about the full lifecycle of AI?”

From ideation to design, development, deployment and then ongoing operations and monitoring — you really have the whole end-to-end process of AI managed. Then the goal of that framework is to make it transparent and explainable.

You want to make sure that it’s fair and impartial, so that there’s no bias in how [the AI is] working. You want to make sure it’s robust and reliable, meaning that there are guardrails in place so that it’s not answering the types of questions you don’t want it to answer. And that when it is answering questions in the domain, you’re getting a result that is reliable.

You also want to be really clear on privacy, and be very aware of what the consumer preferences are. You want to make sure that the AI is safe and secure, and there’s not a potential for bad actors to influence how that AI is responding.

And then you want to really understand the impact it’s having. Because you’re putting that out in the world, you’re responsible and accountable for it as well. 

When we talk about upskilling the workforce, how would you prepare a traditional nurse or doctor for generative AI?

For a few organizations, we’ve created and are helping run a “Gen AI 101,” which explains what the technology is, what it does well and what it doesn’t do well.

It’s like outlining the realm of the possible in order to get people more comfortable with what the potential future is going to look like. 

Then the second step is having a clear path and saying, “If you’ve got questions, this is how you can learn more,” or where you should ask. Having that open communication allows for additional learning and comfort as people go forward. 
