You Ask, I Answer: How Not To Use Generative AI In Healthcare?


In today’s episode, I share critical dos and don’ts for using AI in healthcare. You’ll learn why models shouldn’t operate unsupervised, and how to maintain data privacy. I’ll explain the risks of third-party systems, and why local models may be best. You’ll benefit from understanding disclosure needs, and the “money or your life” concept from Google. Join me for an in-depth look at responsible AI use cases in sensitive domains.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Amy asks, what advice do you have about how not to use generative AI, particularly for concerns of privacy and authenticity? There’s so many ways to answer this question.

Okay, first, don’t use language models for tasks that are not language.

You would think that one would be obvious, but it isn’t, because the general public does not understand it.

Language models are good at language, but they’re not good at not language.

People have a tendency to think of AI as this all-knowing, all-seeing oracle, and a lot of that can be blamed on pop culture.

A lot of that can be blamed on Hollywood, on Terminator and WALL-E and Short Circuit and all those films and TV shows that we grew up with where machines had these magical capabilities, like Commander Data from Star Trek.

There is no way that the system we watched growing up would actually exist in that form with how today’s AI works.

There’s a whole other tangent to go on, by the way, but we’re going to give that a miss.

So use generative AI for what it’s good at.

So, for example, these tools are not great at generation, believe it or not.

They need detailed prompts, lots of examples to do a really good job.

So you definitely don’t want to use them to just crank out generic content.

And that’s that’s pretty easy.

You don’t want to use them to try new math.

They’re bad at math.

They can’t count a language model under the hood is a word prediction machine.

That’s what it does.

It predicts words.

And so if you’re trying to get it to predict things that are not words, it’s not going to do a very good job.

So the workaround for that is to have the tools write code, because code is language, and then the code can do math.

So that would be another thing.
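To make that workaround concrete, here is a minimal sketch: ask the model for code instead of an answer, then run the code yourself. The prompt wording and the returned snippet are assumptions for illustration only; the point is that the arithmetic happens in deterministic Python rather than in the model’s word prediction.

```python
# A minimal sketch: ask the model for code, not for the answer.
prompt = (
    "Write a single line of Python that computes 12.7% of 483902 "
    "and assigns it to a variable named result."
)

# ...send `prompt` to whatever generative AI tool you use.
# Suppose the model returns this snippet (illustrative assumption):
generated_code = "result = 483902 * 0.127"

# Review the generated code (it is still model output), then run it.
scope = {}
exec(generated_code, scope)  # the math is now done by Python, not by word prediction
print(scope["result"])       # about 61455.55
```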

Don’t use tools.

Don’t it’s not that you shouldn’t use AI for this.

You should not use AI in an unsupervised manner for anything high risk.

Right.

So what do I mean by that? These tools are very good at things like image analysis.

They could take an image, an X-ray or a CT scan and provide an analysis of it.

You would not, under any sane circumstances, just hand that to a patient and say, “Hey, here’s what it spit out. You’ve got this.”

It might be right.

It might not be right.

But that is a very high risk situation where you want human review.

And this is a part of generative AI that I don’t think people give enough thought to.

Yes, it is capable of doing a lot of tasks very quickly and at a very high quality.

But for tasks where there is a level of risk, you need human review.

So there may be fewer writers writing, but you may have more reviewers reviewing.

Those writers may become reviewers.

They may be doing QA on what the models put out, because the models can hallucinate, they can make things up, they can just go off the rails, and you absolutely, positively need to have human beings fact-checking anything that is high value.

Things that are not as risky will be things like summarization.

And even there they can screw up, but they screw up less.

Things like drafting commodity emails: “Hey, I’m rescheduling this meeting for next week, is this OK?”

That’s that’s a lower risk transaction.

Then here’s your medical diagnosis in SEO.

There’s this term that Google uses called your money or your life.

And essentially, in SEO, Google treats any page content that is about finance or health with added scrutiny.

That is a really good rule of thumb.

That’s a really good benchmark for AI, your money or your life.

Is the model telling people things that could have financial or health care impacts? It’s not that you shouldn’t use AI, but you should never let it be unsupervised.

You or another human being who has subject matter expertise should be supervising what that model does at all times.

And it should never be able to go directly to the public.
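To picture what “never unsupervised, never straight to the public” might look like in practice, here is a hypothetical sketch of a human-in-the-loop gate. The topic categories, function names, and approval flag are all assumptions for illustration, not any particular product’s API.

```python
# A hypothetical human-in-the-loop gate for "your money or your life" content.
# Topic categories, names, and the approval flag are illustrative assumptions.

HIGH_RISK_TOPICS = {"diagnosis", "treatment", "medication", "billing", "finance"}

def requires_human_review(topic: str) -> bool:
    """Health and money topics never go straight to the public."""
    return topic.lower() in HIGH_RISK_TOPICS

def publish(ai_output: str, topic: str, reviewer_approved: bool = False) -> str:
    """Release AI output only if it is low risk or a subject matter expert signed off."""
    if requires_human_review(topic) and not reviewer_approved:
        raise PermissionError(f"'{topic}' output is high risk; it needs human review first.")
    return ai_output

# A commodity email can flow through; a drafted diagnosis cannot until a clinician approves it.
publish("Can we move our meeting to next Tuesday?", topic="scheduling")
# publish(model_summary, topic="diagnosis")  # would raise PermissionError without approval
```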

Other ways to not use AI.

A big one is data privacy.

Here’s the golden rule.

And this is something I say in our Generative AI for Marketers course, which you can get at TrustInsights.ai/aicourses.

If you are not paying, you are giving away your data, right? If you’re not paying with money, you’re paying with data.

So if you’re using any of these free tools, you’re paying with your data, and in health care in particular, that’s bad.

If you’re putting protected health information, that is, other people’s health information, into a third party tool, you are violating so many laws it’s not even funny.

So that would be an example of how not to use AI.

You would want to use a system that is governed by your overall health care information technology policies.

Maybe there’s some data you don’t even want in the hands of a third party, contract or no contract, right? Because there’s always the possibility that you work with a third party and that third party gets compromised somehow.

And then you’ve got to send out that whole paper mailing saying, “Oh, hey, by the way, your information was leaked or hacked.”

In those situations, you may want to run AI locally, on servers under your control, behind your firewalls, supervised by your IT team, to protect that information.

That would then be as secure as the rest of your IT infrastructure.
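As one illustration of that local approach, here is a minimal sketch that sends a summarization prompt to a model hosted on your own server, assuming an Ollama-style endpoint at its default local address. The model name and the endpoint are assumptions; substitute whatever your IT team has deployed and approved behind your firewall.

```python
# A minimal sketch of keeping sensitive text on servers you control, assuming a
# locally hosted model served by Ollama at its default local address. The model
# name is a placeholder; use whatever your IT team has actually deployed.
import requests

def local_summarize(text: str) -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",  # local endpoint; nothing leaves your network
        json={
            "model": "llama3",                  # placeholder model name
            "prompt": f"Summarize the following note:\n\n{text}",
            "stream": False,                    # return a single JSON object instead of a stream
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
```

Because the prompt and the output never leave your network, the workflow stays governed by the same policies as the rest of your healthcare IT infrastructure.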

But that’s another area that, again, people don’t think of.

If you’re not paying money, you’re paying with data and.

In health care, that’s not allowed in pretty much every place on the planet.

Even in the U.S., where business regulations are notoriously lax for everything else.

So those are the “how not to use AI” things in health care in particular.

The other thing I would say is, again, it’s not that you don’t want to use AI.

You want to disclose the use of AI, everywhere that you use AI.

Disclose that: “Hey, we used AI for this.”

On terminology, Microsoft did this at their Microsoft Ignite, and I really like this language: for content they made with AI, and then a human being supervised and edited, it always said this content was made in partnership with AI, using whatever model.

I really like that language because it is a partnership in many ways.

And it’s not that you’re just letting the machines do things and, you know, you’re you’re like Homer Simpson, just asleep at the wheel.

No, you are an active partner, too.

So machines are doing stuff, you’re doing stuff.

And the final product should be the best of both worlds.

It should be the speed of AI with the quality of human review.

That’s a good way to approach A.I.

and a good way to approach disclosure, the transparency and say this is this is made in partnership with A.I..
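If it helps to picture that disclosure in a workflow, here is a tiny, hypothetical helper that appends that kind of notice to AI-assisted content. The wording, function name, and model label are illustrative assumptions, not Microsoft’s actual implementation.

```python
# A hypothetical disclosure helper, loosely echoing the "made in partnership
# with AI" phrasing described above. Wording and names are illustrative only.
def with_disclosure(content: str, model_name: str) -> str:
    notice = (
        f"This content was made in partnership with AI ({model_name}) "
        "and was reviewed and edited by a human."
    )
    return f"{content}\n\n{notice}"

print(with_disclosure("Draft blog post text...", "an in-house language model"))
```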

So hopefully that helps.

Thanks for tuning in.

I’ll talk to you next time.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.





Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


