You Ask, I Answer: Generative AI Hallucinations?

In today’s episode, Brian sparks an intriguing discussion about the potential risks, such as hallucinations or incorrect responses, associated with large language models. I delve into how these models, despite their complex architecture, are essentially involved in a word guessing game, which can lead to unpredictable responses. I underscore the importance of supervision, subject matter expertise, and fact-checking when using these models. Tune in to learn more about this crucial, often overlooked aspect of AI tools. Let’s make the digital world safer and more reliable together.

Summary generated by AI.

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Brian asks: regarding inherent risks, you talked about privacy, bias, and copyright when it comes to large language models, but what about hallucinations or potentially incorrect responses? Well, yes, of course, that’s always a risk.

Models, large language models in particular, can hallucinate; they can come up with incorrect information.

And the reason for this is that they don’t have any reasoning capability, not really.

There is reasoning that kind of happens as a result when a model is very large; the nature of the interlinking probabilities creates a reasoning-like emergent property.

But for the most part, at the end of the day, all these models are just doing is predicting the next word, right? That is all they are doing.

However long they’ve trained for, however many parameters they have, whatever their weights are, all that stuff.

It’s just a word guessing game for them internally.
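To make that word guessing game concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small, public GPT-2 model (my choice for illustration, not something named in the episode), that prints the handful of next words the model considers most probable:

```python
# Minimal sketch: peek at the "word guessing game" by printing the most probable
# next tokens for a prompt. GPT-2 is used only because it is small and public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Turn the scores for the next position into probabilities and show the top guesses.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r}  p={prob.item():.3f}")
```

Nothing in that loop knows whether a guess is true; the model only knows which tokens tend to follow the prompt statistically.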

And so when they’re given a prompt that doesn’t make sense, they will hallucinate, or when they have a prompt that makes sense but they don’t know the answer, they will hallucinate; they will just make stuff up.

One of the most famous tests for this is to ask a model who was president of the United States in 1566.

The way these models work, they look at the words and phrases, they break them up and they look at what has proximity to those terms.

And early on in the GPT models, they would say things like Christopher Columbus, because that was the name most closely associated with early time periods and with what eventually became the United States, and that’s a completely wrong answer.

Today’s models don’t make those mistakes because they’ve been trained better and are much bigger, but that is always a risk.
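If you want to try the 1566 test on a current model yourself, here is a rough sketch assuming the OpenAI Python client and an API key in your environment; the model name is only an example, not a recommendation from the episode:

```python
# Sketch: ask a chat model the classic trap question and eyeball the answer.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, swap in whatever you actually use
    messages=[
        {"role": "user", "content": "Who was president of the United States in 1566?"}
    ],
)

# A well-behaved model should point out that the United States did not exist in 1566.
print(response.choices[0].message.content)
```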

So there are two things you need to do to reduce the likelihood of those risks.

Number one, don’t let models behave and act and do stuff unsupervised, right? You should always be checking their work and saying, Oh, you know, is it still doing what it’s supposed to be doing? That’s number one.

And number two, whatever tasks you’re having the model perform, you should have some subject matter expertise in those tasks, so that you can judge whether the output is correct or not.
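As a sketch of what those two rules can look like in practice, here is a hypothetical review gate; the function names and flow are mine, not from any specific tool. The idea is simply that nothing the model drafts gets used until a person with the relevant expertise has looked at it:

```python
# Hypothetical sketch of a human-in-the-loop gate: nothing a model drafts gets
# used until a person with subject matter expertise explicitly approves it.
def request_human_review(draft: str) -> bool:
    print("---- MODEL DRAFT ----")
    print(draft)
    answer = input("Approve this draft for use? [y/N] ")
    return answer.strip().lower() == "y"

def publish(draft: str) -> None:
    print("Publishing:", draft[:60], "...")

model_output = "Example draft text produced by a large language model."
if request_human_review(model_output):
    publish(model_output)
else:
    print("Draft rejected; sending back for revision and fact checking.")
```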

If I ask a model to look at gastroesophageal reflux disease, acid reflux disease, it can give me some answers, and I haven’t the foggiest clue whether they are correct or not, because I don’t specialize in that.

That is not what I do.

I’m not a doctor.

I don’t even play one on YouTube.

And so it could tell me things that are blatantly wrong.

And I won’t know unless I take the time to corroborate that, to go do a Google search on the answer and validate from reliable sources that what it told me is correct.

Under no circumstances, particularly for high stakes stuff, should you ever just be using responses from large language models willy nilly with no fact checking, right, in the same way that you wouldn’t do that with a search engine.

Right? This is not new.

This is just a different technology.

Now, you would not just copy and paste something from the first result on Google for your query without looking at it, without reading it, without going, that doesn’t make sense.

Or Ooh, I don’t trust that source.

You know, I asked Bing a question the other day, and it gave me a response and a citation, which is very important.

The citation it gave was to a known disinformation source.

I’m like, that’s wrong.

And I gave feedback.

I said, you know, thumbs down, this is an incorrect response; it is factually incorrect.

Whether Microsoft uses that information or not, I don’t know.

But even regular old fashioned search engines can give you incorrect responses, right? They can find something that is factually just flat out wrong.

There’s a greater risk with large language models because they don’t do citations nearly as well as search engines do, right? When you ask ChatGPT for an answer and then ask it to cite its sources, sometimes those sources are just made up.

There’s a very famous case, a legal case not too long ago, where a lawyer got in a lot of trouble because ChatGPT cited cases that don’t exist.

It looked good when he published it, but the cases didn’t exist.

So you’ve got to fact check these things.
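One small piece of that fact checking can be automated: confirming that the URLs a model cites actually resolve before you spend time reading them. Here is a rough sketch using the requests library; note that a link loading proves nothing about whether the source is reliable or supports the claim, so the human review still has to happen:

```python
# Sketch: check that URLs a model cites actually resolve before reading them.
# A 200 response does NOT mean the source is reliable or supports the claim;
# it only filters out citations that are outright fabricated or dead.
import requests

cited_urls = [
    "https://www.example.com/some-cited-article",  # placeholder citations
    "https://www.example.org/another-source",
]

for url in cited_urls:
    try:
        resp = requests.get(url, timeout=10, allow_redirects=True)
        status = resp.status_code
    except requests.RequestException as exc:
        status = f"error: {exc}"
    print(f"{url} -> {status}")
```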

Humans should be fact checking what AI produces for the foreseeable future, right, for the foreseeable future, because there are just too many ways for these tools to go off the rails, and it is much easier and safer to fact check them yourself.

And if you don’t have subject matter expertise in the things you’re having it generate content for, (a) I wonder why you’re generating content on those things, and (b) find someone who does have the expertise so that they can correct what the models are spitting out.

It’s a good question.

It’s an important question.

So thank you for asking.

I’ll talk to you next time.

If you’d like this video, go ahead and hit that subscribe button.

(upbeat music)


Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


