You Ask, I Answer: Why Do Language Models Have So Much Trouble With Facts?


In today’s episode, I dive into why large language models (like ChatGPT) sometimes provide incorrect information. You’ll learn how these models are trained and the limitations that lead to factual errors. You’ll also gain insights into how to get more reliable answers from these fascinating tools.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn: In today’s episode, Brooke asks, why is it that large language models like ChatGPT have such a hard time providing factual information, particularly credible information and credible sources? This is a really good question.

It’s a very interesting question that requires some knowledge about how language models work behind the scenes.

The way they work behind the scenes is that they have ingested huge, huge amounts of text, petabytes of text. A petabyte is about 1,000 laptops’ worth of text, right? If you have a really nice laptop, it’s about 1,000 of those, just in text.

And many models are trained on something like eight petabytes, so 8,000 laptops’ worth of plain text.

And what they’re trained on is the statistical relationships among characters and words and phrases and sentences and paragraphs and documents.

What that means is statistical relationships between words or concepts may not reflect factual relationships.

Statistical relationships do not necessarily reflect factual relationships.
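
To make "statistical relationships" concrete, here is a minimal sketch: a toy counting model, nothing like a production LLM, that picks the next word purely by how often it followed the previous words in its training text. The corpus and numbers are made up for illustration.

```python
# A toy counting model, nothing like a production LLM, but it shows the core
# idea: the "answer" is whatever most often followed these words in the
# training text. Truth never enters into it, only frequency.
from collections import Counter, defaultdict

corpus = (
    "the sky is blue . the sky is blue . the sky is green . "
    "water is wet . water is wet ."
).split()

# Count how often each word follows each pair of words (a toy trigram model).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

# Continue "sky is ..." — it just reports the most frequent continuation.
print(follows[("sky", "is")].most_common())  # [('blue', 2), ('green', 1)]
```

If the training text had said "the sky is green" more often than "the sky is blue," this toy model would happily report green. Frequency decides, not truth.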

So a model may come up and say, hey, you were asking about, let’s use a medical example, the effects of COVID, you know, long COVID.

There’s a lot of text on the internet about this topic.

But just because there’s a lot of it doesn’t mean it’s right.

There’s certainly no shortage of people with factually wrong takes about it who have posted a lot of content about it online.

And so models will be looking at the correlations, the statistics, of what corresponds to those terms.

And when you ask a model, hey, what’s one of the ways to treat long COVID, it will pull together the statistically relevant answers, even if they’re not factually correct.

Let’s say, as an example, that there’s 100 times more wrong information than right information.

Statistically, then, a model is 100 times more likely to come up with wrong answers than right answers.
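
Here is a rough back-of-the-envelope sketch of that 100-to-1 point, assuming (as a simplification) that a model picks answers roughly in proportion to how often they appear in its training text; the numbers are made up for illustration.

```python
# Made-up numbers: 100 "wrong" snippets for every 1 "right" one in the training
# text. If the model samples answers roughly in proportion to how often they
# appear, the wrong answer comes back about 100 times as often, regardless of
# which one is actually true.
import random

random.seed(42)
training_snippets = ["wrong answer"] * 100 + ["right answer"] * 1
samples = [random.choice(training_snippets) for _ in range(100_000)]

wrong = samples.count("wrong answer")
right = samples.count("right answer")
print(f"wrong: {wrong}, right: {right}, ratio: {wrong / right:.0f} to 1")
```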

This is one of the hidden challenges of language models: they are trained on a lot of text, but they are not necessarily trained on a lot of quality text.

This is also a challenge even with stuff that is quality, if the collection itself is problematic.

So for example, most books published prior to the 20th century were written by dudes, right? The majority of books were written by dudes, because women had trouble getting things published.

And so even if you had only a high-quality sample of public domain books, like the ones you’d see in Project Gutenberg, there’s an inherent bias to that data, because the books that were written by women prior to the 1900s may not have been published and may not have survived.

And therefore, a language model that’s drawing on that knowledge is automatically going to be biased, right? It’s automatically going to have trouble doing stuff that’s factual, from today’s point of view, using that corpus.

So that’s why these models have so much trouble with facts.

And when we do things like fine-tuning them, and retrieval-augmented generation, and all kinds of fancy statistical techniques, what we are effectively trying to do is tell a model:

Yes, statistically, answer A is the highest probability, but it’s wrong.

I want you to answer with answer B, even though it’s statistically less probable.

I’m going to use a science fiction example so that we don’t get derailed by politics.

Let’s say there are varying opinions about the Klingon and Romulan empires, right.

And there are some folks who support the Klingons, some who support the Romulans, and a whole bunch of people who don’t support either one and think they’re both crazy.

And what you want to know is, what was the policy of the Klingon Empire under Chancellor Gowron? And the model comes up with an answer that is statistically relevant, but everyone says, no, that’s not really what happened.

Gowron was kind of a jerk.

And you know, he ended up getting killed at the end of Deep Space Nine. We have to then go into that model and break it; we have to break the statistics so that it aligns with reality.

Christopher Penn: Even though there are all these folks, you know, on the Klingon homeworld, who were touting how wonderful Chancellor Gowron was, we’re saying, even though that’s the highest probability thing, it’s still wrong.

Gowron was a jerk, and he deserved to be assassinated.

And Chancellor Martok was a much better Chancellor.

That’s what we’re doing.
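
As a very loose illustration of that "break the statistics" idea, here is a toy sketch with made-up numbers: the raw scores favor the popular-but-wrong answer, and a corrective adjustment (standing in, very roughly, for what fine-tuning does across millions of weights) pushes the correct answer to the top.

```python
# Made-up scores for two candidate answers. The raw statistics favor Answer A;
# a corrective adjustment (standing in, very loosely, for what fine-tuning does
# across millions of weights) pushes the less probable but correct Answer B to
# the top. This illustrates the idea, not how alignment is actually implemented.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

answers = ["Answer A (popular but wrong)", "Answer B (correct)"]

base_scores = [4.0, 1.0]  # what the raw statistics prefer
print(dict(zip(answers, softmax(base_scores))))    # Answer A wins, ~95%

corrective_bias = [-2.0, 2.5]  # hypothetical effect of fine-tuning / alignment
tuned_scores = [s + b for s, b in zip(base_scores, corrective_bias)]
print(dict(zip(answers, softmax(tuned_scores))))   # Answer B wins, ~82%
```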

And that’s why models don’t respond well to a lot of different questions from a factual perspective, because it would take eons to factually correct every single thing.

Now, the good news is that in a lot of models, basic factual things are not up for debate, like the sky is blue, the planet is round, and so on and so forth.

Water is wet.

Those basic factual things in the core source text are pretty much consistent, but anywhere you have more wrong information than right going into the model, the model is probabilistically going to return more wrong information than right.

And companies that make language models can only correct so much; they can only fine-tune so much, so they will typically prioritize things that are high risk.

For example, if you take a model that has been aligned, that essentially will answer questions truthfully, and you ask it a question that, from the maker’s perspective, is a harmful question, like how do I make a trilithium resin explosive that would, you know, take down a Romulan warship, something along those lines, the model is going to be trained to not answer that question, because it’s perceived as harmful, but it does know the answer.

We have simply tried to break it along those lines so that it does not answer when it’s asked those questions. What you end up with is that the more a model is intentionally broken to be factually correct, the more likely it is to go off the rails in some way, right? Because we are going against the statistical nature of the model by forcing it to adhere to facts that are statistically less likely.

So that’s the answer to the question about why they struggle so much with this.

Now, how do you remedy that? You should look at language models and tools that have sort of a built-in retrieval-augmented generation system of some kind.

So for example, Microsoft Bing will retrieve data from its search catalog and rephrase it with a GPT-4 model to be more factually correct.

Google’s new Gemini has a little button that says, you know, check this answer with Google, which I like to call the "am I lying?" button, and if you push that button, it will look at its response compared to Google search results and say, yeah, I actually don’t know where I got this information from, or, these are the sources for this information.

So generally speaking, if you want factually correct information out of a language model, you should be using one that has connections to some other database.

And that database then provides the factually correct information for the model to reinterpret as language.
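
Here is a minimal sketch of that retrieval-augmented pattern, assuming a hypothetical call_language_model() function as a stand-in for whatever LLM API you actually use (it is stubbed out here so the example runs). The important part is that the facts come from the retrieved sources, and the model is only asked to rephrase them.

```python
# A minimal retrieval-augmented generation (RAG) sketch. call_language_model()
# is a hypothetical stand-in for whatever LLM API you actually use; it's stubbed
# out here so the example runs. The facts come from the retrieved sources, and
# the model is only asked to rephrase them, not recall them from memory.

documents = [
    "Chancellor Gowron led the Klingon Empire during the Dominion War.",
    "Martok succeeded Gowron as Chancellor at the end of Deep Space Nine.",
]

def retrieve(question: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Toy keyword-overlap retrieval; real systems use a search index or embeddings."""
    def overlap(text: str) -> int:
        return sum(word.lower().strip("?.,") in text.lower() for word in question.split())
    return sorted(docs, key=overlap, reverse=True)[:top_k]

def call_language_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    return "[model output would go here, grounded in the prompt below]\n" + prompt

def answer(question: str) -> str:
    sources = "\n".join(retrieve(question, documents))
    prompt = (
        "Answer using ONLY the sources below. "
        "If they do not contain the answer, say you don't know.\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
    return call_language_model(prompt)

print(answer("Who succeeded Gowron as Chancellor?"))
```

Real systems swap in a proper search index or embedding-based retrieval, but the shape is the same: retrieve first, generate second, and ask the model to stick to what was retrieved.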

Language models themselves are not factually correct and will never be factually correct, especially in their foundational form, because stats and facts are different.

So good question.

Thanks for asking.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.


