You Ask, I Answer: ChatGPT Responses and Language?

Warning: this content is older than 365 days. It may be out of date and no longer relevant.

In this episode, I explore the impact of language on large language models like GPT-3. I explain the concept of ‘a word is known by the company it keeps’ and how using specific prompts can lead to the desired output. I encourage viewers to test different tones and language in their prompts and see the impact on the model’s responses. Join me as I delve into the power of language in this informative video. Hit the subscribe button now!

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn 0:00
In this episode, Carol asks about ChatGPT.

If I am particularly friendly, effusive or polite in my prompts to it, will it respond similarly? Remember this expression.

This expression was coined by, oh gosh, I’m trying to remember, a mathematician from a long time ago, but the quote is: a word is known by the company it keeps.

What that means is that these large language models are essentially really, really big matrices of word associations.

The more words that are available for the model to make decisions with, which is why your prompts need to be as detailed as possible, the more the model is going to align with those words.

So if you’re using effusive or florid language in your prompts, guess what? That’s going to have word associations, and it will return text that would be conversationally appropriate to those word associations.

What I would encourage people to do, within the boundaries of professionalism and politeness and stuff like that, is to test these things out, right?

Test out different kinds of language, a brusque tone, an overly polite tone, a very formal tone, and see how the model changes in its outputs.
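As a concrete way to run that experiment, here is a minimal sketch using the OpenAI Python client (openai 1.x); the model name, the sample task, and the tone framings are placeholders made up for illustration, not anything specific from the episode.

```python
# A small tone experiment: send the same task framed in different tones
# and compare what comes back. Assumes the openai package (v1.x) is
# installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

task = "Summarize the benefits of email marketing in two sentences."
tones = {
    "brusque": "Answer this. No pleasantries.",
    "overly polite": "Good afternoon! If it isn't too much trouble, I would be most grateful for your help with the following.",
    "very formal": "Kindly furnish a formal response to the request below.",
}

for label, framing in tones.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have access to
        messages=[{"role": "user", "content": f"{framing}\n\n{task}"}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```

Reading the outputs side by side makes the effect easy to see: the same task comes back in noticeably different registers.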

Because of the word associations being put into it, if you put in politeness or overly verbose language, you’re going to get that back in turn, just as you would talking to a real human being.

Right? If somebody comes up to you and says, “Good afternoon, madam,” that automatically sets a context for the kind of person you’re dealing with, as opposed to somebody in the same kind of situation who goes, “Yo, what’s up?” Again, just that little bit of language tells you what kind of conversational context you’re about to have.

A word is known by the company it keeps, right? So that’s the answer to that question.

And it will respond in ways that are expected for those words.

And the critical thing to remember is that, with your prompts, if you’re not getting the expected outputs, it’s because there are not enough words associated with the input to get what you want.

A lot of people will write a paragraph-long prompt; my prompts, when I work with these tools, are sometimes a page or two of text, right? Because I want very specific words, very specific instructions.

There’s a concept called weighting, where you use the same direction or phrase several times in the prompt, so that it is given more weight.

For example, if I’m telling it to write a bunch of tweets, I will say, several times in the instructions in the prompt: always use the full URL in the tweet, use the full URL in the tweet, write the full URL in the tweet.

And that, in turn, gives added weight to those specific terms.
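To illustrate that weighting-by-repetition idea, here is a rough sketch of a prompt that states the same instruction in several places; the wording and the example URL are hypothetical, not the exact prompt described in the episode.

```python
# Sketch of "weighting": repeat a key instruction so it carries more influence.
# The instruction, topic, and URL below are made-up placeholders.
instruction = "Always include the full URL in each tweet."

prompt = f"""You are a social media copywriter.

{instruction}

Write five tweets promoting the article at https://example.com/article.
{instruction}

Keep each tweet under 280 characters.
Remember: {instruction}"""

print(prompt)
```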

If we understand large language models and their architecture, we know better how to work with them and how to get the results out of them that we want.

And you don’t necessarily have to use particularly friendly or effusive language with it; you can actually specify: respond in a casual tone, respond in a professional tone, respond in a cold tone, respond in a hyperbolic tone.

I did a thing recently where I took the biography that my partner Katie Robbert has on the Trust Insights website.

And I said, rewrite this as though Tony Robbins were announcing you at a big event, with a lot of hyperbole and a lot of excitement and exciting language.

And it did a very credible job of it.

Maybe one day I’ll do a read-through of its response with my best imitation, just to show what that would look like, but you can just tell it the tone as well.

So be explicit: tell it, use this kind of tone in your responses.
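Put together, a prompt that names the tone outright rather than modeling it might look something like this; the biography text and the exact tone instruction are placeholders, not the actual prompt used for the Trust Insights rewrite.

```python
# Sketch: specify the desired tone explicitly instead of writing the prompt in that tone.
# The bio text and tone instruction are hypothetical placeholders.
bio = "Katie is the co-founder and CEO of a marketing analytics consultancy."

prompt = (
    "Rewrite the following biography as though a hype-filled emcee were announcing "
    "this person on stage at a big event. Respond in a hyperbolic, high-energy tone.\n\n"
    f"Biography:\n{bio}"
)

print(prompt)
```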

So that’s the answer to the question.

The words you put in lead to the words you get out.

Thanks for asking.

If you liked this video, go ahead and hit that subscribe button.


Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


