You Ask, I Answer: Over-Reliance on AI?

In today’s episode, I address a question from Ashley about the biggest overreliance on AI in business today. My answer is clear: it’s in generative AI. Many businesses are falling into the trap of using tools like ChatGPT to generate large volumes of boring and low-quality content. This overreliance stems from a combination of shiny object syndrome and a lack of understanding of prompt engineering. To get meaningful results from AI-generated content, it’s essential to provide detailed prompts and ensure quality output. Machines are as smart or as dumb as the prompts we give them, and assumptions that the machine knows what we mean or that quantity can compensate for quality are mistaken. By educating and guiding users in better AI practices, we can achieve better results and rely on AI more sensibly. Don’t forget to hit that subscribe button if you find this video insightful.

Summary generated by AI.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn 0:00

In today’s episode, Ashley asks, where do you see the biggest over-reliance on AI in business today? It’s in generative AI, because that’s the area that certainly has the most adoption, with tools like ChatGPT, for example. The over-reliance is on the creation of boring content. It’s just boring, boring content.

And people are using these tools to crank out a lot of very boring content.

And this is a function of a couple things.

One, there’s still a lot of shiny object syndrome around generative AI, oh, look what this thing can do.

I can have it write a blog post, I can have it write an email, I can have it outline social media content.

And yes, the tools can do all those things.

But what’s happening is that people are jumping into these tools without really getting the hang of prompt engineering, without really getting the hang of writing highly detailed prompts that allow you to get the exact results that you want, or at least closer to the exact results, and without caring about the quality of the output.

Right? If you’re just using these tools to save time, and nothing else, and quality is irrelevant, then sure, the tools will do that.

But you’re not going to get any results out of your content marketing.

And arguably, if you’re creating crap-quality content today, and you’re now using AI to create crap-quality content, I guess you’re saving time, but you’re still not going to get any results out of what you’re doing.

So a big part of the over-reliance on AI is the over-reliance on the assumptions you make, that the machine knows what you want it to do.

Think of it like this: think of the machine as a junior designer or a junior writer, someone who is lacking in experience. How detailed does your creative brief to someone like that need to be? Right? They just started working at your company. They know nothing about your company, they know nothing about your brand.

They know nothing about your voice, etc.

How detailed does your creative brief have to be? Hopefully, the answer is very detailed, detailed enough to say, this is exactly what I’m expecting, this is the output, these are the steps I want you to take.

This is how you know you succeeded, go and do the work.

If, on the other hand, you just said, oh, go write a blog post about Border Gateway Protocol, you’re not going to get something very good out of that, right? You’re not going to get high-quality content out of a poor-quality prompt. You asked for something super generic, so you’re going to get something super generic in return, and it’s probably not going to involve anything that is new or insightful.

On the other hand, if you give the person a page of bullet points they need to make about the content, then they will generate that, and they’ll start to learn your voice and your tone and all the things you want them to do.
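To make that creative-brief idea concrete, here is a minimal sketch of the difference between a generic prompt and a detailed, brief-style prompt. It assumes the OpenAI Python SDK with an API key already set in the environment; the model name, the brief’s contents, and the helper function are illustrative assumptions, not anything specified in the video.

```python
# Minimal sketch: a generic prompt versus a detailed, creative-brief-style prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and OPENAI_API_KEY in the environment.
# The model name and the brief's details are placeholders, not from the video.
from openai import OpenAI

client = OpenAI()

# The kind of prompt that produces super-generic output.
generic_prompt = "Write a blog post about Border Gateway Protocol."

# A brief-style prompt: role, audience, expected output, required points, and success criteria.
detailed_prompt = """You are a senior network engineer writing for our company blog.
Audience: IT managers evaluating routing strategies for multi-homed networks.
Task: write an 800-word blog post explaining Border Gateway Protocol (BGP).
Cover these points:
- What BGP does and why the internet depends on it
- How path selection works at a high level
- Practical hardening steps such as RPKI and prefix filtering
Tone: plain-spoken, first person plural, no marketing fluff.
Success criteria: a reader should be able to explain BGP's role to a non-technical executive."""

def generate(prompt: str) -> str:
    """Send a single prompt and return the model's text response."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name; swap in whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate(detailed_prompt))  # compare against generate(generic_prompt)
```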

And AI is exactly the same. AI is as smart or as dumb as the prompts we give it, because these tools are just probability machines.

At the end of the day, they’re just probability machines.

And all they do is assemble probabilities.

And the more we give them in terms of guardrails and guidance and direction, the better they perform.

So the over reliance is on assuming the machine knows what you mean.

And assuming that the machine is always right, and assuming that quantity makes up for quality.

That has never been true.

It’s still not true.

Machines are still not generating high-quality output with no guidance or input from us.

Are they generating high quality when we strongly guide them? Yes, yes, they are.

Are they generating high quality when we invest the time to fine tune them and build custom models? Yes, yes, they are.

But out of the box, untrained, and with poor prompts, of course they’re not going to generate quality, nor would a human in that same role.

So that’s the area of biggest over reliance on AI, just those assumptions.

If you can stop people from making those assumptions, teach them, educate them, and help them develop better AI practices, you’ll get better results.

And then you will more sensibly rely on AI and get good results out of it.

Thanks for the question.

Talk to you next time.

If you like this video, go ahead and hit that subscribe button.

