You Ask, I Answer: Favorite Uses of Generative AI Workflow?

In today’s episode, I reveal my favorite AI use case: coding. You’ll learn how models struggle to create but excel at interpreting. By treating them as smart interns and having them build custom tools, you’ll boost productivity exponentially. I explain why their statistical reasoning causes mistakes, and how supervision helps. Join me for actionable tips on incorporating AI as a virtual developer.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Amy asks, What are your favorite use cases for AI in your workflow right now? That’s a really tough question to answer.

Because one of the things I try to do with artificial intelligence, particularly generative AI, is use it for everything, for as much as I can, so that I can figure out what it's not good at.

This is something that Professor Ethan Mollick of the Wharton School talks about frequently: use AI for every task that is a good fit for it.

So generative AI typically comes in one of two formats: you're either generating text, or you're generating images.

So with text, it's language, any kind of language-based task: writing, comparisons, editing, coding, you name it. If it uses language, it's a candidate for testing to see if artificial intelligence is a good fit to help out with that job.

And so there's literally no task in language that I'm not trying to use AI for in some capacity.

One of the things I typically don't use it for is, believe it or not, content creation, like writing new content.

And the reason for that is the language models themselves.

Their ability to generate content is, believe it or not, one of the things they're worst at. Like most data-based pieces of software, they're better at taking existing data and interpreting it than they are at making net new data.

That's why you can hand a huge PDF off to a language model like the ones that power ChatGPT and say, hey, answer me these questions about the data within this PDF, and it will do a really good job with that.

On the other hand, if you say, make me a new research paper on this thing, it's going to struggle, right? It's going to require much, much more detailed prompting, much more skill and finesse.

When you look at the six major categories of use cases for generative AI, question answering without providing the data, and generation, are the two things where almost every model doesn't do a good job.

And when you look at how these models are constructed, when you open them up and look under the hood, it makes total sense.

There's a great talk by Andrej Karpathy from not too long ago in which he points out that the foundation models, before any tuning is done, before they're made usable, hallucinate 100% of the time. They don't generate coherent language; what they do is generate statistically relevant language.

And then you have things like supervised fine-tuning and reinforcement learning from human feedback.

These are techniques that essentially try to coerce that jumble of statistics into coherent language, meaningful language, and then, to as good a degree as we can manage, correct language.

So for example, in the older models, like the original GPT-2 that OpenAI released, gosh, it's been three or four years now.

If you were to ask GPT-2 who the President of the United States was in 1492, often you would get an answer like Christopher Columbus, because of these statistical associations: "President of the United States" is associated with people of importance.

1492 is associated with Christopher Columbus, a person of importance.

And so statistically, the answer that would make the most sense to that question would be Christopher Columbus because of those associations.

That’s factually wrong, right? That is factually 100% wrong for a variety of reasons.

But statistically, in the foundation model, that makes sense.
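A toy sketch may make that concrete. This is not a real model; the association scores below are entirely made up for illustration. The point is only that a base model picks the most statistically associated completion, which is not the same thing as the correct answer.

```python
# Toy illustration (not a real language model): a base model's output is
# the completion with the highest statistical association, not the truth.
# All association scores here are invented for demonstration purposes.
associations = {
    ("president of the united states", "1492"): {
        "Christopher Columbus": 0.62,  # strongly associated with "1492"
        "George Washington": 0.25,     # strongly associated with "president"
        "no such person existed": 0.13,  # the factually correct answer
    }
}

def statistically_likely_answer(context):
    """Return the completion with the highest association score."""
    scores = associations[context]
    return max(scores, key=scores.get)

print(statistically_likely_answer(("president of the united states", "1492")))
# Prints "Christopher Columbus": statistically plausible, factually wrong.
```

In this sketch, the highest-scoring completion wins regardless of truth, which is the behavior the GPT-2 example above describes.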

So part of supervised fine-tuning is trying to bring additional reasoning capabilities and a stronger sense of correctness to these language models.

So for using AI in my workflow: I use it a ton every day for coding, writing Python and R code regularly and frequently, trying to automate as many repetitive tasks as I possibly can, everything from interpreting spreadsheets to downloading data to building reports, at least for the work I do and the clients that I have as part of Trust Insights.

Reporting is a huge chunk of what we do, and the ability to do reporting, to generate great, high-quality results, but to do so using the capabilities of language models to make tools, to make software, is my top use case.

There is so much that I would not get done on a regular basis if I did not have language models helping me write computer language to accomplish specific tasks.

Last week, just thinking back on the week, I probably generated seven new pieces of software, seven Python scripts, to deal with very specific situations that came up in client work.

Prior to language models, I would have had to write those by hand. I could have done it, I would have done it in R instead of Python, and it would have taken 10 to 15 times as long versus giving a detailed prompt, working with the language model to build the software for me, debugging it for a couple of cycles, and boom, we're done.
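To give a sense of scale, the scripts described above are typically small and task-specific. Here is a toy sketch of that kind of one-off reporting helper; the column names and sample data are hypothetical, not taken from any actual client work.

```python
# Toy sketch of a one-off reporting script, the kind a language model can
# generate from a detailed prompt. Column names ("channel", "sessions")
# and sample rows are hypothetical examples.
from collections import defaultdict

def summarize_by_channel(rows):
    """Total the 'sessions' count per 'channel': typical one-off report logic."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["channel"]] += int(row["sessions"])
    return dict(totals)

rows = [
    {"channel": "organic", "sessions": "120"},
    {"channel": "email", "sessions": "45"},
    {"channel": "organic", "sessions": "80"},
]
print(summarize_by_channel(rows))  # {'organic': 200, 'email': 45}
```

A script like this takes a model seconds to draft; the human work is the detailed prompt and the couple of debugging cycles mentioned above.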

So that’s my favorite use case.

Your favorite use case is going to vary based on the work you do: the language-based work that you do, or the work you do that code can help you improve.

But one of the things that I see people not using it enough for is that code aspect.

There are many things that language models can't do well. Math is one of them.

But language models can write language, like computer code, to do the math for them.

So it's one step removed.
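That "one step removed" pattern looks like this in practice. Rather than asking a model to do the arithmetic in its head, you ask it to write code such as the following, then run the code yourself. This is a hypothetical example of model-generated math code, not the output of any specific model.

```python
# Instead of asking a language model for the answer to an arithmetic
# question, ask it to write a function like this and run it yourself.
def compound_growth(principal, rate, periods):
    """Compute compound growth: exact arithmetic a model might flub in-context."""
    return principal * (1 + rate) ** periods

# Example: $1,000 growing at 5% per period for 10 periods.
print(round(compound_growth(1000, 0.05, 10), 2))  # 1628.89
```

The code does the math deterministically, so the statistical weaknesses of the model never touch the actual calculation.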

But not enough people think to themselves: if the language model can't do it, can I have it make the tools it needs to accomplish those tasks? And can I run those tools on its behalf? If you start thinking of language models not as some sort of all-knowing, all-powerful oracle, but instead as the world's smartest interns, you will be much more successful, because you will be able to say, okay, intern, what I really want you to do is build some software that does this.

Think of it like having a remote developer on demand, right? You work with a contractor on demand and say, I just need a piece of software to do this specific task.

And it will generate those tools for you.

That’s my favorite use case category.

And that’s the one that I wish more people would use because it would save them so much time.

You will save time, you will save headaches, and you will 2x, 3x, 5x, 10x your productivity once you've got your own custom tooling, built by language models, to help you out with as many repetitive parts of your job as you can.

So really good question.

Thanks for asking.

We’ll talk to you soon.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.


