You Ask, I Answer: Experimenting with Generative AI?


In today’s episode, you’ll learn why I think experimenting with local AI models can benefit you. I’ll explain how using open weights models locally allows you to maintain data privacy and save on costs. You’ll discover why censorship in foundation models damages performance, and how an adversarial model approach lets you constrain outputs while preserving capabilities. Join me for an in-depth look at tips and best practices for responsible and effective AI model experimentation that you can apply.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Amy asks, “What experiments are you running with AI, and how would you like to see it used?” That’s a really good, very interesting question.

I’m experimenting with pretty much everything in AI all the time.

That’s just what I do.

So it’s really hard to narrow that down to just one thing.

But one thing I think is worth suggesting, and this is discussed in our new AI course, Generative AI for Marketers, which you can go and get at TrustInsights.ai/aicourses, is using local models.

People call them open source models.

They’re not open source, because for a model to be truly open source, its training data would have to be given away. The training data is to a model what source code is to software.

Examples of these models are the Mistral model from Mistral, and the Llama model from Meta and its whole family.

They’re open weights models, where you’re given the model’s weights, essentially the raw model itself, and then people can retrain it, tune it, and make it do other things.

But the model itself is pre-baked.

I would like to see more people experimenting with those tools, with those kinds of models, open weights models, because, well, there’s a variety of reasons.

One, open weights models like the ones you can run in a system like LM Studio run locally.

They run on your laptop, which means you can work with sensitive data, protected information, health care information, financial information, stuff that you really, really don’t want to hand to a third party.

When you’re using an open weights model locally on your computer, that data never leaves, never even goes off your computer.

You can do stuff with it and no one ever sees that data, not the model maker, not the software maker.

You can unplug your cables, turn off your Wi-Fi, and it all still works because it all runs locally.

So that’s really important.

It’s something people should be experimenting with.
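
To make that concrete, here’s a minimal sketch of what talking to a local model looks like in code. It assumes LM Studio’s built-in local server is running (by default it serves an OpenAI-compatible API at http://localhost:1234/v1) and uses the openai Python package; the model name below is a placeholder for whatever model you have loaded.

```python
# A minimal sketch: querying a model running locally in LM Studio.
# LM Studio exposes an OpenAI-compatible server, by default at
# http://localhost:1234/v1, so the standard openai client works.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # local server, not a cloud API
    api_key="not-needed",  # the local server ignores the API key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder: use whatever model LM Studio has loaded
    messages=[
        {"role": "user", "content": "Summarize this sensitive document..."}
    ],
)
print(response.choices[0].message.content)
```

Because the endpoint is localhost, you can pull the network cable and this still runs; nothing in the request ever leaves your machine.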

Second reason for using local models and open weights models is cost.

Even with the cheapest APIs, you can still run up a decent bill, especially if you’re doing stuff like software development.

One thing that I do a lot of is I write software around language models and I am not a particularly good software developer.

Right.

And so I make a lot of mistakes.

I will send data to an API and I will screw up.

And if that API is charging me per use and I’m screwing up a lot, I rack up a pretty hefty bill.

If I run a model locally on my laptop, it costs electricity to run.

But that’s about it.

And I’m running on a MacBook, so even the cost of electricity isn’t very much.

And so it gives me the freedom to experiment more, to be willing to take more risks, to test and QA sooner without having to worry about the impact on my company’s wallet because it’s all running locally.

And then once I’m satisfied that the rest of my code works as intended, I can then go and say, OK, now I’m going to repoint my code from the development model, which may be, say, a Llama 13-billion-parameter model, to OpenAI or Claude or some other commercial provider.

And I don’t have to debug on my dime because I’ve already done that with the open model.

That’s a second consideration.
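
Here’s a hedged sketch of that develop-locally, deploy-commercially pattern, using the same OpenAI-compatible client for both. The environment variable names (LLM_BASE_URL, LLM_MODEL) are illustrative choices, not a standard.

```python
# A sketch of "debug against a free local model, deploy to a paid API".
# The same code talks to either endpoint; only configuration changes.
import os

from openai import OpenAI

# Development: LLM_BASE_URL=http://localhost:1234/v1  LLM_MODEL=llama-13b
# Production:  leave LLM_BASE_URL unset to use OpenAI's hosted endpoint
client = OpenAI(
    base_url=os.getenv("LLM_BASE_URL"),  # None falls back to the default OpenAI endpoint
    api_key=os.getenv("OPENAI_API_KEY", "not-needed-locally"),
)

response = client.chat.completions.create(
    model=os.getenv("LLM_MODEL", "gpt-4"),
    messages=[{"role": "user", "content": "Test prompt for QA"}],
)
print(response.choices[0].message.content)
```

All the screwing up happens against the free local endpoint; by the time the paid API sees a request, the surrounding code has already been debugged.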

Third reason to be looking at these models, and this one is kind of interesting, is censorship.

Censorship of public models is getting heavier and heavier in terms of what the model is and is not allowed to say.

And I’ve noticed anecdotally, and anecdotes are not data, that for queries I could ask three months ago, I now get, “I’m sorry, I can’t help you with that.”

And that’s not very helpful.

I’m not asking crazy stuff. I’m asking either very technical things or working with certain types of data that models now seem to protect against, particularly things using copyrighted terms.

Anything from Disney, for example, a lot of models will balk at now.

And so having local models that have no censorship, models aligned to do what they’re told with no built-in morals or ethics or rules, is super helpful.

Here’s why censorship is a bad thing in language models themselves.

Now, it’s not to say that censoring outputs is bad, but censorship of the model itself is bad. Think of a model like a cookbook, right? In your cookbook is a whole bunch of recipes.

Now, let’s say there’s an ingredient you don’t want.

Like, you’re gluten intolerant, so you say, OK, anything with wheat has got to come out.

And you start ripping out pages of your cookbook.

Yeah, you’re going to rip out the pasta page, right? That’s pretty obvious.

You’re going to rip out the bread page.

That’s pretty obvious.

But you also rip out beef Wellington, right? Even though the majority of that dish is not the pastry; it’s the big chunk of beef in the middle.

You rip out some dumplings.

You rip out a full English breakfast.

Pretty soon you’re ripping out a lot of things from this cookbook that contain wheat.

And what are you left with? You’re like, OK, I’ve got a fruit salad, right? And I’ve got bananas Foster, and maybe not even that.

Any kind of sauce where you’re using flour as a thickener.

That recipe’s got to come out, too.

That’s what censorship does to models. You’re not going in and blacking out individual words throughout the cookbook; you’re ripping out entire pages based on a concept you don’t want in there anymore.

And you damage the whole cookbook, not just the thing you’re trying to block. Foundation models, meaning models before they’ve been aligned or tuned or anything, are uncensored by nature.

And then what happens over time is that model makers like OpenAI or Google or Meta try to align and tune these models to make them do what they’re told within a certain set of rules.

With an uncensored model, there’s no built-in bias in one direction or another.

And then it’s up to you, the operator of the model, to use it responsibly and to set the rules around it.

So that’s a third good reason, I think, to experiment with these open weights models, because of what’s happening in the space now and the way it’s likely to go.

I talked about this in a recent issue of the newsletter: we’re going to have adversarial models.

You’ll have one model doing the thing and another model critiquing it, saying: that was racist.

Nope.

Try again.

That was insensitive.

Nope.

Try again.

That was based on wrong information.

Try again.

And so there’s kind of a QA person. Imagine if models were people: there’d be one person doing something and another person critiquing constantly, saying, nope, try again, until they got it right.

Censorship is totally fine for the outcome, right? Your business does not want to have a model spouting off racist language, right? That’s totally inappropriate.

So you absolutely want to censor the final outputs.

But the core model itself, if it’s censored, it’s damaged.

It’s intentionally damaged and it will not perform as well.

And so I think that’s something people should be experimenting with as well.

And do not, do not allow an uncensored model to interact with the general public or customers or anybody other than your R&D team, because the results will not be good.

But you should absolutely be using uncensored models at the core of your systems because they will deliver the best, most complete performance.

And then you have the adversarial model that is essentially fact-checking and correcting what comes out of the base model.
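
As a rough illustration of that adversarial pattern, here’s a minimal generate-and-critique loop. The model names, the critic prompt, and the “APPROVED” convention are all illustrative assumptions, not a standard protocol; in practice the generator and critic could live on different endpoints.

```python
# A sketch of the adversarial-model pattern: one model drafts an answer,
# a second model critiques it, and the loop repeats until the critic
# approves or we hit a retry limit. Assumes an OpenAI-compatible server
# (e.g., LM Studio locally) hosting both placeholder models.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")


def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the given model and return its reply."""
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def generate_with_critic(task: str, max_rounds: int = 3) -> str:
    draft = ask("generator-model", task)  # uncensored core model (placeholder name)
    for _ in range(max_rounds):
        verdict = ask(
            "critic-model",  # aligned critic model (placeholder name)
            "Review this answer for bias, insensitivity, and factual errors.\n"
            "Reply APPROVED if acceptable; otherwise explain what to fix.\n\n"
            + draft,
        )
        if verdict.strip().startswith("APPROVED"):
            return draft
        # Feed the critique back so the generator can try again
        draft = ask("generator-model", f"{task}\n\nFix these issues: {verdict}")
    return draft


print(generate_with_critic("Write a product description for our new software."))
```

The uncensored model does the heavy lifting at full capability, and the critic is the only gate between it and anything a customer ever sees.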

So those are, I think, three good reasons to look at local models.

I’m going to be doing a talk on this topic in 2024 because I think it’s an important one that we’re not thinking about: we try to get AI models to do everything instead of doing one specific task and then having other, specialized pieces of software correct that task, in the same way that you don’t smelt raw ore in the same forge that you make swords with.

There’s different processes and different tools you need to do each task well.

And that specialization, I think, is really important when it comes to language models and generative AI in general.

The less censorship there is of the foundation model, the better it will perform.

And then you have adversarial models to correct, to supervise, and to align the outputs with what you want the final output to be.

So really good question.

We could spend a whole lot of time on this, but it’s a really good question.

Thanks for asking.

We’ll talk to you soon.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.



Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


