You Ask, I Answer: How to Regulate Generative AI?

Karolina asks, “What in your opinion should be done for generative AI not to go too far? How can we take benefits but within control?”

In this video, I discuss the topic of regulating generative AI, which is a challenging issue. We can’t turn off the technology, nor can we regulate it through a central authority because of the proliferation of freely downloadable open source models. Instead, we need to focus on regulating outcomes and enforcing existing laws to penalize people who misuse AI for illegal activities. For example, we could add extra punishment for crimes committed using AI. Overall, it’s a heavy topic that needs careful consideration, and I believe that regulating the technology itself may not be effective. If you want to know more, watch the full video and hit that subscribe button if you like it.

Summary generated by AI from the transcript.

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn 0:00

In today’s episode, Karolina asks, “What in your opinion should be done for generative AI not to go too far? How can we take benefits but within control?” It depends on what you mean by control.

Right? If you mean the ability to turn these things off, to stop their use, we’re past that point. If you mean the ability to regulate them through a central authority, like a company such as OpenAI, we’re past that point too, because there’s now a wave of freely downloadable open source models that are very high quality.

I talked about this in a recent issue of my newsletter: open source models based on Facebook’s LLaMA or StableLM are in the wild, and you can download them and put them on your computer today.

And no one can control that.

Right? No one can control whether or not you have that software.

So that’s largely moot.

The reality is that large language models are here to stay, and the technology itself really can’t be regulated.

Because it’s now so open, you can download one of the base models and then fine-tune it, train it to do whatever you want it to do.

You could train it to only do clam chowder recipes, right? You could train it to spew hate. You could train it to create propaganda and misinformation.

And because these models are all small enough, and today’s personal computers, even your gaming laptop, are powerful enough to do that fine-tuning, there really is no way to regulate that, right? Any more than you can regulate how someone’s going to use a chainsaw. Yeah, you can put safety warnings all over it.

But if somebody wants to go full Friday the 13th on somebody else with a chainsaw, there’s not really anything a chainsaw maker can do to stop them.

Right.

So what do we do? What we do is we look at the outcomes, and we regulate the outcomes.

For example, in the USA, which is where I’m based, we have laws that essentially add more penalties onto a crime if that crime was committed within a certain context.

For example, we have a category called hate crimes, where if you commit a crime, there’s a base level of punishment for it.

And then if it can be proven in a court of law that it was a hate crime, that you committed the crime because of the person’s race, sexual orientation, veteran status, or disability, you get extra punishment on top of the punishment you’ve already gotten.

And so having laws that essentially restrict what people do with these models would be the way to go.

And we’re not talking about saying you can’t write certain things or whatever. We’re talking about things that are already against the law, and (a) enforcing those laws, which is a whole separate conversation, and (b) maybe adding an extra penalty if you use machines to do it, perhaps at scale.

So for example, fraud is illegal.

Scamming somebody out of money is illegal. If you used, say, a machine to synthesize someone’s voice to create a fake ransom call, that’s still illegal.

This is more illegal.

And so you could add a penalty saying, if you misuse the technology, then in addition to the 25 years of jail time you’re going to get for fraud (I’m just making up these numbers), we’re going to add an extra 10 years of penalty on top because you used AI to do it.

Right? There are many different laws that have multipliers or variables that change the severity of the punishment.

If we want AI to succeed, if we want AI to be useful, if we want people not to abuse it, we have to (a) enforce the laws we already have, which is always a treat, and do so in a coherent, consistent way, meaning that some people don’t get a pass because of their background, who they know, or how much they bribe the judge.

And (b), we should consider multipliers on existing laws that say, yeah, if you used AI to do this, the crime is worse, right? We consider the crime to be worse, therefore the punishment is worse.

That’s what we can do.

Because we cannot control the mechanisms themselves any more than you could control spreadsheets.

Right? Think about it: if somebody uses a spreadsheet to commit a crime, you can’t just turn off spreadsheets.

It’s impossible, right? There’s just no way to stop people from using spreadsheets.

There are open source ones, there’s Google Sheets, there’s Microsoft Excel.

And yeah, Microsoft could maybe turn off your specific copy of Excel if they had the license information, but then you just download OpenOffice or LibreOffice: free, runs on your computer, very capable.

And now the person’s got a spreadsheet.

And if you didn’t want them to have a spreadsheet, you’re kind of out of luck.

But you can say, yeah, if you use the spreadsheet to commit this crime, we’re going to add an extra five years of penalty, or however the legal system works in that country.

That’s essentially where we are today with large language models, and with generative AI in general: the tools are out there.

Now we’ve got to regulate how people use them and make clear there are criminal penalties for misusing them.

Not the general misuse of them.

But if you’re committing a crime and you use AI to do it, we’re going to make the penalty worse.

So that’s it.

It’s a heavy topic to talk about.

And it’s one that I feel like a lot of governments, a lot of legislators, a lot of elected officials do not understand.

And they will propose legislation that is impossible to enforce.

And so, like many other things they’ve tried to legislate that are very difficult to enforce, regulation of the technology itself probably is not going to be super successful.

Anyway, that’s the answer, or at least that’s my answer to the question.

Thanks for asking.

I will talk to you soon.

If you like this video, go ahead and hit that subscribe button.

