
You Ask, I Answer: Stopping Misuse of AI?

Jesse asks, “How can we stop bad actors from using AI for malicious means, from deepfakes to surveillance to hijacking political systems?”

The short answer is you can’t. AI isn’t a mystical, monolithic technology in a black box. AI is a collection of mathematical techniques drawn from statistics and probability. Can you stop bad actors from using math, or using spreadsheets? Of course not. Most AI is open-source technology, as it should be, so that the maximum number of people can benefit from it and work on it – and critically, oversee it. The more people using it, publishing their code, and inspecting others’ code, the better.

What should be done about bad actors? The same thing that’s always been done: penalize them for the outcomes of their acts. Whether you use AI or a handgun to commit a crime, you’ve still committed a crime and must be held accountable for it. A deepfake is still slanderous, and while the laws around deepfakes need to be fine-tuned, fundamentally we already agree, based on existing law, that fraudulent misrepresentation is a criminal act. A hostile government using AI to cause harm to citizens produces the same outcome as a hostile government using any other means – and we have plans and capabilities in place to deal with acts of war.

In the business sphere, this line of thinking is important. AI isn’t magic – it’s math. The faster we can get over thinking it’s some unknowable magic, the faster we can take advantage of AI for business purposes. When you realize that natural language generation is just predicting the next word in a sentence based on what the next word has been in similar sentences in the past, natural language generation suddenly becomes both obvious and exciting for what you could do with it.
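
To make that concrete, here is a minimal sketch in Python of the idea: a toy bigram model that counts which words have followed each word in a tiny, made-up corpus and turns those counts into next-word probabilities. The corpus, names, and numbers are illustrative assumptions, not any production system.

```python
from collections import Counter, defaultdict

# Toy bigram model: count which word follows each word in a tiny corpus,
# then estimate P(next word | current word) from those counts.
corpus = (
    "visit the wildlife sanctuary near the wildlife preserve "
    "run by the wildlife federation"
).split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def next_word_probabilities(word):
    """Return (word, probability) pairs for everything seen after `word`."""
    counts = followers[word]
    total = sum(counts.values())
    return [(w, n / total) for w, n in counts.most_common()]

print(next_word_probabilities("wildlife"))
# [('sanctuary', 0.333...), ('preserve', 0.333...), ('federation', 0.333...)]
```

A production language model conditions on far more context than one previous word, but the “most probable next word” arithmetic is the same idea.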


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Jesse asks, “How can we stop bad actors from using AI for malicious means, from deepfakes to surveillance to hijacking political systems?” Well, the short answer is you can’t.

AI is not some mystical monolithic technology in a black box, right? It’s just a collection of mathematical techniques.

It’s statistics.

And probability. What’s the probability this is a picture of a cat or a dog? What’s the probability that the word I just said was cat or dog? AI is just math.

Now it is math that is assembled in programming code.

And that math can get very sophisticated: higher orders of calculus and linear algebra, and many other subsets of statistics and probability.

But at the end of the day, it really is still just mathematics.
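
As a toy illustration of that cat-or-dog question, here is a minimal sketch, assuming made-up classifier scores, of how a model’s raw outputs become the probabilities described here, using the standard softmax function.

```python
import numpy as np

# Hypothetical raw scores a classifier might produce for one image.
# Softmax turns the scores into the probabilities described above.
scores = np.array([2.1, 0.4])  # assumed scores for ["cat", "dog"]; illustrative only
probabilities = np.exp(scores) / np.exp(scores).sum()

for label, p in zip(["cat", "dog"], probabilities):
    print(f"P({label}) = {p:.3f}")
# P(cat) = 0.846
# P(dog) = 0.154
```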

Can you stop bad actors from using math? Can you stop them from using spreadsheets? Can you stop bad actors from using Adobe Photoshop? No, of course not.

AI also is mostly open source code, open source technology, as it should be.

You want people using open source technology as much as possible.

For two reasons.

One, you want the maximum number of people to benefit from it and work on it, work with it.

You know, people shouldn’t have to pony up a million dollars just to work on a technology. If a high school kid downloads R Studio, or Rodeo, the Python environment, they should be able to code with it for free, create new things, and use existing technology to accelerate their projects.

That’s how innovation happens: by allowing people to use advancements in science and technology.

So we want that open source technology. Will some bad people download it and use it?

Yes, that’s a given. Some bad people will download and use spreadsheets, right?

But the societal benefit far outweighs the societal negatives.

The second reason you want it to be open source, and this is really, really important and was sort of a heated topic from the 1990s till about the early part of this decade, is that closed source code is very difficult to inspect. It’s very difficult to know if there are backdoors or bugs that have not been disclosed, or holes in the system that people can take advantage of.

And that’s what bad actors will definitely do. But when your technology is open source, everybody can look at the code.

Everybody can oversee it.

The more people who are using open source AI and publishing their code and their libraries, the better, because everyone else can look at it.

You know who has the expertise in the field. They can say, “Hey, this doesn’t look right,” or “There’s no fairness metric in there, did you think about that?” or “What are you doing that for?” and be able to flag and detect it.

There is tremendous progress happening in AI for detecting malicious use of AI: deepfakes, fake natural language generation, faked audio, fake video, you name it.

A number of organizations are doing very good work on detecting misuse or malicious use of artificial intelligence.

So we want that, and that is enabled by having the technology be open source.

So what do we do about the bad actors? The same thing we’ve always done with bad actors, right? You penalize them for the outcomes of their acts. Whether you use AI to commit a crime or a handgun, you’ve still committed a crime, right? And you still have to be held accountable for it.

That’s just the way things work, or the way things should work ideally, right? A deepfake, where you map somebody’s face onto a different person’s body and have them do things that they didn’t do and say things they didn’t say, that’s still slanderous.

Right? That is still fundamentally a fraudulent misrepresentation of that person.

Right? We do have some work to do on refining the laws around these technologies, but fundamentally, we already agree, based on existing law, that fraudulent misrepresentation is a criminal act.

Right.

If a hostile government’s using AI to cause harm to citizens, that still has the same outcome as a hostile government causing harm using any other means, right? If a hostile government convinces a whole bunch of people not to use vaccines, that’s fundamentally the same as a hostile government deploying a biological weapon.

The outcome, dead citizens from biological weapons or biological means, is the same.

And we already have plans and capabilities in place to deal with an act of war that involves biological weapons.

In fact, it has been long standing policy for the United States government to treat nuclear, biological, and chemical weapons as equivalent.

And so if you use one, the other ones are on the table for us.

Now, in the business sphere, this line of thinking is really important for businesses, for marketing.

AI is not magic.

It is not magic, it is math.

And the faster we can get over thinking that AI is some unknowable magic, the faster we can take advantage of it for business purposes. When you realize that neural networks are just a way of doing large scale computation, crunching really big spreadsheets really quickly, it does take the magic away.

It certainly takes the marketing angle away. “Misuse of spreadsheets” is something you would never see, right? Everything uses spreadsheets.

The same is true of AI.
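
To ground that spreadsheet comparison, here is a minimal sketch, with random numbers standing in for real data and weights, of a single neural network layer: a grid of inputs multiplied by a grid of weights, spreadsheet-style arithmetic done at scale.

```python
import numpy as np

# One neural-network layer is a matrix multiply plus a simple nonlinearity,
# the same row-times-column arithmetic a spreadsheet does, just at scale.
rng = np.random.default_rng(42)   # random stand-ins for real data and weights
inputs = rng.random((1, 4))       # one row of data with four columns
weights = rng.random((4, 3))      # a learned 4x3 grid of weights
bias = np.zeros(3)

layer_output = np.maximum(0, inputs @ weights + bias)  # ReLU(xW + b)
print(layer_output)               # three newly computed columns
```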

When you realize that natural language generation is just predicting the next word in a sentence, based on what the previous word is and, based on the data you gave the AI to learn from, what the next word has typically been in similar sentences in the past, natural language generation at that point loses the magic.

It’s suddenly both obvious, like, “Oh yeah, this is just a probability.” If I say “wildlife,” what is likely the next word? There’s a bunch of options.

But based on that technique, that sentence, you can make some pretty good predictions.

Probably not “wildlife sausage,” right? It’s probably like wildlife sanctuary, wildlife preserve, wildlife federation. But not “wildlife rutabaga,” that doesn’t make sense.

At that point, natural language generation becomes obvious and exciting, not because the technology is cool, but because of what we can do with it.

Clay Shirky has a great saying from, again, the early part of the decade: when a technology becomes technologically uninteresting, suddenly it becomes decidedly interesting, because now people understand it, know it, and can start using it.

And that’s the hump that a lot of people need to get over for AI.

Once you understand it’s not magic, it’s just math, and we’ve been doing math for a while, suddenly you start to say, “Okay, now I understand what I can use this thing for”: how to use it to stop bad actors, use it to identify bad actors, and use it to advance the cause of humanity.

So, really good question, complex question.

This answer could go on for a very long time.

But that’s the short answer.

As always, please leave your comments in the comments box below.

Subscribe to the YouTube channel and the newsletter. I’ll talk to you soon.

Take care.

Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and listen to how we can help you.

