
You Ask, I Answer: Regulation of Marketing AI?

Jonathan asks, “What kinds of regulations do you expect to see in marketing AI or AI in general?”


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn 0:13

In today’s episode, Jonathan asks, what kinds of regulations do you expect to see in marketing AI or AI in general? What do I expect to see, or what do I think we need? Because they are different.

What I expect to see are minimal efforts at creating regulations around fairness.

Right, and to some degree, regulations about visibility into what machines do and how they make decisions, particularly for consumer protection: being able to tell somebody, and justify, why a consumer was turned down for a mortgage or a loan, and in a court case, being required to prove that your machine did not make the decision on the basis of race or religion or gender, the protected classes.

That’s what I expect to see: the bare minimum of regulation, because artificial intelligence right now is such a driver of profit and income for companies that most companies would prefer not to have a whole lot of regulation around it.

What do we need? If we want to continue having a functioning society, we need to have regulations in place about interpretability and explainability.

And what I mean by that is that we need regulations no different than, say, the ingredients list.

The nutrition label on a package saying, here’s what’s in the box. If you put this in your mouth, here are the chemicals you are putting in your mouth: sorbitol, gum base, glycerol, and so on.

We require that of food; we require that of some of the important things in our lives. We should be requiring that of our machines.

What is in the box?

For example, if you create a recommendation engine, tell me the basis on which it makes recommendations.

Right? Prove that the machine makes recommendations in a fair and balanced manner.

This was a discussion topic in this week’s newsletter; if you go to my website, ChristopherSPenn.com, you can see last week’s newsletter, the AI and inequality issue.

There’s a natural bias towards bigger companies.

In SEO in particular, the bigger you are, the more content you generate, and the more content you generate, the more data there is to learn from your company.

And as search algorithms improve, they learn from the data they’re given; bigger companies have more data, so the algorithms learn more from them.

When we talk about regulation of AI, we have to have some serious discussions about expected outcomes.

What is the expected outcome of this software model, and does it deliver on that? Be clear in regulating: these are the required outcomes. Take something simple like credit score decisions: credit score decisions must have outcomes that are identical when you control for protected classes.

So a Black man and a Korean woman should have identical outcomes if their income, their employment, and so on are all substantially identical.

And if they don’t, if the Korean woman never gets the loan and the Black man always does, controlling for everything else so that everything else is equal, then you’ve got a race issue, possibly a gender issue, maybe both.

But those are both protected classes.
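To make that concrete, here’s a minimal sketch in Python of the kind of paired-outcome test being described; the model, field names, and thresholds are hypothetical stand-ins, not anyone’s real system.

```python
# Minimal sketch of a paired-outcome fairness check.
# approve_loan is a hypothetical stand-in for a real credit model.

def approve_loan(applicant: dict) -> bool:
    """Stand-in decision model; a real audit would call the production model."""
    return applicant["income"] >= 50_000 and applicant["years_employed"] >= 2

def identical_outcomes(base: dict, field: str, values: list) -> bool:
    """True if the decision is the same for every value of the protected field."""
    decisions = {approve_loan({**base, field: v}) for v in values}
    return len(decisions) == 1

# Substantially identical applicants who differ only in a protected class.
profile = {"income": 72_000, "years_employed": 5}
print(identical_outcomes(profile, "race", ["Black", "Korean"]))  # expect True
print(identical_outcomes(profile, "gender", ["man", "woman"]))   # expect True
```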

And so what should be on the label on the box of that AI? This AI guarantees that it does not make decisions based on race or gender.

Right? That’s what should be on the box.

Is that what’s going to happen? Maybe. It depends, like so many other things.

I expect it to be a patchwork quilt of regulations that vary from country to country, region to region. In some regions of the world, you’ll have very, very stringent requirements.

For example, the EU is well known for having extremely stringent requirements on disclosing things.

There’s a whole bunch of chemicals in manufactured consumer goods that are flat-out banned in the EU but perfectly fine in other countries.

Now, whether they’re actually safe or not isn’t the discussion topic, but what’s regulated is, and it would not surprise me if countries in the EU said, yeah, if you want to operate this AI here, here is what you must disclose.

Christopher Penn 5:15

That’s what I would hope to see in all forms of AI.

And the thing you may say is, well, you know, it’s marketing.

It’s not like you’re denying people loans or making healthcare decisions.

It’s just marketing.

Does marketing AI need regulation? Uh huh.

Sure, it does.

I was at an event a couple of years ago, and I was watching a presentation by a fairly well-known insurance company.

And this insurance company said, we are not permitted by law to discriminate in the issuance of policies based on protected classes; we cannot discriminate based on race or gender or religion, etc.

And then, very proudly on stage, these folks said, so what we’ve done is we’ve used machine learning to fine-tune our marketing to make sure that less desirable people don’t see our marketing.

So if we’re not marketing to them, they’re less likely to buy and therefore we don’t have to deal with those decisions.

Like, well, great, you just reinvented redlining.

Redlining, if you’re not familiar with the term, comes from the 1930s in America, when banks would draw red lines around districts of cities and say, we’re not going to do business in these places. They were typically Black American neighborhoods, typically poor places, typically minorities of some kind or another.

And redlining was declared illegal a couple of decades later.

And this company was up on stage touting its use of marketing AI to effectively reinvent redlining, but doing so in a way that adheres to the letter of the law while violating the spirit of it.

Because you don’t have to market to some people.

You don’t have to spend marketing dollars to reach some people, that is true.

But the outcome is the same.

And that’s the difference with AI.

Right? Because we don’t necessarily know the inner workings of a deep neural network, we have to judge AI based on its outcomes. Whether or not we intend to discriminate, for example, if the machine does it, then it’s doing it.

Right, whether or not that was our intent, if it’s doing it, that is the problem.

And so we have to be very careful about regulating AI, not on the technology, not even on the data set, but on the outcome it delivers.

And if it’s delivering outcomes that are unfair, turn it off until we can fix the problem, until it can demonstrate that fairness is at play.

Right.

And it’s really challenging, it’s a much more challenging proposition than you might think.

Because when you deal with systemic biases, you get a lot of correlated outcomes, right? For people who are minorities, depending on the minority, there is a natural systemic bias that means those people are going to earn less money.

So you may program in a rule to say, okay, we’re not going to use race at all, we’re only going to base judgments on income.

Well, by default, you create an outcome where there tends to be a bias against a race, because race is so tightly correlated with income.
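Here’s a rough illustration, with entirely invented numbers, of why simply dropping the protected attribute doesn’t solve this: a model that decides on income alone still produces skewed outcomes when income is correlated with group membership.

```python
import random

random.seed(0)

# Invented, illustrative data: systemic bias means group B earns less on
# average, even though the model below never sees the group label.
def sample(group: str) -> dict:
    mean_income = 65_000 if group == "A" else 48_000
    return {"group": group, "income": random.gauss(mean_income, 12_000)}

def approve(applicant: dict) -> bool:
    return applicant["income"] >= 55_000  # "race-blind": income only

applicants = [sample(g) for g in ["A"] * 1000 + ["B"] * 1000]
for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"Group {group} approval rate: {rate:.0%}")
# The approval rates diverge sharply even though the group label is unused,
# because income acts as a proxy for it.
```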

So in those cases, you need to be able to demonstrate, in your algorithms and your models, that there are data points showing race is not a factor. If you have a Black man and a Korean woman and they have the same income levels, they should have the same probability of being approved for a loan, or of being shown a marketing email, or whatever the case may be.

That’s how you prove that protected classes are not in play: by showing multiple examples where the protected class is not a differentiating factor in the decisioning of the machinery.
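As a sketch of what that evidence could look like, here’s the paired check from earlier extended across a whole dataset: flip only the protected attribute on every record and count how many decisions change. The model, fields, and data are again hypothetical.

```python
# Counterfactual audit sketch: flip only the protected attribute for each
# applicant and count decisions that change. Zero changes is the evidence
# that the protected class is not a differentiating factor.

def decide(applicant: dict) -> bool:
    """Stand-in for the production decision model."""
    return applicant["income"] >= 55_000

def flip_rate(applicants: list, field: str, swap: dict) -> float:
    """Fraction of decisions that change when the protected field is swapped."""
    changed = sum(
        decide(a) != decide({**a, field: swap[a[field]]}) for a in applicants
    )
    return changed / len(applicants)

applicants = [
    {"income": 60_000, "race": "Black"},
    {"income": 48_000, "race": "Korean"},
    {"income": 75_000, "race": "Korean"},
]
swap = {"Black": "Korean", "Korean": "Black"}
print(f"Decisions changed by flipping race: {flip_rate(applicants, 'race', swap):.0%}")
# Expect 0% here, because this stand-in model never looks at race.
```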

It’s very challenging, and it is costly.

And this is another reason why companies don’t want to spend a whole lot of time on this, and why it will have to be regulated: because it is costly.

It is financially costly and computationally costly to prove that your machines are not doing bad things.

But you have to do it.

It has to be part and parcel of AI. If it’s not, we’re going to create a world that’s not super fair, not super fun to live in.

Right: wherever you are in life in terms of income and jobs and

Christopher Penn 9:59

products and services you consume, if you don’t regulate for fairness in AI, the machines will reinforce everything around you to keep you where you are.

So if you’re happy with your lot in life, and you don’t care about advancing your career or changing what you buy and things like that, then that might be okay.

But if you aspire to move up the staircase of whatever you consider success in life, then by definition the machines, which have trained on the data you’ve provided them about where you’ve been in your life in the past, will continue to make recommendations based on those things.

Even if you are no longer that person.

Right? Can you imagine getting recommendations based on where you were in your life 10 years ago, or 20 years ago? Some of the younger folks might say, I’d be getting recommendations for My Little Pony. Well, yeah, that’s kind of the point.

Right? The machines don’t adapt, unless they’re balanced for fairness and growth.

And so you would continue to get My Little Pony ads, even though you’ve long outgrown them.

That’s a fun example of what is otherwise a very insidious problem that is not visible, because we don’t know what the expected outcomes are.

So that’s where we need to go with regulation of AI.

To say, these are the stated intended outcomes of this model.

And this is how well the model complies with them.
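One hypothetical way to make that kind of label machine-checkable, sketched below: pair the stated outcome with a tolerance and the measured result from an audit like the ones above. The field names and numbers are invented.

```python
# Sketch of a machine-readable "label on the box" for a model:
# the stated intended outcome next to the measured result.

model_label = {
    "model": "credit_decision_v3",  # hypothetical model name
    "intended_outcome": "equal approval rates across protected classes",
    "max_allowed_rate_gap": 0.01,   # largest tolerated gap between groups
    "measured_rate_gap": 0.004,     # would come from a fairness audit run
}

def complies(label: dict) -> bool:
    """Does the measured outcome satisfy the stated requirement?"""
    return label["measured_rate_gap"] <= label["max_allowed_rate_gap"]

print(f"{model_label['model']} complies: {complies(model_label)}")
```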

And this is critical.

Human law appropriately has, or should have, the presumption of innocence.

Right? You are innocent until proven guilty.

You can be accused of a crime.

But you are innocent of a crime until you are proven guilty in a court of law.

That should not apply to machines. Machines aren’t sentient, machines are not sapient, they are not self-aware.

They do not have rights.

And so we should regulate AI accordingly; if it does become self-aware, that’s a different conversation.

But today, we should be treating algorithms and models as guilty until proven innocent.

You are seen to be discriminating; prove to me that you’re not.

If I accuse an AI, a machine, a piece of code of being discriminatory, one of the precedents we need to establish in law is that the machine is guilty until it can prove its innocence.

That’s how we create a more equitable AI ecosystem.

Presuming innocence, assuming, oh no, it’s not doing that? That’s not the way to go.

Not for machines. For humans, yes.

For living creatures with sentience, self-awareness, and rights, innocent until proven guilty is the way to go.

For machines, the opposite.

That’s a key point.

So really good question.

Long, long answer.

Thanks for asking.

