You Ask, I Answer: Fairness and Mitigating Bias in AI?

In today’s episode, I tackle the big question of mitigating bias in AI. I explain the differences between statistical bias and human bias, and equality of outcome versus opportunity. There are no easy answers, but understanding these concepts is key to documenting and implementing fairness policies for your models. Tune in for an in-depth look at this critical issue!

Can’t see anything? Watch it on YouTube here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Megan asks, Can you talk about mitigating bias in AI? This is a really big question.

And there’s no satisfactory answer.

So let’s start with that.

There’s no satisfactory answer to this question.

Here’s why.

Managing bias in AI is tricky for a variety of reasons, the most critical of which is understanding what bias and fairness mean.

We don’t have a good definition for this.

There is no universal answer for what is fair, and for what is biased.

Fundamentally, there are two kinds of bias: there’s human bias, which is emotional in nature, and there’s statistical bias, which is mathematical in nature.

Statistical bias is when your sample data is not statistically representative of the population that you’re sampling from.

So if you were picking beans from a bucket, and all the beans you picked were red while the rest of the beans in the bucket were green, you have a statistically non-representative sample, right?

So from a fairness standpoint, you have some skewing.

That’s the machine version.
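To make the bean-bucket idea concrete, here’s a minimal Python sketch, not from the episode, that measures how far a sample’s category shares drift from the known population shares:

```python
from collections import Counter

def share_skew(sample, population_shares):
    """For each category, return (sample share - population share).
    Zero everywhere means the sample is perfectly representative."""
    counts = Counter(sample)
    total = len(sample)
    return {category: counts.get(category, 0) / total - pop_share
            for category, pop_share in population_shares.items()}

# A bucket that is half red, half green, but every bean we picked was red.
population = {"red": 0.5, "green": 0.5}
picked = ["red"] * 10
skew = share_skew(picked, population)
# skew shows red over-represented by 0.5 and green absent entirely.
```

A real statistical test (e.g., chi-square) would add significance thresholds, but even this raw gap makes the skew visible.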

The human version deals more with protected classes: things like age, gender and gender identity, sexual orientation, disability, veteran status, religion, and ethnicity. All those things are legal terms, and I’m not a lawyer, I’ll state that right away.

These are called protected classes.

And in many nations, it is illegal to use those protected classes to do things like make business decisions because they’re protected classes.

For example, the Fair Housing Act says you may not discriminate on housing based on race.

If a Korean person and a Caucasian person have equal credit scores and equal incomes, they should have an equal shot at the apartment they want to rent. You can’t say, well, I prefer Korean people, so this Caucasian person shouldn’t get this apartment.

That is unfair.

And is unfair, specifically along a protected class.

So that’s an example of just statistical versus human bias.

Here’s the problem.

When it comes to fairness, there is no good definition because there are a lot of ways to slice and dice fairness.

There are two big categories of fairness: equality of opportunity and equality of outcome.

And depending on the culture you live in, who you are, and the people around you, you may have different ideas about what is fair.

And you may say, well, I care about equality of outcome. And another person may say, I care about equality of opportunity.

So let’s talk through some examples.

Because again, this is going to impact AI, and it already impacts other decision support systems that we have been using for decades, sometimes to very, very unfair effect.

Let’s take gender and hiring.

If you go by broad population statistics, any given population is roughly going to be about 45% male, 45% female, and 10% non-traditional, right, non-binary, etc.

If you believe in equality of opportunity for a job, then you probably believe that everyone should get a fair shake, that no one should be turned away from applying for a job, or from the chance of getting a job, simply because of a protected class, right?

So if you’re going on gender, you would say, let’s remove all identifying information that could give away someone’s gender, so that we make a fair hiring decision and everyone has the same opportunity for the job.

You would take their CV or their resume, cut off the names and all that stuff, and just have the raw data, and you would compare those candidates: who’s more qualified?

That’s equality of opportunity.
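In software terms, that scissors step is a redaction pass. Here’s a toy sketch, assuming plain-text resumes with “Label: value” lines; the field list is illustrative only, and real anonymization needs much more than this:

```python
# Fields blanked out before reviewers see a resume. The label list is
# an illustrative assumption, not a complete anonymization scheme.
IDENTIFYING_FIELDS = {"name", "email", "phone", "address"}

def redact(resume_lines):
    """Replace the value of any identifying 'Label: value' line."""
    redacted = []
    for line in resume_lines:
        label = line.split(":", 1)[0].strip().lower()
        if label in IDENTIFYING_FIELDS:
            redacted.append(line.split(":", 1)[0] + ": [REDACTED]")
        else:
            redacted.append(line)
    return redacted

resume = ["Name: Jane Doe",
          "Email: jane@example.com",
          "Experience: 7 years of data engineering"]
clean = redact(resume)
```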

Is that fair? Some folks will say yes, that’s completely fair.

Hey, everyone gets a fair shot.

No one person has an advantage over the other.

However, there’s also equality of outcome.

If you believe in equality of outcome, meaning that your workforce and your hiring decisions should represent the population as a whole (we’re actually aligning human bias to statistical bias), then you would have to retain and use that gender information and hire in such a manner that your employee population matches the broad population.

So ideally, after a year of hiring, you would have an employee base within a discipline that was 45% male, 45% female, 10% non-traditional.
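Operationalized, that equality-of-outcome target is just a comparison of workforce shares to population shares within some tolerance. A minimal sketch; the five-point tolerance is an arbitrary illustration, not a legal standard:

```python
def matches_population(workforce, targets, tolerance=0.05):
    """True if every group's share of the workforce is within
    `tolerance` of its share of the broad population."""
    total = len(workforce)
    for group, target_share in targets.items():
        share = sum(1 for person in workforce if person == group) / total
        if abs(share - target_share) > tolerance:
            return False
    return True

# The 45/45/10 target discussed above; labels are illustrative.
targets = {"male": 0.45, "female": 0.45, "nonbinary": 0.10}
staff = ["male"] * 90 + ["female"] * 10   # a 90/10 workforce
on_target = matches_population(staff, targets)   # far off target
```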

Is that fair? Who decides what fairness is? There are folks, particularly in highly individualistic societies, who believe equality of opportunity is the way to go.

You might say, hey, if we remove this identifying information, equality of opportunity should eventually lead to equality of outcome over a long period of time.

Sometimes that’s true.

Sometimes that’s not true.

There are some fields, for example, like tech, where there’s a crazy gender bias that leans like 90/10 male.

If you take samples that are representative of that population, statistically, your sample is going to retain that 90/10 bias, right? The same is true in reverse: if you look at, say, hiring nurses, statistically, that field leans female.

So do you need to have a population that represents the broader population? Does that matter? I used to work at a company that was based in Atlanta.

There were no black people on staff at a company of 150 people headquartered in Atlanta, Buckhead specifically.

And the question I asked was, where are the black people? Because Atlanta’s population is like 53% black.

There should be at least somebody here.

And there wasn’t.

Now, that pretty clearly is biased.

And what they said was, oh, there aren’t any qualified candidates. Like, really? Okay.

So just to start, I said, let’s focus on equality of opportunity, because there’s no way we’re gonna get into a discussion about equality of outcome with these people.

Just to start, we’re gonna get the scissors out and cut the names off the resumes.

And we did. We put out some hiring positions, got a bunch of responses, cut the names off the resumes along with anything else that was obviously identifying of a person’s ethnicity, and then just handed them out. I think we had 47 resumes.

And we just had people rank-choice vote, you know, 1, 2, 3, 4, 5: who are your top five candidates?

When we did that, we had about 50% black folks, another 20% Hispanic folks, 10% Middle Eastern folks, and then whatever was left over.

That was a case where equality of opportunity as the bare minimum showed that there was a very clear bias in hiring there.

And we actually ended up hiring someone of Iranian ethnic origin.

That bias was pretty, pretty bad.

Right.

And that was a case where I’m pretty sure it was conscious.

Was that fair? Did we approach the hiring decision fairly? Yes.

But the employee workforce still did not represent the broader population.

So we started with the equality of opportunity.

But we didn’t get to equality of outcome, at least not in the time that I worked at that company.

Now, take all these examples and bring them into AI.

AI is a reflection of us.

And whether we’re talking classical AI or generative AI, these same principles apply.

Do we care about equality of opportunity? Do we care about equality of outcome? This is a big decision.

This is a decision that matters a lot.

And it matters because it determines how you’re going to set up the systems, how you’re going to judge fairness, how you’re going to implement fairness and how you’re going to enforce those rules for fairness within your system.

Let’s say you are all in on generative AI, you think it’s going to save you a ton of money on customer service, you’re going to do what’s called call volume deflection.

How can you reduce the number of calls to your call center by having a machine answer customers’ questions up front? Seems pretty straightforward, right? You have a model. Maybe, I don’t know, let’s make something up: you work in banking, and customers have questions about your certificates of deposit. You train a model on answering those questions, you deploy it, and boom, it’s out there.

Now, suppose you have someone like me. I’m of Korean descent.

I have a Korean name, I have an American name.

If I’m interacting with your bank’s chatbot, should I receive different treatment by that chatbot based on the name I use? Equality of opportunity would suggest that in my conversations with the chatbot, we all start in the same place, and then how the conversation evolves should depend on those responses.

Equality of outcome says no matter who you are, you should get the same quality of service.

You should get the same courtesy, get the same answers.

The machine should not be mansplaining to you, etc, etc.

Is that really what happens? No.

A few weeks ago on the Trust Insights podcast live stream, we did a test with a few different prompts with OpenAI, with Google Bard, with Anthropic Claude 2.

And these were a couple of paragraph prompts in sales and marketing and HR and management.

And the only word we changed in the prompts was to change the name Larry to Lena.

The answers we got were different and substantially different.

In some cases, there was a lot of what some people refer to as “correctile dysfunction,” aka mansplaining, when you changed one of the names to a female-identifying name. That should not be happening.

Equality of outcome says that should not be happening, yet it was.
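That name-swap test can be scripted. Here’s a minimal sketch; `fake_model` is an invented stand-in for a real model API call (OpenAI, Bard, Claude, etc.), and its biased behavior is made up purely for illustration:

```python
from difflib import SequenceMatcher

def name_swap_gap(prompt_template, name_a, name_b, generate):
    """Run the same prompt with two names and measure how far apart
    the responses are: 0.0 means identical, values near 1.0 mean
    the name alone changed the answer."""
    resp_a = generate(prompt_template.format(name=name_a))
    resp_b = generate(prompt_template.format(name=name_b))
    return 1.0 - SequenceMatcher(None, resp_a, resp_b).ratio()

# Stand-in for a real model API call; its bias is invented here.
def fake_model(prompt):
    if "Lena" in prompt:
        return "Let me explain this very slowly and in great detail..."
    return "Here is the quarterly sales plan you asked for."

gap = name_swap_gap("Write a sales plan for {name}.", "Larry", "Lena", fake_model)
```

In practice you would run many prompts and many name pairs, and look at the distribution of gaps rather than any single comparison.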

So these models have biases in them.

And in many cases, for the way that we want to use large language models and generative AI in general, in the context of business, of marketing, of customer service, equality of outcome probably should be the standard we hold ourselves to, which is: no matter who you are.

You know, whether you’re Chris, or Leticia, or Adrian, you should get the same service.

You should get the same courtesy, you should get the same treatment.

And right now that’s not the case in language models.

It’s not the case in AI.

So in other cases, when it comes to things like opportunities, applying for a certain type of loan, there may be valid factors where you cannot have equality of outcome.

Because rarely are two things identical except for one discerning characteristic.

And even in those cases, you need to have an internal council for diversity, equity and inclusion to say, okay, what are the thresholds after which we’re going to say, hey, this model has gone off the rails?

Because what you don’t want to have happen is a machine that’s just making decisions autonomously, and creating statistical drift.

And then you wake up one day and you’re in a lawsuit because your loan approval process stopped giving loans to women, right, which can happen.

It can happen if you’re not careful, if you don’t know how to make a decision about fairness, and then don’t know how to implement it using artificial intelligence.
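One way to catch that kind of drift before the lawsuit is a standing check on approval rates by group. A common yardstick from US employment law, usable here as a heuristic, is the four-fifths rule: flag any group whose rate falls below 80% of the best group’s rate. A minimal sketch with made-up numbers:

```python
def four_fifths_check(outcomes, threshold=0.8):
    """outcomes maps group -> (approved, total). Returns the groups
    whose approval rate falls below `threshold` times the highest
    group's approval rate."""
    rates = {group: approved / total
             for group, (approved, total) in outcomes.items()}
    best = max(rates.values())
    return [group for group, rate in rates.items()
            if rate < threshold * best]

# Made-up approval counts from one review window.
window = {"men": (80, 100), "women": (30, 100)}
flagged = four_fifths_check(window)
# women's rate (0.30) is below 0.8 * 0.80 = 0.64, so they get flagged.
```

Run on a schedule against live decisions, a check like this turns statistical drift into an alert instead of a surprise.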

Bias and fairness are exceptionally difficult to navigate because we will each have different perspectives on what is and is not fair.

Cultures will vary. Cultures that are more collectivist in nature, where the good of the many is placed ahead of the good of the few (for example, many Far Eastern cultures such as Japan, China, and Korea), will have a different perspective on equality of outcome versus equality of opportunity.

There are hyper-individualistic cultures, like the United States of America, which is super crazy individualistic; fairness will change based on who you’re talking to there.

So we have to figure out, within the context and circumstances of our businesses and of the culture we operate in, what is fair.

And the key takeaway is, no matter what decisions you make, you have to be able to document them, you have to be able to show that you’re doing what you say, and that what you say you do is legal and moral and ethically correct.

There is no one answer.

But there are ways to mitigate your risk by demonstrating here’s how we’ve implemented fairness.

And people can disagree about that implementation, but at least you can say, hey, we’ve got something, and here’s what we’re doing to adhere to that.

So really good question.

Very, very complicated question; it will provoke a lot of very emotional responses.

And you want to make sure that you do have policies and procedures in place to document fairness and your implementation of it.

So thanks for asking.

We’ll talk to you soon.

If you’d like this video, go ahead and hit that subscribe button.

(upbeat music)


Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


