You Ask, I Answer: Vetting Marketing AI Vendors for Bias?



Tracy asks, “What are some questions you should ask vendors to better understand what data they use in their algorithms to make sure it’s not biased?”

It’s not just questions we need to ask. Consider checking for bias to be like any other audit or due diligence. We will want to investigate the 6 main areas where bias creeps in: people, strategy, data, algorithm, model, and action/deployment. How do you do this? A lot of it comes down to vendors producing documentation. If they can’t, there’s likely a problem.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Tracy asks, “What are some questions you should ask vendors to better understand what data they use in their algorithms to make sure it’s not biased?” So it’s not just questions we need to ask.

Vetting for bias is like any other professional vetting you want to do, in that there are things to look for and things to request from a vendor, in the same way you would vet a vendor for equal opportunity employment, for non-discrimination, or for fiduciary responsibility.

There are so many different aspects to auditing and doing your due diligence on a company, and checking for bias in artificial intelligence and machine learning models really shouldn’t be any different than checking to see if a vendor is, say, Title VII compliant. If the vendor discriminates against people in hiring, you would want to know that.

If you’ve ever been through a corporate audit (delightfully fun), you know the audit forms you’re required to fill out contain lots of questions about your process around hiring, your alignment to the Equal Employment Opportunity Act, all these different ways to look for problems.

When it comes to bias in AI and dealing with vendors, it’s important to understand what kinds of bias to look for. There are six places you want to look, and we’ve got other videos on the show with better definitions if you want to head over to the YouTube channel. The six areas where bias creeps in, in AI and machine learning, are people, strategy, data, algorithms, models, and actions. So let’s talk about each one of these as it relates to a vendor.

Number one, people, is easy. Who has been hired? Who are the people working on the models and algorithms? Who are the people building the software? If you look at the development team or the engineering team and you see a complete lack of diversity, there’s probably going to be a problem, even if it’s not intentional. A monolithic view of the world, say a development team made up entirely of Caucasian males in their mid-20s, creates blind spots.

They have a natural mindset that does not include people who are Black, because that’s not in their experience. That’s not saying they’re bad people; they simply do not have that experience. And if none of them are female, they have no frame of reference for things that people who identify as female might be interested in. So that’s an easy one.

Look at the people. Look at the composition of the people. Look at the diversity of the people, and if you don’t see any diversity, you know there’s a problem.

This, by the way, applies not just to AI and machine learning, but to every vendor. If you’re hiring, say, a PR agency, look at that agency’s leadership team. If you see a whole bunch of people who look exactly the same, there’s a diversity problem, which means there’s a diversity-of-ideas problem.

Second, strategy is where bias can creep in. What is the strategy somebody is pursuing? Here’s a really good example.

Facebook has a strategy of engagement. They care about keeping eyeballs stuck to their site, which means their algorithms tend to promote things that keep people engaged, like making people angry and afraid all the time. The outcomes from that strategy have been, as we’ve all seen, pretty substantially negative: we’ve seen a flourishing of hate groups and similar harms because of that strategy. Did they intend to allow Nazi groups to flourish? Probably not. But is it a natural outcome of an incomplete strategy, or a strategy that was not informed by a diverse set of objectives? Yes.

Third, bias creeps in through data. Where did the data come from? This is what’s called data lineage or data provenance. How good is the data? Is the data itself balanced? Is it representative? IBM has a fantastic toolkit called AI Fairness 360. If you’re fluent in Python, you can download it for free, run it on your data, declare any protected classes in your data (things like age, gender, veteran status, disability, sexual orientation, gender identity, race, and religion), and it will then tell you: hey, this data does not look representative, or this data has a lot of drift, or this model is likely to behave badly.
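As a rough illustration (my sketch, not something from the episode), here is what a minimal AI Fairness 360 check might look like in Python. The file name, the converted label column, and the gender encoding are hypothetical placeholders for your own data:

```python
# A minimal sketch of a dataset bias check with IBM's AI Fairness 360
# (pip install aif360). Column names and group encodings below are
# hypothetical; AIF360 expects the dataframe columns to be numeric.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

df = pd.read_csv("customers.csv")  # hypothetical input file

# Wrap the dataframe, declaring the outcome column and the protected class
dataset = BinaryLabelDataset(
    df=df,
    label_names=["converted"],             # 1 = favorable outcome
    protected_attribute_names=["gender"],  # 1 = privileged, 0 = unprivileged
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact near 1.0 and parity difference near 0 suggest balance;
# values far from those are a flag worth investigating with the vendor.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```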

So checking your data, and the lineage of the data, is important: where did the data come from?

If your data came from sources that are themselves biased, that can be a big problem. For example, in Black American healthcare, much of the data is wrong because of systemic racism; you cannot get really good large-scale data on Black American healthcare because there isn’t good data. Systemic discrimination has created an entire pool of corrupted data.

Number four, algorithms. The algorithms are the individual choices you make for what your model is going to do, what strategy you’re going to pursue from an algorithmic point of view. This means things like deciding whether you’re going to use gradient boosting or generalized linear regression, all these different choices.

Bias can creep in here because if you have somebody who doesn’t understand the full objectives and doesn’t have a background in diversity, they may choose a computationally efficient algorithm, but not necessarily one that is fair.

This would be the case, for example, of using something like a straight-up gradient boosting model versus something like Pareto multi-objective optimization. The algorithms are very different. Pareto optimization allows you to do what’s called trade-off analytics: you will get a less well-performing model, but it performs against many different objectives as opposed to just one. It’s kind of like how Facebook versus LinkedIn function; they function very differently because of their optimization algorithms.
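To make trade-off analytics concrete, here is a minimal sketch of the core Pareto idea (again my illustration, not from the episode): keep every candidate model that isn’t beaten on all objectives at once. The candidate models and their scores are made-up numbers:

```python
# A minimal sketch of Pareto trade-off analysis: keep only candidate
# models that no other candidate dominates (beats on every objective).
# The candidates and their (accuracy, fairness) scores are made up.
candidates = {
    "gradient_boosting": (0.92, 0.61),
    "glm":               (0.85, 0.78),
    "balanced_model":    (0.88, 0.74),
    "weak_model":        (0.80, 0.60),
}

def dominates(a, b):
    """True if a is at least as good as b everywhere and better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

pareto_front = {
    name: scores
    for name, scores in candidates.items()
    if not any(dominates(other, scores)
               for other_name, other in candidates.items()
               if other_name != name)
}

# weak_model drops out (balanced_model beats it on both objectives);
# the other three each represent a different accuracy/fairness trade-off.
print(pareto_front)
```

A single-objective optimizer would pick gradient_boosting and call it done; the Pareto view keeps glm and balanced_model on the table so you can weigh fairness against raw accuracy.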

Number five, the model itself. The model can drift as it takes in new data over time. The most famous example of this is the Microsoft Tay chatbot, which was corrupted by trolls: within 24 hours, it became a porn-spewing neo-Nazi chatbot. It was trained properly, but it drifted, and it didn’t have guardrails to keep it on the rails.

So that’s a place where bias can creep in.
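One common way to watch for drift, offered here as a generic illustration rather than anything mentioned in the episode, is a population stability index (PSI) check comparing a model input’s or output’s distribution at training time against what it sees in production:

```python
# A minimal sketch of drift monitoring with the population stability
# index (PSI). The score arrays are synthetic; a common rule of thumb
# treats PSI above roughly 0.25 as significant drift worth investigating.
import numpy as np

def psi(expected, actual, bins=10):
    """Compare two distributions binned on the expected data's range."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
training_scores = rng.normal(0.5, 0.10, 10_000)    # distribution at training
production_scores = rng.normal(0.6, 0.15, 10_000)  # what the model sees now

score = psi(training_scores, production_scores)
print(f"PSI: {score:.3f}", "- drifting" if score > 0.25 else "- stable")
```

Guardrails here could be as simple as an alert that retrains or pauses the model when the PSI crosses a threshold.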

And last is the actions.

What do you do with the model? Right? What do you wear? What are you going to use this model for? This is a good example of this is a martech vendor I saw that was effectively reinvented redlining, right they they built a tool to identify ideal customers, and it reinvented redlining.

And so bias crept in and in what their model was going, they’re going to do with the model.

So that’s a very short tour of all the places that bias can creep in throughout the process.

When you’re auditing vendors, when you’re doing your due diligence, ask them for their documentation about how they prevent bias in each of these areas.

You would not get on a plane if you walked into the cockpit and saw there was no quick reference handbook, no preflight checklist, and the pilots were just kind of winging it. You do not get on that plane, because it is unsafe: there’s no documentation, no process, no validation that things are working as they should be.

The same is true with AI and bias. If a company has no documentation, no processes, no rigor, no checking for bias in each of these areas with real checklists (real, documented checklists: here are the bullet points we look for at each stage of our projects), then there’s a good chance bias crept in, and in turn, that means there’s a good chance that what they produce is biased too.

So look for those, ask for those as part of your process, and if they can’t produce them, there’s probably a problem. That’s the easiest way to vet a vendor: ask them for the documentation, call it part of compliance or whatever you like. And the vendors themselves should recognize that if they don’t have this, they themselves are at legal risk, because they can’t prove that they’re not biased.

So, great question. We could spend a whole lot of time on this.

If you have follow-up questions, leave them in the comments box below. Subscribe to the YouTube channel and the newsletter, and I’ll talk to you soon. Take care.

Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and let us know how we can help you.




Want to read more like this from Christopher Penn? Get updates here:

Subscribe to my newsletter here


Take my Generative AI for Marketers course!

Join my Analytics for Marketers Slack Group!


