The AI Bill of Rights published by the United States White House proposes that people should have the right to know if an automated system is being used and to understand how and why it contributes to outcomes that impact them. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation, including clear descriptions of the overall system functioning, and the role automation plays.
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
Welcome to part four of our examination of the AI Bill of Rights published by the United States White House, proposed regulations for, essentially, the use of AI and the rights that people should have when it comes to the use of AI.
Today we’re going to talk about notice and explanation.
So let’s dig into this one.
You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation, including clear descriptions of the overall system functioning and the role automation plays.
You should have notice that such systems are in use, know the individual or organization responsible for the system, and receive explanations of outcomes that are clear, timely, and accessible.
All right, so this one is pretty straightforward, right? If a system that’s automated is being used, you should know how it works, right? You should know what’s in the box.
And you should be able to say, let’s perhaps not use this if it’s not working right, or at the very least be able to explain the outcomes.
Let’s look at a couple of the examples that they give in the paper. Number one: a lawyer representing an older client with disabilities who had been cut off from Medicaid-funded home health care assistance couldn’t determine why, especially since the decision went against historical access practices.
In a court hearing, the lawyer learned from a witness that the state in which the older client lived had recently adopted a new algorithm to determine eligibility.
The lack of a timely explanation made it harder to understand and contest the decision.
Number two: a formal child welfare investigation is opened against a parent based on an algorithm, without the parent ever being notified that data was being collected and used as part of an algorithmic child maltreatment risk assessment.
The lack of notice or an explanation makes it harder for those performing child maltreatment assessments to validate the risk assessment, and denies parents knowledge that could help them contest the decision.
Number three: a predictive policing system claims to identify individuals at the greatest risk to commit or become the victim of gun violence, based on an automated analysis of social ties to gang members, criminal histories, previous experiences of gun violence, and other factors, and led to individuals being placed on a watch list with no explanation or public transparency regarding how the system came to its conclusions.
Both police and the public deserve to understand why and how such a system is making these determinations.
Number four: a system awarding benefits changed its criteria invisibly. Individuals were denied benefits due to data entry errors and other system flaws.
These flaws were only revealed when an explanation of the system was demanded and produced. The lack of an explanation made it harder for errors to be corrected in a timely manner.
So this is about black boxes, right? As we use more and more sophisticated decision systems as we use more and more sophisticated AI like deep neural networks, there’s more and more that we don’t understand about what’s going on inside of the machine.
And this is part of the reason why there’s a major push towards interpretability and explainability.
In the context of AI, interpretability means you have the ability to look at the code in use and diagnose it line by line: here’s what this line of code does, here’s what that line of code does, and so on and so forth.
Explainability is looking at the outcome and being able to explain the outcome: here’s how the machine arrived at these conclusions.
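To make the distinction concrete, here’s a minimal sketch in Python. The models and rules here are invented purely for illustration, not taken from any real system: the interpretable model’s logic is readable line by line, while the black box can only be explained from the outside by probing it.

```python
# INTERPRETABLE model: the logic IS the code; you can audit it line by line.
def interpretable_credit_score(income: float, debt: float) -> str:
    # Rule 1: a debt-to-income ratio above 0.5 is an automatic decline.
    if income <= 0 or debt / income > 0.5:
        return "decline"
    # Rule 2: otherwise approve.
    return "approve"

# BLACK-BOX model: imagine this came from a deep neural network; in practice
# we could only call it, not read it. (The internals are a stand-in.)
def black_box_score(income: float, debt: float) -> str:
    return "approve" if income - 2 * debt > 0 else "decline"

# EXPLAINABILITY: probe the black box from the outside by perturbing one
# input at a time and watching whether the outcome flips.
def explain(model, income, debt, step=1000.0):
    base = model(income, debt)
    effects = {
        "income": model(income + step, debt) != base,
        "debt": model(income, debt + step) != base,
    }
    return base, effects
```

Calling `explain(black_box_score, 50000.0, 24000.0)` reports that nudging debt flips the decision while nudging income does not — an outcome-level explanation produced without ever reading the model’s internals.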
The challenge that people are running into right now, and the reason tech companies in particular are very resistant to going the interpretability route, is that interpretability is dramatically more expensive for companies to do.
Deep learning systems can be audited, you know, layer by layer, but it’s computationally very, very expensive to do so.
So you have a lot of big tech companies saying, no, no, explainability is all you need.
Which is not true.
Because, again, if you don’t build interpretability into these deep neural networks, they’re just big black boxes: you don’t know how the system is making its decisions; all you know is whether the decisions make sense or not.
The classic example of this is that researchers trained an image recognition algorithm to differentiate a wolf from a dog. They fed it hundreds of photos of wolves and dogs, and the system performed really well in theory. Then they started feeding it real-life data, and it failed all over the place.
There was no way to explain the outcome. But when someone went back and built interpretability into the system, at, again, a considerable performance penalty, it turned out the system was not looking for dogs or wolves or ears or jaw shapes or anything like that.
It was looking for snow: if there was snow in the photo, it was a wolf, at least in the training dataset.
And so the decision system behind the scenes was making decisions based on an irrelevant factor.
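The wolf/snow failure is easy to reproduce in miniature. Here’s a toy sketch (the data and the one-feature “model” are invented for illustration) of a learner latching onto a spurious feature that happens to be perfectly correlated with the label in training, then failing on realistic data:

```python
# Toy training data: (features, label). In this made-up training set,
# "snow" perfectly predicts "wolf", while the genuinely relevant feature
# ("wolf_ears") is noisy: ears hidden in one wolf photo, and a pointy-eared
# husky among the dogs.
train = [
    ({"snow": 1, "wolf_ears": 1}, "wolf"),
    ({"snow": 1, "wolf_ears": 0}, "wolf"),  # ears not visible
    ({"snow": 0, "wolf_ears": 0}, "dog"),
    ({"snow": 0, "wolf_ears": 1}, "dog"),   # husky with pointy ears
]

def fit_one_feature_model(data):
    """Pick the single feature with the best training accuracy,
    mimicking a lazy learner grabbing whatever correlation is easiest."""
    best, best_acc = None, -1.0
    for feature in data[0][0]:
        acc = sum((x[feature] == 1) == (y == "wolf") for x, y in data) / len(data)
        if acc > best_acc:
            best, best_acc = feature, acc
    return best  # the chosen feature IS the model's entire "reasoning"

def predict(feature, x):
    return "wolf" if x[feature] == 1 else "dog"

chosen = fit_one_feature_model(train)   # picks "snow": 100% on training data
dog_in_snow = {"snow": 1, "wolf_ears": 0}
verdict = predict(chosen, dog_in_snow)  # a dog playing in snow -> "wolf"
```

Because the “model” here is just a feature name, inspecting it is trivial: you can see it chose `snow`. In a deep network the same mistake is buried in millions of weights, which is exactly why it took extra interpretability work, at extra cost, to find.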
You know, obviously, if you’re just building an image recognition system for wolves, that’s one thing; it’s not so fine if you’re building systems that impact people’s lives.
So, even within marketing, right, who you market to has an impact.
I was talking to an insurance company a number of years ago.
And they were building a system to identify ideal customers; their ideal customers, to them, were people of affluent means.
And the ugly reality in the United States of America is that money tends to also have a very high correlation with race.
And as a result, the system they built, even though, theoretically, it was not discriminating on race, in practice it absolutely was.
And so they effectively invented redlining.
Another company I saw at one of the martech shows built a predictive algorithm for ideal best customers for Dunkin Donuts.
I don’t know if Dunkin was actually a client of theirs.
They were just using it as a demo.
But they showed us this map of the city of Boston and said, here are the red dots; those are the areas where your ideal customers are.
Here are the black dots, where there aren’t ideal customers.
And I looked at this map.
And I said, you invented redlining again. For God’s sakes. They were essentially using, I believe, income and spending patterns.
But it also perfectly replicated the demographics of Boston.
Areas like Mattapan, Roxbury, and Dorchester had no ideal customers, right? Because they’re predominantly black areas of the city.
They are also lower income areas of the city, but they’re predominantly black areas of the city.
Places like Cambridge, Somerville, downtown Boston, the financial district: all ideal customers.
Now, if you know anything about Dunkin Donuts coffee, the only people in Boston who don’t drink Dunkin Donuts are dead.
Everybody else, regardless of race, ethnicity, or any other protected class, a significant portion of the population in every demographic drinks Dunkin Donuts, right?
So their algorithm was flat out wrong; it was discriminatory and wrong.
And there was no explanation of how it worked.
And that’s what this principle of the AI Bill of Rights is really all about.
Can you explain how your system is making decisions? Think about this: go into your marketing automation system or your CRM as a marketer. Do you know how the lead scoring system works? Can you explain it? Could you explain to somebody: yes, you have a lead score of this, you were selected for this, you received this email because of this?
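One way to make that answer possible is to have the scoring system itemize every point it awards. Here’s a minimal sketch; the rules and field names are hypothetical, not from any particular marketing automation product:

```python
def score_lead(lead):
    """Return (score, reasons) so every point in the score is accounted for."""
    score, reasons = 0, []
    if lead.get("visited_pricing_page"):
        score += 20
        reasons.append("+20: visited pricing page")
    if lead.get("email_opens", 0) >= 5:
        score += 10
        reasons.append("+10: opened 5 or more emails")
    if lead.get("company_size", 0) >= 100:
        score += 15
        reasons.append("+15: company has 100+ employees")
    # Deliberately absent: demographic or protected-class fields, so the
    # score cannot silently discriminate on them.
    return score, reasons

score, reasons = score_lead({"visited_pricing_page": True, "email_opens": 7})
# score == 30, and `reasons` spells out exactly why.
```

With a structure like this, the answer to “why does this person have a lead score of 30?” is sitting right there in the reasons list, instead of buried in an opaque model.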
Even in my own stuff, just for my personal newsletter, I’ve had to go digging around in my own system to figure out why somebody was getting an email from me when they said they didn’t want it.
And I dug into it.
And there have actually been some alarming cases of bots submitting other people’s data. I was looking at this one person who’s based in Amsterdam, and there was what looked like bot traffic from a server farm somewhere in the USA that submitted their information at a time when they wouldn’t have been online subscribing to my newsletter.
And I can’t go back and hunt down exactly where that came from, but I have at least the IP logs to trace it down.
But if I hadn’t been able to explain it, if I had failed to dig into my system, I could have been held liable for a violation of international law.
That’s the thing: as marketers, we’ve got to understand our systems. We’ve got to know what our systems are, what they’re doing, what decisions they’re making, you know, how somebody becomes an A-lead or a B-lead in our system. Because you don’t want to discriminate if you are adhering to, in the United States, Title IX laws.
And if your system is saying someone’s a better lead than someone else because of a protected class factor, like what gender they are, you’re breaking the law.
Right? And that’s going to get you in a whole bunch of trouble.
So you’ve got to know what’s going on in your systems, be able to explain it, defend it, and then if there’s a problem, deal with it.
So this is a very good principle, and requiring explainability and interpretability of AI systems is essential.
And again, big vendors are going to resist this like crazy because it is expensive to do.
But the first lawsuit they lose, you know, for a billion dollars might convince them otherwise, so there may be some change on that front. But to protect yourself:
Know how your systems work.
Know how your vendors’ systems work. Require transparency from them; require technical details from them. If they’re unwilling to provide those details, you may have to change vendors. Your legal department and your finance department certainly will advise you to change vendors if a vendor is creating substantial risk to your company, so be aware of those risks as well. In the final section of the AI Bill of Rights, we’re going to talk about human alternatives, so stay tuned for that. If you liked this video, go ahead and hit that subscribe button.