Mind Readings: AI Bill of Rights, Part 5: Human Alternatives, Consideration, and Fallback

Warning: this content is older than 365 days. It may be out of date and no longer relevant.

The proposed AI Bill of Rights is a good start, but there is still a long way to go. Machines should not have the presumption of innocence and should be presumed guilty until humans can prove that they are right.

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Welcome back.

This is the fifth and final part of our review of the AI Bill of Rights, the document published by the United States White House Office of Science and Technology Policy on the rights that people should have when it comes to dealing with AI.

Today's topic is human alternatives, consideration, and fallback.

So let’s dig into this.

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts.

In some cases, a human or other alternative may be required by law.

So this is a case where it’s human in the loop.

A human being should be able to interrupt an AI system or override it at any given point in time, right? If the system does something dumb, a person should be able to walk over to it, push a big red override button, and say, Nope, you made a mistake.

A, I'm overriding this; B, you need to learn from this mistake and retrain and rebuild the model.

Alternatively, a human being or human decision makers have got to be able to hit the stop button and say, Okay, we’re just turning this thing off.

This system is not working; it's creating negative outcomes.

It's worse than people; it's worse than nothing at all.

So let’s turn this thing off.

Let’s look at a couple of the examples that are listed in this paper.

Number one, an automated signature matching system is used as part of the voting process in many parts of the country to determine whether the signature on a mail-in ballot matches the signature on file.

These signature matching systems are less likely to work correctly for some voters, including voters with mental or physical disabilities, voters with shorter or hyphenated names, and voters who have changed their name. A human curing process, which helps voters confirm their signatures and correct other voting mistakes, is important to ensure all votes are counted.

It is already standard practice in much of the country for both election officials and voters to have the opportunity to review and correct any such issues.

Yeah, politics is one place I don't want AI even touching, right? I am totally fine with old-school paper, not even machines, just good old-school paper.

Because, at least in the United States, electoral politics is now so toxic and so polarized that there are a variety of players attempting to suppress votes, doing things like closing polling stations in areas where their party of preference does not have a mathematical advantage.

You know, imposing all sorts of fraudulent laws that suppress voting, and running ads telling people of certain racial backgrounds that the election is on the wrong day.

AI has absolutely no business being in politics. Zero.

It just doesn't.

Number two, an unemployment benefits system in Colorado required, as a condition of accessing benefits, that applicants have a smartphone in order to verify their identity.

No alternative human option was readily available, which denied many people access to their benefits.

That’s dumb.

Not everyone has a smartphone.

Number three, a fraud detection system for unemployment insurance distributions incorrectly flagged entries as fraudulent, leading to people with slight discrepancies or complexities in their files having their wages withheld and tax returns seized, without any chance to explain themselves or receive a review by a person.

Number four, a patient was wrongly denied access to pain medication when the hospital software confused her medication history with that of her dog.

Yeah, you know, I love technology.

I love data science and machine learning and artificial intelligence.

But if your system is so bad that you can't tell the difference between the medication history of a human patient and that of a dog, you should not be using technology; you should be doing everything the old-fashioned way, because wow.

Even after she tracked down an explanation for the problem, doctors were afraid to override the system and she was forced to go without pain relief due to the system’s error.

Number five, a large corporation automated performance evaluation and other HR functions, leading to workers being fired by an automated system without the possibility of human review, appeal, or other form of recourse. I have a fairly good idea which corporation this is; they ship a lot of things and their trucks are outside your house fairly often.

Okay.

All of this is human-in-the-loop stuff. All of this is about making sure that human beings have primacy, that they have the last word, in any AI system, whether it is medication systems, performance evaluations, marketing automation, or lead scoring.

At the end of the day, a human has to have the last word. If you have systems, or you are building systems, where the system is making decisions and a human cannot say, Nope, you've got a bad system.

Right? If you're afraid of the system, if you're afraid to override it, you've got a bad system. Everyone who's using a piece of machine learning, or any automation, frankly, should be 100% comfortable saying, Wow, that was really dumb.

Let’s not do that again.

Now, obviously, you do want some protections against people maliciously doing that, right? You don't want people correcting or changing a system that is making correct decisions because of their own biases.

But generally speaking, the systems are probably going to make more mistakes than the humans are.

And at the end of the day, a human being should be the one saying, No, this is dumb.

This is not working as intended.

Take a look at your lead scoring in your marketing automation system.

Do you know how it works? Do you have the ability to override it? Can you say, I'm going to manually adjust the score higher because I know this person could be a good customer? Or, I know this person, they've got all the buying signals, but they're never going to buy anything, so put their lead score at zero?

And the system would be like, Well, no, they're showing all these buying signals. And you say, No, I know this person; he's never going to buy a bloody thing from us, so just put him at minus 1,000 and no one will ever talk to him again.

We all know people like that; we all know situations like that.
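To make that concrete, here's a minimal sketch in Python of what a human override on top of a lead scoring model might look like. This is not any particular marketing automation platform's API; the Lead fields, the score values, and the effective_score function are all hypothetical. The point is simply that a human-set override, with a recorded reason, always wins over the model's score.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Lead:
    name: str
    model_score: float                       # score produced by the lead scoring model
    human_override: Optional[float] = None   # set by a person who knows better
    override_reason: str = ""                # require a reason so overrides are auditable

def effective_score(lead: Lead) -> float:
    """The human override, when present, always takes precedence over the model."""
    if lead.human_override is not None:
        return lead.human_override
    return lead.model_score

# A rep knows this contact will never buy, despite strong buying signals.
lead = Lead(name="Example Contact", model_score=87.0)
lead.human_override = -1000.0
lead.override_reason = "Known non-buyer; do not route to sales."

print(effective_score(lead))  # -1000.0, regardless of what the model says
```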

And our systems have to be able to accommodate us, right? There is something to be said for change management, for using automated systems, taking advantage of them, and becoming comfortable with change.

But there's also something to be said for change management in the other direction: requiring a system to obey humans.

When you start turning over decision functions to machines that you cannot override, machines you have no say over, bad things happen.

Right? We saw plenty of examples from the paper of bad things happening because people didn't have the ability to push a big red stop button.

Look, for example, at the automotive industry, at Toyota's production system.

On the assembly line, every employee has the ability to stop the line.

Now, something has to be wrong, right?

You can’t just do it for fun.

But every employee has the authority to stop the line if they see that something has gone wrong.

That is not true of all artificial intelligence systems, right?

But it has to be a prerequisite for any system we deploy that there is a stop button that anybody can hit to require inspection and investigation.

When you get an email into your customer service inbox saying a customer had trouble buying something online, you should have a stop button.

It might even be a literal stop button on your desk: Okay, let's take the system down and figure out what has gone wrong here. Is it user error or is it machine error? If it is machine error, you need to fix it sooner rather than later.
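As a rough illustration only, here's a minimal Python sketch of that kind of stop button, assuming a simple flag file that anyone on the team can create: while the flag exists, the automated system routes its work to humans instead of acting on its own. The file name and function names are hypothetical, not from any real product.

```python
from pathlib import Path

# The "big red button": anyone on the team can set this flag to halt automation.
STOP_FLAG = Path("automation_stop.flag")

def press_stop_button(reason: str) -> None:
    """Halt automated decisions until a human investigates and clears the flag."""
    STOP_FLAG.write_text(reason)

def clear_stop_button() -> None:
    """Resume automation once the investigation is complete."""
    STOP_FLAG.unlink(missing_ok=True)

def handle_order(order: dict) -> str:
    """Route work to a human while the stop flag is set; otherwise let automation run."""
    if STOP_FLAG.exists():
        return f"Order {order['id']} routed to human review: {STOP_FLAG.read_text()}"
    return f"Order {order['id']} processed automatically."

press_stop_button("Customer reported checkout failures; checking for user vs. machine error.")
print(handle_order({"id": 12345}))  # goes to a person, not the automated pipeline
```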

So those are the five principles in the AI Bill of Rights. They are generally very sound: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration, and fallback.

These are good ideas.

And again, many of them are already implemented in law in some fashion, particularly around data privacy and discrimination based on biases.

But I think what the US government was doing with this document, in particular, was putting it all together in the context of AI: we cannot build artificial intelligence systems without these considerations.

And a big part of our responsibility as data scientists, as marketers, as business owners, is to make sure that someone is asking questions from each of these five categories all the time, in every system we build: Hey, what could go wrong? How could this be misused? How could this go off the rails? How could the model not function like it's supposed to? And what can we do to prevent that from happening? What are the realistic scenarios where a system is going to just blow up on us? What are the realistic scenarios where someone's going to get screwed over in a way we did not mean to have happen? All of these things have to be part of our design process, our development process, and our deployment process.

And if they're not, sooner or later, in one or more locales where we do business, it's going to be illegal, right? It's going to be illegal because there will be enough high-profile cases where somebody did something wrong.

Machines are not people, right? A person, a human being, in most democratic nations has this sort of presumption of innocence.

You are innocent until proven guilty.

That does not apply to machines.

And in fact, I would argue the reverse should be true of machines: machines should be presumed guilty of doing something wrong until humans can prove that they are doing something right, that they are not violating laws.

And I think that's the one part that's missing from this: when it comes to machines, which don't have feelings or egos to be hurt, there is no presumption of innocence.

And as business leaders, we should not presume that the machine is right.

We should presume the machine is wrong until we can prove that it is right.

And if we take that perspective with us as we make our own journeys to AI maturity and deployment, we will do better; we will create better outcomes.

When we work with vendors who are building systems on our behalf or running systems on our behalf, the same applies: we presume that the vendor's systems are wrong until the vendor proves that they are right.

That’s the approach we should all be taking.

Just because it looks cool, or looks expensive, or has fancy charts, doesn’t mean it’s right.

I’m in the midst of a coding project right now building an attribution model on Google Analytics 4.

I am using a certain type of machine learning technology.

I looked at the results.

This is not right.

Something is wrong here.

It just didn't pass the sniff test; it doesn't pass existing system tests.
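That kind of sniff test can be written down as a check rather than left as a gut feeling. Here's a rough Python sketch of the idea, presuming the model is wrong until it is shown to be right: compare the new attribution model's channel totals against the numbers the existing, trusted system reports, and refuse to accept the model if any channel diverges beyond a tolerance. The channel names, figures, and the 15% tolerance are made up for illustration.

```python
def passes_sniff_test(model_totals: dict, baseline_totals: dict, tolerance: float = 0.15) -> bool:
    """Presume the model is wrong: accept it only if every channel's attributed
    conversions fall within `tolerance` of the existing system's totals."""
    for channel, baseline in baseline_totals.items():
        if baseline == 0:
            continue  # nothing meaningful to compare against for this channel
        modeled = model_totals.get(channel, 0.0)
        drift = abs(modeled - baseline) / baseline
        if drift > tolerance:
            print(f"FAIL: {channel} is off by {drift:.0%} versus the existing system")
            return False
    return True

# Hypothetical numbers: what the new attribution model says vs. the current system of record.
model_totals = {"organic search": 120, "email": 300, "paid social": 45}
baseline_totals = {"organic search": 260, "email": 310, "paid social": 50}

if not passes_sniff_test(model_totals, baseline_totals):
    print("Do not trust this model yet; investigate before deploying.")
```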

It looks good.

It looks nice.

It looks expensive.

But it’s not right.

And just because it looks good doesn't mean that a machine deserves the presumption of innocence. Machines do not deserve the presumption of innocence.

Humans do; machines do not. So that's our wrap-up and review of the AI Bill of Rights.

What are your thoughts on these five categories? How do you feel about them? Do they make sense to you? Do you think that this proposed legislative agenda is going in the right direction? Do you feel like it's not enough? I personally feel like it's falling short.

It's a good start, but there's a long way to go, for me.

Thanks for tuning in.

I’ll see you next time.

If you liked this video, go ahead and hit that subscribe button.


