
Mind Readings: AI Bill of Rights, Part 3: Data Privacy

The AI Bill of Rights contains a section on data privacy, which outlines the rights of individuals with regard to their data. This includes the right to know if their data is being used by machines for decisioning, the right to opt out of such use, and the right to access and delete their data. Companies must also obtain consent from individuals for the use of their data, and must provide notice and explanations for the use of data and machine learning.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Welcome back to our review of the AI Bill of Rights.

This is part three: data privacy, and for this one, I’ve got to read the whole thing aloud.

I’ve been sort of summarizing these, but this one deserves to be read in full because a lot of it is already law or becoming law.

And as marketers, we need to pay attention to data privacy.

You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.

You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected.

Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy-by-design safeguards should be used.

Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy-invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent request should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use.

Current hard-to-understand notice-and-choice practices for broad uses of data should be changed.

Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth, should put you first.

In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions.

You and your communities should be free from unchecked surveillance. Surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties.

Continuous surveillance and monitoring should not be used in education, work, housing, or other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.

Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, and access.

This section of the AI Bill of Rights is probably the closest to already being a reality.

You’ll notice the language sounds very similar to the EU’s GDPR, the General Data Protection Regulation, and very similar to CCPA and CPRA, California’s consumer protections for California residents and households and their data.

And this is also the section that companies resist the hardest, particularly marketers, because, let’s face it, marketers have an addiction to data even if they don’t know how to use it well, particularly personally identifying information, demographic data, and sensitive data.

That’s got to stop.

That’s got to stop because legislatively, the world is pivoting towards enhanced privacy, which is a good thing.

Enhanced privacy is a good thing.

Not good for marketing, but good for people.

Let’s look at a couple of the examples that they cite in here of things companies have done wrong.

Number one, an insurer might collect data from a person’s social media presence as part of deciding what life insurance rates they should be offered.

Ya know? Number two, a data broker harvested large amounts of personal data and suffered a breach, exposing hundreds of thousands of people to potential identity theft. Gosh, who could that be? Number three, a local public housing authority installed a facial recognition system at the entrance to housing complexes to assist law enforcement with identifying individuals viewed via camera when police reports are filed, leaving the community, both those living in the housing complex and not, to have videos of them sent to the local police department and made available for scanning by its facial recognition software.

In the last episode, on algorithmic discrimination, one of the things we forgot to talk about was that things like facial recognition don’t work the same for everybody.

They are trained on certain libraries of faces, and there are a lot of issues with that.

But in this case, this is a consent issue.

People who are not living at that housing complex did not give their consent to being videoed.

Companies use surveillance software to track employee discussions about union activity and use the resulting data to surveil individual employees and surreptitiously intervene in discussions.

Starbucks.

To be fair, there are a lot of companies that do stuff like that; Amazon does that too. Allegedly, allegedly. I don’t believe any of those cases have come to a decision in court yet.

So we have to say allegedly, but that’s what was allegedly behind these things.

So data privacy is really going to be challenging for AI, and for marketing.

Because we already have laws on the books saying a consumer must provide consent for their data’s use.

And in California, CPRA, which takes effect January 1, 2023, has a provision saying consumers have the right to know if their data is being used by machines for decisioning, aka machine learning, and have the right to opt out of it.

Right.

So think about that: if you are building machine learning based on the data within your systems, and you’re planning on doing business with California at all, or Europe, you have to be able to exclude individual people’s data from machine learning.

Alright, that’s a pretty big deal.
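As a minimal sketch of what honoring that opt-out might look like before any model gets built (the field names here, like ml_opt_out, are hypothetical; your CRM or CDP will have its own flags):

```python
# Minimal sketch: filter out consumers who opted out of automated
# decisioning before assembling a machine learning dataset.
# Field names (customer_id, ml_opt_out, spend) are hypothetical.

def build_training_set(records):
    """Keep only records whose owners have not opted out of ML use."""
    return [r for r in records if not r.get("ml_opt_out", False)]

customers = [
    {"customer_id": 1, "ml_opt_out": False, "spend": 120},
    {"customer_id": 2, "ml_opt_out": True, "spend": 450},  # excluded
    {"customer_id": 3, "ml_opt_out": False, "spend": 80},
]

training_set = build_training_set(customers)
print([c["customer_id"] for c in training_set])  # [1, 3]
```

The important design point is that exclusion happens at dataset assembly time, so opted-out records never reach the model at all.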

There will be an entire cottage industry of folks helping to clean that stuff up and to build what are called synthetic models: models based on data that conforms to the statistical patterns of users without using any actual user data. So if 40% of your database is women, and 52% of those women are people of color, and of that, 16% are Latina, then you would create a synthetic data set of artificial people that match those statistical criteria that you could use for modeling.

But none of the individual entries in that synthetic data are real people.

Right? They’re like AI-generated images of people.

They’re not real people, but they look enough like real people that you could use them in decisioning systems to look for patterns that you can make decisions on.
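A toy sketch of that synthetic-data idea, using the percentages from the example above (the field names are made up, and reading "16% are Latina" as a share of women of color is my assumption):

```python
import random

# Toy sketch of synthetic data generation: produce artificial records
# matching the proportions in the text (40% women; of those women,
# 52% people of color; of those, 16% Latina). No record corresponds
# to any real person.

def make_synthetic_person(rng):
    person = {"gender": "woman" if rng.random() < 0.40 else "other"}
    if person["gender"] == "woman":
        person["person_of_color"] = rng.random() < 0.52
        person["latina"] = person["person_of_color"] and rng.random() < 0.16
    return person

rng = random.Random(42)  # fixed seed so the sketch is reproducible
synthetic = [make_synthetic_person(rng) for _ in range(10_000)]

share_women = sum(p["gender"] == "woman" for p in synthetic) / len(synthetic)
print(round(share_women, 2))  # close to 0.40
```

Real synthetic-data tools model joint distributions and correlations, not just independent percentages, but the principle is the same: the statistics survive, the individuals do not.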

Consent is one of those things that marketers have not really figured out, because we sort of assumed blanket consent.

And it’s becoming more and more challenging legislatively, because governments and various entities have said no, you have to get consent per purpose, per use.

So if you fill out a form on my website, I have to list out all the things that I’m going to do with your data: I’m going to subscribe you to my newsletter; I’m going to use your data to make predictions about what email domain you use and whether that is a predictor of whether you’re likely to be a customer or not.

And so on and so forth.

I would use your data for marketing lead scoring, for example, to give you more points in our lead scoring system if you work for a certain type of company. All of these have to be things that we as marketers are thinking about now, because it’s going to be legislatively required.
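One way to think about per-purpose consent is as a ledger keyed by both the person and the specific purpose. This is an illustrative sketch under that assumption, not any particular platform’s API; the purpose names are examples:

```python
from datetime import datetime, timezone

# Sketch of per-purpose consent tracking: each use of a contact's
# data requires an explicit grant for that specific purpose.
# Purpose names here are illustrative examples, not a standard.

class ConsentLedger:
    def __init__(self):
        self._grants = {}  # (email, purpose) -> timestamp of grant

    def grant(self, email, purpose):
        self._grants[(email, purpose)] = datetime.now(timezone.utc)

    def allowed(self, email, purpose):
        return (email, purpose) in self._grants

ledger = ConsentLedger()
ledger.grant("pat@example.com", "newsletter")

print(ledger.allowed("pat@example.com", "newsletter"))    # True
print(ledger.allowed("pat@example.com", "lead_scoring"))  # False
```

Note that consent to the newsletter says nothing about lead scoring; each purpose stands or falls on its own grant, which is the behavior per-purpose consent rules demand.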

And again, this is one of those things where a lot of this is already law. Certainly overseas, in China and in the EU, it is operational law now, and there are substantial civil and criminal penalties for breaking those laws.

And in the United States, there are different privacy laws all over the country; California has some of the strictest, and other states, Virginia, Massachusetts, New York, are also coming up with privacy laws. It’s a patchwork quilt, but the general guidance we’ve seen is that if you are conformant to GDPR, the EU legislation, you will pretty much check the box on everything else, because GDPR is the strictest implementation of privacy right now.

AI and machine learning are founded on data, right? You build models from data; the fundamental technology underlying it all is data.

And so if we are losing access to data because we didn’t get permission for it, we’ve got to come up with other things, right? Behavior-based analysis is really useful. Do you really care who somebody is? Or do you just care that you see enough buying signals that you can nudge them? For example, if you go to the Trust Insights website and you visit a blog post, then the About Us page, then the Team page, and then the Services page, you’re probably going to convert to something, right? I don’t need to know who you are, your age, your location, or your ethnicity to know that I should fire a popup saying, hey, want to buy something?

Your behavior is indicative of buying behavior, regardless of who you are. And that is the mind shift that marketers, and particularly marketing technology vendors, need to make: let’s make sure we are focusing on behaviors, not individuals, and certainly not personally identifying information, wherever possible, in order to conform to regulations as best we can.
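That kind of behavior-based trigger can be sketched with nothing but page paths and no personal data at all; the page names follow the Trust Insights example above, and the three-page threshold is an arbitrary illustration:

```python
# Sketch of behavior-based intent detection using only page paths.
# The pages follow the Trust Insights example in the text; the
# threshold of three high-intent pages is an arbitrary illustration.

HIGH_INTENT_PAGES = {"/about-us", "/team", "/services"}

def should_show_offer(visited_pages, threshold=3):
    """Fire the popup once enough high-intent pages have been seen."""
    return len(HIGH_INTENT_PAGES.intersection(visited_pages)) >= threshold

session = ["/blog/some-post", "/about-us", "/team", "/services"]
print(should_show_offer(session))  # True
```

Nothing in that decision touches age, location, ethnicity, or identity; the only input is what the visitor did during the session.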

So that’s, that’s data privacy.

There’s a lot to unpack there.

But the bottom line is, we need permission for everything, on a case-by-case, use-by-use basis.

And we should only be collecting data we’re actually going to use.

So take a look at the data you collect.

Now, as a marketer, how much of it do you actually use? Is there stuff that you could just throw overboard that wouldn’t affect your decisioning at all? If there is, get rid of it, sooner rather than later; delete it from your systems.

And you are that much more protected from privacy regulations and from data breaches, too.
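A simple way to run that audit is to compare the fields you collect against the fields anything actually reads; everything left over is a candidate for deletion. All field names here are hypothetical examples:

```python
# Sketch of a data-minimization audit: compare the fields you collect
# against the fields any active process actually reads, and flag the
# rest for deletion. All field names are hypothetical examples.

collected_fields = {
    "email", "first_name", "birth_date", "fax_number",
    "page_views", "last_purchase_date",
}
fields_in_use = {"email", "page_views", "last_purchase_date"}

deletable = sorted(collected_fields - fields_in_use)
print(deletable)  # ['birth_date', 'fax_number', 'first_name']
```

Every field on that deletable list is pure liability: it adds breach exposure and regulatory scope without contributing to any decision.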

But this is a really important one.

In the next section, we’re going to talk about notice and explanations.

That’ll be tomorrow’s episode.

If you liked this video, go ahead and hit that subscribe button.




Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Get your copy of AI For Marketers

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!