Mind Readings: The Dangers of Excluding Your Content From AI

In today’s episode, I discuss the popular notion of excluding your content from AI and the implications this could have. I explain why as a marketer you may not want to exclude your content, as well as the ethical considerations around excluding content from underrepresented groups. I encourage thoughtful consideration of what should and should not be included in AI models, and urge copyright holders to explore licensing models rather than outright exclusion. Tune in to hear more of my perspective on this complex issue.


Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, let’s talk about excluding your content from AI.

This has become a very popular discussion point for content creators: saying, hey, we did not consent to having our content used to train machines, and we want to opt out.

And to be clear: the content you made is your property.

And you have the right to control how people may or may not use it.

There’s no debate about that; it is your right.

And you can and should talk to a qualified legal resource about what it would look like to enforce those rights over your content.

So let’s set the stage there.

If you made it, and you hold the copyright for it, it is your place to say what can and can’t be done with it until you license it or give those rights away.
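As a practical aside: for content on the open web, the most common opt-out mechanism at the time of writing is a robots.txt rule blocking known AI training crawlers. OpenAI publishes GPTBot and Google publishes Google-Extended as user agents that honor these rules; Common Crawl’s CCBot covers a dataset many models train on. Crawler names change over time, so treat this as a sketch and check each vendor’s current documentation.

```
# robots.txt - ask AI training crawlers not to use this site
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Common Crawl, a dataset many models train on
User-agent: CCBot
Disallow: /
```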

Now, let’s talk about why certain kinds of content you might not want to exclude.

We’ll start with marketing.

One of the things that makes generative AI so powerful is that it has a huge corpus of knowledge: it’s trained on the combinations of words, phrases, sentences, paragraphs, and documents from trillions and trillions of word combinations.

That pool of knowledge is essentially just a big word association index.

If we set aside the math (the vectors, embeddings, vector spaces, and so on) and just call these things really big word clouds, which is conceptually correct but mathematically wrong, then the first stage of making these models, the foundation model making, is essentially word association.
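To make the word-association idea concrete, here is a toy sketch in Python. This is a hypothetical illustration, not how production models actually work: real foundation models learn vector embeddings rather than raw co-occurrence counts, but the intuition of "which words appear near which other words" is the same.

```python
from collections import Counter


def word_associations(documents, window=5):
    """Count how often pairs of words appear near each other.

    A toy stand-in for the 'big word association index' idea:
    the more often two words co-occur, the stronger the association.
    """
    pairs = Counter()
    for doc in documents:
        words = doc.lower().split()
        for i, w in enumerate(words):
            # Look at the next few words within the window.
            for other in words[i + 1 : i + 1 + window]:
                if other != w:
                    pairs[tuple(sorted((w, other)))] += 1
    return pairs


# Hypothetical brand content: the more you publish, the more
# your brand co-occurs with the topics you want to own.
docs = [
    "acme corp teaches b2b marketing",
    "learn b2b marketing with acme corp",
]
assoc = word_associations(docs)
print(assoc[("acme", "marketing")])  # -> 2
```

The point of the sketch: every document you publish adds co-occurrence weight between your brand and your topics, and every document you withhold hands that weight to whoever keeps publishing.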

If you are a marketer, and you want your brand to be associated with specific terms, concepts, and things, the worst thing you can possibly do is say, hey, you may not use our content. If your blog is filled with content about who you are, what you do, and the topics you have expertise in, you want that information getting into language models. You want that in there.

That way, when someone’s prompt invokes a concept like B2B marketing, or sales force automation, or whatever, the more associations there are between those topics and your brand, your company, and your execs, the more likely it is that the machine will eventually generate content aligned with who you are and what you do. Suppose somebody types in a prompt like, name some good resources for learning about B2B marketing.

If you said to the machine, hey, do not use our blog, we’re going to make sure our blog is pulled out of all the different repositories that contain the public internet, then you are by default handing that term and that concept over to other people.

Right.

So from a marketing perspective, you might not want to do that.

We’ve been counseling people to do the exact opposite: be everywhere. Be on every podcast you can, be on every YouTube show you can, get into every paper that will take you, whether it’s the New York Times or the East Peoria Evening News. Who cares, as long as it’s public text on the internet and you get your brand and your concepts mentioned out there. You don’t even need links; it’s not SEO. You just need to be out there in as many places as you can.

You need to look at who’s building models. Google is building models, OpenAI is building models, Meta is building models.

And that tells you where you should be putting your content. You should be putting your content on YouTube with closed captions if you want your stuff to eventually end up in Google’s models, because you know for sure they’re going to use that.

With Meta, you want to make sure you’re publishing your content in some fashion or form within any tool where Meta says, hey, we’re going to use your data to train our models. Say yes: here’s my data, train your models on this stuff.

So that the models recognize you as the authority on the thing. That’s the marketing side of things.

And it’s important.

If you want your content not to be used, that is, again, your right.

But the consequence is that models will know less about you and your concepts than they will about competitors who just shovel their content in.

Now, let’s talk about something more ethical and moral and around bias.

A lot of the content that currently exists is what I would call typical: normative, to some degree, or in favor of the status quo.

So you have content that is out there that approaches things from, say, a more male point of view, or a more heterosexual point of view, or a more Caucasian point of view, or a more American point of view.

There’s a lot of content out there.

If you are a member of any underrepresented group, whether by sexual orientation, gender identity, ethnicity, religion, or whatever, and you are pulling your content out of AI: again, it is your right to do so. If it’s your content, you own the rights.

But if you are withdrawing permission for models to learn from that content, and they still have the diet of the typical, the overrepresented, the majority, then you are potentially causing additional harm to your underrepresented class.

Right? Suppose everybody who is Korean, like me, says: nope, you may not use any content about Koreans in language models.

We withdraw our content in favor of whatever else is out there.

Well, then what’s going to be left? It will be other people’s impressions of what Korean means. It will be non-Korean folks’ recipes for Korean foods, and people of an ethnicity generally cook that food the best.

It will be TV shows that maybe have Korean stars in them, but are not made in Korea or made by Koreans.

These are examples of what happens if we, as this group, say we’re going to withdraw our content: we reduce the knowledge that these tools have about us. In a world where we are already underrepresented, that is bad, because it increases bias, it increases bad representations, it increases incorrect beliefs founded on bad data and on assumptions that other people have made.

And bear in mind, models are trained on as much public text as they can get hold of, which means they are trained on history.

Right? Think about the data being pulled in: things like the Constitution of the United States of America, a document written more than 200 years ago, whose concepts are somewhat out of date. Or books by Jane Austen: great books, but no longer aligned with contemporary society.

So if you are saying, hey, you may not use our content, there is still this backlog of public domain historical content that carries the points of view and biases of its period of time.

And there’s a lot of it.

And because it’s all public domain, it is usable for free by model makers.

So if you say, you may not use any copyrighted data, well, then you’re automatically saying, you may not use information from after 1925, or 1923. Was the world in 1923 fair and representative, with equal rights for who you are?

Because if you say, you may not use this content, you may only use things that are not copyrighted, you are saying: focus on materials that were made prior to that date, when copyright runs out.

As a person who is an ethnic minority in the USA, I would not want to live in 1923 America; people who look like me would be very heavily penalized and discriminated against.

And if we make AI that is essentially frozen in time at 1923, because we’ve said you may not use copyrighted works, it’s going to be heavily biased towards that world and the world that preceded it.

And that’s not a world that I want my machines to learn either.

So be thoughtful about what content you do and do not give to AI, about what you do and don’t give to the for-profit entities who are making these things.

Again, I’m not saying these companies should just have free rein to do whatever they want with other people’s property. That’s not at all what I’m saying: you have property rights.

But enforcing those property rights rigidly, without providing some alternatives, can be problematic; it can have unforeseen consequences.

What does the ideal situation look like? It looks like any other form of property rights: if you want to use my property, you’ve got to pay some kind of licensing fee. The music industry does this, the television industry does this; everybody understands licensing.

So the question is: what does a model put together by the community, filled with voluntarily contributed information, look like? Or what does a licensing scheme look like that lets copyright owners and copyright holders say, yep, here is what you’re allowed to use, in exchange for these economic benefits?

Give this some thought: think about what goes into models.

And if certain groups of people withdraw their content, which is, again, their right, what impact will that have on the biases already present in those models? That’s the show for today.

Thanks for tuning in.

We’ll talk to you next time.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.



Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Get your copy of AI For Marketers

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!