You Ask, I Answer: Keeping Data Confidential with ChatGPT?

In today’s episode, Ravi raises a crucial question about ensuring security and confidentiality while using AI tools like ChatGPT or Bard. I take you through the potential risks involved and emphasize the paramount importance of not inputting sensitive information into these systems. I also delve into a safer alternative, running large language models locally on your own system. To understand the full context and secure your data effectively, you’ll want to watch this episode. Remember, your information is your responsibility. Tune in to learn more!

Summary generated by AI.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Ravi asks, “What steps should we take to ensure security and confidentiality when using tools like ChatGPT or Bard?” Well, that’s easy.

Don’t put confidential secure information into these things.

Ever, ever.

Not too long ago, ChatGPT had over 100,000 accounts compromised.

Hackers got access to the accounts and were able to see the chat history in them.

You should not be putting any kind of sensitive information in these tools at all.

Because even if they were perfectly secure from third parties, you are still putting information that is yours into a system that is not yours, right? That is someone else’s system.

So don’t do it.

And that’s the easy answer.

Suppose you want to use large language models on sensitive or protected information.

How do you do that safely? The safest way to do that is to run a large language model locally.

And there are tools that allow you to do this.

One of which is called GPT4All (that’s GPT, the number 4, all) at gpt4all.io.

This is a public, open-source project. It’s not a web app; it’s a desktop app you run on your computer (Windows, Linux, or Mac), and it installs an interface.

And then you download one of many different models: LLaMA, Vicuna, you name it.

What happens next is, once you’ve downloaded the model of your choice, and assuming you’ve unchecked the “share my information” option, that model runs locally on your computer.

And it’s not gonna be as fast as ChatGPT, right? It’s not gonna be as thorough, and it’ll have more limitations.

But anything you put in it never, ever leaves your computer. It never even goes on your local network; it stays on your computer, and the responses you get are only on your computer.

And so as long as your computer doesn’t get stolen, that data is safe.

That is the safest way to use a large language model with sensitive, secure, or confidential information. You absolutely do not want to be putting that into any third party, even if that third party says, “Yes, we protect your data, really.” Inevitably, with any kind of third-party service, someone has to audit these things; someone has to, from time to time, take a sample and make sure it’s doing what it’s supposed to be doing.

And if you’re putting in confidential information, other people can see that, right?

Yes, it’s going to be in there with a gazillion other people’s responses.

And who knows what they’ve been using the software for. But the reality still is, if you’re putting information into a third-party system, it is at risk.

And there’s no way to fix that, right? There’s no way to not have that happen.

So I would download and install one of these tools.

They are free, they’re open source, and they are local.

And that makes all the difference for secure and confidential information.

Now, for non-secure stuff, like, oh, you know, “Write me an outline for a blog post about marketing automation”?

Sure, you can use ChatGPT for that; you can use Bard or Bing.

Because in instances like that, you’re not going to be causing substantial problems.

“Rewrite this email in a professional tone,” right? As long as there’s not substantial personally identifying information in the email, you can absolutely do that in ChatGPT.
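One way to stay on the safe side of that "no personally identifying information" line is to scrub the obvious stuff before a prompt ever reaches a third-party tool. Here's a minimal, hypothetical sketch in Python; the regex patterns are illustrative assumptions only, and real PII detection would need a proper library or service, not a handful of regexes:

```python
import re

# Illustrative patterns only (an assumption for this sketch);
# they catch obvious formats, not every way PII can appear.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious PII with placeholder tokens before the text
    goes anywhere near a third-party service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane.doe@example.com or 555-867-5309."))
# Reach Jane at [EMAIL] or [PHONE].
```

Even with a scrubber like this, the benchmark below still applies: redaction reduces risk, it doesn't eliminate it.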

So the easiest way to think about it is this.

Would I be okay emailing the contents of what I’m about to hand to ChatGPT to a member of the general public, to some random person on the street? Would I be okay with that? Would my company be okay with that? Say you’re trying to figure out a way to more tactfully phrase a memo about, you know, “Please stop microwaving fish in the common room microwave.”

That’s a pretty obvious yes. Like, yeah, I’ll hand that to any stranger: “Hey, jerk, stop doing that.”

You know, that would be the prompt.

And of course, the response would be, “Please, let’s avoid doing this.”

But if you were putting in the contents of an email saying, like, “Hey, here are the third-quarter sales numbers”?

I wouldn’t give that to some random person on the street.

I wouldn’t give that to a potential competitor.

That’s the easy benchmark for what you should and shouldn’t put into these tools: would you hand it to another person without reservation? If the answer is no, don’t use it; use one of the local models instead.
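That benchmark can even be sketched as a simple routing rule. This is a hypothetical illustration, not a real product: the marker list is an assumption standing in for whatever your company actually considers sensitive, and a real deployment would pair it with human judgment.

```python
# Hypothetical markers encoding the "would I hand this to a
# stranger?" test; anything that trips them stays on a local model.
SENSITIVE_MARKERS = ("confidential", "internal only", "sales numbers",
                     "ssn", "password", "salary")

def route(prompt: str) -> str:
    """Return 'local' for anything you wouldn't hand a stranger
    without reservation, 'third-party' otherwise."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "local"
    return "third-party"

print(route("Rewrite this memo about fish in the microwave"))  # third-party
print(route("Summarize the confidential Q3 sales numbers"))    # local
```

The design choice here mirrors the episode's advice: default the doubtful cases to the local model, since the cost of routing a harmless prompt locally is low, while the cost of leaking a sensitive one to a third party can't be undone.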

So good question.

It’s an important question.

That’s a question people are not thinking about enough.

So, Ravi, good job for thinking about it.

Thanks for asking.

We’ll talk to you next time.

If you like this video, go ahead and hit that subscribe button.

(upbeat music)


Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Get your copy of AI For Marketers

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!