You Ask, I Answer: Open Weights, Open Source, and Custom GPT Models?


In today’s episode, Joseph asks if it’s possible to create your own custom GPT model using open source tools. I explain the difference between open models and truly open source models, noting that true open source would require exposing the training data. I discuss options like fine-tuning existing models or using retrieval augmented generation to customize them, but caution that recreating a full model from scratch would require an unrealistic amount of compute power. I recommend starting with simpler no-code tools to test ideas first before investing heavily in a custom build. Tune in to hear more about the possibilities and limitations around rolling your own language model!

You Ask, I Answer: Open Weights, Open Source, and Custom GPT Models

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s episode, Joseph asks: if I wanted to dabble in an attempt to make my own custom GPT-like language model using something that is open source, would I need to use something like Llama to accomplish that goal? Okay, so this is a little bit tricky.

The Llama models are what we would call open models, in the sense that you can get the model itself, the model weights, download them and use them, and you can fine-tune them and manipulate them and things like that.

But strictly speaking, if you want to adhere to what open source is really about, they are not open source models, and here’s why.

Open source requires the disclosure of the source code, not the compiled binary.

So if you write a piece of software that you compile in C++, and you want it to be open source, you have to give away the C++ source code itself, not just the compiled end product, the app itself.

With language models, extending that analogy, the weights are the compiled end product.

If I give away the Llama model, I’m giving away open weights: here are the weights that you may use, manipulate, and change into a model that performs the tasks you want it to perform.

For it to be truly open source, the training data that the model was made from would also have to be given away, right? So this would be things like Common Crawl, for example, or arXiv and Stack Exchange and Reddit and the Online Books Archive and Project Gutenberg and all that stuff.

If you wanted to do a true open source language model, you would need to open source the training documents themselves.

And some of these exist.

For example, the repository that something like 90% of language models are trained on is called Common Crawl; you can go visit it at commoncrawl.org.

This is a massive, massive archive of essentially the public internet.

It’s a web crawler that goes around and scrapes the web.

And anything it can see, it puts in there, unless people specifically tell it not to.

That huge Common Crawl archive is what a lot of model makers use as their base starting recipe. There is definitely an opportunity for someone to look at that archive and selectively pull pieces out of it to train and build a transformer-based model, a pre-trained transformer model, from scratch.
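To get a feel for what is actually in Common Crawl, you can query its public index before ever downloading a page. Here is a minimal sketch that builds a query URL for the CDX index API; the crawl ID shown ("CC-MAIN-2023-50") is just an example, since a new ID is published with each crawl, and the domain is a placeholder.

```python
# Sketch: building a Common Crawl CDX index query for a domain's captures.
# Fetching the resulting URL (network access required) returns one JSON
# record per capture, including which WARC file holds the raw page bytes.
from urllib.parse import urlencode

def cc_index_query(domain: str, crawl: str = "CC-MAIN-2023-50") -> str:
    """Build a CDX index query URL listing captures for a domain."""
    params = urlencode({"url": f"{domain}/*", "output": "json"})
    return f"https://index.commoncrawl.org/{crawl}-index?{params}"

url = cc_index_query("example.com")
print(url)
```

From those index records, a model builder could selectively pull only the pages they want, which is exactly the "pick pieces out of the archive" approach described above.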

From absolute scratch, you’d say: we’re not going to use Llama as a starting point; we’re going to make our own.

This requires, however, an enormous amount of compute power and time.

When Llama 2 was put together, I think it was something like several roomfuls of A100 GPUs and about $2 million worth of compute time, over what I believe was 12 weeks, for roomfuls of servers to build the model.

Most of us do not have that kind of firepower.

Most of us just can’t afford it.

As nice as my MacBook is, it is not computationally suited to train anything other than a toy model. You could absolutely try building your own language model from scratch, and you might want to, but it’s going to be very, very limited; it’s going to be a toy.

If you want to build a custom GPT-like system, yes, you could start with something from the Llama 2 family, because Llama 2 is open weights and commercially licensable.

And then you would customize it in one of a couple of different ways.

One would be fine-tuning it, where you would give it additional instruction sets and essentially alter the weights in the model so that it performs some instructions better, right? So you might have thousands of examples like, hey, when a customer says this, do this, and you would then essentially retune Llama to follow instructions like that better.

That’s what fine-tuning does.
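Those "thousands of examples" are usually assembled as an instruction-tuning dataset before any training happens. A minimal sketch of what that looks like is below; the field names ("instruction", "response") are a common convention rather than a fixed standard, and the customer-service examples are invented for illustration, since different fine-tuning scripts expect different schemas.

```python
# Sketch: an instruction-tuning dataset, the raw material for a fine-tune.
# Each example pairs an input the model should handle with the behavior
# you want; the training process nudges the weights toward these pairs.
import json

examples = [
    {"instruction": "A customer asks for a refund outside the 30-day window.",
     "response": "Apologize, explain the policy, and offer store credit."},
    {"instruction": "A customer reports an item arrived damaged.",
     "response": "Apologize and arrange a free replacement shipment."},
]

# Fine-tuning jobs typically ingest one JSON object per line (JSONL).
jsonl = "\n".join(json.dumps(ex) for ex in examples)
print(jsonl.splitlines()[0])
```

The point is that fine-tuning quality is mostly a data problem: the model can only learn instruction-following patterns that appear in pairs like these.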

You might also want to add new knowledge to Llama.

And that’s where something like retrieval augmented generation would come into play, where you would say: here’s a library of extra data; look in this library first, before you go to your general library, so that you get better answers.

Those would be methods for customizing it.
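The "look in this library first" idea can be sketched in a few lines. Real retrieval augmented generation systems use embedding vectors and a vector database; the version below scores documents by simple word overlap just to show the retrieve-then-prompt flow. The document texts and the prompt template are invented for illustration.

```python
# Minimal sketch of the "retrieval" half of retrieval augmented generation:
# find the most relevant document, then prepend it to the model's prompt.

def retrieve(query: str, library: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(library,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

library = [
    "Our support line is open weekdays from 9am to 5pm Eastern.",
    "Refunds are processed within 30 days of purchase.",
]

question = "When is the support line open?"
context = retrieve(question, library)[0]

# The retrieved passage becomes part of the prompt the model actually sees,
# so the model answers from your library instead of making something up.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(context)
```

Swapping the word-overlap scoring for embeddings and a vector database changes the retrieval quality, but not the overall shape of the system.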

When you look at something like OpenAI’s Custom GPTs, that is a system built largely on custom instructions.

So you give it specific prompts, plus retrieval augmented generation: you upload files to it and it can talk to those files, or you can make function calls out to external data sources.

It’s not a fine-tune, right? You’re not convincing it to learn certain instructions better, not really.

So that would be how you would accomplish the goal of making that custom GPT-like thing.

If the Llama model just doesn’t answer questions the way you want them answered from an instruction-following perspective, like it just doesn’t follow directions well, you would do a fine-tune. If it doesn’t have the knowledge, you would give it access to some kind of vector database that has the knowledge you want in it, which it could then reference.

If it can follow instructions fine and just makes up answers, retrieval augmented generation is the way to go.

If it can’t even follow instructions, fine-tuning is the way to go.

So that’s how you approach that.

I would say the starting point is trying OpenAI’s Custom GPTs, just to see if your idea is even feasible first.

Because if you can’t get it working in a very simplistic, no-code environment, there’s a good chance you would spend a lot of time, money, and effort on a more custom build that probably wouldn’t work much better.

So give that a shot.

As always, if you have additional questions, feel free to ask them at any time; you can leave them in the comments or whatever.

Thanks for tuning in.

I’ll talk to you next time.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.



Want to read more like this from Christopher Penn? Get updates here:

subscribe to my newsletter here


AI for Marketers Book
Take my Generative AI for Marketers course!

Analytics for Marketers Discussion Group
Join my Analytics for Marketers Slack Group!


