You Ask, I Answer: Accounting and Tax Large Language Model Strategy?

In today’s episode, Allison asks about building AI models for accounting and taxes. I explain that a hybrid approach works best: a language model plus a frequently updated data source. It’s complex but doable with the right strategy and data. Consulting can help create the blueprint. Tune in for more!

You Ask, I Answer: Accounting and Tax Large Language Model Strategy?

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

In today’s question, Allison asks: Are you aware of any large language models with tax and accounting data, or any being developed? Or, crazy question, how would you develop one? When it comes to accounting and tax, the regulations change all the time.

So it’d be important for the model to be updated as needed, which adds to the complexity of what we need.

Okay, so there are two approaches towards the use of large language models.

One is the perfect memory approach, where you train and fine-tune a model to have perfect memory.

The model runs on its own; it doesn’t need any other data sources.

It knows what to do in any given situation.

Perfect memory models are good.

They’re fast.

They have a very large upfront cost to train them.

And they go out of date really quickly.

Because the moment something changes, the model doesn’t know it, so you have to retrain it on a regular, frequent basis.

The second architecture, the one that we see a lot more companies taking, is the language model as interpreter.

It’s an interpreter that connects to other systems, and those other systems hold the data.

So for example, when you use Microsoft Bing’s chat, Bing is not asking GPT-4 for the answers.

Bing is asking GPT-4 to take the conversational thread that the user types and convert it into queries that are compatible with the Bing search engine.

Bing runs the search, returns the search data to GPT-4, and says, hey, summarize the data that I’ve given you and output it as language to the user.

So the large language model in that case is not being leveraged for its ability to know things.

It is being used for its ability to convert other formats of data into and out of natural language.
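To make that pattern concrete, here’s a minimal sketch in Python of the interpreter approach. The helper functions ask_llm and run_search are hypothetical placeholders, not Bing’s or OpenAI’s actual APIs; the point is only the division of labor: the model translates language, and the external system supplies the facts.

```python
# Minimal sketch of the "interpreter" pattern. ask_llm() and run_search()
# are hypothetical placeholders, not a real Bing or GPT-4 API.

def ask_llm(prompt: str) -> str:
    """Placeholder: send a prompt to a large language model, return its reply."""
    raise NotImplementedError("wire this to the LLM of your choice")

def run_search(query: str) -> list[str]:
    """Placeholder: query an external system (search engine, database)."""
    raise NotImplementedError("wire this to your search engine or database")

def answer(user_question: str) -> str:
    # 1. Use the model only to translate conversation into a query.
    search_query = ask_llm(
        f"Convert this question into a concise search query: {user_question}"
    )
    # 2. The external system, not the model, supplies the facts.
    results = run_search(search_query)
    # 3. Use the model again to turn the returned data back into language.
    return ask_llm(
        "Summarize these search results as a plain-language answer:\n"
        + "\n".join(results)
    )
```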

Of these two approaches, I mean, they’re both good approaches. Perfect memory means big upfront training costs and a model that goes out of date really fast, but it’s very, very fast and very, very capable.

The interpretation version has a lower upfront cost because you’re just using a language model for its language purposes, but a bigger infrastructure cost and bigger operating costs because there’s more machinery being used to do the work. The model does not know everything; the model is only there to interpret.

However, in this situation, where you’re talking about tax data, accounting data, financial data, and the changes in tax regulations, you would probably want the interpreter model, where you have an underlying database of some kind.

Typically, when we’re talking about large language models, we’re talking about vector databases. You want a vector database that is constantly being primed and fed the accounting and tax data that you want.
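As a rough illustration, here’s what keeping that vector database primed might look like. Everything here is a placeholder sketch: embed() stands in for whatever embedding model you use, and the dictionary stands in for a real vector store.

```python
# Sketch of keeping a vector database primed with current tax and accounting
# rules. embed() and the dict are placeholders for your embedding model and
# vector store; the key idea is re-ingesting whenever a regulation changes.

from datetime import date

def embed(text: str) -> list[float]:
    """Placeholder: turn text into an embedding vector."""
    raise NotImplementedError

vector_db: dict[str, dict] = {}  # stand-in for a real vector database client

def ingest_regulation(doc_id: str, text: str, effective: date) -> None:
    """Re-embed and upsert a document each time the underlying rule changes."""
    vector_db[doc_id] = {
        "vector": embed(text),
        "text": text,
        "effective": effective,  # lets queries prefer the current version
    }
```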

And then your language model takes in individual queries, looks first at the vector database and says, hey, what do you know about escrow taxes? And then, if it comes up with less good answers there, it’ll default to asking itself as a language model.

But most of the time, the answer is going to come from the vector database for a given query.
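Here’s a hedged sketch of that query flow: check the vector database first, and only fall back on the model’s own knowledge when retrieval comes up short. The function names and the similarity threshold are illustrative placeholders, not any particular product’s API.

```python
# Sketch of the query flow: retrieve from the vector database first, and only
# fall back to the model's own knowledge when retrieval is weak. All names and
# the similarity threshold are illustrative placeholders.

def search_vector_db(question: str, k: int = 5) -> list[tuple[float, str]]:
    """Placeholder: return (similarity, passage) pairs for the question."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Placeholder: send a prompt to the language model."""
    raise NotImplementedError

def answer_tax_question(question: str, min_similarity: float = 0.75) -> str:
    hits = search_vector_db(question)
    passages = [text for score, text in hits if score >= min_similarity]
    if passages:
        # Most of the time the answer comes from retrieved, up-to-date data.
        return ask_llm(
            "Answer the question using only these passages:\n"
            + "\n".join(passages)
            + f"\n\nQuestion: {question}"
        )
    # Weak retrieval: fall back to the model's general knowledge.
    return ask_llm(f"Answer from general knowledge (may be out of date): {question}")
```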

And that’s the approach I would take if I was asked to build something like this, rather than trying to fine-tune a model alone.

Now, you might want to fine-tune the model in the beginning to give it a good sense of all the language; that’s really important.

You know, there’s gonna be terms in accounting that no one else uses.

And you would want to make sure the model knew of them, understood them from a statistical perspective and could generate them.

Then you would feed the model data to and from the database that contains all the current information.
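For that initial fine-tuning step, the training data might look something like this: short examples that teach the model accounting vocabulary rather than facts that will go stale. The chat-style JSONL layout shown here is just one common format; the exact schema depends on the provider you fine-tune with.

```python
# Sketch of fine-tuning data that teaches accounting vocabulary (terms, not
# facts that go stale). The chat-style JSONL layout is illustrative; the exact
# schema depends on the fine-tuning provider you use.

import json

terminology = [
    {"term": "accrual basis",
     "definition": "Revenue and expenses are recorded when earned or incurred, not when cash moves."},
    {"term": "deferred tax liability",
     "definition": "Tax owed in a future period due to timing differences between book and tax income."},
]

with open("accounting_terms.jsonl", "w") as f:
    for item in terminology:
        record = {
            "messages": [
                {"role": "user", "content": f"What does '{item['term']}' mean?"},
                {"role": "assistant", "content": item["definition"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```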

So that’s the approach.

It’s not crazy to build one.

It’s not crazy to build a system like this.

It is expensive.

It is laborious because you have to gather up all the data you want to train the model on. You can’t just give it, you know, five pages of stuff; you need to give it a good amount of information.

But it’s not crazy to do it.

And lots of people and lots of companies are building custom models or custom integrations, hybrid models where you have a language model that does the interpretation and a data source that is kept up to date, clean, and structured well.

But it’s a really good question.

Shameless plug.

Consulting on this stuff is what my company Trust Insights does.

So if you have questions about wanting to implement this kind of system, and the strategy, and maybe even the blueprint for building the system itself, hit reply, leave a comment, do something like that, because again, we’re happy to help with this.

It’s literally one of the things that we do.

Good question, Allison.

Thanks for asking.

We’ll talk to you soon.

If you liked this video, go ahead and hit that subscribe button.

(upbeat music)


