In today’s episode, Anne asks how I see the power of large language models having the most utility. I explain what agent networks are and how they allow multiple AI models to work together. This coordination unlocks capabilities beyond any single model, like integrating search engines and workflows. Tune in for examples of agent networks in action and how they will transform productivity.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Got a question for You Ask, I'll Answer? Submit it here!
- Subscribe to my weekly newsletter for more useful marketing tips.
- Subscribe to Inbox Insights, the Trust Insights newsletter for weekly fresh takes and data.
- Find older episodes of You Ask, I Answer on my YouTube channel.
- Need help with your company's data and analytics? Let me know!
- Join my free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
In today’s episode, Anne asks: when you mentioned large language models are more powerful than people imagine, in which ways do you see that power having the most utility? And what excites you about that? Okay.
This is in relation to a whole conversation that we had on Threads, because I’ve left behind the dumpster fire that is the network formerly known as Twitter.
And this was a discussion about large language models and, specifically, agent networks.
So if you’re not familiar, an agent network, in AI parlance, is when you have multiple language models working together.
So if you think about ChatGPT, for example, that is a single instance of a language model. You are talking to one instance of it: you ask a question, it gives you answers, it tells you jokes, it writes limericks, and so on.
You’re used to that, you know how to use that.
And you know how to ask follow-on questions: if you say, “Write a limerick,” and it wasn’t funny, you say, “Okay, let’s revise it.”
There are systems and technologies out there that allow you to glue together language models along with other systems. Probably the most well-known one is LangChain, a scripting environment where you tie together multiple language models.
So, a real practical example: you have one language model that is, say, writing a trashy romance novel.
And you have a second model that reads the output of the first model and edits it: “Well, that doesn’t really make a whole lot of sense,” or “That’s misspelled,” or “There’s no coherence here.”
And you have a third model that inspects the overall output, saying, “Look, there’s no narrative arc here, right?” You know, Suzy and her love interest meet in act one, they’re dating in act two, and suddenly they’re riding hot air balloons.
That third model’s job is to inspect the overall arc and say, “Okay, model one, go back and try again.” Hot air balloons, huh? It should be girl meets girl, girl falls in love with girl, girl breaks up with girl, girl gets back together with girl, and so on and so forth.
And so that’s an example of an agent network: multiple models, controlled by software called LangChain, interacting with each other’s outputs in ways that one model can’t do on its own, in the same way that a software developer really should not be QA’ing their own code.
A language model really should probably not be trying to edit as it writes. As my friend Ann Handley’s book Everybody Writes points out, writing and editing are different tasks; you should not be editing while you’re writing.
And so you would either do that separately, or you hire an editor to edit your writing.
That’s what an agent network is.
It is multiple instances of language models doing different tasks in coordination with each other.
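The writer–editor–reviewer pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a real LangChain program: `call_model` is a hypothetical stand-in for whatever LLM API would actually back each agent.

```python
# Minimal sketch of a three-agent pipeline: writer -> editor -> reviewer.
# call_model is a hypothetical placeholder for a real LLM API call
# (OpenAI, Anthropic, a LangChain chain, etc.).

def call_model(role: str, prompt: str) -> str:
    """Stand-in for a real language-model call; returns a labeled stub."""
    return f"[{role} output for: {prompt[:40]}]"

def agent_pipeline(task: str, max_revisions: int = 3) -> str:
    draft = call_model("writer", task)
    for _ in range(max_revisions):
        edited = call_model("editor", f"Fix grammar and coherence:\n{draft}")
        verdict = call_model("reviewer", f"Check the narrative arc:\n{edited}")
        if "revise" not in verdict.lower():  # reviewer approves
            return edited
        # reviewer rejected: send the writer back with the feedback
        draft = call_model("writer", f"Rewrite, addressing: {verdict}")
    return draft

story = agent_pipeline("Write a chapter of a romance novel")
```

In a real agent network, each role would be a separate model instance with its own system prompt, and the orchestration layer (LangChain or similar) would manage the loop.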
And these are really, really, really powerful because they can also talk to other pieces of software.
So LangChain, for example, can talk to something like a Selenium WebDriver, which is a fancy piece of technology that just browses the web. It’s a web browser that a computer uses instead of your eyes.
So it doesn’t need the back button and all this stuff.
It is just a text-based web browser.
Systems like ChatGPT or Claude can’t browse the web.
ChatGPT used to be able to, but it turns out that people were misusing it.
So it can’t do that anymore.
Selenium WebDriver can, but it needs to be told what to do.
So now, in an agent network, you have a language model doing some generation. LangChain can take that output, pass it to a Selenium instance, and say, “Browse the web and bring back the text from that page.”
And then it either hands that text back to the original language model, or passes it to another language model and says, “Hey, interpret this and do something with it.”
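That generate–browse–interpret loop might look something like this. Both `call_model` and `fetch_page` are hypothetical stubs; a real version would call an LLM API and drive an actual Selenium WebDriver session instead.

```python
# Sketch of the model -> tool -> model loop: a model picks a URL,
# a browser tool fetches the page, and a model interprets the text.
# call_model and fetch_page are stand-ins, not real integrations.

def call_model(prompt: str) -> str:
    # Placeholder for a real LLM call.
    return f"[model response to: {prompt[:40]}]"

def fetch_page(url: str) -> str:
    # A real version would use selenium.webdriver to load the URL
    # and return the page's visible body text.
    return f"<text content of {url}>"

def browse_and_summarize(question: str) -> str:
    url = call_model(f"Give one URL likely to answer: {question}")
    page_text = fetch_page(url)
    return call_model(f"Answer '{question}' using this page:\n{page_text}")

answer = browse_and_summarize("What is LangChain?")
```

The orchestration layer (LangChain, in the example from the transcript) is what shuttles text between the model and the browser; neither component knows about the other.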
You can see this at work in Microsoft Bing.
If you use Microsoft Bing with its ChatGPT integration, when you ask a question of Bing Chat, watch what happens: it takes your question out of natural language.
The GPT-4 model rewrites that question as a Bing query, passes it to the Bing search engine, pulls the results back from the search engine, and passes them back to the GPT-4 model, saying, “Rewrite this into coherent narrative text,” and boom, there’s your answer.
It’s not asking the GPT model for the answer.
It’s asking the Bing search engine.
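Here’s a rough sketch of that pattern: rewrite the question as a query, run the query against a search engine, then have the model narrate the results. The “search engine” below is a tiny in-memory dictionary and the model calls are faked, purely to show the flow of data; the key point is that the factual answer comes out of the search step, not out of the model.

```python
# Sketch of the Bing Chat pattern: model rewrites the question into a
# query, a search engine answers it, and the model narrates the result.
# The index and both "model" functions are stand-ins for the real thing.

SEARCH_INDEX = {
    "population tokyo 2023": "Tokyo's population is about 14 million (2023).",
}

def rewrite_as_query(question: str) -> str:
    # A real model would generate this rewrite; here it is hard-coded.
    return "population tokyo 2023"

def search(query: str) -> str:
    # Stand-in for the search engine: the facts live here, not in the model.
    return SEARCH_INDEX.get(query, "no results")

def summarize(question: str, result: str) -> str:
    # A real model would turn the result into coherent narrative text.
    return f"According to search: {result}"

question = "How many people live in Tokyo?"
answer = summarize(question, search(rewrite_as_query(question)))
```

Swapping the dictionary for a real search API and the stub functions for LLM calls gives you the retrieval-grounded flow described above.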
So Microsoft has essentially glued together different components to make this ecosystem.
It’s the smart way to do large scale implementations of AI.
So that’s the power of these systems.
The models themselves are very powerful, but they’re really good at language.
They’re not really good at other things.
They’re not really good at search.
They’re definitely not good at math.
And they can lose their memory over time because of all sorts of technical limitations.
But they’re really good at language.
So if you take something that’s really good at language and glue it to a database, or you glue it to a web browser, or you glue it to a chat client, or you glue it to a spreadsheet, you are now creating networks of systems that can interact with each other and develop capabilities that are beyond what any one component itself can do.
Again, this is where Google Duet and Microsoft Copilot are going to really unlock the power of these language models, because in Microsoft Copilot, you’ll be able to be in a Word document and say, “Turn this into a PowerPoint presentation.”
The language model is not going to do that directly.
The language model is going to take your input and the document, and it’s going to write code, because code is a language.
It’s going to write code to pass to something like Visual Basic script or Python or whatever backend languages Microsoft uses, and that code will then create the output.
And so that’s how these tools get around their limitations on tasks that are not language, like making PowerPoints.
Writing code is a language and therefore, a language model can control PowerPoint or Excel or Word.
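As a toy illustration of the “code is a language” idea: the model emits code, and the host application executes it. Everything here is an assumption for illustration; the “model output” is canned, and the generated code builds a plain dictionary rather than driving a real presentation library like python-pptx, since Microsoft’s actual backend isn’t public.

```python
# Toy sketch: the model doesn't build the PowerPoint itself; it writes
# code that the host application then executes. The model output below
# is canned, and the "deck" is just a dict, not a real presentation.

def model_generate_code(request: str) -> str:
    # A real model would emit code like this in response to the request;
    # we hard-code a plausible result for illustration.
    return (
        "slides = [{'title': 'Q3 Results', 'bullets': ['Revenue up 12%']}]\n"
        "deck = {'slides': slides}"
    )

code = model_generate_code("Turn this document into a presentation")
namespace = {}
exec(code, namespace)  # the host application runs the generated code
print(namespace["deck"]["slides"][0]["title"])  # -> Q3 Results
```

The division of labor is the point: the model only ever manipulates text (here, source code), and a conventional runtime turns that text into the non-language artifact.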
So that’s where I see these tools having enormous utility: in agent networks, as part of an overall computational environment that brings in all these heterogeneous systems and unifies them with language, the same way we do.
Right? That’s the secret.
That is the secret.
We do this already as humans: we use language. We have keyboards and mice; we type, we talk, and we click on things on the screen.
We are interacting with the software that exists today through language.
So getting a machine to use the same style of communication is not really a stretch.
And therefore, that’s what’s going to unlock productivity.
And that’s really exciting, right? If you get good at prompt engineering, or just prompting, let’s just call it prompting,
and you understand how specific you need to be to get good outcomes,
then as language models find their way into every single piece of software, and as agent networks spring up, you will be able to do more than any colleague who’s not using AI. You’ll be dramatically more productive.
I think Boston Consulting Group just did a study saying that people who use AI within their job were 40% more productive.
Now keep in mind, companies are delighted, excited out of their minds, when they get a 2% increase in employee productivity.
So when you see a 40% increase in productivity, heads explode and money starts raining from the sky.
That’s, that’s what’s exciting about this stuff.
If you get on board and you get proficient at it today, you are paving a path for yourself to be the conductor of the orchestra, right, the leader in the field,
and to be offered bags of money to join existing companies that want to retain their leadership in the face of a highly disruptive trend.
So, really good question.
There’s a lot we can explore on it, but that’s a good start.
So thanks for asking.
If you liked this video, go ahead and hit that subscribe button.
You might also enjoy:
- What Is The Difference Between Analysis and Insight?
- You Ask, I Answer: Google Tag Manager and Google Analytics Integration?
- The Basic Truth of Mental Health
- How to Measure the Marketing Impact of Public Speaking
- Almost Timely News, 17 October 2021: Content Creation Hacks, Vanity Metrics, NFTs
Want to read more like this from Christopher Penn? Get updates here:
Get your copy of AI For Marketers