Almost Timely News, November 26, 2023: ChatGPT Turns 1. What Have We Learned?

Almost Timely News: ChatGPT Turns 1. What Have We Learned? (2023-11-26) :: View in Browser

Almost Timely News

👉 Watch the newest version of my talk, The Intelligence Revolution, recorded live at DigitalNow 2023, now with more talking robot dogs! (plus get the slides) 📺

Content Authenticity Statement

100% of this newsletter’s content was generated by me, the human. When I use AI, I’ll disclose it prominently. Learn why this kind of disclosure is important.

Watch This Newsletter On YouTube 📺

Almost Timely News: ChatGPT Turns 1. What Have We Learned? (2023-11-26)

Click here for the video 📺 version of this newsletter on YouTube »

Click here for an MP3 audio 🎧 only version »

What’s On My Mind: ChatGPT Turns 1. What Have We Learned?

It’s the one-year anniversary of ChatGPT; 2022 was a landmark year, with Stable Diffusion for images and ChatGPT for text. Since then, the world as we know it has changed dramatically.

So, what have we learned from this whiplash rollercoaster ride that we now call generative AI in the last year?

The first and most important thing that generative AI really changed is that non-technical, non-coding people got an on-ramp to AI. We’ve had AI for decades, and we’ve had very sophisticated, capable, and powerful AI for the last 20 years. However, that power has largely been locked away behind steep technical barriers; you had to know how to code in languages like Python, R, Scala, or Julia to make the most of it. Today, you code in plain language. Every time you give an instruction to Bing, Bard, Claude, or ChatGPT, you are coding. You are writing code to create what you hope is a reliable, reproducible result, in the same way a programmer writing Python does.
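If the idea of prompting as programming feels abstract, here is a minimal sketch in Python. Nothing in it comes from any vendor’s documentation; the summarize_prompt function, its wording, and the sample text are all invented for illustration. The point is structural: a prompt is a reusable, parameterized set of instructions, just like a function.

```python
# A minimal sketch: a plain-language prompt treated like a function.
# The function name and wording are illustrative assumptions, not from
# any specific vendor's documentation.

def summarize_prompt(text: str, audience: str, word_limit: int) -> str:
    """Build a reusable, parameterized prompt, the way a programmer
    builds a reusable function."""
    return (
        f"You are an expert editor. Summarize the following text for "
        f"{audience} in roughly {word_limit} words. Keep the key facts, "
        f"drop the filler.\n\nText:\n{text}"
    )

# "Calling" the program: swap the parameters, get a different, repeatable result.
prompt = summarize_prompt(
    text="Generative AI adoption grew rapidly after ChatGPT's release...",
    audience="a marketing team",
    word_limit=100,
)
print(prompt)  # Paste into ChatGPT, Claude, Bard, etc., or send via an API.
```

Change the audience or the word limit and you get a predictably different result, which is exactly what a programmer does when changing a function’s arguments.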

The implications of this change are absurdly large, almost too big to imagine, and we’re only at the very beginning of this change. Clay Shirky once said that a tool becomes societally interesting once it becomes technologically boring, but AI is defying that particular trend. It’s still technologically quite interesting, but its simplicity and ease of use make it societally interesting as well.

And those societal changes are only beginning to be felt. Recently, I was on a call with a colleague who said that their company’s management laid off 80% of their content marketing team, citing AI as the replacement for the human workers. Now, I suspect this is an edge case for the moment; unless that team’s content was so bad that AI was an improvement, I find it difficult to believe the management knew what AI was and was not capable of.

That raises the second major thing we’ve learned in the last year: the general public doesn’t really have a concept of what AI is and is not capable of. The transformer architecture that powers today’s language models is little more than a token-guessing machine: it takes in a series of arbitrary pieces of data called tokens (in language models, tokens are fragments of words, roughly four characters long on average) and then attempts to predict what the next tokens in any given sequence should be. That’s all these models are; they are not sentient, they are not self-aware, they have no agency, and they are incapable of even basic things like math (just ask any of them to write a 250-word blog post and you’ll almost never get exactly 250 words).
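To make “token-guessing machine” concrete, here is a toy sketch in Python. It is a drastic simplification that uses a tiny bigram frequency table instead of a real transformer, and the corpus and output are invented for illustration, but the loop is the core idea: look at the tokens so far, guess the most probable next token, append it, and repeat.

```python
from collections import defaultdict, Counter

# Toy "language model": count which token follows which in a tiny corpus.
# Real models use transformers trained on billions of tokens; the prediction
# loop below is the same idea at a cartoonish scale.
corpus = "the cat sat on the mat because the cat was tired".split()

next_token_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_token_counts[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequently observed token that follows `token`."""
    candidates = next_token_counts.get(token)
    return candidates.most_common(1)[0][0] if candidates else "<end>"

# Generate a few tokens, one guess at a time.
sequence = ["the"]
for _ in range(5):
    sequence.append(predict_next(sequence[-1]))
print(" ".join(sequence))  # e.g. "the cat sat on the cat"
```

Notice there is no understanding anywhere in that loop, just frequency-based guessing, which is why the output can be fluent and still be wrong.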

The general public, however, appears to be under the impression that these tools are all-knowing, all-powerful magic wands that will usher in either a Star Trek future or a Skynet one, and the various AI companies have done little to rein in those extremes. In fact, a substantial number of people have gone on at length about the existential threat AI poses.

Look, AI doesn’t pose world-ending threats in its current form. A word-guessing machine isn’t going to do much else besides guess words. Now, can you take that and put it into an architecture with other components to create dangerous systems? Sure, in the same way that you can take a pressure cooker and do bad things with it to turn it into an explosive device. But the pressure cooker by itself isn’t going to be the cause of mass destruction.

To be clear, there are major threats AI poses – but not because the machines are suddenly sentient. Two of the major, serious, and very near-future threats that very few people want to talk about are:

  1. Structural unemployment.
  2. Income inequality.

AI poses a structural unemployment risk. It’s capable of automating significant parts of jobs, especially entry-level jobs where tasks are highly repetitive. Any kind of automation thrives in a highly repetitive context, and today’s language models do really well with repetitive language tasks. We’ve previously not been able to automate those tasks because there’s variability in the language, even if there isn’t variability in the task. With language models’ ability to adapt to variable language, those tasks are now up for automation – everything from call center jobs all the way up to the CEO delivering talks at a board meeting. (Sit in on any earnings call and the execs largely spout platitudes and read financial results, both tasks machines could do easily.)

As a result, we will, planet-wide, need to deal with this risk of structural unemployment. Yes, a lot of jobs will be created, but many more jobs will be curtailed, because that’s the nature of automation. The US economy, for example, used to be mostly agriculture, and today less than 1% of the population works in agriculture. What the new jobs will look like, we don’t know, but they won’t look anything like the old jobs – and there will be a long, painful period of transition as we get there.

The second risk is substantially worsened income inequality. Here’s why, and it’s pretty straightforward. When you have a company staffed with human workers, you have to take money from your revenues and pay wages with it. Those human workers then go out into the broader economy and spend it on things like housing, food, entertainment, etc. When you have a company staffed more and more with machines and a few human workers to attend to those machines, your company still earns its revenue, but less of it gets disbursed as wages. More of it goes to your bottom line, which is part of the reason why every executive is scrambling to understand AI. The promise of dramatically increased profit margins is too good to pass up – but those profit margins come at a cost. That cost is paying wages to fewer people.
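A quick worked example, with entirely invented numbers, shows the mechanics:

```python
# Hypothetical numbers, purely to illustrate the mechanics described above.
revenue = 10_000_000          # annual revenue, unchanged in both scenarios

# Before automation: 50 workers earning $60K each
wages_before = 50 * 60_000    # $3.0M flows back into the broader economy
profit_before = revenue - wages_before

# After automation: 10 workers plus AI tooling costs
wages_after = 10 * 60_000     # $0.6M in wages
ai_costs = 400_000
profit_after = revenue - wages_after - ai_costs

print(f"Wages paid out: ${wages_before:,} -> ${wages_after:,}")
print(f"Profit:         ${profit_before:,} -> ${profit_after:,}")
# Wages paid out: $3,000,000 -> $600,000
# Profit:         $7,000,000 -> $9,000,000
```

Same revenue, a much larger bottom line, and far less money flowing back out into the economy as wages.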

What happens then is a hyper-concentration of wealth. Company owners keep more money – which is great if you’re an owner or a shareholder, and not great if you are unemployed. That sets up an environment where hyper-concentrated wealth exists, and for most of human history, that tends to end in bloodshed. People who are hungry and poor eventually blame those in power for their woes, and the results aren’t pretty.

The antidote to these two problems is universal basic income funded with what many call a robot tax – essentially, an additional set of corporate taxes. How that plays out will depend very much on individual nations and their cultures; societies that tend toward collectivism, such as Korea, Japan, China, and other East Asian nations, will probably get there quickly, as will democratic socialist economies like the Scandinavian nations. Cultures that are hyper-individualistic, like the USA, may never get there, especially given corporations’ lobbying strength to keep business taxes low.

The third thing we’ve learned in this last year is how absurdly fast the AI space moves. Back in March of 2022, there were only a handful of large language models – GPT-3.5 from OpenAI, Google’s BERT and T5, XLNet, and a few others. Fast forward a year and a half, and we now have tens of thousands of language models. Take a look at all that’s happened for just the biggest players since the release of GPT-3.5:

  • March 15, 2022: GPT-3.5 released
  • April 4, 2022: PaLM 1 released
  • November 30, 2022: ChatGPT released
  • January 17, 2023: Claude 1 released
  • February 1, 2023: ChatGPT Plus released
  • February 27, 2023: LLaMa 1 released
  • March 14, 2023: GPT-3.5-Turbo, GPT-4 released
  • May 10, 2023: PaLM 2 released
  • July 12, 2023: Claude 2 released
  • July 18, 2023: LLaMa 2 released
  • October 16, 2023: GPT-4-V, GPT-4-Turbo released
  • November 21, 2023: Claude 2.1 released

When you look at this timeline, it becomes clear that the power of these models and the speed at which they are evolving are breathtaking. The fact that you have major iterations of models like LLaMa and the OpenAI GPT models within six months of the previous version – with a doubling of capabilities each time – is unheard of. We are hurtling into the future at warp speed, and in a recent talk, Andrej Karpathy (one of OpenAI’s top technologists) said there is so far no indication that we’re running into any kind of architectural limit on what language models can do, other than raw compute limits. The gains we get from models continue to scale well with the resources we put into them – so expect this blistering pace to continue or even accelerate.

That’s quite a tour of the last year and change. What lessons should we take from it?

First, AI is everywhere and its adoption is increasing at a crazy rate, thanks to the promises it makes and its ability to fulfill them in ways that previous generations of AI have not. The bottom line is this: AI will be an expected skill set for every knowledge worker in the very near future. Today, knowledge and skill with AI are a differentiator. In the near future, they will be table minimum. This harkens back to a refrain I’ve been saying in my keynotes for years: AI won’t take your job. A person skilled with AI will take the JOBS (plural) of people who are not. One skilled worker with AI can do the tasks of 2, 3, 5, or even 10 people. You owe it to yourself to get skilled up quickly.

Second, the pace of change isn’t slowing down. That means you need to stick close to foundational models like GPT-4-V, Claude 2.1, LLaMA 2, etc. – models that have strong capabilities and are adapting and changing quickly. Avoid using vendors who build their companies on top of someone else’s AI model unless there’s no other viable alternative, because as you can see from the list earlier, that rate of change is roughly 6-9 months between major updates. Any vendor who builds on a specific model runs the risk of being obsolete in half a year. In general, try to use foundational models for as many tasks as you can.

Third, everyone who has any role in the deployment of AI needs to be thinking about the ethical and even moral implications of the technology. Profit cannot be the only factor we optimize our companies for, or we’re going to create a lot of misery in the world that will, without question, end in bloodshed. That’s been the tale of history for millennia – make people miserable enough, and eventually they rise up against those in power. How do you do this? One of the first lessons you learn when you start a business is to do things that don’t scale. Do things that surprise and delight customers, do things that make plenty of human sense but not necessarily business sense. As your business grows, you do less and less of that because you’re stretched for time and resources. Well, if AI frees up a whole bunch of people and increases your profits, guess what you can do? That’s right – keep the humans around and have them do more of those things that don’t scale.

Here’s a practical example. Today, humans who work in call centers have strict metrics they must operate by. My friend Jay worked in one for years, and she said she was held to a strict 5-minute call time. She had to get the customer off the phone in 5 minutes or less, or she’d be penalized for it. What’s the net effect? Customers get transferred or just hung up on, because the metric employees are measured on is time, not outcome – and almost no one ever stays on the line to complete the survey.

Now, suppose AI tackles 85% of the call volume. It handles all the easy stuff, leaving only the difficult stuff for the humans. You cut your human staff some, but then you remove the time limits for the humans, and instead measure them solely on survey outcomes. Customers will actually make it to the end of the call to complete the survey, and if an employee is empowered to actually take the time to help solve their problems, then your customer satisfaction scores will likely skyrocket.

This would be contingent on you accepting that you won’t maximize your profits – doing so would require you to get rid of almost all your human employees. If you kept the majority of them, you’d have somewhat lower costs, but re-tasking those humans to solve the really thorny problems would let you scale your business even bigger. The easy stuff would be solved by AI, and the harder stuff solved by the majority of humans you kept around for that purpose.
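Here is a rough back-of-the-envelope version of that scenario, with invented numbers, just to show that the staffing math can work:

```python
# Invented numbers, purely to illustrate the scenario described above.
calls_per_month = 100_000
ai_handled_share = 0.85                                  # AI resolves the easy 85%
human_calls = calls_per_month * (1 - ai_handled_share)   # 15,000 hard calls remain

old_agents = 120                 # before: everyone races a 5-minute clock
new_agents = 40                  # after: a smaller team, measured on outcomes
avg_minutes_per_hard_call = 20   # agents take the time to actually fix things

minutes_needed = human_calls * avg_minutes_per_hard_call
minutes_available = new_agents * 160 * 60   # ~160 working hours per agent per month

print(f"Agents: {old_agents} -> {new_agents}")
print(f"Hard calls per month: {human_calls:,.0f}")
print(f"Agent minutes needed: {minutes_needed:,.0f} vs. available: {minutes_available:,.0f}")
```

Whether those particular numbers hold for your business is beside the point; the design choice is to measure the remaining humans on outcomes rather than on the clock.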

Will companies do this? Some will. Some won’t. However, in a world where AI is the de facto standard for handling customer interactions because of its low cost, your ability to differentiate with that uniquely human touch may become a competitive advantage, so give that some thought.

Happy first birthday, ChatGPT, and let’s see what the world of generative AI has in store for us in the year to come.

How Was This Issue?

Rate this week’s newsletter issue with a single click. Your feedback over time helps me figure out what content to create for you.

Share With a Friend or Colleague

If you enjoy this newsletter and want to share it with a friend/colleague, please do. Send this URL to your friend/colleague:

https://www.christopherspenn.com/newsletter

For enrolled subscribers on Substack, there are referral rewards if you refer 100, 200, or 300 other readers. Visit the Leaderboard here.

ICYMI: In Case You Missed it

Besides the newly-refreshed Google Analytics 4 course I’m relentlessly promoting (sorry not sorry), I recommend the episode Katie and I did on business continuity planning when it comes to generative AI.

Skill Up With Classes

These are just a few of the classes I have available over at the Trust Insights website that you can take.

Premium

Free

Advertisement: Generative AI Workshops & Courses

Imagine a world where your marketing strategies are supercharged by the most cutting-edge technology available – Generative AI. Generative AI has the potential to save you incredible amounts of time and money, and you have the opportunity to be at the forefront. Get up to speed on using generative AI in your business in a thoughtful way with Trust Insights’ new offering, Generative AI for Marketers, which comes in two flavors, workshops and a course.

Workshops: Offer the Generative AI for Marketers half- and full-day workshops at your company. These hands-on sessions are packed with exercises, resources, and practical tips that you can implement immediately.

👉 Click/tap here to book a workshop

Course: We’ve turned our most popular full-day workshop into a self-paced course. The Generative AI for Marketers online course launches on December 13, 2023. You can reserve your spot and save $300 right now with your special early-bird discount! Use code: EARLYBIRD300. Your code expires on December 13, 2023.

👉 Click/tap here to pre-register for the course

Get Back to Work

Folks who post jobs in the free Analytics for Marketers Slack community may have those jobs shared here, too. If you’re looking for work, check out these recent open positions, and check out the Slack group for the comprehensive list.

What I’m Reading: Your Stuff

Let’s look at the most interesting content from around the web on topics you care about, some of which you might have even written.

Social Media Marketing

Media and Content

SEO, Google, and Paid Media

Advertisement: Business Cameos

If you’re familiar with the Cameo system – where people hire well-known folks for short video clips – then you’ll totally get Thinkers One. Created by my friend Mitch Joel, Thinkers One lets you connect with the biggest thinkers for short videos on topics you care about. I’ve got a whole slew of Thinkers One Cameo-style topics for video clips you can use at internal company meetings, events, or even just for yourself. Want me to tell your boss that you need to be paying attention to generative AI right now?

📺 Pop on by my Thinkers One page today and grab a video now.

Tools, Machine Learning, and AI

Analytics, Stats, and Data Science

All Things IBM

Dealer’s Choice: Random Stuff

How to Stay in Touch

Let’s make sure we’re connected in the places it suits you best. Here’s where you can find different content:

Advertisement: Ukraine 🇺🇦 Humanitarian Fund

The war to free Ukraine continues. If you’d like to support humanitarian efforts in Ukraine, the Ukrainian government has set up a special portal, United24, to help make contributing easy. The effort to free Ukraine from Russia’s illegal invasion needs our ongoing support.

👉 Donate today to the Ukraine Humanitarian Relief Fund »

Events I’ll Be At

Here’s where I’m speaking and attending. Say hi if you’re at an event also:

  • Social Media Marketing World, San Diego, February 2024
  • Australian Food and Grocery Council, Melbourne, May 2024
  • MAICON, Cleveland, September 2024

Events marked with a physical location may become virtual if conditions and safety warrant it.

If you’re an event organizer, let me help your event shine. Visit my speaking page for more details.

Can’t be at an event? Stop by my private Slack group instead, Analytics for Marketers.

Required Disclosures

Events with links have purchased sponsorships in this newsletter and as a result, I receive direct financial compensation for promoting them.

Advertisements in this newsletter have paid to be promoted, and as a result, I receive direct financial compensation for promoting them.

My company, Trust Insights, maintains business partnerships with companies including, but not limited to, IBM, Cisco Systems, Amazon, Talkwalker, MarketingProfs, MarketMuse, Agorapulse, Hubspot, Informa, Demandbase, The Marketing AI Institute, and others. While links shared from partners are not explicit endorsements, nor do they directly financially benefit Trust Insights, a commercial relationship exists for which Trust Insights may receive indirect financial benefit, and thus I may receive indirect financial benefit from them as well.

Thank You

Thanks for subscribing and reading this far. I appreciate it. As always, thank you for your support, your attention, and your kindness.

See you next week,

Christopher S. Penn




