Mind Readings: Where is Apple in Generative AI?

In today’s episode, we’re discussing Apple’s strategy in the generative AI space. You’ll gain insights into the capabilities of Apple’s neural engine, the innovative architecture of their M-series chips, and the significant implications for AI and machine learning. Learn about Apple’s approach to integrating AI into their devices, offering not just more power, but also efficiency and practicality. Tune in to discover how Apple is shaping the future of AI on consumer devices.

Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

A lot of folks in recent days, well, really, since like the last quarter of 2023, have been talking about Apple, and saying that Apple is missing the boat on generative AI.

Are they? Let’s take a few different points of view on this topic. Apple has been publishing some disparate data points that I think are worth paying attention to.

Because they tell you kind of the direction Apple might be going. And I should disclose, I have no insider information whatsoever on this topic.

I don’t work for Apple.

I don’t know anyone personally who does work for Apple.

All this is just based on the data they’re publishing publicly, and the things that they’re doing.

First is the Apple neural engine.

It is a common piece of hardware in both these devices: the A-series chips that power the iPhones, and the M-series chips, the M1, the M2, the M3, that Apple makes that are the core of their desktop and laptop computers.

The Apple Neural Engine is a neural processor. Setting aside the marketing speak, what is this thing? If you’ve heard of Google’s specialized tensor processing units, TPUs, the Apple Neural Engine is in the same family of specialized chips.

It’s a type of chip that allows machine learning calculations of very specific kinds to be executed.

And it takes the load off of the CPU and the GPU.

So the Apple Neural Engine, the GPU, and the CPU in Apple devices all share the same memory, right? When you go and buy a MacBook Air, it will ask you how much memory you want to buy, and they give you all these different numbers.

And the rule has always been, obviously, with any computer, Windows or Apple: buy as much memory as you can afford, because memory is like any valuable resource, the more of it you have, the better.

But with modern phones, and with Apple’s desktops, you absolutely want as much memory as you can get, because Apple shares its memory across its Neural Engine, GPU, and CPU.

This is also why the eight-gigabyte Apple MacBook Pros just suck.

They’re basically bricks, because there’s not enough memory available for all the different parts.

Why does Apple do this? Why do they design their systems this way? Speed. Shared memory means that you don’t have to move data from one type of memory to another, like you do, say, in a Windows or Linux system, where you have to move data from CPU memory to the GPU’s video RAM. With Apple, it’s all in one spot.

So the three different components can access the data without having to shuttle it around.

And that makes it much faster.
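To put rough numbers on why shuttling data matters, here is a quick back-of-the-envelope sketch in Python. The bandwidth and model size figures are illustrative assumptions, not Apple’s published specs.

# Back-of-the-envelope: cost of copying model weights to a discrete
# GPU over PCIe vs. unified memory, where no bulk copy is needed.
# All numbers below are rough, illustrative assumptions.

model_size_gb = 14.0   # e.g., a 7B-parameter model at 16-bit precision
pcie_gbps = 32.0       # roughly PCIe 4.0 x16 throughput, in GB/s

copy_seconds = model_size_gb / pcie_gbps
print(f"Discrete GPU: ~{copy_seconds:.2f} s of copying before compute starts")

# Unified memory: the Neural Engine, GPU, and CPU read the same
# physical RAM, so that bulk copy step simply doesn't exist.
print("Unified memory: ~0 s (all three processors share one pool)")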

The M3 chipset, which is part of the newest version of Apple’s laptops right now, as of the time of this recording at the beginning of 2024, is the first of Apple’s chips to have what’s called Dynamic Caching, which can load parts of things like AI models, rather than the whole thing, along with other parts of the tasks that the GPU and the Neural Engine are going to use.

When you look at the pricing and the capabilities of Apple’s M-series chips, they have the base M chip, the M Pro, the M Max, and the M Ultra, sort of the four varieties that they have for any of their product lines. It’s pretty clear that they know people are buying the high-end chips not necessarily for advanced graphics, although you certainly can use them for that.

But the memory bandwidth of those chips, the bandwidth speed, the way they’re architected, is definitely suggestive that Apple knows those chips are going to be super valuable for machine learning and AI.

Next: so that’s chips, that’s hardware. On the software side, Apple’s been releasing some very interesting open source packages recently. They released a toolkit in the last quarter of 2023 called MLX.

MLX is a toolkit that provides much, much faster processing using the Metal architecture.

It’s designed for shared memory.

So it’s designed for Apple’s unique architecture.

And the MLX toolkit does certain operations, like graphics tasks, image generation models, and language models, up to 40% faster than the more common PyTorch toolkit on the same hardware. That’s a big speedup, right? If you can be 40% faster running inference on a language model, say you’re running Mistral locally, 40% is a big speed bump, being able to deliver performance that quickly.
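If you want a feel for what MLX looks like, here is a minimal sketch. It assumes you’re on Apple Silicon with the mlx package installed; the array shapes are arbitrary.

# Minimal MLX sketch: arrays live in unified memory, so the CPU and
# GPU can work on the same data without explicit device-to-device copies.
import mlx.core as mx

a = mx.random.normal((4096, 4096))
b = mx.random.normal((4096, 4096))

c = a @ b    # lazy: this builds the computation but doesn't run it yet
mx.eval(c)   # forces evaluation, on the GPU by default

# Individual operations can be steered to a device via streams;
# no .to(device) copies are needed because the memory is shared.
d = mx.add(c, a, stream=mx.cpu)
mx.eval(d)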

They’re doing multimodal research, they’re doing research to correct hallucinations in language models.

But there was a paper recently that really caught everyone’s eye in the AI space. The paper was called “LLM in a Flash: Efficient Large Language Model Inference with Limited Memory.”

And what they were saying in that paper was, there are ways to store language models in flash memory, rather than dynamic RAM.

And that makes for much, much faster language models.

In the paper, they said: “The practical outcomes of our research are noteworthy. We have demonstrated the ability to run language models up to twice the size of the available DRAM, achieving an acceleration in inference speed by 4-5x compared to traditional loading methods in CPU, and 20-25x in GPU.”

“This breakthrough is particularly crucial for deploying advanced LLMs in resource-limited environments, therefore expanding their applicability and accessibility.”

And they go through some examples using Falcon and OPT, etc.
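To make the core idea concrete, here is a toy sketch in Python. This is not the paper’s implementation; the file name, shapes, and row indices are invented for illustration. It uses a memory-mapped file so that only the weight rows you actually touch get paged in from disk, the stand-in for flash, rather than loading the whole matrix into RAM.

# Toy illustration of the "LLM in a Flash" idea: keep a big weight
# matrix on disk and pull in only the slices the current computation
# needs, instead of holding the whole thing in DRAM.
import numpy as np

rows, cols = 32_000, 4_096
weights = np.lib.format.open_memmap(
    "big_layer.npy", mode="w+", dtype=np.float16, shape=(rows, cols)
)  # disk-backed; not resident in RAM until touched

needed_rows = [5, 42, 1_337]      # e.g., rows a sparsity predictor flags
active = weights[needed_rows, :]  # only these rows are paged in from disk
print(active.shape)               # (3, 4096)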

Pop quiz: which Apple device contains six GPU cores, 16 Neural Engine cores, and only eight gigabytes of RAM?

It’s not the M series chips, right? It is this guy.

The A series, aka the iPhone.

When you put all the clues together of what Apple is doing, all the papers, all the research, they’re all hinting at finding efficient, effective ways to run smaller models, 7 billion parameters or less, on resource-constrained hardware.

While maxing out performance and quality.
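Do the rough math on why that size range matters. A quick sketch, assuming standard bytes-per-parameter for common precisions:

# Rough memory math for a 7B-parameter model at different precisions.
params = 7e9
for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gb = params * bytes_per_param / 1e9
    print(f"{label}: ~{gb:.1f} GB of weights")

# fp16: ~14.0 GB -> far too big for an 8 GB iPhone
# int4: ~ 3.5 GB -> plausible on-device, and per the paper above,
# flash streaming can serve models up to roughly twice available DRAM.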

They’re not talking loudly about it or making crazy claims like a lot of other companies in the AI space have, but you can see the stars aligning; you can see the foundation being prepared.

Apple is looking at ways to put language models and other forms of generative AI on these devices in highly efficient ways that deliver all the benefits, but obviously in a much more controlled way.

Here’s the thing, and I will confess to being an Apple fanboy.

I own probably more Apple devices than I should.

Apple’s not first on much of anything.

They did not have the first GUI, right? Xerox PARC had that. They did not have the first mouse; also Xerox. They did not have the first personal computer; that was IBM, to some degree. And I believe they did not have the first tablet computer, not by a long shot.

I think Toshiba had the first one. They did not have the first smartphone; we were using Nokia phones that were reasonably smart long before the iPhone.

They did not have the first MP3 player; iRiver had one years before the iPod. They did not have the first smartwatch, and they certainly did not have the first VR glasses.

Apple has not been first on any of these things.

But they are polished, and in many cases, best, right? That’s Apple’s recipe.

It’s not first, it’s best: take something that could be successful but is all rough edges, and smooth out the rough edges.

That’s really what Apple’s good at: take design, take user experience, and make a smoother experience for something that there’s marketability for.

But what’s out there kind of sucks, right? When you look at Vision Pro, and then you see what Oculus is like, Oculus is kind of a big, clunky device, right? The OS is not particularly intuitive.

The hardware is not super high end.

It does a good job for what it is.

But clearly, Apple’s like, okay, there’s a proven market for this thing, but how do we up-level it and make it a lot smoother? That is where Apple is going.

With generative AI, have they missed the boat? No, they’re on a different boat.

They’re building a different boat for themselves.

And it behooves all of us who are in the space, who are paying attention to what’s happening in the space, to keep an eye on what’s going on in Cupertino.

That’s gonna do it for this episode.

Talk to you next time.

If you enjoyed this video, please hit the like button.

Subscribe to my channel if you haven’t already.

And if you want to know when new videos are available, hit the bell button to be notified as soon as new content is live.



