IBM THINK 2020 Digital Experience: Day 2 Review



Day 2 of THINK 2020 was much more meat and potatoes, from use cases for AI to process automation. Rob Thomas, SVP Cloud and Data, showed a fun stat that early adopters of AI reaped a 165% increase in revenue and profitability, which was nice affirmation. But the big concept, the big takeaway, was on neurosymbolic AI. Let’s dig into this really important idea presented in a session with Sriram Raghavan, Vice President, IBM Research AI.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Today we’re talking about day two of the IBM THINK 2020 Digital Experience, which was much more meat and potatoes than day one. Day one was a lot of flash and showbiz and big-name speakers, as is typical for many events.

Day two was what many of us came for, which is the technical stuff, the in-depth dives into all the neat technologies that IBM is working on.

One of the cool stats of the day was from Rob Thomas, whose title I can’t remember anymore because it keeps changing.

But he said that for organizations that were early adopters of artificial intelligence, they saw a 165% lift in revenues and profitability.

That’s pretty good.

That’s pretty darn good.

Unsurprisingly, because of the way IBM approaches AI, a lot of the focus is on automation, on operational efficiencies, things like that.

So fewer huge radical revolutions, and more of making the things you already do better.

Much, much better.

The big takeaway, though, for the day came from a session with Sriram Raghavan, who is the VP of IBM Research AI.

And he was talking about a concept called neurosymbolic AI, which is a term that I had not heard before today.

I may be behind on my reading or something.

But it was a fascinating dive into what this is.

So there are two schools of artificial intelligence. There’s what’s called classical AI.

And then there is neural AI.

And the two have had this either/or, very binary kind of battle over the decades. Classical AI is where artificial intelligence started, with the idea that you could build what are called expert systems that are hand-trained.

And you’ve thought of every possible outcome.

And the idea being that you would create these incredibly sophisticated systems.

Well, it turns out that scales really poorly.

And even with today’s computational resources, they’re just not able to match the raw processing power of what’s called neural AI, which is why we use things like machine learning, neural networks, deep learning, reinforcement learning, transfer learning, active learning, all these different types of learning.

You feed machines massive piles of data, and the machine learns by itself.

The revolution that we’ve had in artificial intelligence over really the last 20 years has been neural AI, and all the power and the cool stuff that it can do.

The challenge with neural AI is that deep learning networks are somewhat brittle and easily poisoned, sometimes called spiking: contaminate them with even a small amount of bad data and you can get some really weird stuff happening.

That, combined with a lack of explainability and interpretability, makes them somewhat challenging. A model comes out and does great things.

But no one can explain exactly why the model works.

We can guess; we can maybe put some interpretability checkpoints in the code, but it’s very difficult and cost-intensive to do that.

So you have these two different schools.

You have the classical “let’s have a pristine knowledge system” and the neural “let’s throw everything in and see what happens.”

Neurosymbolic AI, at least from what Dr. Raghavan was explaining, is when you weld these two things together. You have all this data from the neural side, but the expert-system side effectively forms guardrails that say: here are the parameters the model shouldn’t drift out of. So instead of making it a free-for-all and risking having contaminated data in there, you say, these are the guardrails, which we’re not going to let the model go outside of.
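To make the guardrail idea concrete, here is a minimal sketch of my own, not IBM’s implementation: the “neural” side produces a prediction, and a hand-written symbolic rule table defines the range that prediction is never allowed to drift out of. The parameter names and bounds here are hypothetical examples.

```python
# Symbolic side: hand-written bounds the model's outputs may not leave.
# These names and ranges are hypothetical, not from IBM's system.
GUARDRAILS = {
    "discount_pct": (0.0, 30.0),   # never offer more than a 30% discount
    "reply_length": (1.0, 500.0),  # keep responses to a sane size
}

def guarded(name, neural_value):
    """Clamp a neural prediction to its symbolic guardrail range."""
    lo, hi = GUARDRAILS[name]
    return min(max(neural_value, lo), hi)

print(guarded("discount_pct", 55.0))  # a drifting model gets pulled back to 30.0
```

However sophisticated the neural side gets, the symbolic table stays small, auditable, and handcrafted, which is the point of the welding.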

A really good example of this: if you’ve ever worked with a chatbot of any kind, there are things that chatbots are and are not allowed to say.

And as we develop more and more sophisticated chatbots, the risk of having them contaminated with bad data (internet trolls typing hate speech into these things, for example) is very real.

But this idea of neurosymbolic AI says not just that these specific words are off-limits, but that these entire concepts or categories are not allowed.

And so neurosymbolic AI brings these two worlds together, if you can do it well.

Last year, IBM did a thing called Project Debater, which was their first public demonstration of neurosymbolic AI. The Debater architecture had 10 different APIs, of which several were expert systems saying: these are the types of data to look for, these are the things that are allowed.

These are the things that are explicitly not allowed.

And then the neural side said, here’s the corpus of every English-language article in the database.

And by having the two systems play off of each other, it delivered better performance than either kind of AI would have delivered alone.

So what does this mean for us? It’s a change in the way we think about building artificial intelligence models, instead of having to choose either/or. On one side, you try to handcraft an expert system; again, if you build chatbots, you’ve done this, because you’ve had to drag and drop the workflows and the IF-THEN statements, which is classical AI, not true deep-learning NLP.

The chatbots you’ve built by hand like this are very limited. There’s a range of what they can do, but each one is essentially a classic expert system.

And on the other side, you have the free-for-all.

If we can develop neurosymbolic systems that are relatively easy to use and relatively easy to scale, then you get the best of both worlds: you say, these are the things I want to allow in my chatbot, but it can have conversations about other things as long as it doesn’t fall afoul of the things I don’t want to allow.

So you could say, allow customer service interactions, allow sales interactions, allow marketing interactions, but also allow the history of the company, and allow profiles of the executives.

And if a person interacting with your chatbot asked, well, who exactly is Christopher Penn? it would be able to use the neural side and the expert-system side together and say, I’m going to go look at the Christopher Penn data that I have in this database.

I know what’s allowed and what’s not allowed from the expert-system side, and I’m going to return an intelligible answer. Neurosymbolic AI, I think, has the potential to be a way for us to build more trust in artificial intelligence, because we know the expert-system side is there to guide us: it’s handcrafted by somebody to really build in the rules, the safety, the trust, the things that are explicitly not allowed and the things that are encouraged in the system.
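Sketched in code, and assuming a simple intent classifier on the neural side (the topic names, scores, and banned list here are all hypothetical, not IBM’s actual chatbot design), the two sides might combine like this:

```python
# Symbolic side: hand-crafted allow-list and banned concepts (hypothetical).
ALLOWED_TOPICS = {"customer_service", "sales", "marketing",
                  "company_history", "executive_profiles"}
BANNED_CONCEPTS = {"hate_speech", "harassment"}

def respond(neural_scores):
    """neural_scores: topic -> confidence from the (stubbed) neural side.
    Returns the best-scoring topic that passes the symbolic guardrails,
    or None if nothing legal remains."""
    legal = {topic: score for topic, score in neural_scores.items()
             if topic in ALLOWED_TOPICS and topic not in BANNED_CONCEPTS}
    return max(legal, key=legal.get) if legal else None

# A troll-influenced model might rank a banned concept highest; the
# expert-system side simply refuses to let it through.
print(respond({"hate_speech": 0.9, "executive_profiles": 0.6}))  # executive_profiles
print(respond({"hate_speech": 0.9}))  # None
```

The neural side is free to rank anything; the symbolic side decides what is even eligible, which is how trust gets built in by hand.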

That’s where I see a lot of potential for this concept.

Now, it’s going to be challenging for organizations to build this, because it requires knowledge of both schools of AI, and a lot of folks, particularly in the last 10 years or so, have been solely on the machine learning and neural side.

The expert-system side is something only folks with a lot of gray hair in the AI field will have worked on, because the 70s, the 80s, and the 90s were sort of the time period when expert systems were the thing.

So it’s neat to see this concept coming around.

And a few other things from the day were interesting, too. The talk on propensity modeling and causal inference within machine learning was really cool: being able to use different algorithms to start to hint at causality.

You can’t prove it beyond a shadow of a doubt, but there are definitely some algorithms that can get you closer to causality rather than correlation.
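As one concrete example of the kind of technique involved (my own sketch, not the specific algorithm from the session): inverse propensity weighting estimates a treatment effect by reweighting observations by their estimated probability of treatment, correcting for a confounder that a naive comparison of means would miss.

```python
from collections import defaultdict

def ipw_ate(data):
    """Inverse-propensity-weighted average treatment effect (ATE).
    data: list of (stratum, treated, outcome) tuples; treated is 0 or 1."""
    counts = defaultdict(lambda: [0, 0])  # stratum -> [n_treated, n_total]
    for x, t, _ in data:
        counts[x][0] += t
        counts[x][1] += 1
    e = {x: nt / n for x, (nt, n) in counts.items()}  # P(treated | stratum)
    # Weight each outcome by 1/e (treated) or 1/(1-e) (control) and average.
    total = sum(t * y / e[x] - (1 - t) * y / (1 - e[x]) for x, t, y in data)
    return total / len(data)

# Toy data with a confounder: stratum 1 is both more likely to be treated
# and has a higher baseline outcome. The true per-unit effect is +1.
data = [(0, 1, 1.0), (0, 0, 0.0), (0, 0, 0.0), (0, 0, 0.0),
        (1, 1, 3.0), (1, 1, 3.0), (1, 1, 3.0), (1, 0, 2.0)]

print(ipw_ate(data))  # 1.0 -- the naive treated-vs-control gap would be 2.0
```

The reweighting recovers the true effect of 1.0 where a naive difference of means reports 2.0, which is the sense in which these methods get you closer to causality than correlation.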

That was really cool.

And of course, the quantum stuff, always mind blowing.

And as always, I still can’t put it into words; I can’t say I understand it yet.

But a terrific wrap up.

That’s the end of the live sessions for THINK, but the THINK digital experience is open to the public, I believe for at least a few more weeks, so I’m going to dive into some of the on-demand sessions and dig through those.

As always, if you have follow-up questions, please leave them in the comments box, and subscribe to the YouTube channel and the newsletter. I’ll talk to you soon.

Take care.

