I’m back from IBM THINK 2019. Let’s look at the major highlights from a marketing and AI perspective.
– Watson Anywhere
– Watson OpenScale
– Project Debater (and its implications)
– Watson AutoAI
What does it all mean for you? What will you realistically be able to use in the next year?
FTC Disclosure: Trust Insights is an IBM Registered Business Partner. Any transaction you make with IBM through Trust Insights financially benefits the company and the author indirectly.
Can’t see anything? Watch it on YouTube here.
Listen to the audio here:
- Got a question for You Ask, I Answer? Submit it here!
- Subscribe to my weekly newsletter for more useful marketing tips.
- Find older episodes of You Ask, I Answer on my YouTube channel.
- Need help with your company’s data and analytics? Let me know!
- Join my free Slack group for marketers interested in analytics!
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.
In today’s episode, we’re recapping the major announcements from IBM THINK 2019, at least the ones that caught my eye and that I think will have an impact on what you’re doing with artificial intelligence and machine learning within the realm of marketing. Let’s go through the big announcements first, and then their implications.

Number one was Watson Anywhere: virtualization of the Watson APIs so that you can use them with any machine learning system or any data set, regardless of the environment it’s in, including other people’s clouds. I think this is an announcement that is useful if you are doing work and you need access to some of the Watson-specific APIs, especially ones like visual recognition, natural language understanding, and so on and so forth. So, useful stuff there. It also opens the door, I believe, to using Watson Studio to wrangle other people’s AIs, and that is a very helpful thing, because Studio is a relatively low-code environment. So there are some opportunities there.

The second, of course, was Watson OpenScale, which I talked about a couple of episodes back: what it means for being able to tune models and fix them, and identify when they’re going off the rails, especially with regard to bias. The third was Project Debater, their artificial intelligence that debated a human. It didn’t do as well as I think people expected it to, but it still did some pretty amazing stuff. And fourth was AutoAI. AutoAI allows you to load a data set, and Watson will do its best to create and choose algorithms and build models for you.

Across all of these technologies, I think there are some immediate takeaways. Number one, OpenScale for reducing bias is going to be really important, especially for being able to identify bias when you didn’t plan for it up front in the data set. That’s a big deal, because a lot of folks in machine learning and AI today are deploying models without necessarily taking into account all the different ways that their data sets can be biased. So having OpenScale be able to raise its hand and say, “Hey, something’s wrong here,” is a very powerful option that I think will help reduce unfairness in artificial intelligence. And I like that about the way IBM is approaching AI: this concept of trusted AI, that we will never reap the full benefits of artificial intelligence if we don’t trust the machines to make fair, unbiased decisions.
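To make concrete what “identifying bias you didn’t plan for” means, here’s a minimal sketch of one common fairness metric, disparate impact. This is purely illustrative, not OpenScale’s actual API; the column names, the sample data, and the four-fifths-rule threshold mentioned in the comments are all assumptions.

```python
# Illustrative only: a simple disparate-impact check, the kind of fairness
# metric that tools like Watson OpenScale compute automatically. The data,
# keys, and the 0.8 cutoff (the common "four-fifths rule") are assumptions.

def disparate_impact(records, group_key, favorable_key, privileged_value):
    """Ratio of favorable-outcome rates: unprivileged group / privileged group."""
    priv = [r for r in records if r[group_key] == privileged_value]
    unpriv = [r for r in records if r[group_key] != privileged_value]
    rate = lambda rows: sum(r[favorable_key] for r in rows) / len(rows)
    return rate(unpriv) / rate(priv)

# Hypothetical loan decisions keyed by gender
decisions = [
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 0},
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 0},
]

di = disparate_impact(decisions, "gender", "approved", "M")
# A ratio below ~0.8 is a red flag that the unprivileged group is being
# disadvantaged, which is what a monitoring tool would surface to you.
print(f"Disparate impact: {di:.2f}")
```

The point isn’t the arithmetic, which is trivial; it’s that a monitoring layer can run checks like this on every model continuously, including on attributes you never thought to examine.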
This is something that played into a discussion I had with the lead engineer for Project Debater when I had a chance to talk with her. She was saying that underneath the hood there’s a 300-million-document corpus and 10 different APIs, all essentially doing different things and blending their results together, which explains why it was able to listen, synthesize speech to text, do a document corpus search, and then create natural language in response within four minutes. It’s got a lot of hardware and software running under the hood. But one of those 10 APIs is responsible for ethics and rule enforcement, meaning there are certain rules it has to follow, certain things that it may not do.
And I have some hesitation about that, not because I don’t trust the rules that they put in place; IBM did a laudable job of making sure those rules and thresholds are set high. But when companies, private enterprises, and individuals who have those capabilities are working with these technologies, they may not necessarily put the same level of diligence into their ethics modules that an IBM would. The very worst case would be someone taking the technology and giving it a very different set of ethics rules. Can you imagine, for example, a heavy manufacturing company using the technology to synthesize great, natural-sounding debate, but saying, “We’re going to completely discount any articles in the corpus that are about the environmental impact of this type of manufacturing,” so it can create natural language that sounds great, logical, and well reasoned, but is intentionally biased?
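That corpus-filtering scenario can be sketched in a few lines. Everything here is invented for illustration; it has nothing to do with Project Debater’s actual implementation, and the topics and document titles are made up.

```python
# Illustrative only: how a deliberately filtered corpus skews what an
# argument-generation system can "know". All documents and topics invented.

corpus = [
    {"title": "Manufacturing boosts regional employment", "topic": "economics"},
    {"title": "Factory output hits record highs", "topic": "economics"},
    {"title": "Heavy industry linked to watershed pollution", "topic": "environment"},
    {"title": "Emissions from smelting and local air quality", "topic": "environment"},
]

# An ethically configured system retrieves from the whole corpus...
full_view = corpus

# ...while a deliberately biased one silently drops an inconvenient topic.
biased_view = [d for d in corpus if d["topic"] != "environment"]

print(len(full_view), "documents vs.", len(biased_view), "after filtering")
# Any argument generated from biased_view will sound fluent and well
# reasoned, yet can never cite the environmental evidence it never saw.
```

The danger is exactly that the output shows no trace of the filter: fluency and logical structure survive intact, and only the evidence base has been quietly narrowed.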
And I think there is an opportunity to have the discussion now, and maybe enforcement later, where companies like IBM that sell artificial intelligence technology, particularly as an off-the-shelf solution like that, in addition to having to do things like export controls and other forms of compliance, may have to do an ethics review of the buying company. I would hope there would be situations where they say, “Nope, you can’t buy this; your ethics track record or your stated policies do not align with what we want our technology being used for,” because it can very much be used as an information weapon. There’s more to unpack there, but for the most part, it was a really cool technology test. It was a really good example of what AI is capable of, and it highlights the fact that whoever is ultimately responsible for the output of AI is a human being, or a set of human beings. As consumers and as business owners, we have to constantly be asking: how can this be used inappropriately, or illegally, or to disadvantage a certain group of people?
So let’s go back to AutoAI. With AutoAI, you take a data set, like an export of all your Google Analytics data, and you pour it into the AutoAI system. It will start to process it, do feature engineering, and do a lot of the upfront work that a data scientist would have to do today. Then it starts to help you understand how to model the data set and how to create machine learning algorithms that will help you make better use of the data. So you put in all your Google Analytics data, you say conversions are what I care about, and it will go through, process, and come up with a model (actually, several models) of things it thinks are optimized for conversion: say, time on page is really important, so you should focus on that.
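As a rough sketch of what that kind of automated model selection looks like under the hood: this is not AutoAI’s actual pipeline, just a generic scikit-learn illustration with invented, Google Analytics–style features and synthetic data.

```python
# Illustrative only: the core idea behind tools like AutoAI, which is trying
# several candidate models against a target ("conversions") and ranking
# them. Feature names and data are invented; this is not IBM's pipeline.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 400

# Synthetic web sessions: time on page, pages per session, bounce flag
X = np.column_stack([
    rng.exponential(120, n),   # time_on_page (seconds)
    rng.poisson(3, n),         # pages_per_session
    rng.integers(0, 2, n),     # bounced
])
# In this synthetic data, conversions are driven mostly by time on page,
# so a good model-selection pass should pick up on that signal.
y = (X[:, 0] + rng.normal(0, 60, n) > 150).astype(int)

candidates = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "forest": RandomForestClassifier(n_estimators=50, random_state=0),
}

# Score each candidate with 5-fold cross-validation and keep the best
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}
best = max(scores, key=scores.get)
print("best model:", best, "accuracy:", round(scores[best], 3))
```

A real automated-ML system searches a far larger space (feature transforms, hyperparameters, ensembles), but the loop is the same: generate candidates, score them against the outcome you said you care about, and surface the winners.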
This is a very powerful tool. I think it will be a great time saver for data scientists and for machine learning specialists. I’m not convinced it will help people who are not good at technology or math; I think it’s still too advanced for someone who says, “I don’t want to touch any of this, I just want to hit export and have magic happen.” None of these tools that are on the market or coming to market are going to be magic. They are still deeply rooted in having to do some upfront work. But that said, for people who have a technical aptitude, even without formal training, people who are able to squeeze the most out of things like Google Analytics or Google Data Studio, something like AutoAI could be the thing that tips them over into being able to do machine learning credibly and well.

One of the things that I think is going to be so critical to AutoAI’s success is its bias detection. It has the same bias detection tools as OpenScale, and also as the AI Fairness 360 product, in that it can detect biases in your data as it builds a model and either compensate for them automatically, or spit the data back and ask you, “Hey, this looks like a protected class. Do you want to protect the outcomes?” That will be greatly helpful, I think, to the cause of machine learning and artificial intelligence. Because if someone who doesn’t have a strong background in data science and machine learning is building a model, but the system knows enough to look for biases, the model they build should be more fair than if they were to try to do it themselves with some of the other automatic model-selector tools out there, which may not know to look at something like age or gender or ethnicity and say, “Nope, those are protected classes; we cannot use them for modeling, and we may even want to map specific outcomes.” So if it’s gender, generally speaking, there should be a 50/50 split: whether it’s in sample size or in outcome, the privileged class and the non-privileged class should have the same general outcome. AutoAI has a lot of potential; I’m looking forward to trying it out in the beta, and we’ll have more to share when I can actually get my hands on it and play around with it. But overall, there’s some
really, really good stuff coming out of IBM THINK 2019 when it comes to the application of machine learning to the world. I think they’re probably one of the few companies giving serious thought and implementation to the ethics and the mitigation of bias within artificial intelligence. If there was one core thing that came out of the week, across all the different products, it is that they’re thinking about how to keep the technology from being misused, and they’re putting that thinking into the product, which is a major step forward. So: a good show, a lot of fun. I look forward to putting the technology to use and sharing more as we have it. As always, please subscribe to the YouTube channel and the newsletter, and I’ll talk to you soon.

Want help solving your company’s data analytics and digital marketing problems? Visit TrustInsights.ai today and let us know how we can help you.
You might also enjoy:
- The Basic Truth of Mental Health
- Almost Timely News, 17 October 2021: Content Creation Hacks, Vanity Metrics, NFTs
- It's Okay to Not Be Okay Right Now
- Transforming People, Process, and Technology - Christopher S. Penn - Keynote Speaker on Marketing Data Science
- You Ask, I Answer: Google Tag Manager and Google Analytics Integration?
Want to read more like this from Christopher Penn? Get updates here:
Get your copy of AI For Marketers