Fireside Chat: Interview with Manxing Du of Talkwalker



I had a chance to sit down with Manxing Du, Senior Machine Learning Researcher at Talkwalker. We talk about pressing issues in AI and machine learning, natural language processing, bias in datasets, and much more.


Can’t see anything? Watch it on YouTube here.

Listen to the audio here:

Download the MP3 audio here.

Machine-Generated Transcript

What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for watching the video.

Christopher Penn 0:10

All right, in this episode we're talking to Manxing Du from Talkwalker about all things AI and data science.

So Manxing, just to start off, tell us about yourself. What's your background? How'd you get into data science and machine learning?

Manxing Du 0:24


So thank you for inviting me.

My name is Manxing.

I did my bachelor's and my master's in telecommunications engineering, actually.

And then I did my PhD here in Luxembourg in machine learning.

I started doing data analytics projects for my master's thesis, which I did at RISE, the Research Institutes of Sweden.

In that project, I analyzed YouTube users' watching behaviors and discussed the potential gains of caching popular content in a local proxy cache for efficient content distribution, even though there was no machine learning involved in the project.

But that was my very first step into this domain.

Christopher Penn 1:28


That’s very cool.

So you would be telling telecom providers what to cache to reduce bandwidth strain? Yes.


Very cool.

And did it go into production?

Manxing Du 1:40

No, no, not really.



Christopher Penn 1:43


In terms of data science environments and things, what's your favorite environment for working: Jupyter, RStudio? And why?

Manxing Du 1:53

So actually, I use Python all the way.

But sometimes for very quick experiments or for data visualization, I use Jupyter.


Christopher Penn 2:07


So what do you do your Python development in? Is it just a straight text editor?

Manxing Du 2:15

No, I use PyCharm.

Christopher Penn 2:18

Okay, very cool. How do you decide when to do something in a notebook versus when to just write straight-up Python code?

Manxing Du 2:29

For instance, if I just want to quickly take a look at the data, let's say to see the distributions of the labels, or to look at some examples to check the features and so on.

So that I would use the Jupyter Notebook.

And to carry out running experiments, I will switch to PyCharm.


Christopher Penn 2:55


So talk to me about what you do for Talkwalker.

Manxing Du 3:00

So I joined Talkwalker, actually, almost two years ago.

And in our data science team, we mainly work on, of course, finding AI-driven solutions for our products, ranging from image processing to natural language processing, both for text and for audio.

For me, I have worked on improving our document type classification model, particularly to identify news or blogs or forum sites, among others.

And the rest of the time, I have been working on NLP-related projects, mainly processing text.

But that's work in progress, and these are not publicly released yet.

And I'm also working on some more, let's say, practical issues: how do we serve our model efficiently and meet the requirements of the production environment?

Christopher Penn 4:09

Can you talk a bit about the evolution of natural language processing? We all think pretty much everybody started with a bag of words and very simple tokenization.

Where is the field today? And how do you see the most recent big models, like transformers, being used?

Manxing Du 4:31

So these big models, like for example the now very popular transformer-based models.

The most interesting part of those models is that they use contextual embeddings instead of a bag of words, which only embeds each word independently, regardless of the context.

In that case, one word would have only one embedding.

With contextual word embeddings, if one word has multiple meanings, it will have multiple embeddings accordingly.

So it has a lot more potential, and it understands the semantic meaning of the word.

So it helps us solve many real-world problems.
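As a toy illustration of the difference Manxing describes (not Talkwalker's actual models, and with made-up two-dimensional vectors): a static, bag-of-words-style embedding always maps "apple" to one vector, while a contextual model conditions on the surrounding words. Here the "contextual" part is faked with a simple keyword check:

```python
# Toy illustration: static vs. contextual embeddings (made-up vectors).

STATIC = {"apple": [0.9, 0.1]}  # one vector per word, context ignored

def static_embed(word, context):
    # A bag-of-words-style model returns the same vector regardless of context.
    return STATIC[word]

def contextual_embed(word, context):
    # A contextual model (e.g. a transformer) conditions on the surrounding
    # words; here we fake that with a crude keyword check.
    if any(w in context for w in ("eat", "fruit", "juice")):
        return [0.9, 0.1]   # "apple" the fruit
    if any(w in context for w in ("iphone", "laptop", "stock")):
        return [0.1, 0.9]   # "Apple" the company
    return STATIC[word]

same = static_embed("apple", ["i", "eat", "an", "apple"]) == \
       static_embed("apple", ["apple", "stock", "rose"])
diff = contextual_embed("apple", ["i", "eat", "an", "apple"]) != \
       contextual_embed("apple", ["apple", "stock", "rose"])
print(same, diff)  # True True: static is context-blind, contextual is not
```

A real transformer produces these context-dependent vectors from the full sentence rather than from a keyword list; the point is only that one word can get different embeddings in different contexts.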

Christopher Penn 5:27

How does that work with stuff like, for example, hate speech and abusive language?

Manxing Du 5:36

So for that, I think we have what we call noise; we have our noise control.

We will also, of course, train our model based on the context to understand the meaning and then identify them.

And in our training data, before we do other tasks, we do this noise control: we try to filter out this noisy data first, and then we continue with other analysis.

Christopher Penn 6:16

What if somebody wanted to specifically study, say, hate speech, for example? Would they have to have a separate model trained specifically for it?

Manxing Du 6:28

Not necessarily, but I would say we provide general models.

But if you want a really domain-specific model, it is also possible to train your customized model.


Christopher Penn 6:48

How much horsepower does it take, in terms of compute power, for working with some of these models, like BERT or GPT, the GPT-2 family, or the EleutherAI family? Is it something that a technically savvy person could do on a modern laptop? Do you need cloud architecture? Do you need a room full of servers for, like, epic training time? What's the overhead on these models?

Manxing Du 7:19

So I think, if I'm not sure, I think some models, if you load them, could take up, let's say, 512 megabytes or one gigabyte of memory.

And I think normally, if you just want to run a base model, a modern laptop can afford it.

But of course, for us, we use bigger GPU servers.
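The back-of-the-envelope arithmetic is consistent with that figure: BERT-base has roughly 110 million parameters, and at 4 bytes per float32 parameter the weights alone land near the half-gigabyte mark. A quick sketch (the parameter count is approximate):

```python
# Rough memory footprint of a transformer's weights: params * bytes per param.
params = 110_000_000          # BERT-base, approximately
bytes_per_param = 4           # float32
gib = params * bytes_per_param / 1024**3
print(f"~{gib:.2f} GiB just to load the weights")  # ~0.41 GiB
```

Actual usage is higher once you add activations and, for training, optimizer state, which is why production inference and training tend to move to GPU servers.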

Christopher Penn 7:51




What are some of the more interesting machine learning challenges you’re working on right now?

Manxing Du 7:59

So, in general, the most challenging part is, for instance, how do I assign labels to unlabeled documents? If you have a predefined set of topics and you have tons of documents, how do you assign a topic to each document? A very naive approach would be, let's say, we find a few keywords related to the topic.

And then we could do keyword matching on the documents.

And of course, if you want to go one step further, you find the embedding of the document, and then you compute the similarities.

And when you choose the model, how would you compute, let's say, the document embedding? Would you compute word embeddings and aggregate them, or would you compute it based on sentences? So there are multiple choices. And also, of course, we deal with global data, so the documents would be in multiple languages. How do we deal with that?
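The two approaches Manxing sketches, naive keyword matching and embedding similarity, might look like this in outline (toy keywords and toy vectors, not Talkwalker's models):

```python
import math

def keyword_match(doc, topic_keywords):
    # Naive approach: a document gets a topic if any of its keywords appears.
    words = doc.lower().split()
    return {topic for topic, kws in topic_keywords.items()
            if any(kw in words for kw in kws)}

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def nearest_topic(doc_vec, topic_vecs):
    # Embedding approach: assign the topic whose vector is most similar.
    return max(topic_vecs, key=lambda t: cosine(doc_vec, topic_vecs[t]))

topics = {"sports": ["match", "goal"], "finance": ["stock", "market"]}
print(keyword_match("the stock market fell today", topics))  # {'finance'}
print(nearest_topic([0.1, 0.9],
                    {"sports": [1.0, 0.0], "finance": [0.0, 1.0]}))  # finance
```

In a real pipeline the document vector would come from a trained model (aggregated word embeddings or a sentence encoder, the choice she mentions), rather than being hand-written.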

Christopher Penn 9:23

Do you find there's a substantial difference in terms of performance between using the more complex embeddings, like from a transformer model, versus just using bigrams? You know, sort of going back to the naive approach, but using bigrams.

Manxing Du 9:40

I never tried, actually, but I think, for instance, if we want to, let's say, find something related to Apple:

the rather naive word embedding models wouldn't understand the difference between, for instance, the real fruit apple and the Apple products, right? So I think that would be a challenge.

And right now, the bigger, more complex models can, because of the contextual embeddings, understand the meaning of the words, so they're more powerful and more accurate.

Christopher Penn 10:22

Okay. Describe your exploratory data analysis process when you get handed, say, a new data set.

What do you do? What's your recipe for unlocking value from a dataset?

Manxing Du 10:36

So, take text data, for example: we will check the source of the data set and whether it matches our problem or not, because the data could be from social media, or it could be domain-specific data, or it could be from a news website, and so on.

And of course, we may do data cleaning: we may need to translate the emojis into text and also remove user account information.

Also, in this process, we need to try our best to de-bias the text as well.

And we need to check the label distributions, to see if any class, any group, has significantly more data than the other groups, and so on.

And also, we can always run some simple baseline models on it to quickly check the results, and identify, let's say, the misclassified documents, to see which classes we perform better on and which classes we perform worse on.
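The label-distribution check is easy to sketch. A quick pass with a counter flags imbalanced classes before any modeling, and the majority-class share doubles as the accuracy any baseline model must beat (the labels below are hypothetical):

```python
from collections import Counter

labels = ["news"] * 80 + ["blog"] * 15 + ["forum"] * 5   # hypothetical dataset
counts = Counter(labels)
total = sum(counts.values())

# Show each class and its share of the dataset, largest first.
for label, n in counts.most_common():
    print(f"{label}: {n} ({n / total:.0%})")

# A majority-class baseline: always predict the most common label.
majority_acc = counts.most_common(1)[0][1] / total
print(f"majority-class baseline accuracy: {majority_acc:.0%}")  # 80%
```

Here any model scoring near 80% accuracy may just be predicting "news" for everything, which is exactly the kind of imbalance this check is meant to surface.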

Christopher Penn 11:58

Talk a bit more about what you said about de-biasing the text. What does that mean?

Manxing Du 12:04

So, for instance, one example is that emojis come in different genders and different skin colors, and so on.

So when we want to translate the emojis into text, we will remove the gender- and race-related text, to keep it as neutral as possible.
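A minimal sketch of that kind of neutralization: strip skin-tone modifiers (U+1F3FB through U+1F3FF), gender-sign sequences, and variation selectors before mapping emoji to text. The emoji-to-text table here is a tiny hypothetical stand-in, not Talkwalker's actual mapping:

```python
import re

# Skin-tone modifiers U+1F3FB-U+1F3FF, the variation selector U+FE0F, and
# zero-width-joiner + gender-sign sequences.
MODIFIERS = re.compile("[\U0001F3FB-\U0001F3FF\uFE0F]|\u200D[\u2640\u2642]")

EMOJI_TO_TEXT = {                     # hypothetical, tiny mapping table
    "\U0001F44D": " thumbs up ",
    "\U0001F9D1": " person ",
}

def neutralize(text):
    text = MODIFIERS.sub("", text)          # drop skin-tone / gender markers
    for emoji, words in EMOJI_TO_TEXT.items():
        text = text.replace(emoji, words)   # translate remaining emoji to text
    return text

print(neutralize("great \U0001F44D\U0001F3FF"))  # 'great  thumbs up '
```

A production version would use a full emoji database, but the idea is the same: the modifier is removed first, so every variant of an emoji maps to the same neutral phrase.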

Christopher Penn 12:35

Are there cases, though, where those factors would be useful?

Manxing Du 12:43

Yes, I guess so.

But that’s also always a trade off.

Christopher Penn 12:48

So somebody who needed that would have to do that data analysis separately, outside of the environment you're talking about?

Manxing Du 12:59

Yeah, I guess so, yes.

Christopher Penn 13:01


Why is that step in there?

I'm curious about the decision-making process: why is that important or not important?

Manxing Du 13:15

Because right now, we don't want to make assumptions, and we don't want to confuse the model.

And it's very important to keep our data set neutral and clean.

We don't want to introduce too much bias into the data.

The model may pick it up and may focus on, let's say, that one feature in the data to make its decision.

Christopher Penn 13:43


You mentioned labeling of sources and documents.

How do you differentiate? Because there are a lot of, I guess, blurry lines. I'll give you an example.

My personal website is listed in Google News right now.

It's a personal blog; I would argue it's probably not a news source, even though it shows up in Google News.

How do you differentiate between news sources and, you know, some random guy's blog?

Manxing Du 14:15

Yeah, that’s a very, very good question, because it’s very difficult for us as well.

We actually work very closely with our product team.

And then we give rather detailed guidelines for labeling our data.

For instance, let's say in a personal blog, if you are talking about news in a very objective way, then we may classify it as news, even though it's published on your personal blog site.

So yeah, it also depends on what our clients want.

So I would say we need rather clear, detailed guidelines to label our data.

Christopher Penn 15:12

How do you deal with objectivity issues? I’ll give you an example.

Most of the planet agrees that Russia illegally invaded Ukraine.

Right? It’s generally accepted as true.

If you go to the official Russian news website, RIA Novosti, it's a completely different story.

It's basically Kremlin propaganda.

But RIA Novosti would be classified as a news source; it is literally the state's official news source, just like the BBC is the official news source of the United Kingdom. In cases like that, how do you deal with a site that is theoretically accredited but completely disconnected from reality? When you're talking about news sources, how do you classify something as a news source versus propaganda?

Manxing Du 16:05

Yes, so in this case, I guess it depends on how you want to use this data. If you want to use it for, for instance, sentiment analysis, then I guess your data is highly biased.

So I would say we would exclude them from our training data, because, yeah, it's highly biased.

Christopher Penn 16:41

In terms of sentiment analysis, what does the field look like right now? Because in a lot of the different studies I've seen and papers I've read, even with transformer models, it's still kind of a crapshoot.

Manxing Du 17:00

I would say, for us, it depends. If you use, let's say, the vanilla version of a model, well, BERT, for example, is not trained to do sentiment analysis, so of course you may not have the best performance there.

And also, it's not really trained for sentence embeddings, let's say; it's better at word embeddings.

And then how do you aggregate them? That's why at Talkwalker we collect our own training data, and we also customize our models for specific tasks.

So in that case, we make sure that, for instance, for sentiment analysis, we will have better performance than using a model straight off the shelf.

Christopher Penn 18:11


How much human review of the training data is needed for natural language processing models? It's not as easy as, for example, taking e-commerce sales data; that's much easier to model.

Manxing Du 18:31

So first we collect, let's say, from some public data sets.

And we know that these data are used, for instance, to build up some benchmarks, so they are relatively reliable.

And we will also label some data ourselves.

So yeah, we have rather good control of our training data.

And yeah, it takes a lot of time to build up our in-house datasets.


Christopher Penn 19:16

Talk a bit about the mitigation of bias in datasets.

You mentioned, obviously, the de-biasing of some of the text itself.

Is it a valid approach in natural language processing to keep some of the demographic data and use it as a way to remove bias? For example, let's say you have 100 articles by 100 authors, and you have gender information for the authors.

And let's say 80 of them are male and 20 of them are female. In terms of de-biasing the data set, there are obviously a few different ways to do it.

One of the easier ways would be to do something like propensity matching: find the 20 articles that are most similar to the women's articles and choose only those 20 of the 80 men's articles. But obviously, you drop out a lot of information that way.

How do you think about the mitigation of bias, particularly in the problems that you’re being asked to solve?

Manxing Du 20:13

That's a tricky question.

A tricky subject, yes.

So I have watched some talks about bias in training data.

And they said it's always a trade-off: you don't want to remove too much of the demographic information, because you will lose a lot of information as well in that case.

So I guess it depends on your task. For instance, you can keep all the data, do the training, test on your test set, and see if you can observe any mistakes, let's say.

And if those kinds of demographic features really introduce biased predictions, then I would say maybe we need to deal with it.

Otherwise, if the demographic information provides benefits to the prediction, then we should keep it, yeah.
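The matching approach Christopher describes in his question can be sketched as a greedy nearest-neighbor match: for each minority-group document, take the closest not-yet-used majority-group document, yielding a balanced subset and discarding the rest. This is a simplification of full propensity-score matching, on toy feature vectors:

```python
import math

def dist(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def greedy_match(minority, majority):
    # For each minority-group document, pick the closest unused
    # majority-group document; the unmatched majority docs are dropped.
    remaining = dict(enumerate(majority))
    matched = []
    for vec in minority:
        idx = min(remaining, key=lambda i: dist(vec, remaining[i]))
        matched.append(remaining.pop(idx))
    return matched

women = [[0.0, 1.0], [1.0, 0.0]]                         # 2 minority docs (toy)
men = [[0.1, 0.9], [5.0, 5.0], [0.9, 0.1], [4.0, 4.0]]   # 4 majority docs (toy)
print(greedy_match(women, men))  # [[0.1, 0.9], [0.9, 0.1]]
```

The two outlier vectors are discarded, which is exactly the information loss the question and answer both point at: the balanced subset no longer represents the full majority group.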

Christopher Penn 21:44


Do you think, though, and I don't mean Talkwalker specifically, I mean companies in general: how carefully do you see your fellow machine learning and data science practitioners thinking about bias, and making sure that it's a step they account for in their pipelines, and even in their training data?

Manxing Du 22:10

I think we are all fully aware of this problem.

So for us, when we do data collection and so on, we always need to make sure that the datasets are diverse enough.

And we don't collect, for instance, only from a specific domain or a specific region, and so on.

Yeah, so when we build up our own training data sets, we are very careful and try to prepare a rather clean and diverse training set.

Christopher Penn 22:49

How do you deal with drift when it comes to models, particularly around dimensions like bias? Let's say you calibrated a dataset so that it returns authors that are evenly split 50/50 for gender, as a very simple example. But over time, just by nature of the fact that maybe you're pulling in, I don't know, accounting papers, or pick a domain where there's a strong gender bias in one direction or the other, the model will inevitably drift if you just feed it the raw data. How do you deal with drift in models?

Manxing Du 23:28

So for us, before we release our models, of course, we will test them in our production environment, using our production data, to monitor the performance.

And of course, later, if we have feedback from our clients that they are not satisfied with the results, or they see some misclassified documents and so on, it's always possible to label, for instance, a domain-specific data set and then use our AI engine to retrain the model.

Christopher Penn 24:13

How effective are systems like reinforcement learning and active learning for these kinds of models in terms of getting feedback from customers? Like having customers just thumbs-up or thumbs-down an article in the results. How does that work as a feedback loop for retuning models?

Manxing Du 24:33

So for active learning, right now, if we notice that there is a certain type of documents, a certain group of documents, that are misclassified, then we would add those examples. Specifically, we target those examples and add them to the training set.

And we try to learn from those difficult cases.

Christopher Penn 25:11

What advice would you give to aspiring data scientists and machine learning engineers? What things would you warn them about? Looking back at your career so far, what are the things where you'd say, oh, look out for this?

Manxing Du 25:26


So I think, first of course, right now we have tons of big, complex models out there.

And it's very fascinating, and we all want to try them.

But at the beginning, I think it is always beneficial to select a rather simple model, it could even be a decision tree, to build your baseline and to understand your data.

And also, of course, you should never stop learning, because this is a really fast-paced area.

And you should always keep up with the recent research.

And when you sometimes see results that are incredibly good, always double-check; always go back to make sure they are not too good to be true.
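Her "start with a simple model" advice can be made concrete: even a one-level decision tree (a decision stump) on a single feature gives you a baseline and a sanity check on the data. A self-contained sketch on hypothetical document data:

```python
def best_stump(xs, ys):
    # Try a threshold at every observed feature value, in both directions;
    # keep the split with the fewest training errors. This is a one-level
    # decision tree, a "decision stump".
    best = None
    for t in sorted(set(xs)):
        for flip in (False, True):
            preds = [(x >= t) != flip for x in xs]
            errors = sum(p != y for p, y in zip(preds, ys))
            if best is None or errors < best[0]:
                best = (errors, t, flip)
    return best

# Hypothetical feature: document length in words; label: True = "news".
lengths = [120, 90, 800, 950, 70, 1100]
is_news = [False, False, True, True, False, True]

errors, threshold, flip = best_stump(lengths, is_news)
print(f"predict news when length >= {threshold}; training errors: {errors}")
```

If a stump like this already scores well, the data may be nearly separable on one feature, which is worth knowing before reaching for a transformer, and if a later complex model barely beats it, that is the "too good to be true" check in the other direction.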

Christopher Penn 26:31

What research are you keeping an eye on? What's on the horizon, obviously not in production yet, that has caught your interest?

Manxing Du 26:42

For instance, right now, let's say we need to train a model specifically for each problem we want to solve.

And of course, GPT-3 gives us the opportunity to do zero-shot learning: we just describe our task, and then the model will immediately pick it up and give us the results.

And I think in that domain, there are still tons of things that could be done.

And also, is it possible to downsize such giant models into smaller, manageable ones, and use them in production? So, a very interesting question.
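Zero-shot use of a model like GPT-3 amounts to describing the task in the prompt itself, with no task-specific training. A sketch of the prompt construction only; the model call is left out, and the wording is illustrative, not taken from any actual API:

```python
def zero_shot_prompt(task, labels, text):
    # Describe the task and the allowed labels in plain language; a large
    # language model is then asked to complete the "Label:" line.
    return (
        f"Task: {task}\n"
        f"Possible labels: {', '.join(labels)}\n"
        f"Text: {text}\n"
        f"Label:"
    )

prompt = zero_shot_prompt(
    "Classify the sentiment of the text.",
    ["positive", "negative", "neutral"],
    "The new release fixed every bug I reported.",
)
print(prompt)
# This string would be sent to a completion endpoint, and the model's next
# tokens would be taken as the predicted label.
```

The appeal she describes is exactly this: swapping the task means editing a string, not collecting a labeled dataset and retraining.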

Christopher Penn 27:40

What do you think of some of the more novel use cases of natural language processing to solve problems that aren't strictly language? There was a case not too long ago where someone took the sequenced genome of SARS-CoV-2, the COVID virus, transcribed it into essentially words, you know, RNA fragments, just the letter sequences of the amino acids, and then used natural language processing to try to predict mutations with a fairly good degree of success.

How much do you keep up with the way these models can be transferred from one domain to another?

Manxing Du 28:17

Yeah, I have seen those kinds of usage.

I guess you can also, let's say, apply NLP models in the music domain.

I think all of these usages are quite interesting.

And it also shows how powerful these natural language models are right now.

Yeah, and these models definitely have the potential to solve problems in other domains.

Christopher Penn 28:53

Do you think they’ll be sophisticated enough at some point that we’ll be able to use them for example, to restore lost languages?

Manxing Du 29:05

Yeah, I guess so, because I think right now these models can pick up, for instance, some similarities between different languages.

For instance, with one multilingual model, if you train it on one task only in English, and then you test it on the same task but in another language, it wouldn't give you really top performance, but the results are still quite impressive.

So I think the model has the potential to pick up the links between the languages, so yeah, maybe, why not?

Christopher Penn 29:54


And what advice would you give to non-technical folks in particular when they're thinking about artificial intelligence? Because they seem to fall into one of two camps: they're either disbelieving of it entirely, or they think it's entirely magic and can do anything, including, you know, create Terminator robots and other things.

How do you talk to non-technical executives about what AI can and can't do?

Manxing Du 30:24

So I think, personally, I would say we should definitely embrace the enormous potential of AI.

But at the same time, we need to be well aware of the limitations: AI cannot do everything.

For instance, right now, the models tell us the correlations between features, but people mistakenly treat those correlations as causes.

Correlation is not equal to causation.

So, for instance, on Valentine's Day, you see, oh, we have a rather high price for roses, and at the same time we also have very high sales of roses, and they are highly correlated.

But it doesn't mean you can draw the conclusion that, in order to have high sales of roses, we should increase the price, that the high price is the cause of the high sales of roses. That would be wrong.

So I think people should be aware of all these limitations, and also, when you interpret the results, know how to understand the results correctly.

That's very important.
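The roses example is easy to reproduce with made-up numbers: on a toy dataset where both price and sales spike around Valentine's Day, the Pearson correlation between price and sales comes out strongly positive, even though demand (the hidden common cause) is driving both:

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Made-up daily data: price and sales both peak around Valentine's Day.
price = [2.0, 2.1, 2.0, 5.0, 5.5, 2.1, 2.0]
sales = [100, 110, 105, 900, 950, 115, 100]

r = pearson(price, sales)
print(f"correlation(price, sales) = {r:.2f}")  # strongly positive
# The correlation reflects the hidden demand variable, not causation:
# raising the price on an ordinary day would not create holiday demand.
```

This is the trap she warns about: the model faithfully reports the correlation, and the mistaken causal reading is supplied by the human interpreting it.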

Christopher Penn 32:02

So with a model like GPT-3, for example, there is no interpretability or explainability; it really is very much a black box. Given the interest of governments and others, rightly so, in how machines are being used to make decisions, how do you deal with a situation like that?

When somebody says, well, how did the model come up with this answer, and you have this black box, what do you tell them?

Manxing Du 32:35

Yeah, so explainable AI is also a very hot research topic right now.

But I guess, for instance, if you look at chatbots, or you have GPT-2 or GPT-3 write you a story, you can read the story and probably easily tell, oh, this is not really human-written text.

It seems not consistent, or it rather looks weird.

So maybe you can immediately see it's not written by a human.

So I would say, in this case, we are still a bit far away from the real, let's say, intelligent machine.

Christopher Penn 33:44

How do you, personally, and I guess from a professional and corporate perspective, plan on dealing with the absurd amount of content that's going to be generated by a lot of these natural language generation models? Instead of one really good blog post, they'll generate a million mediocre blog posts that still meet their goals, you know, keyword density or other things, mostly for SEO, but will flood all of our public commons, I guess, with machine-generated stuff that is okay but not great.

How do you see companies dealing with this massive explosion of content?

Manxing Du 34:37

So I guess in this case, the first task is to identify which texts are generated by machines and which are the real, let's say, comments, the real articles written by humans. Yeah, and I guess in the future, maybe the noise-control engine should also try to identify this.

So this is also one of the major tasks in the future: to first filter out the machine-generated text, and then to find the human-generated content you're interested in.

Christopher Penn 35:31

Particularly with comments, though, like on product reviews and things, I see it being really difficult. Because on one hand, you might have a machine-generated comment that might have a marker or two, like, okay, that word choice is not how you would normally say something, but it could be somebody who's not a native speaker of that language.

And on the other hand, you have comments that are just put up by human idiots.

I was reading an Amazon product review the other day about a type of apple juice, saying it doesn't taste like fresh apples at all.

Like, it's dried apple powder.

Of course it's not going to taste like, you know, real apples, you idiot.

This human just wrote this absurdly stupid comment on a product.

But you can easily see that a machine learning model trying to understand comments might actually think the machine comment was more useful and valuable, even though it's generated and not by a human, than what the idiot human wrote.

And it poses this challenge, I think: the machines might actually write better product reviews than the human idiot, but they're fake, they're not real authentic reviews. How do you see companies dealing with that, particularly a company like Amazon, where they're going to have people with a very strong interest in bombarding a product with, you know, thousands of fake reviews to boost the ratings?

Manxing Du 36:53

So I guess for those fake accounts, maybe you could also look at their account names and find some patterns, and also how often they post. I think you can look at other aspects, other than only the text they generate. And also, sometimes this machine-generated text may include lots of, let's say, emojis or ad links, and so on.

So if we can identify those comments easily, then we should filter them out and maybe try to study the pattern. And otherwise, if those accounts are difficult even for us to identify, yeah, how can a machine identify them?
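The signals Manxing lists (account-name patterns, posting frequency, link and emoji density) can be combined into a crude rule-based score; the thresholds below are invented purely for illustration, not a real spam filter:

```python
import re

def spam_signals(comment, posts_per_day):
    # Crude, illustrative heuristics; every threshold here is invented.
    links = len(re.findall(r"https?://", comment))
    emojis = sum(1 for ch in comment if ord(ch) >= 0x1F300)
    signals = {
        "many_links": links >= 2,        # ad links
        "many_emojis": emojis >= 5,      # emoji spam
        "high_frequency": posts_per_day >= 50,  # inhuman posting rate
    }
    return signals, sum(signals.values())

signals, score = spam_signals(
    "BUY NOW http://x.example \U0001F525\U0001F525\U0001F525 " * 2,
    posts_per_day=120,
)
print(signals, score)  # all three fire: score 3
```

A production system would learn such signals from data rather than hard-code them, but this is the "look at aspects other than the text itself" idea in miniature.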

Christopher Penn 38:01


I mean, that's the challenge I was having: did a real human write this? I can't believe it. And I looked carefully, like you said, looking for other reviews, and no, this actually was a real, just stupid, person.

Okay, where can folks find out more about you and the work that you're doing?

Manxing Du 38:21

Um, I think if you want to see my previous publications, you can find me on Google Scholar.

And right now, at Talkwalker, we are not publishing research papers.

But I think you can always stay tuned for our product releases and see our new products.

Christopher Penn 38:47

That's Talkwalker.com.



All right.

Thanks so much for being on the show.

Manxing Du 38:53

Thank you for having me here.

It’s very nice talking to you.


