Rather than just outright publishing my own conclusions, I’m going to try an experiment with you, my data junkie friends. We’ve talked about influence scores and reputation scoring systems like Klout in prior posts. We’re not going to rehash that here.
What I do want to pose to you is this: what’s the difference between Klout and PeerIndex, and which is a better indicator of influence, if any?
To that end, I present to you a simple data file: a list of 15,737 Twitter handles scored by both Klout and PeerIndex. For the most part, these are people who have used the #Marketing hashtag at least once over the past month or so. In the data file you’ll find the following:
Klout score, PeerIndex score, Difference, Twitter Handle
Take a look at the file (it’s a text CSV). I did the boring part of the work, pulling all the scores. Now it’s up to you to do the juicy part and find the goodies. Mess around with it in the spreadsheet or data crunching tool of your choice. See what conclusions you come up with, then either post your conclusions in the comments or blog it on your own blog, linking back to this post so we can all find it.
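If you'd rather start in Python than a spreadsheet, here's a minimal sketch of parsing the file. The column names and the inline sample rows are my own illustrative stand-ins (the real file has 15,737 rows), assuming the four columns listed above with the difference stored as Klout minus PeerIndex:

```python
import csv
import io

# Made-up sample rows in the format described above:
# Klout score, PeerIndex score, Difference, Twitter Handle
sample = io.StringIO(
    "klout,peerindex,difference,handle\n"
    "53,50,3,examplemarketer\n"
    "80,11,69,bigaccount\n"
    "10,56,-46,quietaccount\n"
)

rows = []
for row in csv.DictReader(sample):
    rows.append({
        "handle": row["handle"],
        "klout": int(row["klout"]),
        "peerindex": int(row["peerindex"]),
        "diff": int(row["difference"]),
    })

# Quick sanity check: the stored difference should equal Klout minus PeerIndex
for r in rows:
    assert r["diff"] == r["klout"] - r["peerindex"]

print(len(rows), "handles loaded")
```

To work with the real file, swap the `io.StringIO` sample for `open("yourfile.csv")` with whatever filename you saved it under.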
Good luck! I’ll post my own conclusions separately after I’ve heard from you.
I always find your stats fascinating. On this one, I was at first shocked at the PeerIndex ratings on your list. I’ve been registered with both PeerIndex and Klout for many months now, maybe over a year, and I see how they work in my own profile, so the rankings listed on your chart seemed extremely off.
So my theory (not based on deep scientific study, only a starting theory at this point) is that PeerIndex, unlike Klout, cannot or will not rate properly if the person in question has not signed up. I went into PeerIndex to check this out, and my own list is consistent with the theory. For example, a PeerIndex of 11 signals to me that the person has not signed up for PeerIndex, and therefore PeerIndex is not rating the data properly. This is unlike Klout, which, as we know, got into enough trouble for taking “unauthorized data,” as the claims go. The information on Twitter is for the most part public, so a straightforward Twitter ranking should be attainable without authorization in any case.
However, from your list, I am pretty sure that everyone with a low PeerIndex score (15 and under) did not sign up for PeerIndex, and the reverse is also true: those who did not sign up for Klout are now seeing an adjustment that removes unauthorized scoring. This explains people with a high Klout or PeerIndex score yet a low score in the alternate tool. It does not necessarily explain the cases where PeerIndex states the user is not signed up yet still shows a fairly accurate PeerIndex score, though that seems to be a lower percentage of your sample.
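One quick way to test this theory against the data file is to flag every handle whose PeerIndex score falls at or below the suspected “not signed up” cutoff and see what fraction of the list that covers. A rough sketch in Python; the threshold comes from the theory above, and the sample scores are made up for illustration, not taken from the actual file:

```python
# Hypothetical threshold from the theory above: a PeerIndex score of 15
# or below may indicate the account never registered with PeerIndex.
UNREGISTERED_THRESHOLD = 15

# Illustrative (made-up) PeerIndex scores, not from the real file
peerindex_scores = [11, 50, 8, 62, 14, 33, 11, 5]

suspected_unregistered = [s for s in peerindex_scores
                          if s <= UNREGISTERED_THRESHOLD]
fraction = len(suspected_unregistered) / len(peerindex_scores)

print(f"{len(suspected_unregistered)} of {len(peerindex_scores)} "
      f"scores fall at or below the threshold")
```

If the theory holds, running this over the real 15,737-row file should show a large cluster of scores at or below the cutoff.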
Check out the #Usguys PeerIndex list: http://bit.ly/ww83qy It’s far more consistent, probably because many of those in the group signed up for both. It also indicates where someone isn’t signed up, but you will still see their score.
So the problem, if my theory is correct, lies in the standardization of norms. People must understand that a rating is only as good as the information the rating tool has available. Most importantly, people using these tools to rate others have to understand that neither Klout nor PeerIndex should be a decision-making factor on a person’s value on social media, nor on their ability to truly influence others on any given topic. It’s simply an indicative tool, which must be looked at with background knowledge and an understanding of where it all fits in.
My Klout is 53; my PeerIndex is 50. When I compare myself to those around me and consider my activity, in general both seem about right. I’ve attached a couple of screenshots of the PeerIndex scores of some of my peers, showing a little of what I was describing above.
Looking forward to your analysis/comments as always!
Wow, @milaspage, I wish I had seen & read your comment first; it would have saved a lot of typing on my part. We reached similar conclusions approaching it in different ways. Curious to see how Chris sees it…
Let me answer with Kred 🙂 Okay, since you confined the argument to your dataset, let me give you my big-data view. I am signed into Kred, Klout, and PeerIndex as a participant/user of all three services and have a fair idea of the strengths & weaknesses of all three.
The main difference between the numbers seems to be how Klout vs. PeerIndex handle unregistered accounts and, secondarily, how they score those accounts.
I am a Sean Moffitt fan @SeanMoffitt:twitter who asserts (correctly 🙂 that influence is subjective and contextual. You do not want to ask me or Sean much about cars. I know when and where I should get my oil changed, and that is about it. Now, ask me a question about Global Supply Chain Management or effective use of Pay Per Click and I’ll stand toe-to-toe with anyone…
Based on that understanding, Kred & PeerIndex have superior platforms to Klout TODAY. Klout is building out a different model for scoring based on Topics. And herein lies the problem.
Zite, your outlier with a high Klout and low PeerIndex score, has an 80.41 Klout because of the sheer volume of followers on Twitter who RT or engage with the account. They have a Klout account and do not have a PeerIndex account. Does this make them influential in #Marketing, however? Should I ask them about corporate blogging? Sustainable agriculture? Health care reform? With Kred & PeerIndex you can discern faster whether an account “could” address a topic of your concern.
Conversely, detoursound is the high PeerIndex / low Klout account. Examination of the user shows they have neither a PeerIndex account nor a Klout account. They had recent activity shown in Klout of around a 56 score and are now flatlined at 10. PeerIndex retains a four-month average and is most likely basing its scoring on past history.
My point in referencing these accounts, or many others, is that:
1) none of the systems have a great scoring mechanism.
— you have to home in on what you want to look at manually
— you have to be more aware of how they work to benefit from using the numbers than you “should”
2) participation in the systems seems to yield more consistent numbers
— Klout is not as reactive / time sensitive as they were; scoring is more stable.
— Scores within Klout are much closer now to what PeerIndex has been reporting. I submit my one-rat study of @Josepf:disqus on both platforms.
To answer your first question, which one is a better indicator of influence and why: the one you are most familiar with and can interpret. If both, then PeerIndex gives you more targeted information at a glance than Klout.
You still need to put your eye on the person in question. The systems, if the person is enrolled, give you “in the ballpark” statistics. In both systems, you would assume scores under 20 are either very inactive accounts or bots; that still doesn’t mean they are not influential… All things being equal, if you have just a few minutes, you’re probably going to engage the 50+ Klout score over the 10… The same is true for PeerIndex. However, if I need a brain surgeon, I’m not going to rely on either system. The MD with the high Klout score probably has too much time on their hands… Give me the antisocial doctor working long hours in the OR and research lab every day.
The learning? Until these scoring systems start to agree roughly and consistently, I’m not sure we should trust any of them except as a novelty. It is NOT a matter of Klout “winning” or Kred building “the right” platform. Instead, it is about us, as users and participants, evolving a shared understanding of just what these numbers should mean and why. As we evolve that understanding, then by definition we should see more consistency, and all of us can feel comfortable that they are giving reliable enough information.
Now… Will that ever happen? 🙂
Best regards, Chris. Thanks for the post and the thought provocation.
Calculating arithmetic differences won’t provide much information. Instead, I ran a simple Pearson correlation between the two scores and got .45. That’s not good, but not very bad either: it shows a positive association of moderate strength between the two scores. If you use the rank-order correlation (that is, if you rank the scores in each column first and then correlate the ranks), you get .43. I then assigned users to quintiles according to their Klout and PI scores and computed the correlation between the quintile groups: it was only .42. One possible explanation is that Klout and PI use different populations of online users. Remember, the scores are computed relative to the group of all people that these services track. It is nearly impossible to download data on ALL Twitter users, so most likely they draw random samples of users and use them as reference groups.
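For anyone who wants to reproduce this kind of check themselves, here is a minimal Pearson and rank-order correlation sketch in plain Python. The score pairs below are made-up stand-ins, not the real 15,737-row dataset, and the rank function breaks ties naively, which is fine for illustration:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def ranks(xs):
    """Rank each value (1 = smallest); ties take their sort position."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

# Made-up (Klout, PeerIndex) score pairs for illustration only
klout =     [53, 80, 10, 45, 62, 30, 71, 25]
peerindex = [50, 11, 56, 40, 58, 28, 60, 30]

r = pearson(klout, peerindex)                      # Pearson
rho = pearson(ranks(klout), ranks(peerindex))      # rank-order (Spearman)
print(round(r, 2), round(rho, 2))
```

Running the same two functions over the real file’s score columns should reproduce values in the neighborhood of the .45 and .43 reported above, if the analysis holds.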
Kostas (Modern Metrix, mmx.typepad.com)