Why Facebook’s Emotional Testing Isn’t Just A/B Testing


Much ink has rightly been spilled over the revelation that Facebook manipulated users' News Feeds to judge whether emotional states could be altered and were contagious, in research conducted on more than 600,000 people. You can read more of the details here.

On social media, reactions have been mixed, from people calling it an outrage to people saying, “What’s the big deal, it’s just A/B testing, get over it.” That latter point is worth digging into. A/B testing is normally used to optimize conversions and to understand how your content performs. What’s different about what Facebook did is a matter of professional ethics in research. As both Tom Webster and I have pointed out, many organizations in the research space have codes of ethics that give researchers guidelines about what they should and should not do. Here’s one from AAPOR, the American Association for Public Opinion Research, from section I A:

1. We shall avoid practices or methods that may harm, endanger, humiliate, or seriously mislead survey respondents or prospective respondents.

2. We shall respect respondents’ desires, when expressed, not to answer specific survey questions or provide other information to the researcher. We shall be responsive to their questions about how their contact information was secured.

3. Participation in surveys and other forms of public opinion research is voluntary, except for the decennial census and a few other government surveys as specified by law. We shall provide all persons selected for inclusion with a description of the research study sufficient to permit them to make an informed and free decision about their participation. We shall make no false or misleading claims as to a study’s sponsorship or purpose, and we shall provide truthful answers to direct questions about the research. If disclosure could substantially bias responses or endanger interviewers, it is sufficient to indicate that some information cannot be revealed or will not be revealed until the study is concluded.

Where Facebook fell down is on points 1 and 3. On point 3, yes, the Terms of Service permit them to legally do anything they want with their data and their users, but there’s a difference between implied consent buried in the Terms of Service and informed participation in a research study. All Facebook had to do was put up a little header at the top of the News Feed saying, “Facebook would like you to participate in an emotional research study (click here for details). Are you willing to participate? If so, click the Like button on this banner.”

The bigger failure was on point 1. The difference between A/B testing the conversion rate of your website and intentionally altering people’s emotions, positively or negatively, is the severity of the potential outcome. If I succeed in manipulating your behavior to get you to buy X% more stuff, the risk of causing serious, permanent harm to your life beyond the financial impact is low to moderate. If I succeed in manipulating your emotions to make you sad and depressed, there’s a certain percentage of people – small, but non-zero – who will amplify that to the extreme of harming themselves or others.

That’s the difference between regular A/B testing and what Facebook’s experiment did wrong. I would wager a class action lawsuit will be on its way in short order, and it’s deserved for an ethics violation that had realistic potential to cause serious harm to participants in the study.


If you enjoyed this, please share it with your network!


Want to read more like this from ? Get daily updates now:


Get my book!

Subscribe to my free newsletter!


Hat tip is the new rip off and the content shock

If you doubt whether we’re reaching a content shock, a point where creating content is of questionable value (even when the content is outstanding), look no further than this example of what gets the love in social media.

The other day, I was surfing through my Facebook News Feed and I saw a story that piqued my interest, on 22 ways to creatively save space and hide ugly items. The story first appeared from a friend who shared it, and it looked like this.

[Screenshot: “The 22 Most Creatively Genius Ways To Hide Ugly Stuff In Your House” on DIY Cozy Home]

Pretty cool. I scrolled to the bottom of the article and noticed there was a very brief link and attribution (“hat tip”) to a story on another site.

[Screenshot: “Genius Ideas For Hiding Eyesores In Your House | House Organization Ideas”]

Someone had taken this story, mixed up the order of the 22 items, and reshared it as “original content”. I started to feel bad for the creator of this list of 22 items, because clearly it had taken them some time to assemble… or had it? At the bottom, another “hat tip” link, which led to…

[Screenshot: “23 Creative Ways To Hide The Eyesores In Your Home And Make It Look Better” on Bored Panda]

The previous page had taken its content from this page and remixed the order again – a copy of a copy. As far as I could tell, this was the original source of the list.

This is the content shock in action. The originating site was now two remixes away from what was actually being shared, and the remixes would fail a grade school test for plagiarism, hat tip or not. Was it worth it for the original content creator to do the work and publish the content when blatant copies are reaping the rewards? At some point, the cost/benefit balance flips for the original content creator versus those who have the deep pockets or the traffic to rip off (“hat tip”) great original content.

This happens in every sector and vertical of content, as evidenced by this post by my friend Tom Webster. Plagiarism is a symptom of the content shock: the benefit-to-cost ratio of ripping off someone else’s content is higher than that of creating your own.

How do you solve this, if you’re a legitimately valuable content creator who wants to protect your work?

First, make sure your content is inarguably yours. Find and develop a style of imagery and writing that sounds like you and only you. Use unusual words and phrases like “hat tip is the new rip off” that are easily searchable in Google so that you can identify simple plagiarism. Consider trying out tools like Copyscape to monitor, and Google your own stuff frequently.
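The signature-phrase idea above can be automated in a small way. Here’s a minimal sketch of checking a page’s text for your own deliberately unusual phrases; the phrase list and the `flag_possible_copy` helper are illustrative, not part of any real monitoring tool like Copyscape:

```python
import re

# Signature phrases you deliberately seed into your own posts.
# These examples are hypothetical stand-ins.
SIGNATURE_PHRASES = [
    "hat tip is the new rip off",
    "content shock in action",
]

def flag_possible_copy(page_text, phrases=SIGNATURE_PHRASES):
    """Return the signature phrases found in page_text,
    ignoring case and collapsing runs of whitespace."""
    normalized = re.sub(r"\s+", " ", page_text.lower())
    return [p for p in phrases if p in normalized]

suspect = "They said hat tip   is the new rip off, word for word."
print(flag_possible_copy(suspect))  # → ['hat tip is the new rip off']
```

You’d feed this the text of pages you find by Googling your own phrases; a non-empty result means a page is quoting you verbatim and merits a closer look.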

When it comes to images, develop a unique style and watermark, ideally in a way that’s difficult to eliminate because it’s part of the graphic itself. A hideously bad example would be superimposing a chart over a photo of yourself. A less obvious way is to use digital watermarking tools, some of which are built into applications like Photoshop, or even steganography tools that embed copyright information invisibly into images. You won’t necessarily want to do this for every image, but it’s worth the extra steps for big, important stuff.
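To make the steganography idea concrete, here’s a minimal sketch of the classic least-significant-bit technique: hiding a copyright string in the low bits of raw pixel bytes, where the change is invisible to the eye. This is a toy illustration of the principle, not a production tool; real images would need a library such as Pillow to get at the pixel bytes, and the byte array below is a stand-in:

```python
def embed_message(pixels, message):
    """Hide a UTF-8 message in the least significant bit of each
    pixel byte. A 16-bit big-endian length header comes first."""
    data = message.encode("utf-8")
    header = len(data).to_bytes(2, "big")
    bits = []
    for byte in header + data:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_message(pixels):
    """Recover a message hidden by embed_message."""
    bits = [b & 1 for b in pixels]
    def read_bytes(start, count):
        out = bytearray()
        for i in range(count):
            byte = 0
            for bit in bits[start + i * 8 : start + (i + 1) * 8]:
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length).decode("utf-8")

pixels = bytes(range(256)) * 8  # stand-in for raw grayscale pixel data
stamped = embed_message(pixels, "(c) 2014 Example Author")
print(extract_message(stamped))  # round-trips the copyright notice
```

Because only the lowest bit of each byte changes, every pixel value shifts by at most one, which is imperceptible in a photo but survives a straight copy of the file.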

Finally, develop and grow your social network. The more eyes and ears you have out there who know what your stuff looks like and can alert you when they see something questionable, the better. I can’t attend every conference in the world, but by having lots of friends and allies, lots of people are listening on my behalf (and I on theirs). I can’t read every piece of content shared on Twitter, but with 75,000 of you out there reading, chances are we cover an awful lot together. (and thank you for being part of it)

As the content shock weighs ever more heavily on publishers, expect more to behave badly (especially those who actively denigrate journalism and journalist integrity standards), and keep an eye on your content!




Use Twitter to test click through rates

Ever had ad copy you wanted to test without spending a fortune up front?

Ever wondered if your email subject lines are performing as well as they could be?

Here’s an easy way to get the answers to those questions: use Twitter.

If you’re wondering what sorts of things get YOUR audience to click, throw away all of the useless “perfect email subject line” infographics and other nonsense. None of those are tuned to your specific audience. At best, they’re generalizations of the Internet population as a whole. At worst, they’re completely non-representative of your audience and you may end up doing more harm than good by following a formula that actively offends your audience.

So how do you fix this problem? Dig into your basic Twitter analytics.

Start by going to ads.twitter.com. Go to the analytics menu. Select Tweet Activity. Then look for your top tweets in the most recent period in the table that appears:

[Screenshot: Twitter Analytics Tweet Activity table]

If you’re sharing regularly, some tweets will stand out as being favorited, retweeted, or replied to more than others. Make note of them. Make note of the language you’re using: the words, phrases, and syntax that resonate most with your audience. Then use that information to craft your ad copy for short-form ads like Google AdWords, or for subject lines in your email marketing.
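If you export your Tweet Activity data, finding the standouts takes only a few lines. Here’s a minimal sketch that ranks tweets by engagement rate (engagements divided by impressions); the column names and the inline sample data are assumptions standing in for a real export:

```python
import csv
import io

def top_tweets(csv_text, n=3):
    """Return the n rows with the highest engagement rate,
    computed as engagements / impressions."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["rate"] = int(row["engagements"]) / max(int(row["impressions"]), 1)
    return sorted(rows, key=lambda r: r["rate"], reverse=True)[:n]

# Sample data standing in for an exported Tweet Activity CSV.
sample = """Tweet text,impressions,engagements
"New post: use Twitter to test click through rates",1200,96
"Good morning!",900,9
"What one metric do you check daily?",1500,180
"""

for row in top_tweets(sample, n=2):
    print(f'{row["rate"]:.1%}  {row["Tweet text"]}')
```

The tweets that float to the top of this list are the ones whose wording is worth recycling into ad copy and subject lines.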

After that, take those learnings and use them to keep creating better and more compelling tweets, so that you’re always testing, always improving your ability to catch someone’s attention.

Here’s why this matters: your audience is specific to your brand and company. They know you, they presumably trust you at least a little, and thus when you share things in social media that might be of use and value to them, you have a true and accurate way to measure their responses to you, day in and day out.

Of course, this tactic is predicated on the assumption that you routinely share useful information on Twitter with your audience in an attempt to be helpful, in an attempt to build trust with your audience. If you’re not doing that, then this tactic won’t work as well for you. Be sure you’re doing that first!

