Sit down with any group of marketers who are self-aware enough to think as both marketers and consumers, and you’ll inevitably hear the same question at some point in the conversation: "Does that really work?"
Usually, it’s in reference to a specific tactic of some kind, particularly one that seems far-fetched or distasteful. Cold emails with rude or punishing subject lines. Social media tactics like mass-tagging people on LinkedIn or cramming your post full of hashtags on Instagram. Outright cold-calling.
Why We Wonder
Do these things work? For some tactics, we have data to demonstrate the efficacy – or lack thereof. For others, we don’t have much visibility. Yet even for tactics with questionable efficacy, we see other marketers persist in using them. Why?
The answer usually comes down to one of a few reasons.
Stale knowledge. This is most common in fast-changing disciplines like SEO. We see some marketers – really terrible ones – using tactics that are ancient, like keyword stuffing a web page. In cases like this, outdated knowledge is causing the problem.
No knowledge. This is most common in very new marketing channels and disciplines where little data is available; what tends to happen is that early adopters set the tone of what "works" without any credibly researched data. What passes for "best practice" tends to be anecdotal. Examples of this include the early days of apps like TikTok or Clubhouse, when no efficacy data existed yet.
Cost-benefit ratio. This is most common in mass-scale disciplines like email marketing and cold calling, where the risks of doing something ineffective or distasteful are low compared to the potential rewards. Spam is the ultimate cost-benefit example; the costs of spamming are so low that even the slightest reward is worthwhile.
Growth at any cost. This is most common not with particular tactics, but in specific kinds of companies. When management dictates unrealistic targets and goals, marketers have to pull out every possible option – even terrible ones – to make those numbers. In this case, no one cares what the costs are; all that matters is hitting the numbers by any means necessary.
Do You Want to Know If It Works?
Here’s the big question: of all the things we wonder about in terms of efficacy, which ones do we actually want answered? The answer here is shaded by nuance as well.
There are some strategies and tactics which go against our values; for example, we as consumers might dislike being the recipients of cold calls. To find out they were highly effective could put us in a values-based bind – do we implement something we know we dislike, even if it’s legal? In those cases, it’s almost better we don’t know whether they work or not, because if we find ourselves coming up short on a key metric, we might be tempted to use the technique despite our objections.
For other strategies and tactics, we need to test and assess, as close to the format of a clinical trial as possible. Does the tactic work? What does it impact? What impacts it? Statistical modeling will help us understand the overall impact of a new strategy or tactic, as long as we design the testing conditions well.
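As a sketch of what that kind of testing can look like in practice, here's a minimal two-proportion z-test in Python. The scenario and all the numbers are hypothetical – a subject-line test where a variant and a control each went to 5,000 recipients – and this is just one simple way to check whether a measured difference is likely to be real rather than noise.

```python
from math import erf, sqrt

def two_proportion_ztest(successes_a, trials_a, successes_b, trials_b):
    """Two-sided two-proportion z-test: did variant A convert differently than B?"""
    p_a = successes_a / trials_a
    p_b = successes_b / trials_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (successes_a + successes_b) / (trials_a + trials_b)
    se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: the variant subject line got 210 replies out of
# 5,000 sends; the control got 150 replies out of 5,000 sends.
z, p = two_proportion_ztest(210, 5000, 150, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a pre-chosen threshold (commonly 0.05) suggests the difference between the two tactics is unlikely to be random chance – which is exactly the kind of evidence that turns "does that really work?" into an answerable question.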
In terms of professional development, make a list of all the strategies, tactics, and techniques whose effectiveness you have questions about. Prioritize them by importance to you and by whether they go against your values. Remove the ones that are objectionable, and then start digging in. Get skilled up on them, understand them, talk to other marketers who use them, and use that list to build out your toolkit and become a better, more effective marketer.