Research reveals that nearly three out of four businesses (71 percent) plan to increase their digital marketing budgets this year, compared with 20 percent that plan to increase their traditional marketing budgets.
You would think that, with all the data available and all that money being shelled out, it would be easy to see whether digital advertising campaigns work. With statistics on click-through rates, online traffic and online sales, you could put together a matrix of numbers to decide whether it was worth it to buy a display ad, or something of that ilk.
However, what can we actually take from these numbers? Are they really a true representation of what is happening online?
A working paper from three researchers who once crunched numbers together at Yahoo suggests we need to reset how we view the traditional metrics used to judge effectiveness.
Here are the three problems with our current approach, according to the authors, Randall Lewis and David Reiley, now at Google, and Justin Rao, now at Microsoft:
1. Just because people don’t click doesn’t mean they don’t buy.
If you are looking at click-through rates of ads and matching them to sales, you are missing a big audience: The customers who see your ad, think about it, and then buy later, often in a physical retail location. Leaving out these customers often understates the effectiveness of your ad campaign. The trouble is, these buyers are a bit tougher to capture, but the authors note that, with the incorporation of some third-party measurements, it is getting easier to figure out how many of them are out there.
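To make the undercounting concrete, here is a minimal sketch in Python, with entirely invented records, of what a click-only tally misses (the field names and numbers are illustrative, not from the paper):

```python
# A toy illustration (hypothetical data) of why click-only attribution
# undercounts: some exposed users convert later without ever clicking.

# Each record: (saw_ad, clicked, purchased) for one user.
users = [
    (True,  True,  True),   # clicked the ad, then bought
    (True,  False, True),   # saw the ad, never clicked, bought later
    (True,  False, False),  # saw the ad, did nothing
    (False, False, True),   # never saw the ad, bought anyway
]

click_through_conversions = sum(saw and clicked and bought
                                for saw, clicked, bought in users)
view_through_conversions = sum(saw and not clicked and bought
                               for saw, clicked, bought in users)

print(f"click-through conversions: {click_through_conversions}")  # 1
print(f"view-through conversions:  {view_through_conversions}")   # 1
# Judging the campaign on clicks alone misses half the exposed buyers here.
```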
2. Just because they click doesn’t mean they bought something because of the ad.
When you look at click-through rates, you generally shout “Yippee” when you can tie a click on the ad directly to a purchase. The ad agency pops a cork, saying it was clearly their creative that did the trick. The marketer assumes it is the quality of the product. The host site pats itself on the back for having such wonderful associated content that it just makes readers want to spend. All may be fooling themselves. Sometimes people just want to buy stuff. So they search for stuff. And then they happen upon your ad and see an easy way to do it. They were buying anyway. You got lucky.
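One way to separate the lucky clicks from the caused ones is a randomized holdout: withhold the ad from a random slice of users and credit the campaign only with sales above that baseline. A minimal sketch, with invented numbers, looks like this:

```python
# A minimal sketch (made-up counts) of separating "bought because of the
# ad" from "was buying anyway": compare exposed users with a randomized
# holdout group that was eligible to see the ad but didn't.

exposed_buyers, exposed_total = 540, 10_000   # saw the ad
holdout_buyers, holdout_total = 500, 10_000   # randomly withheld

exposed_rate = exposed_buyers / exposed_total   # 5.4%
baseline_rate = holdout_buyers / holdout_total  # 5.0%

# Only the difference over the baseline is plausibly caused by the ad.
incremental_rate = exposed_rate - baseline_rate
print(f"naive conversion rate: {exposed_rate:.1%}")
print(f"incremental lift:      {incremental_rate:.1%}")  # 0.4%, not 5.4%
```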
3. Just because you have data doesn’t mean they don’t suck.
Well, the authors didn’t quite put it that way. Instead, they said, “(M)ore sophisticated models that do compare exposed to unexposed users to establish a baseline purchase rate typically rely on natural, endogenous advertising exposure and can easily generate biased estimates due to unobserved heterogeneity.” While at Yahoo, the authors found that studies of buyers vs. non-buyers were put together wrong, with a lot of bad data and noise.
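In plain English: if the people who see more ads are also the people who were more likely to buy in the first place, comparing exposed with unexposed users inflates the apparent effect. A toy simulation, with made-up parameters and an ad that does nothing at all, shows the bias the authors describe:

```python
# A toy simulation (all parameters invented) of bias from endogenous
# exposure: heavy internet users both see more ads and buy more, so an
# exposed-vs-unexposed comparison shows "lift" even when the ad has NO effect.
import random

random.seed(0)
exposed_buys = exposed_n = unexposed_buys = unexposed_n = 0

for _ in range(100_000):
    heavy_user = random.random() < 0.3                         # unobserved heterogeneity
    saw_ad = random.random() < (0.8 if heavy_user else 0.2)    # endogenous exposure
    bought = random.random() < (0.10 if heavy_user else 0.02)  # ad plays no role
    if saw_ad:
        exposed_n += 1
        exposed_buys += bought
    else:
        unexposed_n += 1
        unexposed_buys += bought

print(f"exposed purchase rate:   {exposed_buys / exposed_n:.3f}")
print(f"unexposed purchase rate: {unexposed_buys / unexposed_n:.3f}")
# The exposed rate comes out well above the unexposed rate even though the
# ad did nothing; randomizing who gets exposed is what removes this bias.
```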
So, what is a marketer to do? Well, in many ad buys, click-through rates are already being discounted. Last year, researchers from Hewlett-Packard, using the company’s own digital display ads for printers, found that click-through rates were too random to be used as a metric for effectiveness. Instead, they used them only to compare whether one set of creative did better than another – and even then they admitted they didn’t entirely trust the numbers.
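For the curious, here is roughly what that kind of creative-vs-creative comparison looks like, sketched with invented counts and a standard two-proportion z-test (not necessarily the method HP used):

```python
# A sketch (hypothetical counts) of comparing two creatives' CTRs with a
# two-proportion z-test, rather than reading the raw rate as effectiveness.
from math import sqrt, erf

clicks_a, impressions_a = 120, 100_000   # creative A
clicks_b, impressions_b = 150, 100_000   # creative B

p_a = clicks_a / impressions_a
p_b = clicks_b / impressions_b
p_pool = (clicks_a + clicks_b) / (impressions_a + impressions_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / impressions_a + 1 / impressions_b))
z = (p_b - p_a) / se
p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided

print(f"CTR A: {p_a:.4%}, CTR B: {p_b:.4%}, z = {z:.2f}, p = {p_value:.3f}")
# Even a 25% relative difference in CTR is hard to distinguish from noise
# at these click counts - one reason to hedge on the numbers.
```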
That leaves other approaches, like gauging reach, engagement, brand awareness and traffic to your site. These are often more difficult, and more expensive, to monitor than the cheap-and-easy click-through, but they have an advantage: The data actually may be useful.