Do you sell or buy insights on the attribution of business performance to different (online) advertising forms? Then you must have heard of the paper by Google's Randall Lewis and Microsoft's Justin Rao, "On the Near Impossibility of Measuring the Returns to Advertising". Lewis and Rao analyze 25 experiments with hundreds of thousands of randomly exposed internet users and show that the selection effects in advertising experiments are very large compared with the very small individual impact the advertiser hopes to achieve. In other words, advertisers mistake exposure to online marketing for the cause of conversions when it is often simply activity bias: someone's decision to visit, say, a price comparison site is what drives the underlying difference in purchase intent between the exposed and unexposed groups.
Is this the end of attribution modeling? Well, it all depends on the business question you are asking. Lewis and Rao try to pinpoint the exact ROI of specific campaigns using individual data. I buy their conclusion that it is hard to estimate such campaign returns precisely, and that it is tough to establish the causal effect of an individual being exposed to an ad and then converting because of it. In contrast, senior managers typically ask me and other attribution modelers to attribute sales changes to different ad forms or providers. For instance, an online retailer recently asked us about the relative effectiveness of TV, radio, emails, retargeting, search, affiliates, price comparison sites and portals. Each ad form contains several campaigns, including more and less successful ones. The retailer's employees spent almost all their time on tactical questions: which campaign is better, and how can campaign effectiveness be improved within each ad form? What they wanted from us were answers to the more strategic questions: which ad form was (on average) better at getting people to the website, and which at converting them into paying customers? Their intuition told them last-click methods were wrong about online forms and could not capture offline marketing effectiveness.
Our aggregate-level analysis was insightful: Granger causality tests showed that spending on certain ad forms goes up before sales (advertising-to-sales causality), while spending on others goes up at the same time as sales (activity bias). Accounting for other influences, we showed that paid search and retargeting were successful at getting people to the website, but not at converting them into paying customers. Instead, content-integrated ad forms such as affiliates and price comparison sites were most successful at increasing sales. These findings had face validity, as the direction was often in line with management intuition, but our analysis added value by pinpointing how much the retailer's allocation should change. After applying our reallocation, revenues went up by 28%, which helped managers show how wrong last-click methods are.
So what have we learned? Yes, it is true that many consumers clicking on your ads may have reached you anyway, and that it is tough to pinpoint the exact returns of online ad campaigns. However, this does not mean companies should simply stop advertising and hope consumers will find them by themselves. We can show that spending on certain ad forms causes sales, and we can use these attributions to improve our allocation. Happy hunting in this new year!
Prof Koen Pauwels
New book available now
It’s Not the Size of the Data – It’s How You Use It:
Smarter Marketing with Analytics and Dashboards