As the year ends, I am contemplating which topics I changed my mind about in 2014. After all, updating your beliefs based on incoming evidence is a key attribute of a scientist. No need to bore you with the new marketing facts I read or uncovered; the general-interest topic that sticks out is the safety of genetically modified organisms (GMOs). I was originally against GMOs (until proven safe), but the evidence I read this year suffices for me to believe they are safe.
Which topics have you changed your mind about in 2014? Can’t name any? You are in good company: none of my 800 Facebook friends could come up with one either. And I have to admit having dug in my heels deeper in most debates this year again. What explains this human reluctance to update beliefs based on new incoming information? Research abounds on the psychological issues with integrating disconfirming information (see Cook and Lewandowsky’s The Debunking Handbook for a fun read). For social scientists, I believe holding on to cherished theories is also influenced by the philosophy of ‘Strong Theory’ versus ‘Strong Evidence’ (a simplification of philosophies of science described in lengthy books).
Strong theory scientists have a specific theory they aim to support or falsify and design experiments or tests to do so. For instance, attribution theory implies that some consumers buying your product on a discount attribute their purchase to the discount and thus think less of your product (lowering brand equity and baseline sales). Loss aversion holds that losses loom larger than gains, and thus implies that the sales loss from returning to the regular price outweighs the sales gain of the discount. Strong theory is the dominant paradigm in the social sciences, where reviewers often decry the ‘absence of a strong, unifying theory’, top journals specifically call for ‘conceptual work’ (i.e., theory without a shred of evidence), and doctoral students are taught to build their model and write up their research starting from a specific theory they set out to test in data. While such strong theory-based science has plenty of benefits (see e.g. Bass 1995), it also suffers from pitfalls such as selection bias and confirmation bias. Basically, researchers have the tendency to select situations where their theory is likely to hold, and to ignore evidence against the theory (e.g. consumer segments for which gains outweigh losses). Proponents of strong theory need to depend either on the highest ethical standards in each researcher or on the highest alertness and replication enthusiasm of the scientific community.
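For readers who like their theories concrete: the loss aversion claim above can be sketched numerically. This is a minimal illustration (not from the post itself) using the prospect-theory value function of Tversky and Kahneman (1992), with their median parameter estimates (alpha = 0.88, lambda = 2.25):

```python
def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x > 0) or loss (x < 0),
    per Tversky & Kahneman's (1992) prospect-theory value function."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)

# A $1 discount followed by a return to the regular price is not neutral:
gain = value(1.0)    # felt benefit when the discount arrives
loss = value(-1.0)   # felt pain when the regular price returns
print(gain + loss)   # negative: the loss looms larger than the gain
```

The asymmetry (lambda > 1) is the whole point: the same objective price change hurts more on the way up than it pleases on the way down, which is why strong theory predicts a net sales penalty from temporary discounts.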
Strong evidence scientists have prior notions, sure, but they appear more open to opposing points of view. This is reflected in their research and papers, which often offer alternative hypotheses or no formal hypotheses at all (e.g. Ehrenberg 1995, Trusov et al. 2009). I especially like studies on the conditions under which, for example, price promotions decrease brand equity (DelVecchio et al. 2006) and give rise to loss aversion (Alkis 2014, Bronnenberg and Wathieu 1996). Based on such research, we have updated our beliefs: price promotions only lower brand equity if the brand is unfamiliar and the discount substantial, and price loss aversion is reversed when consumers care more about quality or are promotion-focused. In my own research, competitive reactions to grocery product price promotions have a minimal impact in the US but a large one in Turkey, and paid online media is more effective than owned media for familiar product brands, but not for unfamiliar brands and services (Demirci et al. 2013). UCLA Prof. Mike Hanssens’ books on empirical generalizations (2009, 2015) are full of examples of strong evidence studies and the insights they generated.
Ironically, such studies have a hard time getting published in top journals and making an impact in the popular press and follow-up research. Is a study set up with competing theories too complicated for reviewers? Or do strong evidence researchers have no good alternative to the linear, strong-theory write-up taught in their doctoral seminars? Are the findings too nuanced for a controversial article or blog post? Do strong evidence researchers “dilute their message” by giving too much information?
At a recent conference, a strong theory modeler explained how he was shifting to strong evidence work due to his doubt in some of the assumptions that guided his previous work (for the insiders: that consumers Bayesian update their beliefs). A hopeful sign of things to come?
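For the non-insiders: the “Bayesian updating” assumption the modeler doubted can be sketched in a few lines. This is a hypothetical toy example (not from the post or the modeler’s work) using the standard conjugate Beta-Bernoulli update, where a consumer revises her belief that a brand is “good” after each experience:

```python
def update(alpha, beta, good_experience):
    """One conjugate Beta-Bernoulli update: a single good or bad
    experience shifts the Beta(alpha, beta) belief about the brand."""
    return (alpha + 1, beta) if good_experience else (alpha, beta + 1)

alpha, beta = 1, 1  # uniform prior: no opinion about the brand yet
for outcome in [True, True, False, True]:  # hypothetical experiences
    alpha, beta = update(alpha, beta, outcome)

print(alpha / (alpha + beta))  # posterior mean belief: 4/6, about 0.67
```

The irony the anecdote trades on is exactly this gap: the model assumes consumers revise beliefs mechanically with every observation, while the rest of this post documents how rarely real people (scientists included) do anything of the sort.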