As the embargo on the Science publication was just lifted today, I am happy to now share my take on the Fake News Fireside Chat at MIT’s Conference on Digital Experimentation (CODE).
Civilization is about understanding who we are, and then protecting and improving ourselves through institutions.
As to understanding human behavior around fake news, the researchers presenting at CODE made substantial progress. Fake news spreads faster, farther, and more broadly than real news because of its novelty (http://science.sciencemag.org/content/359/6380/1146). The human impulse is to click on and share what is novel. This impulse is even stronger for political news than for, say, urban legends or disasters. The actual and potential damage to society is obvious, especially when fake news is weaponized. In my opinion, we basically have three options to counteract it: punishing it, censoring it (e.g. at the platform level), or educating users to recognize it and stop spreading it. The last appears the most feasible option, and it requires users and institutions to work together. It has always been trusted institutions with the right processes that helped us separate fact from fiction. After WWII, journalists got serious about the process of fact-checking; now we all have to get serious about the process by which information is created.
How can we protect each other from fake news? My own rule is to always read before sharing. Sure, you can stop midway through a 20-page article, but I continue to be puzzled by friends who share a one-page blog post without even reading it. Not cool. Second, platforms such as Facebook should invest in tools to protect and educate their users, just as they do for their advertisers. Such tools may include information labels explaining the origin of a news source. Sure, many of us won't read those (just as we don't read the nutritional information on food), but at least the information is at our fingertips when we choose to look. A change of perspective is necessary: Facebook is all about sharing, but it should now care about what is shared. Finally, Google should also take care not to reinforce polarization. For instance, when you watch a YouTube video on a subject, you are typically recommended more extreme views on that subject. Yes, this increases user arousal and time on site, but should that be the only objective?