I’m sure you’ll nod in agreement if asked whether you’ve encountered a hoax or a fake image on social media. The phenomenon has existed at least since 1934, when Colonel Robert Wilson supposedly snapped a photograph of the Loch Ness monster, but in our times of instant communication and news-sharing, hoaxes and digital rumors have reached an unprecedented scale.
If you are on social media, you have almost certainly encountered some form of this misinformation – perhaps you saw a shark swimming through a flooded shopping mall in Kuwait, stared sadly at the Statue of Liberty bracing for Hurricane Sandy, or spotted a two-trunked elephant or a three-humped camel (in Qatar, if you please) – or one such image too many. The spread of fake images is one thing, but there is another trend of spreading rumors through tweets, status updates, text messages, emails… you get the gist.
Unsurprisingly, the rapid spread of misinformation online features among the top 10 trends facing the world in 2014 in the comprehensive and thought-provoking Outlook on the Global Agenda 2014 report. Another trend on the same list that is of specific interest to us is intensifying cyber threats, but let’s focus on the spread of misinformation today.
The report was produced by the Network of Global Agenda Councils – a community of over 1,500 premier thought leaders created by the World Economic Forum to foster greater understanding and collaboration around the major issues of our time. It was launched recently to coincide with their Summit on the Global Agenda, which concludes today in Abu Dhabi.
The Report offers the Global Agenda Councils’ perspective on the challenges and opportunities of the coming 12–18 months: a comprehensive overview of the world, drawing upon the foremost global intelligence network and its collective brainpower to explore the most important issues we all face in the coming year.
So what does it say about the rapid spread of misinformation online? Can we – if at all – tackle it effectively?
Farida Vis, Research Fellow at the University of Sheffield and Member of the Global Agenda Council on Social Media, writes in the Report that this is a phase people go through with every new communications technology, making assumptions about its powers and problems. It happened when radio communications first arrived, surely happened when mass publication of books began, when telephones started ringing, and so on.
Any online information is part of a larger, more complex ecology with many interconnected factors. It is difficult to fully map the processes involved in the rapid spread of misinformation, or to identify where that information originates, Farida writes; she suggests we look beyond the specific medium and consider the political-cultural setting in which misinformation spreads and is interpreted.
The Report cites the example of the 2011 UK riots, when rumors were rife on Twitter that looters had attacked a children’s hospital. People tended to believe the story because it matched their preconceptions about rioters and what they might be capable of, but the same Twitter community that had started the rumor quickly debunked it, even before the hospital and the media issued statements calling it false.
The April 2013 Boston bombings, the Turkish protests earlier this year, and many other events have been the subject of misinformation on social media – sometimes even mainstream media has picked it up, to the dismay of readers. However amused or troubled people may be when they first see a hoax photo or rumor, they are not actually happy to receive misinformation. That fact has some of us working to correct this anomaly of digital media: there are organizations and individuals working on processes, apps, and initiatives that could potentially help verify social media information.
The Masdar Institute of Science and Technology and the Qatar Computing Research Institute are working on a platform – Verily – that aims to enlist people in collecting and analyzing evidence to confirm or debunk reports. Then there are services like Storyful that use different methods to fact-check viral information, and apps such as SwiftRiver that allow users to set filters on social media to help sort information based on how “trusted” a source is.
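The idea behind trust-based filtering can be sketched in a few lines. To be clear, this is a hypothetical illustration, not how SwiftRiver or any of these tools actually works: each post is scored by a per-source trust value (the source names and scores below are made up), and only posts above a chosen threshold pass the filter.

```python
# Hypothetical sketch of trust-based source filtering, in the spirit of
# tools like SwiftRiver. Source names and trust scores are illustrative.

# Trust scores per source on a 0.0-1.0 scale (assumed values).
TRUST_SCORES = {
    "verified_news_agency": 0.9,
    "local_reporter": 0.7,
    "anonymous_account": 0.2,
}

def filter_by_trust(posts, threshold=0.5):
    """Keep only posts whose source meets the trust threshold.

    Sources we have never seen before default to a low score of 0.1.
    """
    return [
        post for post in posts
        if TRUST_SCORES.get(post["source"], 0.1) >= threshold
    ]

posts = [
    {"source": "verified_news_agency",
     "text": "Hospital confirms no attack took place."},
    {"source": "anonymous_account",
     "text": "Looters are attacking the children's hospital!"},
]

trusted = filter_by_trust(posts)
# Only the post from the verified news agency passes the filter.
```

The real systems are of course far richer – trust scores would be learned and updated over time rather than hard-coded – but the basic shape of the filter is the same.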
We would like to share some of our observations and some points from the Outlook on the Global Agenda 2014 report about how we can mitigate the rapid spread of misinformation online:
- map the processes involved in the rapid spread of misinformation
- identify where the information originated
- consider the political-cultural setting in which misinformation spreads
- encourage prompt confirmation or denial of rumors by the individuals or organizations concerned
- clarify the role of mainstream media in publishing / broadcasting information based on verified and validated facts and details
- develop and share guidelines and techniques that can help news sources and social media platforms to verify facts
- use computer-assisted processing combined with human evaluation to put information into context
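The last point – computer-assisted processing combined with human evaluation – might look like the following minimal sketch: automated heuristics assign each post a crude suspicion score, and anything above a threshold is routed to a human-review queue instead of being judged by the machine alone. The heuristics and thresholds here are assumptions for illustration, not an established verification algorithm.

```python
# Minimal sketch of computer-assisted triage followed by human review.
# All heuristics and thresholds are illustrative assumptions.

def machine_score(post):
    """Crude automated suspicion score: higher means more suspicious."""
    score = 0.0
    if not post.get("source_verified", False):
        score += 0.5          # unverified accounts are more suspect
    if post.get("reshares", 0) > 1000:
        score += 0.3          # viral spread warrants a closer look
    sensational = ("shark", "monster", "attack")
    if any(word in post["text"].lower() for word in sensational):
        score += 0.2          # sensational wording is a weak signal
    return score

def triage(posts, threshold=0.6):
    """Split posts into an automatic pass list and a human-review queue."""
    passed, review_queue = [], []
    for post in posts:
        if machine_score(post) >= threshold:
            review_queue.append(post)   # a human makes the final call
        else:
            passed.append(post)
    return passed, review_queue

posts = [
    {"text": "Official statement from the hospital.",
     "source_verified": True, "reshares": 50},
    {"text": "Shark spotted in flooded mall!",
     "source_verified": False, "reshares": 5000},
]
passed, review = triage(posts)
```

The design point is the division of labor: the machine never declares anything false, it only decides what is suspicious enough to deserve a human's attention – which matches the Report's observation that interpreting misinformation ultimately requires human evaluation.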
To conclude, it is inevitable that large amounts of unverified and contradictory information will continue to appear on social media networks following major events and natural disasters. Timely verification of this information is crucial, whether to aid the coordination of relief efforts or to reassure the people directly affected by the situation. We agree with Farida that every case of misinformation is unique and should be considered independently. But whatever the measures, interpreting misinformation ultimately comes down to human evaluation of the information, and it may stay that way for a long time unless we make concrete, collective efforts to combat it.