By Hanna Rioseco
This summer, the World Health Organization (WHO) hosted the first Infodemiology Conference, focused on understanding, measuring, and controlling infodemics.
The term “infodemic” was popularized by the WHO to describe the rapid spread and overabundance of information – some accurate, some not. In a situation report published in early February, the WHO warned that infodemics make it difficult to find trustworthy sources and reliable guidance. During COVID-19, the consequences of misinformation can be a matter of life and death: a study published in the American Journal of Tropical Medicine and Hygiene estimates that between January and March 2020, some 800 people around the globe may have died because of coronavirus-related misinformation.
Mitigating the risk of COVID-19 means tackling the spread of misinformation that so often accompanies outbreaks. Like a virus, misinformation spreads from person to person – but through information and communications technology systems rather than physical contact.
I don’t have to consult the WHO, however, to recognize the information crisis for what it is: I’m living, scrolling, and sorting through it myself. Since February, my newsfeeds have been crowded with COVID-19-related news, stories, and memes. But the content that comes across my screen is not all accurate, or even useful. I’ve seen acquaintances criticize government directives about social distancing and question the effectiveness of mask-wearing; encountered conspiracy theories about the origins and nature of the virus, some fueled by harmful sentiments; and come across medical misinformation such as untested at-home remedies. In Canada, a Carleton University study found that 46 percent of respondents believed at least one of four unfounded COVID-19 theories.
To curb the spread of misinformation, the WHO has been active in the digital space, partnering with influencers to spread factual information. It has also been working closely with search engines and social media platforms to ensure that science-based health messages from official sources appear first in search results and newsfeeds. These efforts aim to combat dangerous rumors – for example, that COVID-19 cannot survive in hot weather, or that chloroquine can prevent infection. Additionally, the WHO is using artificial intelligence for social listening, gaining insight into the kinds of concerns people have about the virus. In theory, this will help officials tailor health messaging to meet the needs of the public.

As I researched and reported on pandemic-related changes to access to information laws for the Centre for Law and Democracy’s COVID-19 Tracker, I also learned how some States have used the infodemic surrounding COVID-19 as justification for harsh disinformation laws. Though aimed at protecting public health by curbing the spread of misinformation, these laws have in many cases resulted in the detention of journalists and the criminalization of free speech. These responses raise a multitude of concerns, not only about human rights but also about how communications and information policy and legal frameworks can support access to reliable information moving forward.
During my internship at the Centre for Law and Democracy, I learned how governments can mitigate the harmful effects of misinformation surrounding COVID-19 by fulfilling their right to information obligations. At a time when things feel more uncertain than ever, States can rebuild public trust and confidence by providing access to timely, reliable information. As I consider what I’ve learned about freedom of information and expression, and reflect on how our information systems and policies have failed to keep people informed and protected during this crisis, I am left with more questions than answers. What can this moment teach us about regulating the information environment? The problems posed by misinformation will, in all likelihood, outlast the virus, and they will require a multi-stakeholder solution. How can our digital communications infrastructure better safeguard against the harms of misinformation? What role should private digital companies play? Should platforms censor or label content they identify as false or misleading, or would that set a dangerous precedent for the moderation of free speech? And of course, where do human rights fit in?