Image: a man in a suit holding a mobile phone. Courtesy of Getty Images.

COVID misinformation is a health risk – tech companies need to remove harmful content not tweak their algorithms

Many people worldwide have now caught COVID. But during the pandemic, many more are likely to have encountered something else that has been spreading virally: misinformation. False information has plagued the COVID response, wrongly convincing people that the virus isn’t harmful, that various ineffective treatments work, or that vaccines are dangerous.


This article was published by The Conversation.

Often, this misinformation spreads on social media. At its worst, it can cost lives. The UK’s Royal Society, noting the scale of the problem, has made online information the subject of its latest report. This puts forward arguments for how to limit misinformation’s harms.

The report is an ambitious statement, covering everything from deepfakes to conspiracy theories about 5G. But its key coverage is of the COVID pandemic and – rightly – the question of misinformation about COVID and vaccines.

Here, it makes some important recommendations. These include the need to better support factcheckers, to devote greater attention to the spread of misinformation on private messaging platforms such as WhatsApp, and to encourage new approaches to online media literacy.

But the report’s central recommendation – that social media companies shouldn’t be required to remove content that is legal but harmful, but should instead be asked to tweak their algorithms to prevent the viral spread of misinformation – is too limited. It is also ill suited to public health communication about COVID. There’s good evidence that exposure to vaccine misinformation reduces people’s willingness to be vaccinated, making them less likely to get jabbed and more likely to discourage others from being vaccinated.

The basic problem with this recommendation is that it will make public health communication dependent on the goodwill and cooperation of profit-seeking companies. These businesses have little incentive to open up their data and processes, despite being crucial infrastructures of communication. Google Search, YouTube and Meta (now the umbrella for Facebook, Facebook Messenger, Instagram and WhatsApp) have substantial market dominance in the UK. This is real power, despite these companies’ claims that they are merely “platforms”.

These companies’ business models depend heavily on direct control over the design and deployment of their own algorithms (the processes their platforms use to determine what content each user sees). This is because these algorithms are essential for harvesting mass behavioural data from users and selling access to that data to advertisers.

This fact creates problems for any regulator wanting to devise an effective regime for holding these companies to account. Who or what will be responsible for assessing how, or even if, their algorithms are prioritising and deprioritising content in such a way as to mitigate the spread of misinformation? Will this be left to the social media companies themselves? If not, how will this work? The companies’ algorithms are closely guarded commercial secrets. It is unlikely they will want to open them up to scrutiny by regulators.

Recent initiatives, such as Facebook’s hiring of factcheckers to identify and moderate misinformation on its platform, have not involved opening up algorithms. That has been off limits. As one leading independent factchecker has put it: “Most internet companies are trying to use [artificial intelligence] to scale fact checking and none is doing so in a transparent way with independent assessment. This is a growing concern.”

Plus, tweaking algorithms will have no direct impact on misinformation circulating on private social media apps such as WhatsApp. The end-to-end encryption on these services means shared news and information is beyond the reach of all automated methods of sorting content.

A better way forward

Requiring social media companies to instead remove harmful scientific misinformation would be a better solution than algorithmic tweaking. The key advantages are clarity and accountability.

Factcheckers, journalists and researchers can identify and measure the prevalence of misinformation, as they have done so far during the pandemic, despite limits on access to platform data. They can then ask social media companies to remove harmful misinformation at the source, before it spreads across a platform and drifts out of public view on WhatsApp. They can show the world what the harmful content is and make a case for why it ought to be removed.

Article continues...

For the full article by Professor Andrew Chadwick, an expert in political communication, visit The Conversation webpage.

Notes for editors

Press release reference number: 22/09

91ÌÒÉ«ÊÓƵ University is one of the country’s leading universities, with an international reputation for research that matters, excellence in teaching, strong links with industry, and unrivalled achievement in sport and its underpinning academic disciplines.

It has been awarded five stars in the independent QS Stars university rating scheme, named the best university in the world for sports-related subjects in the 2021 QS World University Rankings and University of the Year for Sport by The Times and Sunday Times University Guide 2022.

91ÌÒÉ«ÊÓƵ is in the top 10 of every national league table, being ranked 7th in The UK Complete University Guide 2022, and 10th in both the Guardian University League Table 2022 and the Times and Sunday Times Good University Guide 2022.

91ÌÒÉ«ÊÓƵ is consistently ranked in the top twenty of UK universities in the Times Higher Education’s ‘table of tables’ and is in the top 10 in England for research intensity. In recognition of its contribution to the sector, 91ÌÒÉ«ÊÓƵ has been awarded seven Queen’s Anniversary Prizes.

The 91ÌÒÉ«ÊÓƵ University London campus is based on the Queen Elizabeth Olympic Park and offers postgraduate and executive-level education, as well as research and enterprise opportunities. It is home to influential thought leaders, pioneering researchers and creative innovators who provide students with the highest quality of teaching and the very latest in modern thinking.
