ABSTRACT

Role of AI in Addressing Misinformation on Social Media Platforms

From the Introduction: "Over the past year, misinformation has consistently undermined the global public health response to the COVID-19 [coronavirus disease 2019] pandemic. It has led to the use of dangerous and false health cures, increased the spread of the virus by perpetuating a myth that it is fake or presents no risk, and slowed vaccine uptake. All of this has contributed, directly or indirectly, to the deaths of many around the world, while also serving to deepen the public's distrust in democratic institutions.

In a bid to reduce the volume of COVID-19 misinformation, platforms have over the last year introduced a number of new measures and policies, many of which rely on the use of algorithms to help detect and address harmful content. At the same time, platforms have been forced to turn to machine-based content moderation in order to cover for shortfalls in their workforce of human moderators, many of whom were unable to work from the office for prolonged periods.

While initially proposed as a temporary solution to a unique crisis, some have questioned whether automated content moderation could become part of the status quo. Advocates see little choice but to rely heavily on algorithms to rein in misinformation. Critics, however, say that algorithms are not equipped to identify harmful content, and believe their adoption at scale could lead to unintended censorship. Others see this debate as a distraction from the more important question of how to reform platform business models, which may perpetuate misinformation."
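To make the idea of "machine-based content moderation" concrete, the sketch below shows a toy allow / human-review / label routing pass. It is purely illustrative and not drawn from the report: the phrase list, thresholds, and the moderate() function are hypothetical, and real platforms use trained classifiers rather than keyword matching.

```python
# Illustrative only: a toy automated-moderation pass of the kind the report
# discusses at a policy level. FLAGGED_PHRASES, the thresholds, and the
# routing rules are hypothetical, not from the CDEI report or any platform.

from dataclasses import dataclass

# Hypothetical phrases associated with known COVID-19 misinformation narratives.
FLAGGED_PHRASES = [
    "covid is a hoax",
    "the virus is fake",
    "miracle cure",
]

@dataclass
class ModerationDecision:
    post_id: str
    score: float   # crude heuristic score in [0, 1]
    action: str    # "allow", "human_review", or "label"

def moderate(post_id: str, text: str) -> ModerationDecision:
    """Score a post against the phrase list and pick a moderation action.

    Real systems use trained classifiers; this only illustrates the
    allow / defer-to-human / label routing idea.
    """
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in FLAGGED_PHRASES)
    score = min(1.0, hits / 2)  # saturate after two matches

    if score >= 0.9:
        action = "label"          # high confidence: attach a warning label
    elif score >= 0.5:
        action = "human_review"   # uncertain: defer to a human moderator
    else:
        action = "allow"
    return ModerationDecision(post_id, score, action)

if __name__ == "__main__":
    print(moderate("p1", "They say the virus is fake and there is a miracle cure"))
    print(moderate("p2", "Vaccination appointments open next week"))
```

The split between automatic labelling and deferral to human reviewers mirrors the trade-off the report describes: algorithms scale cheaply but misclassify, so uncertain cases are typically routed back to human moderators.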

Date:
2021-08-05?
Copyright:
UK Government, Centre for Data Ethics and Innovation
Retrieved From:
UK Government: https://assets.publishing.service.gov.uk/
Media Type:
application/pdf