Content moderation scholar highlights algorithms' lack of nuance in detecting false information.
Sarah T. Roberts, associate professor in UCLA’s Department of Information Studies, took part in the webinar, “Can Algorithms Tackle the ‘Infodemic’?” earlier this month, presented by the Center for Data Innovation. The discussion examined the use of algorithmic tools in recognizing false claims about the coronavirus, alleged cures, and safety guidelines.
Professor Roberts noted that algorithms lack the ability to pick up on nuance.
“It might know, for example, that a cat is a cat,” she said. “But it’s not because it has a cultural and social and historical or biological sense of what a cat… is. It’s because it has a massive database of shapes and other mass data.”
Roberts said that social media companies have always aspired to fully automate the moderation process.
“There’s simply not enough human beings to sit on every live stream, and most people would be uncomfortable with that anyway,” she said.
The full article is available from Broadband Breakfast.