An article in The Atlantic by a UCLA professor of information studies examines commercial content moderation.
UCLA Assistant Professor of Information Studies Sarah T. Roberts has published an article in The Atlantic on “Social Media’s Silent Filter.” A scholar of social media topics including IT infrastructure and planning, the digital economy, and internet governance and society, Roberts has spent six years researching the commercial content moderation (CCM) workforce across the globe. In the article, she delineates the occupation’s largely unknown hazards.
“CCM workers make decisions about the appropriateness of images, video, or postings that appear on a given site—material already posted and live on the site, then flagged as inappropriate in some way by members of the user community,” she writes. “CCM workers engage in this vetting over and over again, sometimes thousands of times a day.
“While some low-level tasks can be automated (imperfectly) by processes such as matching against known databases of unwanted content, facial recognition, and ‘skin filters,’ which screen photos or videos for flesh tones and then flag them as pornography, much content (particularly user-generated video) is too complex for the field of ‘computer vision’—the ability for machines to recognize and identify images. Such sense-making processes are better left to the high-powered computer of the human mind, and the processes are less expensive to platforms when undertaken by humans, although not without other costs.”
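The two automated techniques Roberts mentions, hash matching against databases of known unwanted content and skin filters that flag flesh tones, can be illustrated with a toy sketch. The skin-tone rule below is the classic Peer et al. RGB heuristic, and the hash check uses SHA-256 as a deliberately simplified stand-in for the perceptual-hash systems platforms actually use (such as PhotoDNA, which survives re-encoding). The function names and threshold are illustrative assumptions, not anything described in the article.

```python
import hashlib

def is_skin_pixel(r, g, b):
    """Classic RGB skin-tone rule (Peer et al.); a crude stand-in for
    the 'skin filters' Roberts describes."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def flag_as_possible_nudity(pixels, threshold=0.4):
    """Flag an image (a list of (r, g, b) tuples) when the share of
    skin-tone pixels exceeds an assumed threshold."""
    if not pixels:
        return False
    skin = sum(1 for p in pixels if is_skin_pixel(*p))
    return skin / len(pixels) >= threshold

def matches_known_content(data, known_hashes):
    """Match raw bytes against a set of hashes of known unwanted
    content. SHA-256 only catches byte-identical copies; real systems
    use perceptual hashes that tolerate resizing and re-compression."""
    return hashlib.sha256(data).hexdigest() in known_hashes
```

Both checks show why Roberts calls such automation imperfect: the skin rule misfires on beach photos and misses much else, and exact hashing is defeated by trivial edits, which is precisely the gap human moderators fill.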
Roberts has done extensive research on the CCM workforce in settings as diverse as the Philippines, India, Scotland, rural Iowa, and Silicon Valley. She describes the working conditions of CCM contractors, who are generally employed only part-time and without quality health care, despite their high rate of burnout from constant exposure to disturbing online material.
“Despite cultural, ethnic, and linguistic challenges, [CCM workers] share similarities in work-life and working parameters,” writes Roberts. “They labor under the cloak of NDAs, or non-disclosure agreements, which disallow them from speaking about their work to friends, family, the press, or academics, despite often needing to: As a precondition of their work, they are exposed to heinous examples of abuse, violence, and material that may sicken others, and such images are difficult for most to ingest and digest. One example of this difficulty is in a recent and first-of-its-kind lawsuit filed by two Microsoft CCM workers who are now on permanent disability, they claim, due to their exposure to disturbing content as a central part of their work.”
Roberts highlights the necessity of the CCM workforce, whose interventions can sometimes become a matter of life or death for internet users.
“CCM workers are often insightful about the key role their work plays in protecting social-media platforms from risks that run the gamut from bad PR to legal liability,” she writes. “The workers take pride in their efforts to help law enforcement in cases of child abuse. They have intervened when people have posted suicidal ideation or threats, saving lives in some cases—and doing it all anonymously.”
Roberts states that while many social media companies are considering automated measures to streamline CCM processes and eliminate the human element, such measures would also diminish the human reflection and questioning that CCM workers provide.
“In the absence of openness about these firms’ internal policies and practices, the consequences for democracy are unclear,” writes Roberts. “CCM is a factor in the current environment of fake-news proliferation and its harmful results, particularly when social-media platforms are relied upon as frontline, credible information sources. The public debate around fake news is a good start to a critical conversation, but the absence of a full understanding of social-media firms’ agendas and processes, including CCM, makes the conversation incomplete.”
The full article, “Social Media’s Silent Filter,” is available from The Atlantic.