Content Moderation Case Study: Using Fact Checkers To Create A Misogynist Meme (2019)
Summary: Many social media sites employ fact checkers to block, or at least flag, information determined to be false or misleading. However, the fact-checking process itself can create a content moderation challenge.
There are many factors working against a moderator making the right decision. Facebook (Instagram's parent company) outsources moderation to several thousand workers who sift through flagged content, much of it horrific. These workers, who moderate hundreds of posts per day, have little time to decide a post's fate in light of frequently changing internal policies. On top of that, many of these outsourced workers are based in places like the Philippines and India, where they are less aware of the cultural context of what they are moderating.

The Instagram moderator may not have understood that it was the image of the shark, in connection with the claim that it won a NatGeo award, that deserved the false information label.

The challenges of content moderation at scale are well documented, and this shark tale joins countless others in a sea of content moderation mishaps. Indeed, this case study reflects Instagram's own troubled content moderation model: move fast and moderate things. Even if it means moderating the wrong things.