Content Moderation Case Study: Facebook's AI Continues To Struggle With Identifying Nudity (2020)
Summary: Since its inception, Facebook has attempted to be more "family-friendly" than other social media services. Its hardline stance on nudity, however, has often proved problematic: its AI (and its human moderators) have flagged accounts for harmless images or failed to consider context when removing images and locking accounts.

The latest example of Facebook's AI failing to properly moderate nudity involves garden vegetables. A seed business in Newfoundland, Canada was notified that its image of onions had been removed for violating the terms of service. The picture apparently triggered the auto-moderation system, which flagged it for containing "products with overtly sexual positioning." A follow-up message noted that the photo of a handful of onions in a wicker basket was "sexually suggestive."
"We use automated technology to keep nudity off our apps," wrote Meg Sinclair, Facebook Canada's head of communications. "But sometimes it doesn't know a walla walla onion from a, well, you know. We restored the ad and are sorry for the business' trouble."Originally posted at the Trust & Safety Foundation website.