
Sat, 08 Aug 2020

Content Moderation Case Study: Social Media Services Respond When Recordings Of Shooting Are Uploaded By The Person Committing The Crimes (August 2015)


Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to YouTube have surpassed 500 hours of content every minute (as of May 2019), a volume no moderation effort can fully cover. The same goes for Twitter and Facebook: Facebook's user base exceeds two billion worldwide, and over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.

When the unthinkable happened on August 26, 2015, two of these services -- Facebook and Twitter -- responded swiftly. But even their swift efforts weren't enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.

Both platforms removed the videos and deactivated Flanagan's accounts; Twitter's response took only minutes. But the spread of the videos had already begun, leaving moderators to track down duplicates before they could be viewed and copied yet again. Many of these copies ended up on YouTube, where moderation efforts still left several reuploads intact. This was enough to prompt an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.

Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding that "autoplay" be opt-in -- rather than the default setting -- to prevent them from inadvertently viewing disturbing content.

Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube -- all of which allow live-streaming.

Decisions to be made by social media platforms:

Questions and policy implications to consider:

Resolution: All three platforms have made efforts toward faster, more accurate content moderation. Live-streaming presents new challenges for each of them, which are being met with varying degrees of success. These platforms handle millions of uploads every day, so objectionable content will still slip through and be seen by hundreds, if not thousands, of users before it can be targeted and taken down.

Content like this is a clear violation of terms of service agreements, making removal -- once it is reported and located -- straightforward. But being able to "see" it before dozens of users do remains a challenge.
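The duplicate problem described above -- moderators racing to find re-uploads of a video they have already removed -- is commonly handled with hash-based matching against a blocklist of known-violating files. The Python sketch below is a minimal illustration under that assumption, not any platform's actual system: the names (fingerprint, register_removed, is_known_reupload) are hypothetical, and it only catches byte-identical copies, whereas a real service would use perceptual, frame-level fingerprinting so re-encoded versions still match.

import hashlib
from pathlib import Path

# Digests of files human moderators have already taken down (assumed in-memory
# blocklist for illustration; a real system would use a shared datastore).
BLOCKLIST = set()

def fingerprint(path):
    """Return a SHA-256 digest of the file's bytes, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def register_removed(path):
    """Record a video a moderator has already removed."""
    BLOCKLIST.add(fingerprint(path))

def is_known_reupload(path):
    """Check a new upload against the blocklist before it goes live."""
    return fingerprint(path) in BLOCKLIST

The point of the sketch is the workflow, not the hash function: once one copy has been reviewed and removed by a human, every later upload can be screened automatically, which is why exact duplicates come down quickly while edited or re-encoded copies keep slipping through.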

Read more here

