
Fri, 11 Sep 2020

Content Moderation Case Study: Detecting Sarcasm Is Not Easy (2018)


Summary: Content moderation becomes even more difficult when you realize that there may be additional meaning to words or phrases beyond their most literal interpretation. One very clear example is the use of sarcasm, in which a word or phrase is used either to mean the opposite of its literal sense or as a greatly exaggerated way to express humor.

In March of 2018, facing increasing criticism regarding certain content that was appearing on Twitter, the company did a mass purge of accounts, including many popular accounts that were accused of simply copying and retweeting jokes and memes that others had created. Part of the accusation against those that were shut down was that there was a network of accounts (referred to as "Tweetdeckers" for their use of the Twitter application Tweetdeck) who would agree to mass-retweet some of those jokes and memes. Twitter deemed these retweet brigades inauthentic and banned them from the platform.

In the midst of all of these suspensions, however, another set of accounts and content was suspended, allegedly for talking about self-harm. Twitter has policies against glorifying self-harm, which it had updated just a few weeks before this new round of bans.


However, in trying to apply that policy, Twitter took down a number of tweets in which people sarcastically used the phrase "kill me." This included suddenly suspending many accounts, even though many of the offending tweets were from years earlier. It appeared that Twitter may have simply searched for "kill me" and other similar words and phrases, including "kill myself," "cut myself," "hang myself," "suicide," and "I wanna die."

While some of these may indicate intentions of self-harm, in many other cases they were clearly sarcastic or just people saying odd things; yet Twitter temporarily suspended many of those accounts and asked the users to delete the tweets. In at least some cases, the messages from Twitter did include some encouraging words, such as "Please know that there are people out there who care about you, and you are not alone." But that language did not appear in all of the messages. It suggested, at least, a response tailored specifically to concerns about self-harm.

Decisions to be made by Twitter:

Questions and policy implications to consider:

Resolution: This continues to be a challenge for various platforms, including Twitter. The company has continued to tweak its policies regarding self-harm over the years, including partnering with suicide prevention groups in various locations to try to help those who indicate that they are considering self-harm.

Read more here

