
Thu, 08 Oct 2020

Content Moderation Case Study: Suppressing Content To Try To Stop Bullying (2019)


Summary: TikTok, like many social apps used mainly by a younger generation, has long faced issues around how to deal with bullying on its platform. According to internal documents leaked to the German site Netzpolitik, one way the company chose to deal with the problem was content suppression -- but specifically, suppressing the content of users the company felt were more prone to being victims of bullying.

The internal documents showed different ways in which the short videos TikTok is famous for could be rated for visibility. Content could be chosen to be featured (i.e., seen by more people), but it could also be rated "Auto R," a form of suppression. Content rated Auto R was excluded from TikTok's "For You" feed after reaching a certain number of views. Because the For You feed is how most people view TikTok videos, this rating effectively capped views: the reach of Auto R content was significantly limited, and it was completely prevented from going viral or amassing a large audience or following.

What was somewhat surprising was that TikTok's policies explicitly suggested putting those who might be bullied in the Auto R category -- even stating that users who were disabled, autistic, or had Down Syndrome should be placed in this category to minimize bullying.

According to Netzpolitik, employees at TikTok repeatedly pointed out the problematic nature of this decision: the policy was itself discriminatory, punishing people not for any bad behavior but because of the belief that their differences might make them targets of bullying. However, the employees said they were prevented from changing the policies by TikTok's corporate parent, ByteDance, which dictated the company's content moderation policies.

Decisions to be made by TikTok:

Questions and policy implications to consider:

Resolution: TikTok admitted that these rules were a blunt instrument, put in place rapidly to try to minimize bullying on the platform -- but said the company had realized this was the wrong approach and had since implemented more nuanced policies:
"Early on, in response to an increase in bullying on the app, we implemented a blunt and temporary policy," he told the BBC. "This was never designed to be a long-term solution, and while the intention was good, it became clear that the approach was wrong. We have long since removed the policy in favour of more nuanced anti-bullying policies."
However, the Netzpolitik report suggested that this policy had remained in place until at least September 2019, just three months before its reporting came out in December 2019. It is unclear exactly when the more nuanced anti-bullying policies were put in place, but it is possible that they came about due to the public exposure and pressure generated by the reporting on this issue.

Read more here

