Content Moderation Case Study: Kik Tries To Get Abuse Under Control (2017)
Summary: The messaging service Kik was founded in 2009 and has gone through multiple iterations over the years. It built a large following for mostly anonymous communication, allowing users to create many new usernames not linked to a phone number and to establish private connections via those usernames. This privacy feature has been applauded by some as important for journalists, activists, and at-risk populations.

However, the service has also been decried by many as being used in dangerous and abusive ways. NetNanny ranks it as the most dangerous messaging app for kids, saying that it has had a problem with child exploitation and highlighting the many inappropriate chat rooms for kids on the app. Others have said that, while the service is used by many teenagers, many feel it is not safe for them and is full of sexual content and harassment.

Indeed, in 2017, a Forbes report detailed that Kik had a huge child exploitation problem. It described multiple cases of child exploitation that it found on the app and claimed that the company did not appear to be doing much to deal with the problem, which seemed especially concerning given that over half of its user base was under 24 years of age.

Soon after that article, Kik began to announce changes to its content moderation efforts. It teamed up with Microsoft to improve its moderation practices. It also announced a $10 million effort to improve safety on the site and named some high-profile individuals to its new Safety Advisory Board.

A few months later, the company announced updated community standards, with a focus on safety, and a partnership with Crisis Text Line. However, that appeared to do little to stem the concerns.
A report later in 2018 said that, among law enforcement, the app that concerned them most was Kik, with nearly all saying that they had come across child exploitation cases on the app and that the company was difficult to deal with.

In response, the company argued that while it was constantly improving its trust & safety practices, it also wanted to protect the privacy of its users.

Decisions to be made by Kik: