
Sat, 22 May 2021

Content Moderation Case Study: Kik Tries To Get Abuse Under Control (2017)


Summary: The messaging service Kik was founded in 2009 and has gone through multiple iterations over the years. It built a large following for mostly anonymous communication, allowing users to create many new usernames not linked to a phone number and to establish private connections via those usernames. This privacy feature has been applauded by some as important for journalists, activists, and at-risk populations.

However, the service has also been decried by many as being used in dangerous and abusive ways. NetNanny ranks it as the number one most dangerous messaging app for kids, saying that it has had a problem with child exploitation and highlighting the many chat rooms on the app that are inappropriate for kids. Others have said that, while the service is used by many teenagers, many of them feel it is not safe and is full of sexual content and harassment.

Indeed, in 2017, a Forbes report detailed that Kik had a huge child exploitation problem. It described multiple cases of child exploitation found on the app and claimed that the company did not appear to be doing much to deal with the problem, which seemed especially concerning given that over half of its user base was under 24 years of age.

Soon after that article, Kik began to announce changes to its content moderation efforts. It teamed up with Microsoft to improve its moderation practices. It also announced a $10 million effort to improve safety on the site and named some high-profile individuals to its new Safety Advisory Board.

A few months later the company announced updated community standards, with a focus on safety, and a partnership with Crisis Text Line. However, that appeared to do little to stem the concerns. A report later in 2018 said that, among law enforcement, the app that concerned them most was Kik, with nearly all saying that they had come across child exploitation cases on the app and that the company was difficult to deal with.

In response, the company argued that, while it was constantly improving its trust & safety practices, it also wanted to protect the privacy of its users.

Decisions to be made by Kik:

Questions and policy implications to consider:

Resolution: Despite the claims from Kik that it was improving its efforts to crack down on abuse, reports have continued to suggest that little has changed on the platform. A detailed report from early 2020 -- years after Kik said it was investing millions in improving the platform -- suggested that it was still a haven for sketchy content, even noting that just posting a Kik address publicly (on Twitter) resulted in near-immediate abuse.

Despite an announcement in late 2019 that the company was going to shut down the messaging service to focus on a new cryptocurrency plan, it reversed course soon after and sold off the messenger product to a new owner. In the year and a half since the sale, Kik has not added any new content to its safety portal, and more recent articles still highlight how frequently child predators are found on the site.

Originally published on the Trust & Safety Foundation website.


