
Thu, 14 Jan 2021

Content Moderation Case Study: Yelp Attempts To Tackle Racism On Its Platform (2020)


Summary: Running a site that relies on third-party content means having to deal with the darker side of human behavior. While most people engage in good faith, a small minority participate with the sole purpose of disparaging others.

Yelp is no exception. Designed to provide potential customers with useful information about goods and services, the site's popularity lent itself to brigading (negative reviews delivered en masse in response to current outrages) and to the lowest common denominators of the general public: bigots.

The potential to ruin a business's reputation over its views on immigration policy, its employment of minorities, or other perceived slights made it possible for the most respected review site to be weaponized by racists.

Yelp recognized this inevitability. Moderators patrol the site to limit the spread of bigoted content that skews review scores based on the racist predilections of reviewers. In late 2020, Yelp went further, announcing a new consumer alert aimed at businesses accused of racist conduct:
Communities have always turned to Yelp in reaction to current events at the local level. As the nation reckons with issues of systemic racism, we've seen in the last few months that there is a clear need to warn consumers about businesses associated with egregious, racially-charged actions to help people make more informed spending decisions. Yelp's User Operations team already places alerts on business pages when we notice an unusual uptick in reviews that are based on what someone may have seen in the news or on social media, rather than on a first-hand experience with the business. Now, when a business gains public attention for reports of racist conduct, such as using racist language or symbols, Yelp will place a new Business Accused of Racist Behavior Alert on their Yelp page to inform users, along with a link to a news article where they can learn more about the incident.
This move may have seemed laudable, but it lent itself to subjective interpretations of decisions made by businesses, as well as of individual actions by employees. Employing a racist person is not the same as running a racist business, but Yelp's blanket policy seemed to treat both as equally racist.

Further comments by Yelp clarified that some of its employees would make the final determination on alleged racism by businesses or business owners. Any company flagged for racist behavior would be shielded from further reviews until a determination was made.
Decisions to be made by Yelp:

Questions and policy implications to consider:

Resolution: This use of warnings and the hiding of unverified reviews (at least temporarily) is still company policy. While Yelp's moderation efforts may eventually lead to a satisfactory resolution, its decision to flag businesses based on unverified claims has the potential to result in a lot of collateral damage.

Originally posted on the Trust & Safety Foundation website.


