
Wed, 01 Sep 2021

Facebook Promises To Distinguish Takedowns From Governments; Whether They're For Illegal Content, Or Merely Site Rules Violations


There's an interesting discussion that happens in content moderation circles with regard to government requests for takedowns: are those requests about content that violates local laws, or about content that violates website policies? Many people lump these two things together, but they're actually pretty different in practice. Obviously, if a government comes across content that violates the law, it seems reasonable for it to alert the platform and expect that the content will be removed (though there may be some questions about jurisdiction and such). However, when it's just content that may violate site policy, some pretty big questions are raised. This gets at the "jawboning" question we've been discussing a lot lately: where is the line between a politician persuading a website to take something down and compelling it to do so?

There is one argument that a government pointing out content that might violate site policies is simply helping out. The website doesn't want content that violates its standards, and the government reporting it is just like any other user reporting it (though backed up by whatever credibility the government may or may not have). The flip side is that, if the content is perfectly legal, this often starts to feel like a loophole through which government actors can engage in wink-wink-nudge-nudge censorship -- just send a notification to the site that this particular content may not break the law, but hey, doesn't it violate your policies?

A recent decision by the Oversight Board, and a corresponding statement from Facebook, suggests that Facebook is going to be clearer about when this situation happens. The case the Oversight Board reviewed was an interesting one. Here's how it summarized the situation:

In January 2021, an Instagram user in the United States posted a picture of Abdullah Öcalan, one of the founding members of the Kurdistan Workers' Party (PKK). The picture included the words "y'all ready for this conversation." Underneath the picture the user wrote that it was time to talk about ending Öcalan's isolation in prison on Imrali Island. They encouraged readers to engage in conversation about his imprisonment and the inhumane nature of solitary confinement.

Facebook removed the content for violating Instagram's Community Guidelines after the post was automatically flagged for review (at this stage, the Board does not know if the content was removed by an automated system or through human review). These Guidelines, under the heading "follow the law," set out that Instagram is not a place to support or praise terrorism, organized crime, or hate groups. The Guidelines link to Facebook's Community Standard on Dangerous Individuals and Organizations. These rules clarify that Facebook also prohibits any support or praise for groups, leaders, or individuals involved in terrorist activity or other serious crimes committed by these groups. The PKK has been designated a terrorist organization by multiple countries, including Turkey, the United States, and the EU.

The user states in their appeal that Öcalan has been a political prisoner for decades and that banning any reference to him prevents discussions that could advance the position of the Kurdish people. They argue that Öcalan's philosophy is peaceful and that his writings are widely available in bookshops and online. The user compares Öcalan's imprisonment to that of former South African President Nelson Mandela, noting that discussion of Öcalan's imprisonment should be allowed and encouraged.
In July, the Oversight Board overturned Facebook's initial decision (though once the Board took up the case, Facebook admitted it had removed the content in error, before the Board even ruled). A key part of the discussion was that Facebook claimed it had "misplaced" an "internal policy exception" to the "Dangerous Individuals and Organizations" policy; that misplaced policy included an exception for the discussion of human rights issues, even involving those designated as "dangerous." As with many other Oversight Board rulings, there were a bunch of other recommendations as well.

In early August, Facebook responded to the recommendations, and, as Evelyn Douek noted, buried deep within them (recommendation 11) was an admission that Facebook would start revealing information and data on requests from governments based on "Community Standards" violations, rather than legal violations. Here's what the Oversight Board asked for:
Include information on the number of requests Facebook receives for content removals from governments that are based on Community Standards violations (as opposed to violations of national law), and the outcome of those requests.
While this may have been a minor part of this particular situation, it's important information that has previously not been available. And, as Douek also notes, human rights activists have been begging Facebook for this data for ages, with no luck. But the Oversight Board seems to have made it happen. Here's what Facebook had to say about it:
Our commitment: We are actively working to provide additional transparency when we remove content under our Community Standards following a formal report by a government, including the total number of requests we receive.

Considerations: As noted in our response to Recommendation 9 above, when we receive a formal government report about content that may violate local law, we first review it against our global Community Standards, just as we would review a report from any other source. If the content violates our Community Standards, we will remove it and count it in our Community Standards Enforcement Report. These reports are reviewed under a standardized process in the same way and against the same policies as reports from any other source. As a result, we are not currently able to report when we remove content based on a report by a government or from a Facebook user. In addition, we may receive reports of a piece of content that may violate our policies from multiple sources at the same time -- for example, from a government and from user reports on Facebook. Such situations create additional challenges in determining whether content should be considered as removed in response to a government report.

We have been exploring ways to increase the level of transparency we provide to users and the public about requests we receive from governments, in line with best practices laid out by civil society efforts like the Santa Clara Principles and the Ranking Digital Rights project. We are prioritizing that work in response to this recommendation.

Next steps: We are planning work that will enable us to include information on content removed for violating our Community Standards following a formal report by a government, including the number of requests we receive, as a distinct category in our Transparency Center.
Again, as Douek points out, there is some wiggle room here, in that Facebook committed to counting only "formal" requests from governments, which means that totally informal requests and suggestions -- pure "jawboning" situations -- might not be counted. But, still, this is a step forward in getting more transparency about how often governments push Facebook and Instagram to take down content that is legal, even if it may violate site policies.
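To make the bookkeeping problem concrete, here is a minimal Python sketch of how a platform might tag each takedown request with its source and its claimed basis, so that removals following formal government reports of mere Community Standards violations can be tallied as their own transparency line item. Everything here is an assumption for illustration -- the names (TakedownRequest, ReportSource, Basis, transparency_counts) are invented, and Facebook has not published any internal schema:

# Purely illustrative sketch; all names below are hypothetical and do
# not reflect any published Facebook data model.
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto

class ReportSource(Enum):
    USER = auto()
    GOVERNMENT_FORMAL = auto()    # the "formal report" category Facebook commits to counting
    GOVERNMENT_INFORMAL = auto()  # pure "jawboning" -- may never show up in the tally

class Basis(Enum):
    LOCAL_LAW = auto()            # content alleged to violate national law
    COMMUNITY_STANDARDS = auto()  # content alleged to violate site rules only

@dataclass
class TakedownRequest:
    content_id: str
    source: ReportSource
    basis: Basis
    removed: bool

def transparency_counts(requests: list[TakedownRequest]) -> Counter:
    """Tally removals by (source, basis), so that formal government
    reports based only on Community Standards become a distinct line
    item rather than being folded in with ordinary user reports."""
    return Counter((r.source.name, r.basis.name) for r in requests if r.removed)

Note the wrinkle Facebook itself flags: if the same content_id is reported by both a government and ordinary users before removal, a real system has to decide which source "gets credit" for the takedown; this simple sketch would just count it once per request record.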

Read more here

