e dot dot dot
a mostly about the Internet blog by

Content Moderation Beyond Platforms: A Rubric

Furnished content.


For decades, EFF and others have been documenting the monumental failures of content moderation at the platform level: inconsistent policies, inconsistently applied, with dangerous consequences for online expression and access to information. Yet despite mounting evidence that those consequences are inevitable, service providers at other levels are increasingly choosing to follow suit.

The full infrastructure of the internet, or the "full stack," is made up of a range of entities, from consumer-facing platforms like Facebook or Pinterest, to ISPs like Comcast or AT&T. Somewhere in the middle are a wide array of intermediaries, such as upstream hosts like Amazon Web Services (AWS), domain name registrars, certificate authorities (such as Let's Encrypt), content delivery networks (CDNs), payment processors, and email services.

For most of us, most of the stack is invisible. We send email, tweet, post, upload photos, and read blog posts without thinking about all the services that help get content from the original creator onto the internet and in front of users' eyeballs all over the world. We may think about our ISP when it gets slow or breaks, but day-to-day, most of us don't think about intermediaries like AWS at all -- until AWS decides to deny service to speech it doesn't like, as it did with the social media site Parler, and that decision gets press attention.

Invisible or not, these intermediaries are potential speech "chokepoints," and their choices can significantly influence the future of online expression. Simply put, platform-level moderation is broken, and infrastructure-level moderation is likely to be worse. That said, the pitfalls and risks for free expression and privacy may play out differently depending on what kind of provider is doing the moderating. To help companies, policymakers, and users think through the relative dangers of infrastructure moderation at various levels of the stack, here's a set of guiding questions.
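But first, to make the invisible stack a bit more concrete, here is a minimal sketch, using nothing beyond Python's standard library, that surfaces two of the intermediaries implicated in a single HTTPS request: the DNS resolution that turns a name into addresses, and the certificate authority that vouches for the TLS connection. The hostname is only an illustrative choice; any site would do.

```python
import socket
import ssl

host = "www.eff.org"  # example hostname; any site would do

# DNS: a resolver and registrar chain most users never see.
addresses = {info[4][0] for info in socket.getaddrinfo(host, 443)}
print(f"{host} resolves to: {', '.join(sorted(addresses))}")

# TLS: a certificate authority vouching for the site's identity.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        issuer = dict(pair[0] for pair in tls.getpeercert()["issuer"])
        print(f"certificate issued by: {issuer.get('organizationName')}")
```

A refusal to serve at either of those layers would be invisible to most users right up until the site stopped loading. With that picture in mind, the questions: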

  1. Is meaningful transparency, notice, and appeal possible? Given the inevitability of mistakes, human rights standards demand that service providers notify users that their speech has been, or will be, taken offline, and offer users an opportunity to seek redress. Unfortunately, many services do not have a direct relationship with either the speaker or the audience for the expression at issue, making all of these steps challenging. But without them, users will be held not only to their host's terms and conditions but also to those of every service in the chain from speaker to audience, even though they may not know what those services are or how to contact them. Given the potential consequences of violations, and the difficulty of navigating the appeals processes of previously invisible services (assuming such a process even exists), many users will simply avoid sharing controversial opinions altogether. Relatedly, where a service provider has no relationship to the speaker or audience, takedowns will be much easier and cheaper than a nuanced analysis of a given user's speech.

  2. Do viable competitive alternatives exist? One of the reasons net neutrality rules for ISPs are necessary is that users have so few options for high-quality internet access. If your ISP decides to shut down your account based on your expression (or that of someone else using the account), in much of the world, including the U.S., you can't go to another provider. At other layers of the stack, such as the domain name system, there are multiple providers from which to choose, so a speaker who has their domain name frozen can take their website elsewhere. But the existence of alternatives alone is not enough; answering this question also requires evaluating the costs of switching and whether it calls for technical savvy beyond the skill set of most users.

  3. Is it technologically possible for the service to tailor its moderation practices to target only the specific offensive expression? At the infrastructure level, many services cannot target their response with the precision human rights standards demand. Twitter can block specific tweets; Amazon Web Services can only deny service to an entire site, which means any action it takes inevitably affects far more than the objectionable speech that motivated it. We can take a lesson here from the copyright context, where we have seen domain name registrars and hosting providers shut down entire sites in response to infringement notices targeting a single document. It may be possible for some services to communicate directly with customers when they are concerned about a specific piece of content, and request that it be taken down. But if that request is rejected, the service has only the blunt instrument of complete removal at its disposal.

  4. Is moderation an effective remedy? The U.S. experience with online sex trafficking teaches that removing distasteful speech may not have the hoped-for impact. In 2017, Tennessee Bureau of Investigation special agent Russ Winkler explained that online platforms were the most important tool in his arsenal for catching sex traffickers. Today, legislation designed to prevent the use of online platforms for sex trafficking has made it harder for law enforcement to find traffickers. Indeed, several law enforcement agencies report that without these platforms, their work finding and arresting traffickers has hit a wall.

  5. Will collateral damage, such as the stifling of lawful expression, disproportionately affect less powerful groups? Moderation choices may reflect and reinforce bias against marginalized communities. Take, for example, Facebook's decision, in the midst of the #MeToo movement's rise, that the statement "men are trash" constitutes hateful speech. Or Twitter's decision to use harassment provisions to shut down the verified account of a prominent Egyptian anti-torture activist. Or the content moderation decisions that have prevented women of color from sharing the harassment they receive with their friends and followers. Or the decision by Twitter to mark tweets containing the word "queer" as offensive, regardless of context. As with the competition inquiry, this analysis should consider whether the impacted speakers and audiences will have the ability to respond and/or find effective alternative venues.

  6. Is there a user- and speech-friendly alternative to centralized moderation? Could there be? One of the key problems of content moderation at the social media level is that the moderator substitutes its policy preferences for those of its users. When infrastructure providers enter the game, with generally less accountability, users have even less ability to make their own choices about their own internet experience. If there are tools that allow users themselves to express and implement their own preferences, infrastructure providers should return to the business of serving their customers, and policymakers have a weaker argument for imposing new requirements.

  7. Will governments seek to hijack any moderation pathway? We should be wary of moderation practices that will provide state and state-sponsored actors with additional tools for controlling public dialogue. Once processes and tools to take down expression are developed or expanded, companies can expect a flood of demands to apply them to other speech. At the platform level, state and state-sponsored actors have weaponized flagging tools to silence dissent. In the U.S., the First Amendment and the safe harbor of Section 230 largely prevent moderation requirements. But policymakers have started to chip away at Section 230, and we expect to see more efforts along those lines. In other countries, such as Canada, the U.K., Turkey, and Germany, policymakers are contemplating or have adopted draconian takedown rules for platforms and would doubtless like to extend them further.
Companies should ask all of these questions when they are considering whether to moderate content (in general or as a specific instance). And policymakers should ask them before they either demand or prohibit content moderation at the infrastructure level. If more than two decades of social media content moderation has taught us anything, it is that we cannot "tech" our way out of political and social problems. Social media companies have tried and failed to do so; infrastructure companies should refuse to replicate those failures, beginning with thinking through the consequences in advance, deciding whether they can mitigate them and, if not, whether they should simply stay out of it.

Corynne McSherry is the Legal Director at EFF, specializing in copyright, intermediary liability, open access, and free expression issues.

Techdirt and EFF are collaborating on this Techdirt Greenhouse discussion. On October 6th from 9am to noon PT, we'll have many of this series' authors discussing and debating their pieces in front of a live virtual audience (register to attend here). On October 7th, we'll be hosting a smaller workshop focused on coming up with concrete steps we can take to make sure providers, policymakers, and others understand the risks and challenges of infrastructure moderation, and how to respond to those risks.

Read more here

posted at: 12:00am on 24-Sep-2021
path: /Policy | permalink




Survey Suggests Eager Starlink Users Don't Understand Service Will Have Limited Reach

Furnished content.


So we've noted more than a few times that while Elon Musk's Starlink will be a good thing if you can actually get and afford the service, it's going to have a decidedly small impact on the broadband industry as a whole. Between 20 and 42 million Americans lack access to broadband entirely, 83 million live under a monopoly, and tens of millions more are stuck under a duopoly (usually your local cable company and a regional, apathetic phone company). In turn, Starlink is going to reach somewhere between 300,000 and 800,000 subscribers in its first few years, a drop in the overall bucket.

Thanks to massive frustration with broadband market failure (and the high prices, dubious quality, and poor customer service that result), users are decidedly excited about something new. But not only are there limited slots due to limited capacity and physics, a lot of those slots are going to get gobbled up by die-hard Elon Musk fans excited to affix Starlink dishes to their boats, RVs, and Cybertrucks. As a result, it's extremely unlikely that many of the users who truly need the improved option will actually be able to get it.

But a new PC Magazine survey continues to make it clear that most consumers don't quite understand they'll never actually have the option (especially if they live in a major metro market):

Starlink is expected to come out of beta next month for a broader commercial launch, and has seen 600,000 orders so far. But many of the customers who have signed up say getting a status update from Starlink customer service is effectively impossible. While Wall Street analysts like Craig Moffett estimate the service may be able to scale to 6 million users over a period of many years, Moffett also notes that guess is extremely optimistic, and will require a significantly expanded fleet of 42,000 satellites to achieve.

This all assumes that Starlink will remain financially viable as it works toward that goal, something that's not really guaranteed in a low-orbit satellite industry that has a history of major failures. And there will be questions about throttling and other restrictions once the network gets fully loaded with hungry users. Again, Starlink will be great for off-the-grid folks if they can get -- and afford -- it, but I suspect there's going to be some heartache when folks excited about the service realize the limitations of its actual reach. And this scarcity is only going to drive even greater interest in a service you probably won't be able to get anytime soon.
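For a rough sense of just how small that "drop in the overall bucket" is, here's a back-of-the-envelope sketch using the figures cited above; the subscriber counts are projections, not measurements:

```python
# Figures from the post: 20-42 million Americans lack broadband entirely;
# Starlink is projected to reach 300,000-800,000 subscribers in its first years.
unserved_low, unserved_high = 20_000_000, 42_000_000
starlink_low, starlink_high = 300_000, 800_000

best_case = starlink_high / unserved_low    # most generous comparison
worst_case = starlink_low / unserved_high
print(f"Starlink's early reach: {worst_case:.1%} to {best_case:.1%} "
      "of Americans with no broadband at all")
# -> roughly 0.7% to 4.0%, and that's before counting the slots
#    taken by hobbyists who already have other options
```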

Read more here

posted at: 12:00am on 24-Sep-2021
path: /Policy | permalink



