Summary: Creating family-friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties of trying to build such a virtual world, with chat functionality, for Disney. He described being brought in by Disney alongside someone from a kids' software company, Knowledge Adventure, which had built an online community in the mid-90s called KA-Worlds. Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney's requirements for an online community, they realized chat would be next to impossible:
Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: "I'm confused. What standard should we use to decide if a message would be a problem for Disney?"

The response was one I will never forget: "Disney's standard is quite clear: No kid will be harassed, even if they don't know they are being harassed."...

"OK. That means Chat Is Out of HercWorld, there is absolutely no way to meet your standard without exorbitantly high moderation costs," we replied.

One of their guys piped up: "Couldn't we do some kind of sentence constructor, with a limited vocabulary of safe words?"

Before we could give it any serious thought, their own project manager interrupted, "That won't work. We tried it for KA-Worlds."

"We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words - the standard parts of grammar and safe nouns like cars, animals, and objects in the world."

"We thought it was the perfect solution, until we set our first 14-year-old boy down in front of it. Within minutes he'd created the following sentence:
I want to stick my long-necked Giraffe up your fluffy white bunny.
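The failure mode here is combinatorial: every word in the pop-down UI can pass a whitelist, yet meaning emerges from how words combine. A minimal sketch (with a hypothetical vocabulary, not KA-Worlds' actual word list) makes the point:

```python
from itertools import product

# Hypothetical safe-word vocabulary -- every entry is individually harmless.
SUBJECTS = ["I"]
VERBS = ["want to pet", "want to stick"]
MODIFIERS = ["long-necked", "fluffy white"]
NOUNS = ["giraffe", "bunny", "car"]

# Every sentence the pop-down grammar can produce.
sentences = {
    f"{s} {v} my {m1} {n1} up your {m2} {n2}."
    for s, v in product(SUBJECTS, VERBS)
    for m1, n1 in product(MODIFIERS, NOUNS)
    for m2, n2 in product(MODIFIERS, NOUNS)
}

# Each word is whitelisted, but the grammar still generates the exact
# kind of sentence that sank the KA-Worlds experiment:
print("I want to stick my long-necked giraffe up your fluffy white bunny."
      in sentences)  # True
```

Filtering the vocabulary constrains nothing about the sentence space, which grows multiplicatively with each pop-down; the only "safe" grammar is one too small to say anything with.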
In that initial 1996 project, chat was abandoned, but as they continued to develop HercWorld, they quickly realized that they still had to worry about chat, even without a chat feature:
It was standard fare: Collect stuff, ride stuff, shoot at stuff, build stuff... Oops, what was that last thing again?

"Kids can push around Roman columns and blocks to solve puzzles, make custom shapes, and buildings," one of the designers said.

I couldn't resist, "Umm. Doesn't that violate the Disney standard? In this chat-free world, people will push the stones around until they spell Hi! or F-U-C-K or their phone number or whatever. You've just invented Block-Chat™. If you can put down objects, you've got chat. We learned this in Habitat and WorldsAway, where people would turn 100 Afro-Heads into a waterbed."

We all laughed, but it was that kind of awkward laugh that you know means we're all probably just wasting our time.
Decisions for family-friendly community designers:
Is there a way to build a chat that will not be abused by clever kids to reference forbidden content (e.g., swearing, innuendo, harassment, abuse)?
Can you build a chat that does not require universal moderation and pre-approval of everything that users will say?
Are there ways in which kids will still be able to communicate with others even without an actual chat feature?
How much of a community do you have with no chat or extremely limited chat?
Questions and policy implications to consider:
Is it possible to create an online family-friendly environment that works?
If so, how do you prevent abuse?
If not, how do you handle the fact that kids will get online whether they are allowed to or not?
How do you incentivize companies to create spaces that actually remain as child-friendly as possible?
If the kids will always find a way to get around limitations, does it make sense to hold the companies themselves responsible?
Should family-friendly environments require full-time monitoring, or pre-vetting of all usage?
Resolution: Disney eventually abandoned the idea of HercWorld due to all of the issues raised. However, the interview highlights that they tried again a couple of years later, with an online chat where users could only pull from a pre-selected list of sentences, but it did not have much success:
"The Disney Standard" (now a legend amongst our employees) still held. No harassment, detectable or not, and no heavy moderation overhead.

Brian had an idea though: fully pre-constructed sentences - dozens of them, easy to access. Specialize them for the activities available in the world. Vaz Douglas, our project manager working with Zoog, liked to call this feature "Chatless Chat." So, we built and launched it for them. Disney was still very tentative about the genre, so they only ran it for about six months; I doubt it was ever very popular.
The same interview notes that Disney tried once again in 2002 with a new world called ToonTown, with pulldown menus that allowed users to construct very narrowly tailored speech within the chat, to try to avoid anything that violated the rules.

As the story goes, Disney still had problems with this. To make sure people were only communicating with people they knew in real life, one of the restrictions in this new world was that you had to have a secret code from any user you wished to chat with. The thinking was that parents would print these codes out for kids, who could then share them with their friends in real life, link up, and chat in the online world.

And yet, once again, people figured out how to get around the restrictions:
Sure enough, chatters figured out a few simple protocols to pass their secret code; several variants are of this general form:

User A: "Please be my friend."
User A: "Come to my house?"
User B: "Okay."
A: [Move the picture frames on your wall, or move your furniture on the floor to make the number 4.]
A: "Okay"
B: [Writes down 4 on a piece of paper and says] "Okay."
A: [Move objects to make the next letter/number in the code] "Okay"
B: [Writes] "Okay"
A: [Remove objects to represent a "space" in the code] "Okay"
[Repeat steps as needed, until...]
A: "Okay"
B: [Enters secret code into Toontown software.]
B: "There, that worked. Hi! I'm Jim 15/M/CA, what's your A/S/L?"
For example, let's say you have a secret code (1hh 5rj) which you would like to give to a toon named Bob. First, you should make clear that you want to become their SF.

You: Please be my friend!
You: (random SF chat)
You: I can't understand you
You: Let's work on that
Bob: Yes

Now, start the secret.

You: (Jump 1 time and say OK. Jump 1 time because that is the first thing in your code. Say OK to confirm that was part of your secret.)
Bob: OK (Wait for this, as this means he has written down or otherwise recorded the 1.)
You: Hello! OK (Say hello because the first letter of hello is h, which is the second part of your secret.)
Bob: OK (Again, wait for confirmation.)
(Repeat the step above, as you have the same letter for the third part of your secret.)
Bob: OK (By now you should know to wait for this.)
You: (Jump 5 times and say OK. Jump 5 times as this is the 4th part of your secret.)
Bob: OK
You: Run! OK (The 5th part of your secret is r, and "Run!" starts with r.)
Bob: OK
You: Jump! OK (Say this because j is the last part of your secret.)
Bob: OK

At this point, you have successfully transmitted the code to Bob. Most likely, Bob will understand, and within seconds, you will be Secret Friends!
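The walkthrough above is, in effect, a hand-run character encoding: digits become jump counts, letters become safe-listed words sharing that initial, and spaces get their own signal, each confirmed with an "OK". A sketch of the sender's side (the action phrasing and the word list are ours, purely illustrative):

```python
# Illustrative safe-listed chat lines keyed by their first letter.
SAFE_WORDS = {"h": "Hello!", "r": "Run!", "j": "Jump!"}

def encode_secret(code: str) -> list[str]:
    """Turn each character of a ToonTown-style secret code into an
    in-world action the receiver can transcribe, confirmed with 'OK'."""
    steps = []
    for ch in code.lower():
        if ch.isdigit():
            steps.append(f"jump {ch} time(s), say OK")
        elif ch == " ":
            steps.append("clear objects to mark a space, say OK")
        else:
            word = SAFE_WORDS.get(ch, f"(any safe word starting with '{ch}')")
            steps.append(f"say {word} OK")
    return steps

for step in encode_secret("1hh 5rj"):
    print(step)
```

Any channel with even a few distinguishable states (jumps, object placement, word choice) carries arbitrary data this way, which is why restricting the chat vocabulary never restricts what can be communicated.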
So even though Disney eventually did enable a very limited chat, with strict rules to keep people safe, it still left open many challenges for early trust & safety work.

Images from HabitChronicles.
Even when facial recognition software works well, it still performs pretty poorly. When algorithms aren't generating false positives, they're acting on the biases programmed into them, making it far more likely for minorities to be misidentified by the software.

The better the image quality, the better the search results. The use of a low-quality image pulled from a store security camera resulted in the arrest of the wrong person in Detroit, Michigan. The use of another image with the same software -- one that didn't show the distinctive arm tattoos of the non-perp hauled in by Detroit police -- resulted in another bogus arrest by the same department.

In both cases, the department swore the facial recognition software was only part of the equation. The software used by Michigan law enforcement warns investigators that search results should not be used as the sole probable cause for someone's arrest, but the additional steps taken by investigators (which were minimal) still didn't prevent the arrests from happening.

That's the same claim made by Las Vegas law enforcement: facial recognition search results are merely leads, rather than probable cause. As is the case everywhere law enforcement uses this tech, low-quality input images are common. Investigating crimes means utilizing security camera footage, which comes from cameras far less powerful than the multi-megapixel cameras found on everyone's phones. The Las Vegas Metro Police Department relied on low-quality images for many of its facial recognition searches, documents obtained by Motherboard show.
In 2019, the LVMPD conducted 924 facial recognition searches using the system it purchased from Vigilant Solutions, according to data obtained by Motherboard through a public records request. Vigilant Solutions—which also leases its massive license plate reader database to federal agencies—was bought last year by Motorola Solutions for $445 million.Of those searches, 471 were done using images the department deemed “suitable,” and they resulted in matches with at least one “likely positive candidate” 67% of the time. But 451 searches, nearly half, were run on “non-suitable” probe images. Those searches returned likely positive matches—which could mean anywhere from one to 20 or more mugshots, all with varying confidence scores assigned by the system—only 18% of the time.
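Running the reported rates back through the search counts shows the scale involved (the arithmetic here is ours, applied to the article's rounded percentages, so the counts are approximate):

```python
suitable = 471       # searches on images the LVMPD deemed "suitable"
non_suitable = 451   # searches on "non-suitable" probe images

# Hit rates as reported (rounded percentages from the Motherboard data).
suitable_hits = round(suitable * 0.67)          # ~316 searches with a match
non_suitable_hits = round(non_suitable * 0.18)  # ~81 searches with a match

print(suitable_hits, non_suitable_hits)

share_non_suitable = non_suitable / (suitable + non_suitable)
print(f"{share_non_suitable:.0%} of assessed searches used non-suitable images")
```

The article's own tally is 82 likely-positive matches from non-suitable images; the one-match difference comes from the rounded 18% figure. Either way, nearly half of all assessed searches ran on images the department itself considered unsuitable.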
Fortunately, low-quality images seemingly rarely return anything investigators can use. (Although that 18% is still 82 "likely positive matches...") If the system did, we'd be seeing far more bogus arrests than we've seen to this point. Of course, prosecutors and police aren't letting suspects know facial recognition software contributed to their arrests, so courtroom challenges have been pretty much nonexistent.

Although most of the information in the documents is redacted -- making it difficult to verify LVMPD claims about the software's contribution to arrests and prosecutions -- enough details remained to provide a suspect facing murder charges with information the LVMPD had never turned over to him or admitted to in court.
Clark Patrick, the Las Vegas attorney representing [Alexander] Buzz, told Motherboard that neither the LVMPD nor the Clark County District Attorney’s office ever informed him that investigators identified Buzz as a suspect using, at least in part, facial recognition technology. The Clark County District Attorney’s office did not respond to an interview request or written questions.
Had this information been given to Buzz and his attorney at the beginning of the trial, he likely would not have waived his right to a preliminary evidentiary hearing. If that hearing had taken place -- along with knowledge of a private company's contribution to the investigation -- prosecutors may have had to produce information about the tech and the surveillance footage it pulled images from.

The documents don't appear to show a reliance on low-quality images to make arrests, but they do show investigators will run nearly any image through the software to see if it generates some hits. The precautions taken after this matter most. If investigators only treat matches as leads, most false arrests will be headed off. But if investigators take shortcuts -- as appears to have happened in Detroit -- the outcome is disastrous for those falsely arrested. A person's rights and freedoms shouldn't be at the mercy of software that performs poorly even when given good images to work with. The use of this software is never going to go away completely, but agencies can mitigate the damage by refusing to treat matches as probable cause.
This is an apple. Some people might try to tell you that this is a banana. They might scream banana, banana, banana, over and over and over again. They might put BANANA in all caps. You might even start to believe that this is a banana. But it's not. This is an apple.

Now, why would I subject our dear readers to one of the most insultingly patronizing, insipid advertisements ever run by a news organization, never mind CNN? Because it does make at least one point relatively well: apples are not bananas. Logic holds, therefore, that if apples are not bananas, they are also quite unlikely to be grapes, or kiwis, or, say, pears.

And yet it appears that Apple, the tech manufacturer most notable for making rounded corners, would like to animate CNN's commercial into some flavor of real life. See, Apple has decided to oppose a recipe app's logo because it consists of the shape of a pear. Prepear (groan) must have assumed that everyone would agree that we can tell fruits apart, given the following logo.
In an Instagram post, the app's developer said Apple has objected to the firm's logo, claiming that the pear used is "too close" to the Apple logo and hurts the Apple brand. The filing also cites brand confusion and dilution caused by "blurring." According to the publication, the trademark was filed in 2017 and accepted by the US Trademark Office. It was only on the last day possible for objections to be filed that Apple did so.
This is a pear. Some enormous corporations might tell you morons in a hurry would think it was an apple. They might scream apple, apple, apple, over and over and over again. They might put their apple next to your pear and insist they look alike. You might even start to believe that someone out there could mistake the pear for the apple. But they won't. Because a pear is not a fucking apple.

Also because the logos don't actually look anything alike. The color scheme is wildly different, the drawing lines totally distinct, and the style fully unique. There is literally no reason to think there is any chance of confusion here, not to mention that the companies are quite distinct in how the public perceives their product offerings.

But, of course, trademark bullying doesn't work on the merits. It works on the size of the legal war chest.
"To fight this it will cost tens of thousands of dollars," Prepear claims. "The CRAZY thing is that Apple has done this to dozens of other small business fruit logo companies, and many have chosen to abandon their logo or close doors."Prepear has launched a change.org petition in an attempt to convince Apple to drop legal action as the process reaches the discovery phase, a particularly expensive part of the process. The company has only five members and says that fighting Apple on this matter could cost tens of thousands of dollars.
Yeah. And, unless Prepear gets some kind of rescue here, the most likely scenario is that it will need to change its logo, losing all the time and money invested in developing its branding. Or it can risk bankrupting itself by fighting back.

Trademark bullying works. Again, not because of any legitimate legal or market concern, but purely as a matter of who can fight the fight and who cannot.
To design better regulation for the Internet, it is important to understand two things: the first is that today's Internet, despite how much it has evolved, still depends on its original architecture; and the second relates to how preserving this design is important for drafting regulation that is fit for purpose.

On top of this, the Internet invites a certain way of networking - let's call it the Internet way of networking. There are many types of networking out there, but the Internet way ensures interoperability and global reach and operates on building blocks that are agile, while its decentralized management and general purpose further ensure its resilience and flexibility. Rationalizing this, however, can be daunting because the Internet is multifaceted, which makes its regulation complicated. The entire regulatory process involves the reconciliation of a complex mix of technology and social rules that can be incompatible and, in some cases, irreconcilable. Policy makers, therefore, are frequently required to make tough choices, which often manage to strike the desired balance but at other times lead to a series of unintended consequences.

Europe's General Data Protection Regulation (GDPR) is a good example. The purpose of the regulation was simple: fix privacy by providing a framework that would allow users to understand how their data is being used, while forcing businesses to alter the way they treat the data of their customers. The GDPR was set to create much-needed standards for privacy on the Internet and, despite continuous enforcement and compliance challenges, this has largely been achieved. But when it comes to the effect it has had on the Internet, the GDPR has posed some challenges. Almost two months after it went into effect, it was reported that more than 1,000 websites were affected, becoming unavailable to European users. And even now, two years later, fragmentation continues to be an issue.

So, what is there to do?
How can policy makers strike a balance between addressing social harms online and policies that do not harm the Internet?

A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high-quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and transparent (in how decisions are made).

This type of thinking can work to the advantage of the Internet. The Internet is an intricate system of interconnected networks that operates according to certain rules. It consists of a set of fundamental properties that contribute to its flexible and agile character, while ensuring its continuous relevance and constant ability to support emerging technologies; it is self-perpetuating in the sense that it systematically evolves while its foundation remains intact. Understanding and preserving the idiosyncrasy of the Internet should be key in understanding how best to approach regulation.

In general, determining the context, scope and breadth of Internet regulation is important to determine whether regulation is needed and the impact it may have. Asking the questions that policy makers normally contemplate when seeking to make informed choices is the first step. These include: Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs?
Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet?

Actively seeking answers to these questions is vital because regulation is generally risky, and risks arise from acting as well as from not acting. To appreciate this, imagine if the choices made in the early days of the Internet had dictated a heavy regulatory regime for the deployment of advanced telecommunications and information technologies. The Internet would, most certainly, not have been able to evolve the way it has, and the quality of regulation would equally have suffered.

In this context, the scope of regulation is important. The fundamental problem with much of the current Internet regulation is that it seeks to fix social problems by interfering with the underlying technology of the Internet. Across a wide range of policymaking, we know that solely technical fixes rarely fix social problems. It is important that governments do not regulate aspects of the Internet that could be seen as compromising network interoperability in order to solve societal problems. This is a "category error" or, more elaborately, a misunderstanding of the technical design and boundaries of the Internet. Such a misunderstanding tends to confuse the salient similarities and differences between the problem and where the problem occurs; it not only fails to tackle the root of the problem but causes damage to the networks we all rely on. Take, for instance, data localization rules, which seek to force data to remain within certain geographical boundaries. Various countries, most recently India, are trying to forcibly localize data, and risk impeding the openness and accessibility of the global Internet.
Data will not be able to flow uninterrupted on the basis of network efficiency; rather, special arrangements will need to be put in place for that data to stay within the confines of a jurisdiction. The result will be increased barriers to entry, to the detriment of users, businesses and governments seeking to access the Internet. Ultimately, forced data localization makes the Internet less resilient, less global, more costly and less valuable.

This is where a regulatory risk impact analysis can come in handy. Generally, what the introduction of a risk impact analysis does is show policy makers how to make informed choices about which regulatory claims can or cannot possibly be true. This would require a shift in the behavior of policy makers from solely focusing on process to a more performance-oriented and results-based approach.

This sounds more difficult than it actually is. Jurisdictions around the world are accustomed to performing regulatory impact assessments, which have been successfully integrated into many governments' policy-making processes for more than 35 years. So, why can't it be part of Internet regulation?

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.