Even at its best, facial recognition software performs poorly. When algorithms aren't generating false positives, they're acting on the biases programmed into them, making it far more likely for minorities to be misidentified by the software.

The better the image quality, the better the search results. The use of a low-quality image pulled from a store security camera resulted in the arrest of the wrong person in Detroit, Michigan. The use of another image with the same software -- one that didn't show the distinctive arm tattoos of the non-perp hauled in by Detroit police -- resulted in another bogus arrest by the same department.

In both cases, the department swore the facial recognition software was only part of the equation. The software used by Michigan law enforcement warns investigators that search results should not be used as sole probable cause for an arrest, but the additional steps taken by investigators (which were minimal) still didn't prevent the arrests from happening.

That's the same claim made by Las Vegas law enforcement: facial recognition search results are merely leads, not probable cause. As is the case everywhere law enforcement uses this tech, low-quality input images are common. Investigating crimes means relying on security camera footage, captured by cameras far less capable than the multi-megapixel cameras found on everyone's phones. The Las Vegas Metro Police Department relied on low-quality images for many of its facial recognition searches, documents obtained by Motherboard show.
In 2019, the LVMPD conducted 924 facial recognition searches using the system it purchased from Vigilant Solutions, according to data obtained by Motherboard through a public records request. Vigilant Solutions—which also leases its massive license plate reader database to federal agencies—was bought last year by Motorola Solutions for $445 million.Of those searches, 471 were done using images the department deemed “suitable,” and they resulted in matches with at least one “likely positive candidate” 67% of the time. But 451 searches, nearly half, were run on “non-suitable” probe images. Those searches returned likely positive matches—which could mean anywhere from one to 20 or more mugshots, all with varying confidence scores assigned by the system—only 18% of the time.
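As a quick sanity check on those numbers, here's a back-of-the-envelope sketch in Python (the raw figures come from the Motherboard data above; the rounding is mine):

```python
# Figures reported from the LVMPD records obtained by Motherboard.
suitable = 471       # searches run on "suitable" probe images
non_suitable = 451   # searches run on "non-suitable" probe images

# Reported hit rates for each category of probe image.
suitable_hits = round(suitable * 0.67)          # ~67% returned a likely positive
non_suitable_hits = round(non_suitable * 0.18)  # ~18% returned a likely positive

print(suitable_hits)            # roughly 316 searches with likely positives
print(non_suitable_hits)        # roughly 81 searches with likely positives
print(suitable + non_suitable)  # 922 -- two short of the 924 total reported
```

So even the "non-suitable" images produced likely positive matches on the order of eighty times, each one a potential lead an investigator might act on.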
Fortunately, low-quality images rarely seem to return anything investigators can use. (Although that 18% still works out to roughly 82 "likely positive matches"...) If they did, we'd be seeing far more bogus arrests than we have to this point. Of course, prosecutors and police aren't letting suspects know facial recognition software contributed to their arrests, so courtroom challenges have been pretty much nonexistent.

Although most of the information in the documents is redacted -- making it difficult to verify LVMPD claims about the software's contribution to arrests and prosecutions -- enough details remained to provide a suspect facing murder charges with information the LVMPD had never turned over to him or admitted to in court.
Clark Patrick, the Las Vegas attorney representing [Alexander] Buzz, told Motherboard that neither the LVMPD nor the Clark County District Attorney’s office ever informed him that investigators identified Buzz as a suspect using, at least in part, facial recognition technology. The Clark County District Attorney’s office did not respond to an interview request or written questions.
Had this information been given to Buzz and his attorney at the beginning of the trial, he likely would not have waived his right to a preliminary evidentiary hearing. Had that hearing taken place -- along with knowledge of a private company's contribution to the investigation -- prosecutors may have had to produce information about the tech and the surveillance footage it pulled images from.

The documents don't appear to show a reliance on low-quality images to make arrests, but they do show investigators will run nearly any image through the software to see if it generates some hits. What matters most is what happens after a match is returned. If investigators treat matches only as leads, most false arrests will be headed off. But if investigators take shortcuts -- as appears to have happened in Detroit -- the outcome is disastrous for those falsely arrested. A person's rights and freedoms shouldn't be at the mercy of software that performs poorly even when given good images to work with. The use of this software is never going to go away completely, but agencies can mitigate the damage by refusing to treat matches as probable cause.
Summary: Creating family-friendly environments on the internet presents some interesting challenges that highlight the trade-offs in content moderation. One of the founders of Electric Communities, a pioneer in early online communities, gave a detailed overview of the difficulties of trying to build such a virtual world for Disney that included chat functionality. He described being brought in by Disney alongside someone from a kids' software company, Knowledge Adventure, which had built an online community in the mid-90s called KA-Worlds. Disney wanted to build a virtual community space, HercWorld, to go along with the movie Hercules. After reviewing Disney's requirements for an online community, they realized chat would be next to impossible:
Even in 1996, we knew that text-filters are no good at solving this kind of problem, so I asked for a clarification: "I'm confused. What standard should we use to decide if a message would be a problem for Disney?"

The response was one I will never forget: "Disney's standard is quite clear: No kid will be harassed, even if they don't know they are being harassed."

..."OK. That means Chat Is Out of HercWorld; there is absolutely no way to meet your standard without exorbitantly high moderation costs," we replied.

One of their guys piped up: "Couldn't we do some kind of sentence constructor, with a limited vocabulary of safe words?"

Before we could give it any serious thought, their own project manager interrupted, "That won't work. We tried it for KA-Worlds."

"We spent several weeks building a UI that used pop-downs to construct sentences, and only had completely harmless words - the standard parts of grammar and safe nouns like cars, animals, and objects in the world."

"We thought it was the perfect solution, until we set our first 14-year-old boy down in front of it. Within minutes he'd created the following sentence:
I want to stick my long-necked Giraffe up your fluffy white bunny.
In that initial 1996 project, chat was abandoned, but as they continued to develop HercWorld, they quickly realized that they still had to worry about chat, even without a chat feature:
It was standard fare: collect stuff, ride stuff, shoot at stuff, build stuff... Oops, what was that last thing again?

"Kids can push around Roman columns and blocks to solve puzzles, make custom shapes, and buildings," one of the designers said.

I couldn't resist: "Umm. Doesn't that violate the Disney standard? In this chat-free world, people will push the stones around until they spell Hi! or F-U-C-K or their phone number or whatever. You've just invented Block-Chat™. If you can put down objects, you've got chat. We learned this in Habitat and WorldsAway, where people would turn 100 Afro-Heads into a waterbed."

We all laughed, but it was that kind of awkward laugh that you know means we're all probably just wasting our time.
Decisions for family-friendly community designers:
Is there a way to build a chat that will not be abused by clever kids to reference forbidden content (e.g., swearing, innuendo, harassment, abuse)?
Can you build a chat that does not require universal moderation and pre-approval of everything that users will say?
Are there ways in which kids will still be able to communicate with others even without an actual chat feature?
How much of a community do you have with no chat or extremely limited chat?
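These questions are hard because filters operate on individual words while meaning lives in their combinations. A toy sketch in Python (the word list and helper are hypothetical, not taken from KA-Worlds) of why an allowlist-based constructor fails "the Disney standard":

```python
# Toy "safe sentence constructor": every word is individually innocent,
# mirroring the KA-Worlds approach of grammar words plus safe nouns.
SAFE_WORDS = {
    "grammar": ["i", "you", "my", "your", "to", "up", "want", "stick"],
    "nouns": ["giraffe", "bunny", "car", "ball"],
    "adjectives": ["long-necked", "fluffy", "white", "red"],
}

def is_allowed(sentence):
    """Word-level filter: passes if every token is on the allowlist."""
    allowed = {w for words in SAFE_WORDS.values() for w in words}
    return all(tok.strip(".!").lower() in allowed for tok in sentence.split())

# Every token is "safe", yet together they form the teenager's exploit:
msg = "I want to stick my long-necked giraffe up your fluffy white bunny"
assert is_allowed(msg)          # the filter sees nothing wrong
assert not is_allowed("swear")  # while actual forbidden words are blocked
```

The failure isn't in the implementation; it's that harmlessness isn't a property of individual words, so no per-word check can enforce it.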
Questions and policy implications to consider:
Is it possible to create an online family-friendly environment that will work?
If so, how do you prevent abuse?
If not, how do you handle the fact that kids will get online whether they are allowed to or not?
How do you incentivize companies to create spaces that actually remain as child-friendly as possible?
If the kids will always find a way to get around limitations, does it make sense to hold the companies themselves responsible?
Should family-friendly environments require full-time monitoring, or pre-vetting of all usage?
Resolution: Disney eventually abandoned the idea of HercWorld due to all of the issues raised. However, the interview notes that they tried again a couple of years later with an online chat where users could only pull from a pre-selected list of sentences, but it did not have much success:
"The Disney Standard" (now a legend amongst our employees) still held. No harassment, detectable or not, and no heavy moderation overhead.Brian had an idea though: Fully pre-constructed sentences - dozens of them, easy to access. Specialize them for the activities available in the world. Vaz Douglas, our project manager working with Zoog, liked to call this feature "Chatless Chat." So, we built and launched it for them. Disney was still very tentative about the genre, so they only ran it for about six months; I doubt it was ever very popular.
The same interview notes that Disney tried once again in 2002 with a new world called ToonTown, with pulldown menus that allowed you to construct very narrowly tailored speech within the chat to try to avoid anything that violated the rules.

As the story goes, Disney still had problems with this. To make sure people were only communicating with people they knew in real life, one of the restrictions in this new world was that you had to have a secret code from any user you wished to chat with. The thinking was that parents would print these out for kids, who could then share them with their friends in real life, and they could link up and chat in the online world.

And yet, once again, people figured out how to get around the restrictions:
Sure enough, chatters figured out a few simple protocols to pass their secret code; several variants are of this general form:

User A: "Please be my friend."
User A: "Come to my house?"
User B: "Okay."
A: [Move the picture frames on your wall, or move your furniture on the floor, to make the number 4.]
A: "Okay"
B: [Writes down 4 on a piece of paper and says] "Okay."
A: [Move objects to make the next letter/number in the code] "Okay"
B: [Writes] "Okay"
A: [Remove objects to represent a "space" in the code] "Okay"
[Repeat steps as needed, until]
A: "Okay"
B: [Enters secret code into Toontown software.]
B: "There, that worked. Hi! I'm Jim 15/M/CA, what's your A/S/L?"
For example, let's say you have a secret code (1hh 5rj) which you would like to give to a toon named Bob. First, you should make clear that you want to become their SF.

You: Please be my friend!
You: (random SF chat)
You: I can't understand you
You: Let's work on that
Bob: Yes

Now, start the secret.

You: (Jump 1 time and say OK. Jump 1 time because that is the first thing in your code. Say OK to confirm that was part of your secret.)
Bob: OK (Wait for this, as this means he has written down or otherwise recorded the 1)
You: Hello! OK (Say hello because the first letter of hello is h, which is the second part of your secret.)
Bob: OK (again, wait for confirmation)
Repeat above step, as you have the same letter for the third part of your secret.
Bob: OK (by now you should know to wait for this)
You: (Jump 5 times and say OK. Jump 5 times as this is the 4th part of your secret)
Bob: OK
You: Run! OK (The 5th part of your secret is r, and "Run!" starts with r)
Bob: OK
You: Jump! OK (Say this because j is the last part of your secret.)
Bob: OK

At this point, you have successfully transmitted the code to Bob. Most likely, Bob will understand, and within seconds, you will be Secret Friends!
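What the players built is a textbook covert channel: once any two distinguishable actions are visible to another user, arbitrary data can be smuggled past the chat restrictions. A minimal simulation in Python (the action encoding is my own hypothetical rendering of the jump-and-safe-word scheme described above):

```python
# Covert-channel sketch of the ToonTown trick: digits become repeated
# jumps, letters become approved words starting with that letter, and a
# space is signalled by a separate action. (Hypothetical mapping.)
SAFE_WORDS = {"h": "Hello!", "r": "Run!", "j": "Jump!"}

def encode(code):
    """Sender: translate each character into a permitted in-game action."""
    actions = []
    for ch in code:
        if ch.isdigit():
            actions.append(f"jump x{ch}")            # e.g. '5' -> five jumps
        elif ch == " ":
            actions.append("remove object")          # marks a space
        else:
            actions.append(f"say {SAFE_WORDS[ch]}")  # first letter carries data
    return actions

def decode(actions):
    """Receiver: recover the code just by watching the actions."""
    code = ""
    for act in actions:
        if act.startswith("jump x"):
            code += act[-1]
        elif act == "remove object":
            code += " "
        else:
            code += act.split()[1][0].lower()  # first letter of the safe word
    return code

secret = "1hh 5rj"
assert decode(encode(secret)) == secret  # the chat filter never saw the code
```

No amount of vocabulary restriction closes this channel, because the information is carried by which allowed action is chosen, not by the action's content.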
So even though Disney eventually did enable a very limited chat, with strict rules to keep people safe, it still left open many challenges for early trust & safety work.

Images from HabitChronicles
To design better regulation for the Internet, it is important to understand two things: the first is that today's Internet, despite how much it has evolved, still continues to depend on its original architecture; and the second relates to how preserving this design is important for drafting regulation that is fit for purpose.

On top of this, the Internet invites a certain way of networking - let's call it the Internet way of networking. There are many types of networking out there, but the Internet way ensures interoperability and global reach, operates on building blocks that are agile, while its decentralized management and general purpose further ensure its resilience and flexibility. Rationalizing this, however, can be daunting because the Internet is multifaceted, which makes its regulation complicated. The entire regulatory process involves the reconciliation of a complex mix of technology and social rules that can be incompatible and, in some cases, irreconcilable. Policy makers, therefore, are frequently required to make tough choices, which often manage to strike the desired balance, while, other times, they lead to a series of unintended consequences.

Europe's General Data Protection Regulation (GDPR) is a good example. The purpose of the regulation was simple: fix privacy by providing a framework that would allow users to understand how their data is being used, while forcing businesses to alter the way they treat the data of their customers. The GDPR was set to create much-needed standards for privacy on the Internet and, despite continuous enforcement and compliance challenges, this has largely been achieved. But, when it comes to the effect it has had on the Internet, the GDPR has posed some challenges. Almost two months after going into effect, it was reported that more than 1,000 websites were affected, becoming unavailable to European users. And, even now, two years later, fragmentation continues to be an issue.

So, what is there to do?
How can policy makers strike a balance between addressing social harms online and policies that do not harm the Internet?

A starting point is to perform a regulatory impact assessment for the Internet. It is a tested method of policy analysis, intended to assist policy makers in the design, implementation and monitoring of improvements to the regulatory system; it provides the methodology for producing high-quality regulation, which can, in turn, allow for sustainable development, market growth and constant innovation. A regulatory impact assessment constitutes a tool that ensures regulation is proportional (appropriate to the size of the problem it seeks to address), targeted (focused and without causing any unintended consequences), predictable (it creates legal certainty), accountable (in terms of actions and outcomes) and transparent (on how decisions are made).

This type of thinking can work to the advantage of the Internet. The Internet is an intricate system of interconnected networks that operates according to certain rules. It consists of a set of fundamental properties that contribute to its flexible and agile character, while ensuring its continuous relevance and constant ability to support emerging technologies; it is self-perpetuating in the sense that it systematically evolves while its foundation remains intact. Understanding and preserving the idiosyncrasy of the Internet should be key in understanding how best to approach regulation.

In general, determining the context, scope and breadth of Internet regulation is important to determine whether regulation is needed and the impact it may have. Asking the questions that under normal circumstances policy makers contemplate when seeking to make informed choices is the first step. These include: Does the proposed new rule solve the problem and achieve the desired outcome? Does it balance problem reduction with other concerns, such as costs?
Does it result in a fair distribution of the costs and benefits across segments of society? Is it legitimate, credible and trustworthy? But there should be an additional question: Does the regulation create any consequences for the Internet?

Actively seeking answers to these questions is vital because regulation is generally risky, and risks arise from acting as well as from not acting. To appreciate this, imagine if the choices made in the early days of the Internet had dictated a high regulatory regime in the deployment of advanced telecommunications and information technologies. The Internet would, most certainly, not have been able to evolve the way it has and, equally, the quality of regulation would suffer.

In this context, the scope of regulation is important. The fundamental problem with much of the current Internet regulation is that it seeks to fix social problems by interfering with the underlying technology of the Internet. Across a wide range of policymaking, we know that solely technical fixes rarely fix social problems. It is important that governments do not regulate aspects of the Internet that could be seen as compromising network interoperability in order to solve societal problems. This is a "category error" or, more elaborately, a misunderstanding of the technical design and boundaries of the Internet. Such a misunderstanding tends to confuse the salient similarities and differences between the problem and where this problem occurs; it not only fails to tackle the root of the problem but causes damage to the networks we all rely on. Take, for instance, data localization rules, which seek to force data to remain within certain geographical boundaries. Various countries, most recently India, are trying to forcibly localize data, and risk impeding the openness and accessibility of the global Internet.
Data will not be able to flow uninterrupted on the basis of network efficiency; rather, special arrangements will need to be put in place in order for that data to stay within the confines of a jurisdiction. The result will be increased barriers to entry, to the detriment of users, businesses and governments seeking to access the Internet. Ultimately, forced data localization makes the Internet less resilient, less global, more costly, and less valuable.

This is where a regulatory risk impact analysis can come in handy. Generally, what the introduction of a risk impact analysis does is show how policy makers can make informed choices about which regulatory claims can or cannot possibly be true. This would require a shift in the behavior of policy makers from solely focusing on process to a more performance-oriented and result-based approach.

This sounds more difficult than it actually is. Jurisdictions around the world are accustomed to performing regulatory impact assessments, which have successfully been integrated into many governments' policymaking processes for more than 35 years. So, why can't they be part of Internet regulation?

Dr. Konstantinos Komaitis is the Senior Director, Policy Strategy and Development at the Internet Society.
This is an apple. Some people might try to tell you that this is a banana. They might scream banana, banana, banana. Over and over and over again. They might put BANANA in all caps. You might even start to believe that this is a banana. But it's not. This is an apple.

Now, why would I subject our dear readers to one of the most insultingly patronizing, insipid advertisements ever run by a news organization, never mind CNN? Because it does make at least one point relatively well: apples are not bananas. Logic holds, therefore, that if apples are not bananas, they are also quite unlikely to be grapes, or kiwis, or, say, pears.

And, yet, it appears that Apple, the tech manufacturer most notable for making rounded corners, would like to animate CNN's commercial into some flavor of real life. See, Apple has decided to oppose a recipe app's logo because it consists of the shape of a pear. Prepear (groan) must have assumed that everyone would agree that we could tell fruits apart with the following logo.
In an Instagram post, the app's developer said Apple has objected to the firm's logo, claiming that the pear used is "too close" to the Apple logo and hurts the Apple brand. The filing also cites brand confusion and dilution caused by "blurring." According to the publication, the trademark was filed in 2017 and accepted by the US Trademark Office. It was only on the last day possible for objections to be filed that Apple did so.
This is a pear. Some enormous corporations might tell you morons in a hurry would think it was an apple. They might scream apple, apple, apple. Over and over and over again. They might put their apple next to your pear and insist they look alike. You might even start to believe that someone out there could mistake the pear for the apple. But they won't. Because a pear is not a fucking apple.

Also because the logos don't actually look anything alike. The color scheme is wildly different, the drawing lines totally distinct, and the style fully unique. There is literally no reason to think there is any chance of confusion here, not to mention that the companies are quite distinct in how the public perceives their product offerings.

But, of course, trademark bullying doesn't work on the merits. It works on the size of the legal war chest.
"To fight this it will cost tens of thousands of dollars," Prepear claims. "The CRAZY thing is that Apple has done this to dozens of other small business fruit logo companies, and many have chosen to abandon their logo or close doors."Prepear has launched a change.org petition in an attempt to convince Apple to drop legal action as the process reaches the discovery phase, a particularly expensive part of the process. The company has only five members and says that fighting Apple on this matter could cost tens of thousands of dollars.
Yeah. And, unless Prepear gets some kind of rescue here, the most likely scenario is that it will need to change its logo, losing all kinds of time and money invested in developing its branding. Or, it can risk bankrupting itself by fighting back.

Trademark bullying works. Again, not because of any legitimate legal or market concern, but purely as a matter of who can fight the fight and who cannot.
This seems like the sort of thing a court shouldn't need to sort out, but here we are. More specifically, here are two plaintiffs suing over Oakland County, Michigan's forfeiture policy. This isn't civil asset forfeiture -- where property is treated as guilty until proven innocent. This isn't even criminal asset forfeiture -- the seizure of property by the government following a conviction.

But this form of forfeiture can be just as abusive as regular civil asset forfeiture. There's no criminal act involved -- real or conjectured. It's the result of a civil violation: the nonpayment of property taxes. And Oakland County, the plaintiffs argue, is performing unconstitutional takings to unjustly enrich itself.

It's not that these sorts of things are uncommon. Tax liens are often put on property when tax payments are delinquent. It's that one of these seizures -- and the subsequent auction -- was triggered by a delinquent amount that would have required the county to make change from a $10 bill. (via Volokh Conspiracy)

This is from the opening of the state Supreme Court's decision [PDF], which shows just how much the county government can profit from these forfeitures.
Plaintiff Rafaeli, LLC, owed $8.41 in unpaid property taxes from 2011, which grew to $285.81 after interest, penalties, and fees. Oakland County and its treasurer, Andrew Meisner (collectively, defendants), foreclosed on Rafaeli’s property for the delinquency, sold the property at public auction for $24,500, and retained all the sale proceeds in excess of the taxes, interest, penalties, and fees.
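The figures in that passage work out as follows; a quick sketch in Python, using only the numbers the court recites:

```python
# Surplus the county kept in each case, per the opinion's figures.
cases = {
    "Rafaeli, LLC": {"owed": 285.81, "sale_price": 24_500},
    "Ohanessian":   {"owed": 6_000,  "sale_price": 82_000},
}

for name, c in cases.items():
    surplus = c["sale_price"] - c["owed"]
    print(f"{name}: county retained ${surplus:,.2f} beyond what it was owed")
```

The Rafaeli surplus comes to $24,214.19, which is the figure rounded to roughly $24,200 elsewhere in coverage of the case.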
That's right. It only took $8.41 to initiate these proceedings. Even after accounting for the additional fees, the county turned less than $300 in delinquencies into a $24,200 profit.

Rafaeli, LLC isn't the only plaintiff. Another property owner, Andre Ohanessian, saw $6,000 in taxes, fines, and fees turn into a $76,000 net gain for the county when it auctioned his property for $82,000 and kept everything above what it was owed.

The lower court said there was nothing wrong with the government keeping thousands of dollars property owners didn't owe it.
The circuit court granted summary disposition to defendants, finding that defendants did not “take” plaintiffs’ properties because plaintiffs forfeited all interests they held in their properties when they failed to pay the taxes due on the properties. The court determined that property properly forfeited under the GPTA [General Property Tax Act] and in accordance with due process is not a “taking” barred by either the United States or Michigan Constitution. Because the GPTA properly divested plaintiffs of all interests they had in their properties, the court concluded that plaintiffs did not have a property interest in the surplus proceeds generated from the tax-foreclosure sale of their properties.
The appeals court felt the same way about the issue, resulting in this final appeal to the state's top court. The Michigan Supreme Court says this isn't proper, going all the way back to English common law that had been adopted by the new nation more than two hundred years ago.
At the same time that it was common for any surplus proceeds to be returned to the former property owner, it was also generally understood that the government could only collect those taxes actually owed and nothing more.[...]This Court recognized a similar principle in 1867, stating that “[n]o law of the land authorizes the sale of property for any amount in excess of the tax it is legally called upon to bear.” Indeed, any sale of property for unpaid taxes that was in excess of the taxes owed was often rendered voidable at the option of the landowner. Rather than selling all of a person’s land and risk the sale being voided, officers charged with selling land for unpaid taxes often only sold that portion of the land that was needed to satisfy the tax debt. That is, early in Michigan’s statehood, it was commonly understood that the government could not collect more in taxes than what was owed, nor could it sell more land than necessary to collect unpaid taxes.
That all changed with the General Property Tax Act. The current version of the GPTA unilaterally declares all ownership rights "extinguished" the moment the government begins proceedings against the property, well before the foreclosure sale occurs.

This law -- as exercised in these forfeitures and auctions -- is unlawful, the Supreme Court says.
We conclude that our state’s common law recognizes a former property owner’s property right to collect the surplus proceeds that are realized from the tax-foreclosure sale of property. Having originated as far back as the Magna Carta, having ingratiated itself into English common law, and having been recognized both early in our state’s jurisprudence and as late as our decision in Dean in 1976, a property owner’s right to collect the surplus proceeds from the tax-foreclosure sale of his or her property has deep roots in Michigan common law. We also recognize this right to be “vested” such that the right is to remain free from unlawful governmental interference.
The government argued that without being able to take everything (even when less is owed), it does not have a stick of sufficient size to wield against delinquent taxpayers. Nonsense, says the state's top court. The state can still collect what is owed. What it can't do is take more than that.
We recognize that municipalities rely heavily on their citizens to timely pay real-property taxes so that local governments have a source of revenue for their operating costs. Nothing in this opinion impedes defendants’ right to hold citizens accountable for failing to pay property taxes by taking citizens’ properties in satisfaction of their tax debts. What defendants may not do under the guise of tax collection is seize property valued far in excess of the amount owed in unpaid taxes, penalties, interest, and fees and convert that surplus into a public benefit. The purpose of taxation is to assess and collect taxes owed, not appropriate property in excess of what is owed.
If the county wants its eight dollars, it can take its eight dollars. Everything above that still belongs to the original property owner.

This should seem obvious, but it isn't. It took the state's top court 49 pages to arrive at this conclusion. What seems obvious to citizens is far too often deliberately unclear to government agencies. Legislation is rarely written in plain language. And it's crafted by people who have a vested interest in ensuring their employer's financial stability. The end result -- years down the road -- is the government turning a $285 foreclosure into a $24,000 surplus. The final insult is that taxpayers paid for county officials to argue against the taxpayers' best interests.

But, from now on, the government will have to share its takings with the people it's taking property from.
Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?

The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the most well-known type of deepfake, a "face-swap" video that transposes one person's expressions onto someone else's features. These can make a victim appear to say things she never said.
Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company's stock price to fall, while the criminals profit from short sales.

At first blush, this scenario is not much different than the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone's voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO's cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash.

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals already came true in the few months since I first imagined them.
A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.

What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.
Editor's Note: Originally, this article was set to run before the article about Crystal Dynamics defending this decision... but somehow that didn't happen. You can read that article here if you like, or if you haven't already, you can read this one first, and recognize that time has no meaning any more, so the linear publishing of articles is no longer necessary... or maybe Mike just screwed things up. One of those.

For anything that isn't first-party content, I will never understand why games sell as console exclusives. Maybe there's math out there under which it makes sense for a game publisher to limit itself to one sliver of the potential market, but somehow I have a hard time believing it. That's all the more the case given that the recent trend has been less exclusivity, rather than more. While the PC market is now seeing platform exclusivity emerge, something which makes even less sense than with consoles, game franchises that were once jealously guarded exclusives, such as MLB The Show, are opening up to more systems, including PCs.

But it seems the instinct to carve out something exclusive for your system is hard to shake. Or, that's at least the case for Sony, which has managed to retain exclusive rights for the character Spider-Man in the upcoming Marvel's Avengers game.
In a move already being roundly criticized on social media, Crystal Dynamics’ Jeff Adams revealed today that Spider-Man will be available as a free update for PlayStation players of this September’s Marvel’s Avengers game in “early 2021.” PC and Xbox One players, apparently, won’t get to play as him.

Adams announced the move in a PlayStation blog post, offering no insight as to why PC and Xbox players would miss out and outlining no exclusive content for those games. It doesn’t appear to be a timed exclusive. When Kotaku reached out to Square Enix, the game’s publisher, for comment about that and the rest of the deal, we were directed to Adams’ blog post—which didn’t answer any of our questions.
Now, there is some complicated licensing potentially at issue here. While Disney owns the rights to The Avengers generally, Sony has retained many of the publishing rights for the Spider-Man character. In 2018, the excellent Spider-Man video game came out as a PlayStation exclusive, and many assumed that Sony had the sole game publishing rights to the character. But that doesn't seem to be true, no matter what noises Sony's made in the past. Instead, these rights still seem to reside with Marvel, which has tended to lean towards the PlayStation. But, as the Kotaku article points out, it's not as though Spider-Man has never made an appearance on other systems. He's been in Nintendo games, along with other games, such as Marvel's Lego series of games.

The idea behind these exclusive deals, be it for entire game franchises or for characters like Spider-Man, is to try to engender some kind of loyalty among the fan base by having this exclusive content. And perhaps at one point that worked. But these days, the only thing Sony seems to be getting for its trouble is backlash. And when Forbes is out here saying that this character exclusive isn't just bad for the other platforms the game will appear on, but bad for PlayStation players as well, then maybe it's time to rethink this whole thing.
The problem with exclusives is that they not only hurt the obvious suspects, the platforms that are not getting X or Y exclusive, which in this case is Xbox and PC players, but they even hurt the platform that’s supposed to benefit from them.With Avengers, it’s easy to see how this could play out in a similar fashion. While the main storyline of Avengers seems to be playing out around six launch heroes, Black Widow, Hulk, Thor, Captain America, Iron Man and Ms. Marvel, the entire point of the game is that it will be an ongoing story that unfolds in time. It’s easy to see how a character like Spider-Man, a prominent Avenger in both the MCU and the comics, could have been integrated into a major storyline at some point in the future as the game expands. But the fact that he’s exclusive to PlayStation essentially ensures that he cannot be a major player in the story, relegated to some sort of introductory side mission, and that’s it, or as a tag-along to other missions without a major active role.
So why do this at all? Because old habits are hard to shake, probably. And, frankly, Sony's gonna Sony. But that doesn't make any of this less dumb, less bad for the gaming community, or less bad for even those who will get this exclusive character.
Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to YouTube have surpassed 500 hours of content every minute (as of May 2019), making any form of moderation inadequate.

The same goes for Twitter and Facebook. Facebook's user base exceeds two billion worldwide. Over 500 million tweets are posted to Twitter every day (as of May 2020). Algorithms and human moderators are incapable of catching everything that violates terms of service.

When the unthinkable happens -- as it did on August 26, 2015 -- these two social media services swiftly responded. But even their swift efforts weren't enough. The videos posted by Vester Lee Flanagan, a disgruntled former employee of CBS affiliate WDBJ in Virginia, showed him tracking down a WDBJ journalist and cameraman and shooting them both.
Both platforms removed the videos and deactivated Flanagan's accounts. Twitter's response took only minutes. But the spread of the videos had already begun, leaving moderators to try to track down duplicates before they could be seen and duplicated yet again. Many of these ended up on YouTube, where moderation efforts to contain the spread still left several reuploads intact. This was enough to instigate an FTC complaint against Google, filed by the father of the journalist killed by Flanagan. Google responded by stating it was still removing every copy of the videos it could locate, using a combination of AI and human moderation.

Users of Facebook and Twitter raised a novel complaint in the wake of the shooting, demanding "autoplay" be opt-in -- rather than the default setting -- to prevent them from inadvertently viewing disturbing content.

Moderating content as it is created continues to pose challenges for Facebook, Twitter, and YouTube -- all of which allow live-streaming.

Decisions to be made by social media platforms:
What efforts are being put in place to better handle moderation of streaming content?
What efforts -- AI or otherwise -- are being deployed to potentially prevent the streaming of criminal acts? Which ones should we adopt?
Once notified of objectionable content, how quickly should we respond?
Are there different types of content that require different procedures for responding rapidly?
What is the internal process for making moderation decisions on breaking news over streaming?
While the benefits of auto-playing content are clear for social media platforms, is making this the default option a responsible decision -- not just for potentially-objectionable content but for users who may be using limited mobile data?
Questions and policy implications to consider:
Given increasing Congressional pressure to moderate content (and similar pressure from other governments around the world), are platforms willing to "over-block" content to demonstrate their compliance with these competing demands? If so, will users seek out other services if their content is mistakenly blocked or deleted?
If objectionable content is the source for additional news reporting or is of public interest (like depictions of violence against protesters, etc.), do these concerns override moderation decisions based on terms of service agreements?
Does the immediate removal of criminal evidence from public view hamper criminal investigations?
Are all criminal acts of violence considered violations of content guidelines? What if the crime is being committed by government agents or law enforcement officers? What if the video is of a criminal act being performed by someone other than the person filming it?
Resolution: All three platforms have made efforts to engage in faster, more accurate moderation of content. Live-streaming presents new challenges for all three platforms, which are being met with varying degrees of success. These three platforms are dealing with millions of uploads every day, ensuring objectionable content will still slip through and be seen by hundreds, if not thousands, of users before it can be targeted and taken down.

Content like this is a clear violation of terms of service agreements, making removal -- once notified and located -- straightforward. But being able to "see" it before dozens of users do remains a challenge.
We had just been talking about the upcoming Marvel's Avengers multi-platform game and its very strange plan to make Spider-Man a PlayStation-exclusive character. In that post, I mentioned that I don't think these sorts of exclusive deals, be they for games or characters, make any real sense. Others quoted in the post have actually argued that exclusive characters specifically hurt everyone, including owners of the exclusive platform, since this can only serve to limit the subject of exclusion within the game. But when it came to why this specific deal had been struck, we were left with mere speculation. Was it to build on some kind of PlayStation loyalty? Was it to try to drive more PlayStation purchases? Was it some kind of Sony licensing thing?

Well, we have now gotten from the head of the publishing studio an... I don't know... answer? That seems to be what was attempted, at least, but I'll let you all see for yourselves, if you can make out what the actual fuck is going on here. The co-leader of Crystal Dynamics gave an interview to ComicBook and touched on the subject.
So the beauty of Spider-Man, and what Spider-Man represents as a character, and as a world is...again, it comes back to the relationship with PlayStation and Marvel. We happened to be...once you can execute and deliver, when it comes down to choices of where and what Spider-Man can be, that’s a relationship question that PlayStation absolutely has the rights to, that as you guys know, with Sony’s ownership there, and Marvel with Sony saying, ‘Hey, this is something we can do. This is something we can do on this platform.’
If anything was deserving of a Jonathan Swan meme, this must surely be it. I have read the above paragraph no less than ten times and I have no idea what the hell it is saying. There seems to be some nod to Sony's publishing rights for video games and Spider-Man, but, as we've said previously, those rights don't seem to actually exist. Then there's some talk about how special Spider-Man is, alongside "Hey, this is something we can do."...okay. It doesn't get any better as it goes on.
And so, what we do as creators is say, ‘This is an opportunity that we can make something unique, and fun, and awesome that we all...you just talked about Black Widow, and to be able to have that experience. So we love the idea of being able to bring this character to the PlayStation players.
Blink, blink. But why exclusively? Why wouldn't you love to bring that character to Xbox owners? PC gamers? Nothing in this dump truck of words strung together seems to have anything to do with the exclusivity deal this man's studio struck with Sony. What the hell?
But I really do think people will look at this and say, ‘Yeah, okay, we get that, we can understand the business behind that’, but in general, we’re making this game for everybody.
They sure as shit don't. The response to this deal has been nearly universally negative. Which makes all the sense in the world. Owners of other platforms don't get to play the character. PlayStation owners might be glad they do, but does anyone really think they're also cheering on owners of other systems not getting to play Spider-Man? Why in the world would they even care?

Whatever else, the studio should try harder to explain its decisions rather than simply trot out an ill-prepared studio head to weave a tangled web of words.
Another day in which we get to explain how content moderation is impossible to do well at scale. On Wednesday, Twitter (and Facebook) chose to lock the Trump campaign's account after it posted a dangerous and misleading clip from Fox News' "Fox & Friends" in which the President falsely claimed that children are "almost immune" from COVID-19.

People can debate whether it was appropriate or not for Twitter (and Facebook) to make those content moderation decisions, but it seems perfectly defensible. Claiming that kids are "almost immune" is insane and dangerous. However, where things get sketchy on the content moderation front is that Twitter also then ended up freezing the accounts of journalists and activists who fact-checked that "Fox & Friends" nonsense:
This is absolutely nuts, @TwitterSupport. My account was locked for quoting and fact-checking Trump, and I was forced to delete this tweet. Why am I getting punished for shining a light on the president's falsehoods? pic.twitter.com/UtbsGBe3cd— Aaron Rupar (@atrupar) August 6, 2020
Or in the case of Bobby Lewis from Media Matters, Twitter suspended his account for simply mocking part of the Fox & Friends clip, noting that when a host asked the President to "say something to heal the racial divisions in America" Trump couldn't do it and could only brag about himself:
Now, tons of people are reasonably pointing out that this is ridiculous, and arguing that Twitter is "bad" at content moderation. But, again, this all comes just a few weeks (has it been a few weeks? time has no meaning) after Facebook, Twitter, and YouTube all received tremendous criticism for not being fast enough in pulling down another nonsense video -- one that Breitbart livestreamed of "doctors" spewing utter nonsense about COVID-19 in front of the Supreme Court. Indeed, at last week's Congressional antitrust hearing, Rep. David Cicilline lit into Facebook for leaving that video up for five hours, allowing it to get 20 million views (meanwhile, multiple Republican representatives yelled at Zuckerberg for taking down the video).

So, if you have some politicians screaming about how any clip of disinformation about COVID-19 must be taken down, it's no surprise that social media platforms are going to rush to take that content down -- and the easiest way to do that is to take down any of the clips, even the clips of people debunking, criticizing, or mocking the speech. Would it be nice if content moderation systems could figure out which is which? Yes, absolutely it would. But doing so would mean taking extra time to understand context (which isn't always so easy to understand), and in the process also allowing the videos that some say are dangerous by themselves to remain online.

In fact, if Twitter said to keep up the videos of people fact-checking or criticizing the original clip, you create a new dilemma -- in that those who want the dangerous nonsense to spread can, themselves, retweet the videos criticizing the content, and add their own commentary in support of the video. And then what should Twitter do?

Part of the issue here is that there are always these difficult trade-offs in making these decisions, and even if you think it's an easy call, the reality is that it's going to be more complex than you think.
Summary: Though social media networks take a wide variety of evolving approaches to their content policies, most have long maintained relatively broad bans on nudity and sexual content, and have heavily employed automated takedown systems to enforce these bans. Many controversies have arisen from this, leading some networks to adopt exceptions in recent years: Facebook now allows images of breastfeeding, child-birth, post-mastectomy scars, and post-gender-reassignment surgery photos, while Facebook-owned Instagram is still developing its exception for nudity in artistic works. However, even with exceptions in place, the heavy reliance on imperfect automated filters can obstruct political and social conversations, and block the sharing of relevant news reports.
One such instance occurred on June 11, 2020 following controversial comments by Australian Prime Minister Scott Morrison, who stated in a radio interview that there was no slavery in Australia. This sparked widespread condemnation and rebuttals from both the public and the press, pointing to the long history of enslavement of Australian Aboriginals and Pacific Islanders in the country. One Australian Facebook user posted a late 19th century photo from the state library of Western Australia, depicting Aboriginal men chained together by their necks, along with a statement:
Kidnapped, ripped from the arms of their loved ones and forced into back-breaking labour: The brutal reality of life as a Kanaka worker - but Scott Morrison claims 'there was no slavery in Australia'
Facebook removed the post and image for violation of its policy against nudity, although no genitals are visible, and restricted the user's account. The Guardian Australia contacted Facebook to determine if this decision was made in error and, the following day, Facebook restored the post and apologized to the user, explaining that it was an erroneous takedown caused by a false positive in the automated nudity filter. However, at the same time, Facebook continued to block posts that included The Guardian's news story about the incident, which featured the same photo, and placed 30-day suspensions on some users who attempted to share it. Facebook's community standards report shows that in the first three months of 2020, 39.5 million pieces of content were removed for nudity or sexual activity, over 99% of those takedowns were automated, 2.5 million appeals were filed, and 613,000 of the takedowns were reversed.

Decisions to be made by Facebook:
Can nudity filters be improved to result in fewer false-positives, and/or is more human review required?
For appeals of automated takedowns, what is an adequate review and response time?
Should automated nudity filters be applied to the sharing of content from major journalistic sources such as The Guardian?
Should questions about content takedowns from major news organizations be prioritized over those from regular users?
Should 30-day suspensions and similar account restrictions be manually reviewed only if the user files an appeal?
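The takedown figures quoted in this case study (39.5 million removals, 2.5 million appeals, 613,000 reversals) imply some useful back-of-the-envelope rates. Below is a minimal sketch in Python; the input numbers come from Facebook's Q1 2020 community standards report as cited above, but the derived percentages are illustrative arithmetic, not figures Facebook itself published:

```python
# Figures from Facebook's Q1 2020 community standards report
# (nudity/sexual activity category), as quoted above:
removals = 39_500_000          # pieces of content removed
appeals = 2_500_000            # takedowns appealed by users
restored = 613_000             # takedowns reversed on appeal

appeal_rate = appeals / removals      # share of removals that were appealed
reversal_rate = restored / appeals    # share of appeals decided for the user
error_floor = restored / removals     # confirmed-error share of all removals

print(f"{appeal_rate:.1%} of removals were appealed")         # 6.3%
print(f"{reversal_rate:.1%} of appeals led to restoration")   # 24.5%
print(f"{error_floor:.1%} of removals were confirmed errors") # 1.6%
```

Note that the 1.6% figure is only a lower bound on the true false-positive rate: it counts just the erroneous takedowns that users bothered to appeal and win, and presumably many mistaken removals are never appealed at all.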
Questions and policy implications to consider:
Should automated filter systems be able to trigger account suspensions and restrictions without human review?
Should content that has been restored in one instance be exempted from takedown, or flagged for automatic review, when it is shared again in future in different contexts?
How quickly can erroneous takedowns be reviewed and reversed, and is this sufficient when dealing with current, rapidly-developing political conversations?
Should nudity policies include exemptions for historical material, even when such material does include visible genitals, such as occurred in a related 2016 controversy over a Vietnam War photo?
Should these policies take into account the source of the content?
Should these policies take into account the associated messaging?
Resolution: Facebook's restoration of the original post was undermined by its simultaneous blocking of The Guardian's news reporting on the issue. After receiving dozens of reports from its readers that they were blocked from sharing the article and in some cases suspended for trying, The Guardian reached out to Facebook again and, by Monday, June 15, 2020, users were able to share the article without restriction. The difference in response times between the original incident and the blocking of posts is possibly attributable to the fact that the latter came to the fore on a weekend, but this meant that critical reporting on an unfolding political issue was blocked for several days while the subject was being widely discussed online.

Photo Credit (for first photo): State Library of Western Australia [Screenshot is taken directly from a Twitter embed]
There are many ways to respond to a cease and desist notice over trademark rights. The most common response is probably fear-based capitulation. After all, trademark bullying works for a reason, and that reason is that large companies have access to large legal war chests while smaller companies usually just run away from their own rights. Another response is an aggressive defense against the bullying. And, finally, every once in a while you get a response so snarky in tone that it probably registers on the Richter scale, somehow.

The story of how a law firm called Southtown Moxie responded to a C&D from a (maybe?) financial services firm called Financial Moxie is of the snark variety. But first, some background.
Financial Moxie is a financial advisory firm catering to working moms. Or at least I think it is… the website also lists multiple fitness instructors on staff, so I don’t know what that’s all about. The “moxie” term aligns with the phenomenon of “Moxie Tribes,” which seem to be groups for working moms to talk about how awesome they are. It’s basically Goop with fewer vagina candles. Meanwhile, “Southtown Moxie” is a law firm in Tennessee and North Carolina.

After receiving a cease and desist letter demanding that Southtown Moxie withdraw its trademark application, Kevin Christopher of Rockridge Venture Law (Southtown Moxie’s sibling firm) sat down with a beer to pen a response.
Which is how we get to the response. The full letter is embedded below, but you damn well know you're in for a treat when the response to a C&D notice begins with:
Dear Ms. Harper,THANK YOU SO MUCH for your C&D letter and notice of opposition to our trademark application! This case presents a wonderful training opportunity for our noob associates. (And, lawyer-to-lawyer I must add it’s an honor to correspond with you. You are obviously a sensational salesperson-attorney to convince your client to pay you for challenging another law firm’s trademark application—I’m truly in awe and look forward to learning a thing or two from you. When I think of it, your client is paying you, and also giving us good trademark cannon fodder for our noobs, so it’s a win-win all around.)
And we're off! The letter then lists all of the things Ms. Harper's client could buy instead of wasting everyone's time on a losing potential lawsuit. Examples include: a speedboat, glamorous clothing and jewelry, or hiring a social media influencer. The most important part of all of this, I have to stress, is that each example comes with an embedded photo of a Barbie doll pantomiming these suggestions.

With that throat-clearing complete, the response goes on to note in creative terms that financial and legal services are not the same thing, nor in the same markets, and therefore any trademark concern evaporates.
But I wouldn’t be drinking a Purple Haze in my skivvies if I didn’t point out the irony that your client has hired you to represent her BECAUSE SHE IS NOT LICENSED TO PRACTICE LAW. Based on your letter, she claims that our mark, limited to the provision of legal services, infringes upon her financial advisory, personal coaching, and tribal businesses and causes her great harm. Basically she thinks someone looking for “Moxie Tribe” fellowship is going to get sucked up into our vortex of intellectual property services.
The response then notes that Financial Moxie has a disclaimer listed on its site that all communication is intended for select states in America, none of which include North Carolina or Tennessee, where Southtown Moxie is located. So: different industries and different geographic locations. None of this adds up to a valid trademark dispute, and it seems likely that Southtown Moxie is going to win in front of the Trademark Trial and Appeal Board.

But, hey, we should at least thank Financial Moxie and its legal team for setting things up for this gem of a C&D response.
If ever there were an artist who seems to straddle the line of aggressive intellectual property enforcement, that artist must surely be Taylor Swift. While Swift has herself been subject to silly copyright lawsuits, she has also been quite aggressive and threatening on matters of intellectual property and defamation when it comes to attacking journalists and even her own fans over trademark rights. So, Taylor Swift is, among other things, both the perpetrator and the victim of expansive permission culture.

You would think someone this steeped in these concerns would be quite cautious about stepping on the rights of others. And, yet, it appears that whoever created some of the iconography for Swift's forthcoming album and merchandise was fairly callous about the rights of others.
Amira Rasool, founder of the online retailer The Folklore, accused the pop star last week of selling merchandise that ripped off the logo of her company, which sells apparel, accessories and other products by designers in Africa and the diaspora.

Rasool shared photos on Twitter and Instagram that showed cardigans and sweatshirts with the words "The Folklore Album" for sale on Swift's website.
Are those logos confusingly similar? Given the shared brand name... yeah, probably! While not exactly the same, particularly given the font and style choices, the overall placement of the words in each logo is similar enough that I can see a valid trademark issue here.

Now, let's be super clear about a couple of things. First, Swift has changed the logo after Rasool's complaint. She also reached out to Rasool and commended her organization, and appears to have made a contribution to it as well. Rasool herself has responded appreciatively and has said the matter is closed. A monster Taylor Swift is not.

But that isn't really the point. In many instances, this is how trademark infringement issues happen. I have seen nothing to suggest that Swift's team knew of Rasool's organization and blatantly ripped off her logo. Maybe they did, maybe they didn't. But it's not tough to picture how this could have happened relatively innocently. And that immediately brings to mind the following question: would Swift have offered the same grace to the targets of her own enforcement as Rasool did? Given how aggressive she's been in trying to trademark all the things and then going after her own fans as a result, it seems doubtful.

But maybe this is the learning opportunity she needs. I won't hold my breath.
Senator Lindsey Graham very badly wants to push the extremely dangerous EARN IT Act across the finish line. He's up for re-election this fall, wants to burnish his "I took on big tech" creds, and sees EARN IT as his path to grandstanding glory. Never mind the damage it will do to basically everyone. While the bill was radically changed via his manager's amendment last month, it's still an utter disaster that puts basically everything we hold dear about the internet at risk. It will allow for some attacks on encryption and (somewhat bizarrely) will push other services to more fully encrypt. For those that don't do that, there will still be new limitations on Section 230 protections and, very dangerously, it will create strong incentives for internet companies to collect more personal information about every one of their users to make sure they're complying with the law.

It's a weird way to "attack" the power of big tech by forcing them to collect and store more of your private info. But, hey, it's not about what's actually in the bill. It's about whatever bullshit narrative Graham and others know the press will say is in the bill.

Either way, we've heard that Graham and his bipartisan co-sponsor on EARN IT, Senator Richard Blumenthal, are looking to rush EARN IT through with no debate, via a process known as hotlining. Basically, it's a way to try to get around any floor debate, by asking every Senator's office (by email, apparently!) if they would object to a call for unanimous consent. If no Senator objects, then they basically know they can skip debate and get the bill approved. If Senators object, then (behind the scenes) others can start to lean on (or horse-trade with) the Senators to get the objections to go away without it all having to happen on the floor of the Senate.
In other words, Graham and Blumenthal are recognizing that they probably can't "earn" the EARN IT Act if it has to go through the official process of being debated and voted on on the floor, and instead are looking to sneak it through when no one's looking.

While Senator Wyden (once again) has said he'll do whatever he can to block this, it would help if other Senators would stand up as well. Here's what Wyden had to say about it:
The EARN IT Act will not protect children. It will not stop the spread of child sexual abuse material, nor target the monsters who produce and share it, and it will not help the victims of these evil crimes. What it will do is threaten the free speech, privacy, and security of every single American. This is because, at its core, the amended EARN IT Act magnifies the failures of the Stop Enabling Sex Traffickers Act--SESTA--and its House companion, the Fight Online Sex Trafficking Act--FOSTA. Experts believe that SESTA/FOSTA has done nothing to help victims or stop sex trafficking, while creating collateral damage for marginalized communities and the speech of all Americans. A lawsuit challenging the constitutionality of FOSTA on First Amendment grounds is proceeding through the courts, and there is bicameral Federal legislation to study the widespread negative impacts of the bill on marginalized groups.Yet, the authors of the EARN IT Act decided to take this kind of carveout and expand it further to State civil and criminal statutes. By allowing any individual State to set laws for internet content, this bill would create massive uncertainty, both for strong encryption and constitutionally protected speech online. What is worse, the flood of State laws that could potentially arise under the EARN IT Act raises strong Fourth Amendment concerns, meaning that any CSAM evidence collected could be rendered inadmissible in court and accused CSAM offenders could get off scot-free. This is not a risk that I am willing to take.Let me be clear: The proliferation of these heinous crimes against children is a serious problem. However, for these reasons and more, the EARN IT Act is not the solution. Moreover, it ignores what Congress can and should be doing to combat this heinous crime. The U.S. has a number of important evidence-based programs in existence that are proven to keep kids safe, and they are in desperate need of funding to do their good work. 
Yet the EARN IT Act doesn't include a single dollar of funding for these important programs. It is time for the U.S. Government to spend the funds necessary to save children's lives now.
While a Wyden hold would block any attempt to get unanimous consent via the hotlining process, it would help quite a lot if other Senators were willing to speak up and stand with him as well. If it's just Wyden, then he'll face tremendous pressure to remove the hold. If more Senators join Wyden in saying this isn't okay, then Graham and Blumenthal will realize they have a bigger challenge in front of them.

Again, if you haven't been following this debate closely, everything Wyden says above is accurate. EARN IT is an attack on both free speech and privacy (a twofer) without doing anything to actually deal with the problem of child sexual abuse material online. That is very much a law enforcement issue -- one for which Congress has failed to provide law enforcement the funding it promised, and (even worse) one on which the DOJ has simply ignored the mandates Congress gave it. The DOJ seems more focused on attacking tech companies and blaming them for its own failure to do its job.

The EARN IT Act is an incredibly dangerous piece of legislation, but it's also a complicated one -- one that many people don't understand. Senators see something that says "protect the children" and immediately think "well, of course we support that." But this bill doesn't protect children. It attacks free speech and privacy online in very insidious ways. Please call your Senators and ask them not to let this through.
When we released our CIA: Collect It All card game based on a declassified CIA training card game, we had included a fun little Easter egg in there, with help from Jon Callas, who helped create modern-day encryption. So far, I believe a grand total of... two people have found it, solved it, and told me about it (though it's possible many more have done so). That was neat, but we had nothing to give them beyond the satisfaction of having solved the puzzle. It seems that others have gone much, much farther with this idea.

Five years ago, Tarah Wheeler put together a big Kickstarter for the book Women in Tech, with advice/ideas/thoughts/stories from a variety of successful women in the tech field. Five years after publishing that book, Wheeler has now revealed that she flooded the book with hidden puzzles, and while releasing the book itself was a massively difficult project, the fact that a bunch of people found and worked on the puzzles was part of what made it all worth it:
I hated this fucking book. I hated it while I was writing it. I didn't think that would happen. But I did....

And yet...

There was a secret in that book. It's a secret that I've kept for half a decade, and while I've loved the wonderful messages and notes of support from people who've benefitted from this work, the puzzles I hid in it are the only unmitigated, unsoured, pure joy I've experienced in creating this Frankenstein's Creature of a book.

I filled it with puzzles. I plastered it with puzzles. I was filled with anticipation at the thought that someday, people would see it.

Then some people noticed the codes and puzzles. A few people tried to solve them. Teams formed on Reddit and Twitter and Discord. And one small crew of four people finally journeyed to the end of the epic. And that's how they won the secret buried treasure of pounds of precious silver.
The link above has some examples of the hidden puzzles, but here's just one:
Tarah then worked with Jon Callas (a familiar name!) to create amazing cipher wheels out of silver. You can see the wheels demonstrated in a video that Tarah put up recently. Even better, she put up details on how to make your own cipher wheels, including 3D printing files, on GitHub. This is a very cool project that, along with everything else that's fun about it, is a neat way to demonstrate how encryption works and why it's so important.
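A cipher wheel is, at its core, a physical rotation cipher: you spin an inner alphabet disk against a fixed outer disk, and each letter on the outer ring lines up with its substitute on the inner ring. As a rough illustration of that principle only (not Tarah and Jon's actual wheel design, which is far more elaborate), here's a minimal Python sketch:

```python
# A minimal sketch of the cipher-wheel idea: rotating an inner alphabet
# disk against a fixed outer disk produces a Caesar-style substitution
# cipher, where the "key" is simply how far you turned the wheel.
import string

ALPHABET = string.ascii_uppercase

def wheel_encrypt(plaintext: str, rotation: int) -> str:
    """Encrypt by reading each outer-disk letter off the rotated inner disk."""
    inner = ALPHABET[rotation:] + ALPHABET[:rotation]
    table = str.maketrans(ALPHABET, inner)
    # Characters outside the alphabet (spaces, punctuation) pass through.
    return plaintext.upper().translate(table)

def wheel_decrypt(ciphertext: str, rotation: int) -> str:
    """Decrypt by rotating the wheel the same amount in the other direction."""
    return wheel_encrypt(ciphertext, (-rotation) % 26)

print(wheel_encrypt("ATTACK AT DAWN", 3))   # DWWDFN DW GDZQ
print(wheel_decrypt("DWWDFN DW GDZQ", 3))   # ATTACK AT DAWN
```

A wheel like this is trivially breakable (there are only 26 possible rotations to try), which is itself part of the lesson: it makes the mechanics of encryption tangible while showing why modern cryptography needs astronomically larger key spaces.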
Earlier today we wrote about how Ajit Pai was pushing ahead with the Commerce Department's silly FCC petition regarding a re-interpretation of Section 230 of the Communications Decency Act. We noted that it wouldn't actually be that hard to just say that the whole thing is unconstitutional and outside of the FCC's authority (which it is). Some people have pushed back on us, saying that if Pai didn't do this, Trump would fire him and promote some Trump stan to push through whatever unconstitutional nonsense is wanted.

Well, now at least there's some evidence to suggest that Trump also views the FCC -- a supposedly "independent" agency -- as his personal speech police. Of the Republican Commissioners, Brendan Carr has been quite vocal in his Trump boot-licking, especially with regards to Section 230. He's been almost gleeful in his pronouncements about how evil "big tech" is for "censoring conservatives," and how much he wants to chip away at Section 230. Pai has been pretty much silent on the issue until the announcement today. But the other Republican Commissioner, Mike O'Rielly, has at least suggested that he recognizes the Trump executive order is garbage. Six weeks ago he said he hadn't done his homework yet, but suggested he didn't think Congress had given the FCC any authority on this matter (he's right).

Just last week, during a speech, he made it pretty clear where he stood on this issue. While first saying he wasn't necessarily referencing the Trump executive order, he said the following:
Today, I would like to address a particularly ominous development in this space. To be clear, the following critique is not in any way directed toward President Trump or those in the White House, who are fully within their rights to call for the review of any federal statute's application, the result of which would be subject to applicable statutory and constitutional guardrails. Rather, I am very troubled by certain opportunists elsewhere who claim to be the First Amendment's biggest heroes but only come to its defense when convenient and constantly shift its meaning to fit their current political objectives. The inconsistencies and contradictions presented by such false prophets would make James Madison's head spin, were he alive to witness them.

The First Amendment protects us from limits on speech imposed by the government -- not private actors -- and we should all reject demands, in the name of the First Amendment, for private actors to curate or publish speech in a certain way. Like it or not, the First Amendment's protections apply to corporate entities, especially when they engage in editorial decision making. I shudder to think of a day in which the Fairness Doctrine could be reincarnated for the Internet, especially at the ironic behest of so-called free speech defenders. It is time to stop allowing purveyors of First Amendment gibberish to claim they support more speech, when their actions make clear that they would actually curtail it through government action. These individuals demean and denigrate the values of our Constitution and must be held accountable for their doublespeak and dishonesty. This institution and its members have long been unwavering in defending the First Amendment, and it is the duty of each of us to continue to uphold this precious protection.
To be clear: I agree 100% with that statement, and am glad that O'Rielly was willing to stand up on principle to defend it.

And then, today, it was announced that the White House is pulling his renomination to the FCC. In other words, the White House is being a petty asshole, again, and firing anyone for not being in lockstep with the President's ridiculous unconstitutional whims.

There was some talk last week about how Senator James Inhofe's office was blocking O'Rielly's renomination over a different issue: the approval of L-Band spectrum for use by Ligado (formerly LightSquared). A variety of government organizations had opposed the use of this spectrum, fearing that it might interfere with GPS systems. However, the Ligado deal was unanimously approved by all five commissioners, so it's difficult to see why O'Rielly would be singled out, other than that his nomination was up. The Inhofe/Ligado thing feels like a smokescreen for the 230 issue.

The question now is whether or not O'Rielly will serve out his term, or if he'll leave now that his renomination is not being considered. One hopes that he'll at least stick it out long enough to vote down the Petition on 230. Even if he did leave, it's unclear if a new Commissioner would get through any confirmation process prior to the election. Either way, at least it's nice to see one Republican Commissioner willing to stand up to Trump. We've criticized O'Rielly plenty of times in the past, but at least he's not taking the path of Carr (and even Pai) in dealing with this nonsense.
This seemed fairly inevitable, after it became quite clear that the Twitter hack from a few weeks ago was done by teen hackers who didn't seem to do much to cover their tracks: officials in Florida announced the arrest of a Florida teenager for participating in the hack, followed by the DOJ announcing two others as well -- a 19-year-old in the UK and a 22-year-old in Florida.

As for why the first announcement was separate and made by Florida officials, it appears that it involved a 17-year-old, and it was apparently easier to charge him as an adult under state law than under federal law, as with the other two.
Hillsborough State Attorney Andrew Warren filed 30 felony charges against the teen this week for scamming people across America in connection with the Twitter hack that happened on July 15. The charges he's facing include one count of organized fraud, 17 counts of communications fraud, one count of fraudulent use of personal information with over $100,000 or 30 or more victims, 10 counts of fraudulent use of personal information and one count of access to computer or electronic device without authority.

Hillsborough County Jail records show Clark was booked into jail shortly after 6:30 a.m. Friday.

Warren's office says the scheme to defraud stole the identities of prominent people and posted messages in their names directing victims to send Bitcoin to accounts that were associated with the Tampa teen. According to the state attorney, the scheme reaped more than $100,000 in Bitcoin in just one day.
Once again, it's looking like we got incredibly lucky -- that it was just some young hackers mostly messing around, rather than anyone with serious ill intent and the ability to plan something bigger. It now appears that Twitter's internal security controls were kind of a mess: over 1,000 employees had access to the control panel that allowed the changes that enabled the hack, and some staffers and contractors reportedly even made a game of abusing that access to spy on users.

Once again, it seems that Twitter needs to fix up a lot of things on the security side, including figuring out how to do end-to-end encryption for direct messages.
Summary: With news breaking so rapidly, it's possible that even major newspapers or official sources may get information wrong. Social media sites, like Twitter, need to determine how to deal with news tweets that later turn out to be misleading -- even when coming from major news organizations, citing official government organizations.

With widespread protests around the United States calling attention to police brutality and police activity disproportionately targeting the black community, the NY Post tweeted a link to an article discussing an internal communication by the NY Police Department (NYPD) warning of concrete disguised as ice cream cups that were supposedly found at some of the protests, with the clear implication being that this was a way to disguise items that could be used for violence or property destruction.
The article was criticized widely by people who pointed out that the items in fact appear to be part of a standard process for testing concrete mixtures, with the details of each mixture written on the side of the containers. Since these were found at a construction site, it seems likely that the NYPD's alert was, at best, misleading.

In response to continuing criticism, the NY Post made a very minor edit to the story, noting only that the markings on the cups make them resemble concrete sample tests commonly used on construction sites. However, the story and its title remained unchanged and the NY Post retweeted it a day later -- leading some to question why the NY Post was publishing misinformation, even if it was accurately reporting the content of an internal police memo.

Questions for Twitter:
Should it flag potentially misleading tweets when published in major media publications, such as the NY Post?
Should it matter if the information originated at an official government source, such as the NYPD?
How much investigation should be done to determine the accuracy of the internal police report? How should the NY Post's framing of the story reflect this investigation?
Does it matter that the NY Post retweeted the story a day after the details were credibly called into question?
Questions and policy implications to consider:
Do different publications require different standards of review?
Does it matter if underlying information is coming from a governmental organization?
If a media report accurately reports on the content of an underlying report that is erroneous or misleading, does that make the report itself misleading?
How much does wider context (protests, accusations of violence, etc.) need to be considered when making determinations regarding moderation?
Resolution: To date, Twitter has left the tweets up, and the NY Post article remains online with only the very minor edit that was added a few hours after the article received widespread criticism. The NY Post tweets have not received any fact check or other moderation to date. There are, however, many replies and quote tweets calling out what people feel to be misleading aspects of the story (as well as plenty from people taking the content of the story at face value, and worrying about how the items might be used for violence).