e dot dot dot
a mostly about the Internet blog by

Facebook's Privacy Problems Are Piling Up Too Quickly To Chronicle

Another day, another Facebook privacy mess. Actually, this one is a few different privacy messes that we'll roll up into a single post because, honestly, who can keep track of them all these days? While we've noted that the media is frequently guilty of exaggerating or misunderstanding certain claims about Facebook and privacy, Facebook does continue to do a really, really awful job of handling privacy and of being transparent with its users about these things. And that's a problem that comes from the executive team, which still doesn't seem to fully comprehend what a mess it has on its hands.

The latest flaps both involve questionable behavior targeted at younger Facebook users. First, there's a followup to a story we wrote about a few weeks ago, involving internal Facebook documents showing staffers gleefully refusing to refund money spent unwittingly by kids on games on the Facebook platform. Reveal, from the Center for Investigative Reporting, which broke that story, has published a much more detailed and much more damning followup about how Facebook knowingly duped young children out of their parents' money.

Facebook encouraged game developers to let children spend money without their parents' permission - something the social media giant called friendly fraud - in an effort to maximize revenues, according to a document detailing the company's game strategy.

Sometimes the children did not even know they were spending money, according to another internal Facebook report. Facebook employees knew this. Their own reports showed underage users did not realize their parents' credit cards were connected to their Facebook accounts and they were spending real money in the games, according to the unsealed documents.

For years, the company ignored warnings from its own employees that it was bamboozling children.

A team of Facebook employees even developed a method that would have reduced the problem of children being hoodwinked into spending money, but the company did not implement it, and instead told game developers that the social media giant was focused on maximizing revenues.
Yes, they not only called it "friendly fraud," but in an internal memo, they explained "why you shouldn't try to block it" (i.e., why you should let game developers scam kids out of their parents' money).
This reminds me so much of the early days of adware scammers, who pulled similar kinds of stunts -- and it's incredible to think that Facebook, which presented itself as a squeaky-clean alternative to the open web where those kinds of scams piled up, was basically doing the same thing on a much larger scale. The Reveal article has much more on this, and is worth reading in full to see how the focus on revenue led the company to deliberately look the other way as it scooped up cash from kids.

But rather than dwell on that, we already need to move on to the more recent Facebook privacy scandal, which also (partially) involves kids. Last summer, we wrote about how Apple had booted Facebook's Onavo app from its App Store. Facebook had marketed it as a privacy-protecting "VPN," but it was really pretty blatant spyware. Indeed, late last year, when yet another Facebook privacy scandal broke, it was revealed that Facebook had been using Onavo data to determine which competing apps were most popular -- including giving it ideas about which apps to buy or (much more damning) which apps to hinder or block on Facebook.

Apparently, even having Apple boot the app didn't convince Facebook that maybe this spyware was going a bit too far. Instead, it now appears that Facebook "pivoted" into paying teens to install Onavo on iPhones in a way that routed around Apple's App Store blocks, by saying it was part of "Facebook Research." And it hid this from Apple by using third-party "beta testing" services:
The program is administered through beta testing services Applause, BetaBound and uTest to cloak Facebook's involvement, and is referred to in some documentation as "Project Atlas" -- a fitting name for Facebook's effort to map new trends and rivals around the globe.
Facebook appears to have desperately wanted all of this data, given that it was willing to go to these lengths even after Apple had booted Onavo. After TechCrunch broke this story, Facebook claimed that it would stop the program on iPhones, while Apple claims it banned the app before Facebook itself could pull it.

For years, people like Jaron Lanier have argued that Facebook should pay its users for all the data it collects -- but I think even people who wanted payment would balk a bit at how much access users were giving up in exchange for $20/month in gift cards:
By installing the software, you're giving our client permission to collect data from your phone that will help them understand how you browse the internet, and how you use the features in the apps you've installed . . . This means you're letting our client collect information such as which apps are on your phone, how and when you use them, data about your activities and content within those apps, as well as how other people interact with you or your content within those apps. You are also letting our client collect information about your internet browsing activity (including the websites you visit and data that is exchanged between your device and those websites) and your use of other online services. There are some instances when our client will collect this information even where the app uses encryption, or from within secure browser sessions.
And, of course, the setup required you to keep the app running and spying on everything if you wanted to keep getting paid.

Facebook, in response to the TechCrunch story, did its standard PR tap dance, insisting that it wasn't hiding anything (Apple's response suggests otherwise, as does the fact that Facebook specifically used these third-party services). But, once again, as with so many other Facebook privacy scandals, the reason so many people get upset is that Facebook was not open and transparent about what was going on -- which is why it comes as such a surprise to everyone.

The only "good" news is that on the same day all of this came out, it was announced that Facebook had hired two of its biggest privacy critics to work on privacy issues at the company: EFF's Nate Cardozo and the Open Technology Institute's Robyn Greene. (*Disclosure: I know both Nate and Robyn, and Nate did, very helpfully, represent us on one issue while he was at EFF.) I know some may cynically see this as Facebook trying to co-opt its critics, but both Nate and Robyn have incredibly strong track records on privacy, including being vocally critical of Facebook and its policies. Hopefully this is a sign that the company is actually taking these issues seriously (better a decade too late than never).




posted at: 4:16pm on 30-Jan-2019
path: /Policy




Deep Fakes: Let's Not Go Off The Deep End


In just a few short months, "deep fakes" have begun striking fear into technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology -- which uses artificial intelligence to produce realistic-looking simulated videos -- will spell the end of truth in media as we know it.

But will that future come to pass?

Much of the fear of deep fakes stems from the assumption that this is a fundamentally new, game-changing technology that society has not faced before. But deep fakes are really nothing new; history is littered with deceptive practices, from Hannibal's fake war camp to Will Rogers' too-real impersonation of President Coolidge to Stalin's disappearing of enemies from photographs. And society's reaction to another recent technological tool of media deception -- digital photo editing and Photoshop -- teaches important lessons about deep fakes' likely impact on society.

In 1990, Adobe released the groundbreaking Adobe Photoshop to compete in the quickly evolving digital photo-editing market. This technology, and the myriad competitors that never reached Photoshop's eventual popularity, allowed users to digitally alter real photographs uploaded into the program. While competing products required some expertise to use, Adobe designed Photoshop to be user-friendly and accessible to anyone with a Macintosh computer.

With the new capabilities came new concerns. That same year, Newsweek published an article called "When Photographs Lie." As Newsweek predicted, the consequences of this rise in photographic manipulation techniques could be disastrous: "Take China's leaders, who last year tried to bar photographers from exposing [the leaders'] lies about the Beijing massacre. In the future, the Chinese or others with something to hide wouldn't even worry about photographers."

These concerns were not entirely without merit. Fred Ritchin, formerly the picture editor of The New York Times Magazine and now Dean Emeritus of the International Center of Photography School, has continued to argue that trust in photography has eroded over the past few decades thanks to photo-editing technology:

There used to be a time when one could show people a photograph and the image would have the weight of evidence—the “camera never lies.” Certainly photography always lied, but as a quotation from appearances it was something viewers counted on to reveal certain truths. The photographer’s role was pivotal, but constricted: for decades the mechanics of the photographic process were generally considered a guarantee of credibility more reliable than the photographer’s own authorship. But this is no longer the case.
It is true that the "camera never lies" adage can no longer be sustained -- the camera can and often does lie when the final product has been manipulated. Yet the crisis of truth that Ritchin and Newsweek predicted has not come to pass.

Why? Because society caught on and adapted to the technology.

Think back to June 1994, when Time magazine ran O.J. Simpson's mugshot on its cover. Time had drastically darkened the mugshot, making Simpson appear much darker than he actually was. What's worse, Newsweek ran the unedited version of the mugshot, and the two magazines sat side by side on supermarket shelves. While Time defended this as an artistic choice with no intended racial implications, the obviously edited photograph triggered massive public outcry.

Bad fakes were only part of the growing public awareness of photographic manipulation. For years, fashion magazines have employed deceptive techniques to alter the appearance of cover models. Magazines with more attractive models on the cover generally sell more copies than those featuring less attractive ones, so editors retouch photos to make them more appealing to the public. Unfortunately, this practice created an unrealistic image of beauty in society, and once it was discovered, health organizations began publicly warning about the dangers it caused -- most notably eating disorders. Thanks to the ensuing public outcry, families across the country became aware of photo-editing technology and what it was capable of.

Does societal adaptation mean that no one falls for photo manipulation anymore? Of course not. But instead of prompting the death of truth in photography, awareness of the new technology has encouraged people to use other indicators -- such as the trustworthiness of the source -- to make informed decisions about whether an image is authentic. As a result, news outlets and other publishers of photographs have established policies and made decisions about the images they use with an eye toward fostering their audience's trust. For example, in 2003, the Los Angeles Times quickly fired a photographer who had digitally altered Iraq War photographs, because the editors realized that publishing a manipulated image would undermine readers' trust in the paper's veracity.

No major regulation or legislation was needed to prevent the apocalyptic vision of Photoshop's future; society adapted on its own.

Now, however, the same "death of truth" claims -- mainly in the context of fake news and disinformation -- ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enters the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike?
As Andrew Grotto, International Security Fellow at the Center for International Security and Cooperation at Stanford University, predicts, "This technology … will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions." Perhaps even more problematic, if society has no means to distinguish a fake video from a real one, any person gains plausible deniability for anything they do or say on film: it's all fake news.

But who is to say that the societal response to deep fakes will not evolve the way the response to digitally edited photographs did?

Right now, deep fake technology is far from flawless. While some fakes may appear incredibly realistic, others have glaring imperfections that can alert the viewer to their forged nature. As with Photoshop and digital photo editing before it, poorly made fakes generated through cellphone applications can educate viewers about the existence of the technology. Once the public becomes aware, the harms posed by deep fakes will fail to materialize to the extent predicted.

Indeed, new controversies surrounding the use of this technology are likewise increasing public awareness of what it can do. For example, the term "deep fake" actually comes from a Reddit user who began using the technology to generate realistic-looking fake pornographic videos of celebrities. This type of content rightfully sparked outrage as an invasion of the depicted person's privacy rights, and as the public outcry ramped up, Reddit publicly banned the deep fake community and any involuntary pornography from its website. As with the outcry over Photoshop's role in creating unrealistic body images, the use of deep fake technology to create inappropriate and outright appalling content will, in turn, make the public more aware of the technology, potentially stemming its harms.

Perhaps most importantly, many policymakers and private companies have already begun taking steps to educate the public about the existence and capabilities of deep fakes. Notable lawmakers such as Sens. Mark Warner of Virginia and Ben Sasse of Nebraska have recently made deep fakes a major talking point. BuzzFeed released a public service announcement from "President Obama" -- in fact a deep fake video with a voice-over from Jordan Peele -- to raise awareness of the technology. And Facebook recently announced that it is investing significant resources in deep fake identification and detection. With so much focus on educating the public about the existence and uses of this technology, it will be harder for bad actors to successfully spread harmful deep fake videos.

That is not to say deep fakes pose no new harms or threats. Unlike with Photoshop, anyone with a smartphone can use deep fake technology, meaning that far more deep fakes may be produced and shared. And unlike in the 1990s, significantly more people now use the internet to share news and information, allowing content to spread across the globe at breakneck speed.

However, we should not assume that society will fall into an abyss of deception and disinformation if we fail to regulate the technology. The technology can also provide significant benefits, such as age-progressing photos of children who have been missing for decades or creating lifelike versions of historical figures for students in class.
Instead of rushing to draft legislation, lawmakers should look to the past and realize that deep fakes are not some unprecedented problem. They simply represent the newest technique in a long line of deceptive audiovisual practices that have been used throughout history. So long as we understand this, we can be confident that society will come up with ways of mitigating new harms or threats from deep fakes on its own.

Jeffrey Westling is a technology and innovation policy associate at the R Street Institute, a free-market think tank based in Washington, D.C.




posted at: 4:16pm on 30-Jan-2019
path: /Policy



