e dot dot dot
a mostly about the Internet blog by


The Most Popular Make Landing Page Provider

Furnished content.


Take whatever feedback you can get and improve the landing page until you're happy with it. Simply take an idea you have and build a landing page for a product based on that idea. With Google Forms, you can easily create a completely free landing page and begin collecting data […]

The post The Most Popular Make Landing Page Provider appeared first on Adotas.

Read more here


posted at: 12:00am on 12-Feb-2019
path: /Online_Marketing | permalink | edit (requires password)

0 comments, click here to add the first



US Newspapers Now Salivating Over Bringing A Google Snippet Tax Stateside

Furnished content.


As the EU is still trying to figure out what it's going to do about the highly contested EU Copyright Directive, it appears that at least one of the controversial parts, the ridiculous Article 11 link tax, is spreading to the US. David Chavern, the CEO of the News Media Alliance (a trade group representing legacy news publishers), is agitating in the NY Times for a US version of Article 11. The article is so chock full of "wrong" that it's embarrassing. Let's dig in.

Facebook and Google have been brutal to the news business.
Citation needed. Seriously. Nothing in this piece explains how this is true. I know that lots of journalists claim it to be true, but they are lacking in evidence. The truth is Facebook and Google have been very good for some news operations, very bad for others, and everywhere in between for the rest. It kinda depends on the specific news organization and the choices that organization has made. In other words: it's the news organizations' own fault if they're suddenly having trouble because their traffic has dried up.
But this primarily reflects a failure of imagination. The tech giants are the world's best distribution platforms and could be an answer for journalism instead of a grave threat.
Again, for many news organizations, these platforms are an answer: an answer that drives traffic.
As readers have shifted to digital sources, the two companies have taken a large majority of online advertising revenue.
Note the verb choice: "taken." As if it was snatched away from the rightful owners: the legacy news business who did fuck all to adapt to the internet. No, the large majority of online advertising went to those platforms because those platforms provided a better result for advertisers. We can discuss whether or not that's a good thing, and whether or not advertisers are silly to focus on those platforms (indeed, I'd argue, they are!). But to blame Facebook and Google for making advertisers happier seems weird.
More important, the platforms now act as regulators of the news business determining what information gets delivered to whom, and when. With the flick of an algorithmic finger, those two companies decide what news you see and whether a publisher lives or dies.
They only do that if the news publications have focused solely on chasing traffic, rather than building up loyal audiences who come directly to their sites. Nothing Google or Facebook does really has that much of an impact on our traffic, because we don't rely on them for traffic. They send us some -- which is great -- but our strategy has always focused on loyal readers, not drive-by traffic. So, no, Techdirt readers don't rely on those platforms to get our content. Nor should they. If your entire business strategy is based on some third party you can't control, it seems a little, well, dubious, for you to whine that they don't act the way you want them to.
The impact on journalism has been clear. Just within the past week, we have seen over 1,000 planned layoffs at Gannett, BuzzFeed and HuffPost, and no one thinks we are anywhere near the end.
This is also misleading. While, yes, there were some high-profile layoffs that included a bunch of journalists -- and that sucks -- the 1,000 number is greatly exaggerated. As Peter Sterne pointed out, the vast majority of that 1,000 (~800) came from "Oath," the Verizon-owned Frankenstein's monster made up of various properties from HuffPost to Yahoo to AOL -- and the majority of them were not journalists. So, yes, it's still bad to see these layoffs. But using this 1,000 number to imply that that many journalists lost their jobs is highly misleading, and pretty shameful for a guy who represents news publishers.
We can start with the fact that free isn't a good business model for quality journalism.
Free is not the fucking business model. Free has never been a business model. However, free can very often be a key part of a very compelling business model.
Facebook and Google flatly refuse to pay for news even though they license many other types of content. Both companies have deals to pay music publishers when copyrighted songs play on their platforms. And the companies also aggressively bid to stream live sports and entertainment content to run on Facebook Watch and YouTube. These deals are varied and often secret, but none of them are based on free.
And this may be the dumbest thing that Chavern has written in this entire article full of bad ideas. Google and Facebook pay licenses for that other content because they host that content directly on their sites. They don't pay for news because they're not hosting the news, but rather sending traffic to those news sites. For free.
Why are the platforms so unwilling to pay news publishers for access to the quality journalism that users need and value?
Again, because you're comparing apples to oranges. This is comparing totally different situations in a way that makes no sense.
There's no reason those who produce the news shouldn't enjoy the same intellectual property protections as songwriters and producers (regulators in Europe are looking at replicating some of these safeguards for journalism).
These are not "the same intellectual property protections as songwriters and producers." News already has the same "intellectual property protections as songwriters and producers." It's called copyright and it applies to news as well as songs. The issue is that what's happening here is entirely different. Google and Facebook pay for hosting music. They're not hosting news (other than in very minor ways where news orgs choose to host on their platforms for specific purposes). Instead, Google and Facebook are sending people off to the news sites themselves, which should be a better deal, because then you have those people on your own damn site where you can offer all sorts of other things -- some of which might even make the publishers some money. Or, build a loyal fan base who won't need to go through those dastardly platforms in the future.

And, yes, it's blatantly misleading to claim that the EU's ridiculous Article 11 is the EU "replicating some of these safeguards for journalism." Hell, this is close to journalistic malpractice from a guy who pretends to represent journalism. Remember, we already know what happens with an Article 11-type setup: it didn't magically lead to the big platforms paying news publishers, and it actually did significant harm to news publishers, in particular the smaller ones.
The tech giants are also run as walled gardens that minimize brands and separate publishers from their readers even while hoarding information about those same readers.
I don't think he knows what a "walled garden" means. And, again, these services work by sending readers to the news publication sites themselves. That's not "separating publishers from readers" unless the publishers are so clueless they do nothing to build a loyal community.
Imagine trying to build a trusted relationship with an audience when you can't even know who they are.
That's how every community works. You don't know who they are at first. You build up trust and maybe they tell you. But you need to work on building a direct relationship yourself. You don't sit there and just wait for the audience to magically find you and then blame Google when they don't.
Publishers need new economic terms that include more revenue and more information about our readers.
So, uh, build new revenue models that involve building up a loyal community who chooses to share info, and you get both of those things directly -- and Google and Facebook don't.
Facebook and Google also need to be willing to acknowledge investments in quality journalism through their algorithms. They are constantly on the defensive about spreading false and misleading news that hurts people. They could start to address the problem by simply recognizing that The Miami Herald is a much better news source than Russian bots or Macedonian teenagers and highlighting original, quality content accordingly.
Um, both Facebook and (especially) Google already do that. How does he not know this? Indeed, the entire point of Google is to promote the more trustworthy content. It fails sometimes, but this paragraph misleadingly suggests that Google treats Macedonian teens at the same level as it treats the Miami Herald, and that's laughably wrong. You can't make good policy decisions if you're simply spouting off nonsense.
Recognizing and promoting publishers that have consistently delivered quality news content can't be that difficult for sophisticated tech companies. And there are a range of qualified independent ratings organizations, such as NewsGuard, that could help them separate the wheat from the chaff.
Again, that's exactly what Google already does.
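To make the idea being argued over concrete, here is a toy sketch in Python of what weighting search results by a publisher trust rating might look like. This is purely illustrative: the domains, scores, and blending formula below are invented for the example and bear no relation to how Google, Facebook, or NewsGuard actually score or rank anything.

# Toy illustration only: a made-up "trust score" table and a simple blend of
# query relevance with publisher trust. Not any real ranking system.

TRUST_SCORES = {
    "miamiherald.com": 0.9,    # hypothetical rating for an established newsroom
    "example-spam.mk": 0.1,    # hypothetical rating for a junk-news site
}

def rerank(results, trust_scores, trust_weight=0.5):
    """Sort results by a blend of query relevance and publisher trust."""
    def blended_score(result):
        # Unknown sources get a neutral trust score of 0.5.
        trust = trust_scores.get(result["domain"], 0.5)
        return (1 - trust_weight) * result["relevance"] + trust_weight * trust
    return sorted(results, key=blended_score, reverse=True)

results = [
    {"title": "Shocking clickbait headline", "domain": "example-spam.mk", "relevance": 0.8},
    {"title": "City budget investigation", "domain": "miamiherald.com", "relevance": 0.7},
]

for item in rerank(results, TRUST_SCORES):
    print(item["title"], "-", item["domain"])

Even this crude blend pushes the established source above the junk site despite a lower raw relevance score, which is roughly the behavior Chavern is asking for -- and which, as noted above, the big platforms already attempt in far more sophisticated ways.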
Whether they like to admit it or not, Facebook and Google are at real risk when it comes to the news business. Under the adage "you break it, you buy it," the platforms now own what happens when quality journalism goes away.
Facebook and Google didn't break news. Newspapers failed to adapt and now they're whining about it. A true leader for the news publishers wouldn't be begging platforms like Google and Facebook for money like that. He'd be helping those publishers adapt, build more loyal audiences, and experiment with more sophisticated business models. And, really, the most incredible part of this strategy from Chavern and the News Media Alliance is that it would only serve to do one thing: make those news publishers even more reliant on Google and Facebook, giving them even more power. News organizations deserve better than to have a trade organization spewing such utter nonsense.



Read more here

posted at: 1:46pm on 01-Feb-2019
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



Deep Fakes: Let's Not Go Off The Deep End

Furnished content.


In just a few short months, "deep fakes" have struck fear into technology experts and lawmakers. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.

But will that future come to pass?

Much of the fear of deep fakes stems from the assumption that this is a fundamentally new, game-changing technology that society has not faced before. But deep fakes are really nothing new; history is littered with deceptive practices — from Hannibal's fake war camp to Will Rogers' too-real impersonation of President Coolidge to Stalin's disappearing of enemies from photographs. And society's reaction to another recent technological tool of media deception — digital photo editing and Photoshop — teaches important lessons that provide insight into deep fakes' likely impact on society.

In 1990, Adobe released the groundbreaking Adobe Photoshop to compete in the quickly evolving digital photograph editing market. This technology, and the myriad competitors that never reached Photoshop's eventual popularity, allowed the user to digitally alter real photographs uploaded into the program. While competing services required some expertise to use, Adobe designed Photoshop to be user-friendly and accessible to anyone with a Macintosh computer.

With the new capabilities came new concerns. That same year, Newsweek published an article called "When Photographs Lie." As Newsweek predicted, the consequences of this rise in photographic manipulation techniques could be disastrous: "Take China's leaders, who last year tried to bar photographers from exposing [the leaders'] lies about the Beijing massacre. In the future, the Chinese or others with something to hide wouldn't even worry about photographers."

These concerns were not entirely without merit. Fred Ritchin, formerly the picture editor of The New York Times Magazine and now Dean Emeritus of the International Center of Photography School, has continued to argue that trust in photography has eroded over the past few decades thanks to photo-editing technology:

There used to be a time when one could show people a photograph and the image would have the weight of evidence—the “camera never lies.” Certainly photography always lied, but as a quotation from appearances it was something viewers counted on to reveal certain truths. The photographer’s role was pivotal, but constricted: for decades the mechanics of the photographic process were generally considered a guarantee of credibility more reliable than the photographer’s own authorship. But this is no longer the case.
It is true that the "camera never lies" saying can no longer be sustained — the camera can and often does lie when the final product has been manipulated. Yet the crisis of truth that Ritchin and Newsweek predicted has not come to pass.

Why? Because society caught on and adapted to the technology.

Think back to June 1994, when Time magazine ran O.J. Simpson's mugshot on its cover. Time had drastically darkened the mugshot, making Simpson appear much darker than he actually was. What's worse, Newsweek ran the unedited version of the mugshot, and the two magazines sat side by side on supermarket shelves. While Time defended this as an artistic choice with no intended racial implications, the obviously edited photograph triggered massive public outcry.

Bad fakes were only part of the growing public awareness of photographic manipulation. For years, fashion magazines have employed deceptive techniques to alter the appearance of cover models. Magazines with more attractive models on the cover generally sell more copies than those featuring less attractive ones, so editors retouch photos to make them more appealing to the public. Unfortunately, this practice created an unrealistic image of beauty in society and, once this was discovered, health organizations began publicly warning about the dangers this phenomenon caused — most notably eating disorders. And due to the ensuing public outcry, families across the country became aware of photo-editing technology and what it was capable of.

Does societal adaptation mean that no one falls for photo manipulation anymore? Of course not. But instead of prompting the death of truth in photography, awareness of the new technology has encouraged people to use other indicators — such as trustworthiness of the source — to make informed decisions about whether an image presented is authentic. And as a result, news outlets and other publishers of photographs have gone on to establish policies and make decisions regarding the images they use with an eye toward fostering their audience's trust. For example, in 2003, the Los Angeles Times quickly fired a photographer who had digitally altered Iraq War photographs, because the editors realized that publishing a manipulated image would diminish readers' perception of the paper's veracity.

No major regulation or legislation was needed to prevent the apocalyptic vision of Photoshop's future; society adapted on its own.

Now, however, the same "death of truth" claims — mainly in the context of fake news and disinformation — ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enters the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike?
As Andrew Grotto, International Security Fellow at the Center for International Security and Cooperation at Stanford University, predicts, "This technology … will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions." Perhaps even more problematic, if society has no means to distinguish a fake video from a real one, any person could have plausible deniability for anything they do or say on film: it's all fake news.

But who is to say that the societal response to deep fakes will not evolve similarly to the response to digitally edited photographs?

Right now, deep fake technology is far from flawless. While some fakes may appear incredibly realistic, others have glaring imperfections that can alert the viewer to their forged nature. As with Photoshop and digital photograph editing before it, poorly made fakes generated through cellphone applications can educate viewers about the existence of this technology. As the public becomes aware, the harms posed by deep fakes will fail to materialize to the extent predicted.

Indeed, new controversies surrounding the use of this technology are likewise increasing public awareness about what the technology can do. For example, the term "deep fake" actually comes from a Reddit user who began using this technology to generate realistic-looking fake pornographic videos of celebrities. This type of content rightfully sparked outrage as an invasion of the depicted person's privacy rights. As public outcry began to ramp up, the platform publicly banned the deep fake community and any involuntary pornography from its website. As with the public outcry that stemmed from the use of Photoshop to create an unrealistic body image, the use of deep fake technology to create inappropriate and outright appalling content will, in turn, make the public more aware of the technology, potentially stemming harms.

Perhaps most importantly, many policymakers and private companies have already begun taking steps to educate the public about the existence and capabilities of deep fakes. Notable lawmakers such as Sens. Mark Warner of Virginia and Ben Sasse of Nebraska have recently made deep fakes a major talking point. BuzzFeed released a public service announcement from "President Obama," which was in fact a deep fake video with a voice-over from Jordan Peele, to raise awareness of the technology. And Facebook recently announced that it is investing significant resources into deep fake identification and detection. With so much focus on educating the public about the existence and uses of this technology, it will be more difficult for bad actors to successfully spread harmful deep fake videos.

That is not to say deep fakes do not pose any new harms or threats. Unlike Photoshop, anyone with a smartphone can use deep fake technology, meaning that a larger number of deep fakes may be produced and shared. And unlike during the 1990s, significantly more people use the internet to share news and information today, facilitating the dissemination of content across the globe at breakneck speed.

However, we should not assume that society will fall into an abyss of deception and disinformation if we do not take steps to regulate the technology. There are many significant benefits the technology can provide, such as aging photos of children who have been missing for decades or creating lifelike versions of historical figures for children in class.
Instead of rushing to draft legislation, lawmakers should look to the past and realize that deep fakes are not some unprecedented problem. Rather, deep fakes simply represent the newest technique in a long line of deceptive audiovisual practices that have been used throughout history. So long as we understand this fact, we can be confident that society will come up with ways of mitigating new harms or threats from deep fakes on its own.

Jeffrey Westling is a technology and innovation policy associate at the R Street Institute, a free-market think tank based in Washington, D.C.



Read more here

posted at: 4:16pm on 30-Jan-2019
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



Techdirt Podcast Episode 190: Should We Break Up Big Tech?

Furnished content.


A few weeks ago, we featured a panel discussion with Mike and others at the Lincoln Network's Reboot conference on the podcast. This week we're doing something a little different and featuring another panel discussion from that conference, but one in which Mike wasn't involved. Instead, it's an interesting — and at times contentious — debate about one big question: do the big tech firms need to be broken up?

Follow the Techdirt Podcast on Soundcloud, subscribe via iTunes or Google Play, or grab the RSS feed. You can also keep up with all the latest episodes right here on Techdirt.
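For readers who prefer the RSS route, here is a minimal sketch of pulling recent episodes with Python and the third-party feedparser library. The feed URL below is a placeholder, not the real address of the Techdirt podcast feed; substitute the actual URL from your podcast app or the site.

# Minimal sketch: list recent podcast episodes from an RSS feed.
# Requires the third-party feedparser package (pip install feedparser).
import feedparser

FEED_URL = "https://example.com/techdirt-podcast.rss"  # placeholder URL, not the real feed

feed = feedparser.parse(FEED_URL)
for entry in feed.entries[:5]:
    # Entries in a podcast feed typically carry a title, a publish date,
    # and enclosure links pointing at the audio file.
    print(entry.get("title", "untitled"), "|", entry.get("published", "no date"))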




Read more here

posted at: 12:06am on 28-Nov-2018
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



RSS (site)  RSS (path)

ATOM (site)  ATOM (path)

Categories
 - blog home

 - Announcements  (2)
 - Annoyances  (0)
 - Career_Advice  (1)
 - Domains  (0)
 - Downloads  (4)
 - Ecommerce  (2369)
 - Fitness  (0)
 - Home_and_Garden  (0)
     - Cooking  (0)
     - Tools  (0)
 - Humor  (1)
 - Notices  (0)
 - Observations  (1)
 - Oddities  (2)
 - Online_Marketing  (3444)
     - Affiliates  (1)
     - Merchants  (1)
 - Policy  (1157)
 - Programming  (0)
     - Browsers  (1)
     - DHTML  (0)
     - Javascript  (536)
     - PHP  (0)
     - PayPal  (1)
     - Perl  (37)
          - blosxom  (0)
     - Unidata_Universe  (22)
 - Random_Advice  (1)
 - Reading  (0)
     - Books  (0)
     - Ebooks  (1)
     - Magazines  (0)
     - Online_Articles  (4)
 - Resume_or_CV  (1)
 - Reviews  (1)
 - Rhode_Island_USA  (0)
     - Providence  (1)
 - Shop  (0)
 - Sports  (0)
     - Football  (1)
          - Cowboys  (0)
          - Patriots  (0)
     - Futbol  (1)
          - The_Rest  (0)
          - USA  (1)
 - Woodworking  (1)


Archives
 -2019  February  (2)
 -2019  January  (1)
 -2018  November  (2)
 -2018  October  (3)
 -2018  September  (3)
 -2018  August  (4)
 -2018  July  (1)
 -2018  June  (5)
 -2018  May  (3)
 -2018  April  (3)
 -2018  March  (6)
 -2018  February  (3)


My Sites

 - Millennium3Publishing.com

 - SponsorWorks.net

 - ListBug.com

 - TextEx.net

 - FindAdsHere.com

 - VisitLater.com