e dot dot dot
a mostly about the Internet blog by


Lawsuit Says Clearview's Facial Recognition App Violates Illinois Privacy Laws

Clearview has gathered a whole lot of (negative) attention ever since its exposure by Kashmir Hill for the New York Times. The facial recognition app developed by Hoan Ton-That (whose previous app was a novelty that allowed users to transpose President Trump's distinctive hairdo onto their own heads) relies on scraped photos to perform its questionable magic. Rather than being limited to law enforcement databases, cops can upload a photo and search a face against pictures taken from dozens of websites.

The company's marketing materials claim cops have access to 3 billion face photos via Clearview -- all pulled from public accounts linked to names, addresses, and any other personal info millions of unwitting social media users have uploaded to the internet. Its marketing materials also claim it has been instrumental in solving current crimes and generating suspect lists for cold cases. So far, very few of these claims seem to be based on fact.

That's only one of the company's issues. Another is the heat it's drawing from companies like Twitter and Facebook, which claim photo scraping violates their terms of service. That's one for the courts, and it's only a matter of time before someone sues.

Someone has sued, but it's not an affected service provider. It's some guy from Illinois trying to fire up a class action lawsuit against the company for violating his home state's privacy laws. Here's Catalin Cimpanu of ZDNet with the details:

According to a copy of the complaint obtained by ZDNet, plaintiffs claim Clearview AI broke Illinois privacy laws. Namely, the New York startup broke the Illinois Biometric Information Privacy Act (BIPA), a law that safeguards state residents from having their biometrics data used without consent.

According to BIPA, companies must obtain explicit consent from Illinois residents before collecting or using any of their biometric information -- such as the facial scans Clearview collected from people's social media photos.

"Plaintiff and the Illinois Class retain a significant interest in ensuring that their biometric identifiers and information, which remain in Defendant Clearview's possession, are protected from hacks and further unlawful sales and use," the lawsuit reads.
Hmm. This doesn't seem to have much going for it. And, believe it or not, it's not a pro se lawsuit. Whether it's possible to violate a privacy law by scraping public photos remains to be litigated, but the word "public" seems pretty integral here. Unless Clearview found some way to scrape photos that weren't published publicly, the lawsuit is dead in the water. It shouldn't take a judge long to conclude that publicly posted photos can't simultaneously be legally private. This lawsuit was composed by a member of the bar, but it reads more like a Facebook note the lawyer published accidentally. From the lawsuit [PDF]:
Without obtaining any consent and without notice, Defendant Clearview used the internet to covertly gather information on millions of American citizens, collecting approximately three billion pictures of them, without any reason to suspect any of them of having done anything wrong, ever.
Wat? Just because Clearview is aggressively pitching LEOs doesn't mean Clearview can only scrape photos of people it suspects of wrongdoing. Yes, it's disturbing that Clearview has decided to make its stalker-enabling AI available to people who can hurt, maim, jail, and kill you, but there's nothing in any law book that says collecting pictures of faces can only be done if the people are probably criminals -- even if the targeted end users of this software are people who go after criminals. Putting it in a sworn document doesn't make it any less ridiculous. But it does get more ridiculous.
[A]lmost none of the citizens in the database has ever been arrested, much less been convicted. Yet these criminal investigatory records are being maintained on them, and provide government almost instantaneous access to almost every aspect of their digital lives.
Facebook is collecting photos of people, almost none of whom have been criminally charged. They reside in Facebook's database. Facebook is publicly searchable, and public profiles can be searched for photos, even by law enforcement officers. Is Facebook breaking state law by "collecting" photos of innocent people? No rational person would argue that it is. And yet, this is the same argument, and it's no less stupid just because an actual lawyer is involved.

Look, I also don't want Clearview pushing this "product," much less to people with the power to do incredible amounts of damage to anyone the AI mistakes for a criminal. But this lawsuit isn't going to fix anything. It makes better points about Clearview's end of the deal, which makes it easy for the company to look over law enforcement's shoulder. Since Clearview hosts all the pictures on its own servers, it can see what cops are looking for and do its own digging into the personal lives of anyone cops might be thinking about targeting. That's an ugly byproduct of this service, and Clearview hasn't said anything about siloing itself off from government queries.

The claims in this suit are almost certain to fail. Clearview streamlines processes cops can perform on their own, like reverse image searches and browsing of social media accounts. Actions you can perform one person at a time without violating the Constitution (or state law), you can most likely do in bulk. For now. A more realistic approach would be to take edge cases to the Supreme Court, which has been more receptive to expanding the boundaries of citizens' expectations of privacy in the digital era. This lawsuit may raise limited awareness about Clearview (and discovery could be very interesting), but it's not going to end Clearview's scraping or deter law enforcement from using it. And it's certainly not going to earn a payout for the plaintiff.




posted at: 12:00am on 31-Jan-2020
path: /Policy




YouTube Takes Down Live Stream Over Copyright Claim...Before Stream Even Starts

It seems that the concern over how YouTube handles copyright claims on its platform is reaching something of a fever pitch. Hell, in just the last couple of weeks we've seen a YouTuber have his videos demonetized over copyright claims to the numbers "36" and "50", rampant abuse of ContentID even as the EU edges closer to making that sort of platform a requirement through Article 17, and wider concerns about YouTube's inability to enforce moderation at scale in a way that makes even a modicum of sense. The point is that it's becoming all the more clear that YouTube's efforts at content moderation and copyright enforcement are becoming a nightmare.

And perhaps there is no better version of that nightmare than when one YouTube streamer found his live stream taken down after Warner Bros. claimed copyright on it... before that live stream had even begun. Matt Binder hosts the political podcast "DOOMED with Matt Binder." He also livestreams the show on YouTube. The night of the last Democratic presidential debate, he scheduled a livestream to discuss the debate with a guest.

Earlier in the evening, I'd scheduled a YouTube livestream, as I always do the night of a debate, in order to discuss the event with progressive activist Jordan Uhl after CNN's broadcast wrapped up. I'd even labeled it as a “post-Democratic debate” show featuring Uhl's name directly in the scheduled stream title. These post-debate shows consist entirely of webcam feeds of my guest and myself, split-screen style, breaking down the night's events.

Shortly after setting up the stream, which wasn't scheduled to start for hours, I received an email from YouTube: “[Copyright takedown notice] Your video has been taken down from YouTube.”

The notice informed me that I had received a copyright strike for my scheduled stream. That one copyright strike was enough to disable livestreaming on my channel for the strike's three-month duration. If I were to accumulate three strikes, YouTube would just shut down my channel completely, removing all of my content.
Reasonable people can disagree on just how much collateral damage is acceptable when enforcing copyright. What no reasonable person can agree with is the idea that a livestream ought to be taken down, and a three-month streaming ban put in place, over a copyright claim on content that hasn't even been created yet. In fact, if there were a perfect antithesis to the entire point of copyright law, it would certainly be this: the prevention of valid content creation via copyright claim.

So, what happened? Well, it appears based on the notice that Warner Bros., a corporate sibling of CNN under WarnerMedia, issued the copyright claim. CNN hosted the debate, and Binder's reference to it in the title of his stream may have caused someone at WB to think this was either a stream of the event itself, which would be copyright infringement, or a stream of CNN's post-debate commentary, which would also be copyright infringement. This was, after all, a manual block, not the work of some automated system. But, mistake or not, this shows a glaring flaw in CNN's enforcement of copyright.
“Your case is the most extreme I’ve heard about. Congratulations,” Electronic Frontier Foundation Manager of Policy and Activism, Katharine Trendacosta, said to me in a phone conversation on the issue. “This is the first time I've heard about this happening to something that didn't contain anything. And I have heard a lot of really intense stories about what's happening on YouTube.”
If there were any question that there are serious problems with YouTube's enforcement mechanism, this situation answers it. YouTube eventually reversed the copyright strike, of course, but the damage had already been done: Binder was unable to stream that night, all because YouTube is so heavily tilted toward copyright claimants, rather than its own content producers, that its enforcement cannot possibly work without massive collateral damage such as this. I suspect we're going to continue to see these situations arise until YouTube takes a hard look at its policies.




posted at: 12:00am on 31-Jan-2020
path: /Policy




