e dot dot dot
a mostly about the Internet blog by


After Being Notified Of Info It Should Have Already Been Aware Of, LAPD Bans Clearview Use By Investigators

Furnished content.


The Los Angeles Police Department is shutting down a very small percentage of its facial recognition searches. Last month, public records exposed the fact that the LAPD had been lying about its facial recognition use for years. Up until 2019, the department maintained it did not use the tech. Records obtained by the Los Angeles Times showed it had actually used it 30,000 times over the past decade.

The most recent development in the LAPD's mostly dishonest use of this tech is that it will not allow personnel to mess around with certain third-party offerings. As BuzzFeed reports, the LAPD has forbidden officers from using Clearview following the release of information the department should already have been aware of.

Documents reviewed by BuzzFeed News showed that more than 25 LAPD employees ran nearly 475 searches with Clearview AI over a three-month period beginning at the end of 2019. [...] LAPD officials confirmed that investigators were using Clearview AI but declined to say which officers and which specific cases it was used for. They also refused to say whether the facial recognition software has led to arrests of any suspects.
Now that the public knows what the LAPD already should have known, the department is changing its policy to exclude Clearview… and probably not much else.
The Los Angeles Police Department has banned the use of commercial facial recognition systems, following inquiries from BuzzFeed News about its officers' use of a controversial software known as Clearview AI.
You have to love the fact that the LAPD needed to be apprised by journalists of what its own investigators were doing. That's the level of internal oversight we've come to expect from the nation's law enforcement agencies. If you don't look for anything, it's almost impossible to find misconduct and abuses of power. No news is the best news. And it can easily be achieved by doing nothing at all.

This ban only affects "commercial" software, which means investigators will still be able to use (and misuse) more official products, like the facial recognition system owned by the county -- the same one the LAPD spent years denying it used.

And, although it's an incremental change that seems to only forbid the use of one particular facial recognition product, it's still good to see another law enforcement agency kick Clearview to the curb. Clearview's unproven AI trawls a database of photos scraped from the internet, making it a highly questionable addition to any government agency's surveillance repertoire. And Clearview has been highly irresponsible in its marketing and distribution of its tech, making unverified claims about law enforcement successes while encouraging government employees to test drive the software by feeding it faces of friends, family members, celebrities, etc.

If more agencies uninvite this third-party interloper, law enforcement critical mass will make Clearview's business plan untenable. It's already ditched most of its private customers in response to lawsuits. If the potential customers it has left refuse to do business with it, it will soon become little more than a horrible memory.

Read more here

posted at: 12:00am on 04-Dec-2020
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



Content Moderation Case Study: Documenting Police Brutality (2007)

Furnished content.


Summary: Wael Abbas is an Egyptian journalist/activist who began documenting protests in Egypt in 2006, including multiple examples of Egyptian police brutality, which he would then upload to YouTube.

In 2007, after posting a few explicit examples of Egyptian police brutality, he discovered that his entire YouTube account had been shut down, taking down 181 videos covering not just police brutality, but also voting irregularities and street protests. At first YouTube refused to comment on this, and only told Abbas that the account was shut down due to multiple complaints about the content. Later, after the US press got ahold of the story, YouTube put out a statement saying:
Our general policy against graphic violence led to the removal of videos documenting alleged human rights abuses because the context was not apparent. Having reviewed the case, we have restored the account of Egyptian blogger Wael Abbas. And if he chooses to upload the video again with sufficient context so that users can understand his important message, we will of course leave it on the site.
Wael believes that if large media organizations like Reuters and CNN hadn't covered his case, it is unlikely his account would have been restored, or that he would have been allowed to re-upload the videos.

Decisions to be made by YouTube:
  • How do you determine the difference between a journalist/activist documenting violence and an account that is glorifying violence?
  • Is there a way to determine the context of a video showing police brutality?
  • Should content moderation decisions change based on whether or not a specific situation is getting mainstream press attention?
  • Are there alternatives beyond shutting down an entire account based on complaints about some videos?
Questions and policy implications to consider:
  • Should context play a bigger role in content moderation and if so, how can you take that into account? Or is it the responsibility of the account holder to supply the context?
  • How do you manage moderation of content from a country with different rules than in the US?
  • Will suspensions by US social media companies be used as evidence against the content creators in certain countries?
Resolution: As noted above, YouTube did reinstate his account, but as issues like this continued to arise, the company has adjusted its policies for handling violent but newsworthy content multiple times in the intervening years. At the time of writing this case study, Abbas' videos showing Egyptian police brutality from many years ago now contain content warnings saying that the content may be inappropriate for some viewers and asking users to acknowledge that before being able to view the videos.
Abbas has faced many more content moderation challenges since then with his work in Egypt. Yahoo shut down his email account after getting complaints. Both Twitter and Facebook have suspended his accounts at times as well.

In both 2010 and 2018, Abbas was arrested in Egypt for his work, with Egyptian authorities using the social media account suspensions as evidence of his alleged crimes of spreading fake news.

Read more here

posted at: 12:00am on 20-Nov-2020
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



Instructors And School Administrators Are Somehow Managing To Make Intrusive Testing Spyware Even Worse

Furnished content.


The COVIDian dystopia continues. After a brief respite, infections and deaths have surged, strongly suggesting the "we're not doing anything about it" plan adopted by many states is fattening the curve. With infections spreading once again, the ushering of children back to school seems to have been short-sighted.

But not all the kids are in school. Some are still engaged in distance learning. For many, this means nothing more than logging in and completing posted assignments using suites of tools that slurp up plenty of user data. For others, it feels more like being forced to bring their schools home. In an effort to stop cheating and ensure "attendance," schools are deploying spyware that makes the most of built-in cameras, biometric scanning, and a host of other intrusions that make staying home at least as irritating as actually being in school.

The EFF covered some of these disturbing developments back in August, when some schools were kicking off their school years. Bad news abounded.

Recorded patterns of keystrokes and facial recognition supposedly confirm whether the student signing up for a test is the one taking it; gaze-monitoring or eye-tracking is meant to ensure that students don’t look off-screen too long, where they might have answers written down; microphones and cameras record students’ surroundings, broadcasting them to a proctor, who must ensure that no one else is in the room.
So much for the sanctity of the home -- the location regarded as the most private of private spaces, worthy of the utmost in Fourth Amendment protections. Unfortunately, the tradeoff for distance learning appears to mean students must give up almost all of their privacy in exchange for not being arrested for truancy.

School isn't out yet. And there's even more intrusiveness to report. It's not just the stripping of privacy that's adding to the dystopian atmosphere hovering oppressively over 2020. It's also the Kafka+Orwell aspects of at-home monitoring, as Todd Feathers and Janus Rose report for Vice.

The first part of this aligns with the EFF's earlier reporting: exam software developers are giving school administrators an insane amount of access to students' devices.
Like its competitors in the exam surveillance industry, Respondus uses a combination of facial detection, eye tracking, and algorithms that measure “anomalies” in metrics like head movement, mouse clicks, and scrolling rates to flag students exhibiting behavior that differs from the class norm.
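Respondus doesn't publish the internals of that scoring, but the "differs from the class norm" idea is essentially outlier detection over per-student metrics. Here is a minimal sketch of that concept in Python; the metric, the numbers, and the two-standard-deviation cutoff are all invented for illustration and don't reflect any proctoring vendor's actual code.

    from statistics import mean, stdev

    # Invented per-student metric (say, head movements per minute); not real data.
    class_metrics = {
        "student_a": 4.1, "student_b": 3.8, "student_c": 4.4, "student_d": 3.9,
        "student_e": 4.6, "student_f": 4.0, "student_g": 4.3, "student_h": 3.7,
        "student_i": 11.7,
    }

    values = list(class_metrics.values())
    mu = mean(values)
    sigma = stdev(values)

    # Flag anyone more than two standard deviations from the class norm.
    flagged = [s for s, v in class_metrics.items() if sigma and abs(v - mu) / sigma > 2.0]
    print(flagged)  # only the student whose metric is far from everyone else's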
Then it just gets surreal.
These programs also often require students to do 360-degree webcam scans of the rooms in which they’re testing to ensure they don’t have any illicit learning material in sight.
Not surreal enough for Respondus and its customers, apparently. Instructions vary from school to school, but Wilfrid Laurier University students are given an entire gauntlet to run through just for the privilege of taking a test. One set of instructions seems to ask students to roll the dice on permanently damaging their ears.
[O]ne WLU professor wrote that anyone who wished to use foam noise-cancelling ear plugs must “in plain view of your webcam … place the ear plugs on your desk and use a hard object to hit each ear plug before putting it in your ear—if they are indeed just foam ear plugs they will not be harmed.”
And there's so much more! Instructors are taking the intrusiveness baked into Respondus' exam spyware and adding their own twists. If these weren't tied to education products, one might assume sexual predators were on the prowl. (One might still assume that, perhaps not even incorrectly. We'll see how this all shakes out!)
Other instructors required students to buy hand mirrors and hold them up to their webcams prior to beginning a test to ensure they hadn’t written anything on the webcam.
Not every instructor is adding more evil. Some seem to be concerned about the software itself -- mainly its reliability and its willingness to see everything unexpected as cheating. But it's not much less dystopian to advise students on how best to ensure the school's spyware functions properly during tests. Advice from profs includes telling students to keep everyone else at home off the internet while testing (presumably so no one pings out while submitting answers) and to avoid sitting in front of posters or decorations featuring people or animals so the spyware won't flag them for having other people in the room during a test.

And it's not just Canada. An email sent by an instructor at Arkansas Tech told students to engage in a whole bunch of pre-test setup just to assure this small-minded prof they weren't cheating.
Before beginning an exam, students were required to hold a mirror or their phone's front-facing camera to reflect the computer screen, and then adjust the webcam so the instructor can "see your face, both hands, your scratch paper, calculator, and the surface of your desk," according to an email obtained by Motherboard.
If students failed to jump through all these distance learning hoops, the instructor would "set [their] exam score to 0%."

The coupling of intrusive spyware with increasingly ridiculous demands from instructors has led to open, if mostly remote, revolt. Petitions have been circulated demanding software like this be banned. Feedback sites like RateMyProfessors have been bombed with negative reviews. Unfortunately, the schools have almost all the leverage. It's not that simple to take your "being educated" business elsewhere, especially in the middle of a global pandemic.

That's not to say there haven't been any successes. Blowback from Wilfrid Laurier students forced the Canadian university to withdraw its demand that students set up their own in-home surveillance system by purchasing both an external webcam and a tripod. And some school administrators are at least responding with statements that indicate they recognize the people paying their salaries are unhappy. WLU administrators are promising to "look into" the reported problems, but it seems unlikely the school will ditch its proctoring software. What it may do is clarify what instructors can actually ask students to do, which would address at least some of the complaints.

But half-assing it isn't going to change the intrusive nature of the software itself. And, as noted earlier, students already well on their way to degrees or diplomas can't just head to the nearest competitor. There's a good chance the nearest competitor is using something similar to reduce cheating, which means students will be jumping through one set of hoops just to find themselves jumping through another set at another school.

This pandemic isn't going to last forever. If it's in the best interests of everyone to remain as distanced as possible, schools just need to accept the fact that cheating may be a bit more common. Accepting the reality of the situation would be healthier for everyone. Making a bad situation even worse with pervasive surveillance and insane instructions from administrators is the last thing students (and teachers) need right now.

Read more here

posted at: 12:00am on 18-Nov-2020
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first



Content Moderation Case Studies: Using AI To Detect Problematic Edits On Wikipedia (2015)

Furnished content.


Summary: Wikipedia is well known as an online encyclopedia that anyone can edit. This has enabled the creation of a massive corpus of knowledge that has achieved high marks for accuracy, even though at any given moment some content may be inaccurate, since anyone may have made recent changes. Indeed, one of the key struggles Wikipedia has dealt with over the years is so-called vandals who change a page not to improve the quality of an entry, but to deliberately decrease it.

In late 2015, the Wikimedia Foundation, which runs Wikipedia, announced an artificial intelligence tool called ORES (Objective Revision Evaluation Service), which it hoped would effectively pre-score edits for the various volunteer editors so they could catch vandalism more quickly.

ORES brings automated edit and article quality classification to everyone via a set of open Application Programming Interfaces (APIs). The system works by training models against edit- and article-quality assessments made by Wikipedians and generating automated scores for every single edit and article. What's the predicted probability that a specific edit will be damaging? You can now get a quick answer to this question. ORES allows you to specify a project (e.g. English Wikipedia), a model (e.g. the damage detection model), and one or more revisions. The API returns an easily consumable response in JSON format:
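The JSON sample that accompanied the original announcement isn't reproduced above, but the request/response pattern the quote describes can be sketched roughly as follows (Python). The endpoint path, the "damaging" model name, and the revision ID here are illustrative; the exact URL and response layout should be checked against the ORES documentation.

    import requests

    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"  # English Wikipedia context
    revid = 123456789  # hypothetical revision ID

    resp = requests.get(ORES_URL, params={"models": "damaging", "revids": revid}, timeout=10)
    resp.raise_for_status()
    data = resp.json()

    # Roughly: a prediction plus class probabilities for each requested revision.
    score = data["enwiki"]["scores"][str(revid)]["damaging"]["score"]
    print(score["prediction"], score["probability"]["true"])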
The system was not designed, necessarily, to be user-facing, but rather as a system that others could build tools on top of to help with the editing process. Thus it was designed to feed some of its output into other existing and future tools.

Part of the goal of the system, according to the person who created it, Aaron Halfaker, was to hopefully make it easier to teach new editors how to be productive editors on Wikipedia. There was a concern that more and more of the site was controlled by an increasingly small number of volunteers, and new entrants were scared off, sometimes by the various arcane rules. Thus, rather than seeing ORES as a tool for automating content moderation, or as a tool for quality control over edits, Halfaker saw it more as a tool to help experienced editors better guide new, well-meaning, but perhaps inexperienced editors in ways to improve.
The motivation for Mr. Halfaker and the Wikimedia Foundation wasn't to smack contributors on the wrist for getting things wrong. "I think we who engineer tools for social communities have a responsibility to the communities we are working with to empower them," Mr. Halfaker said. After all, Wikipedia already has three AI systems working well on the site's quality control: Huggle, STiki and ClueBot NG.

"I don't want to build the next quality control tool. What I'd rather do is give people the signal and let them work with it," Mr. Halfaker said.

The artificial intelligence essentially works on two axes. It gives edits two scores: first, the likelihood that it's a damaging edit, and, second, the odds that it was an edit made in good faith or not. If contributors make bad edits in good faith, the hope is that someone more experienced in the community will reach out to them to help them understand the mistake.

"If you have a sequence of bad scores, then you're probably a vandal," Mr. Halfaker said. "If you have a sequence of good scores with a couple of bad ones, you're probably a good faith contributor."
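The "two axes" described above lend themselves to a simple triage: every edit carries a damaging probability and a good-faith probability, and tools can decide what to surface from that pair. The thresholds and rules in this sketch are made up for illustration; they are not Halfaker's, ORES', or any Wikipedia tool's actual policy.

    # Illustrative thresholds only.
    def triage_edit(damaging_prob: float, goodfaith_prob: float) -> str:
        """Decide what a review tool might surface for a single edit."""
        if damaging_prob < 0.3:
            return "leave it alone"
        if goodfaith_prob >= 0.5:
            return "likely a good-faith mistake -- a mentor could reach out"
        return "likely vandalism -- send to the patrol queue"

    def triage_editor(damaging_probs: list[float]) -> str:
        """A run of bad scores suggests a vandal; a few bad ones among good suggests a newcomer."""
        bad = sum(p >= 0.7 for p in damaging_probs)
        return "probable vandal" if bad > len(damaging_probs) / 2 else "probable good-faith contributor"

    print(triage_edit(0.8, 0.9))            # a bad edit, but probably well-intentioned
    print(triage_editor([0.9, 0.8, 0.85]))  # mostly bad scores -> probable vandal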
Decisions to be made by Wikipedia:
  • How useful is artificial intelligence in helping to determine the quality of edits?
  • How best to implement a tool like ORES?
    • Should it automatically revert likely bad edits?
    • Should it be used for quality control?
    • Should it be a tool to just highlight edits for volunteers to review?
  • What is likely to encourage more editors to help keep Wikipedia as up to date and clean of vandalism?
  • What data do you train ORES on?  How do you validate the accuracy of the training data?
Questions and policy implications to consider:
  • Are there issues when, because the AI has scored something, the tendency is to assume the AI must be correct? How do you make sure the AI is accurate?
  • Does AI help bring on new editors or does it scare away new editors?
  • Are there ways to prevent inherent bias from being baked into any AI moderation system, especially one trained by existing moderators?
Resolution: Halfaker, who later left Wikimedia to go to Microsoft Research, has published a few papers about ORES since it launched. In 2017, a paper by Halfaker and a few others noted that the tool was increasingly used over the previous three years.
The ORES service has been online since July 2015. Since then, usage has steadily risen as we've developed and deployed new models and additional integrations are made by tool developers and researchers. Currently, ORES supports 78 different models and 37 different language-specific wikis.

Generally, we see 50 to 125 requests per minute from external tools that are using ORES' predictions (excluding the MediaWiki extension that is more difficult to track). Sometimes these external requests will burst up to 400-500 requests per second.
One thing they noticed was that those using the ORES output often wanted to search through the metrics and set their own thresholds, rather than accepting the hard-coded ones in ORES:
Originally, when we developed ORES, we defined these threshold optimizations in our deployment configuration. But eventually, it became apparent that our users wanted to be able to search through fitness metrics to choose thresholds that matched their own operational concerns. Adding new optimizations and redeploying quickly became a burden on us and a delay for our users. In response, we developed a syntax for requesting an optimization from ORES in realtime using fitness statistics from the models' tests.
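The paper doesn't spell out that request syntax here, but the underlying idea -- let each tool pick its own cutoff from the model's fitness statistics instead of relying on one hard-coded default -- can be sketched like this. The precision/recall numbers are invented; only the selection logic is the point.

    # Invented (cutoff, precision, recall) rows, as a model's test statistics might report.
    FITNESS = [
        (0.30, 0.35, 0.95),
        (0.50, 0.60, 0.85),
        (0.70, 0.80, 0.60),
        (0.90, 0.95, 0.30),
    ]

    def pick_cutoff(min_precision: float) -> float:
        """Lowest cutoff meeting the tool's precision requirement, i.e. keeping the most recall."""
        for cutoff, precision, _recall in FITNESS:
            if precision >= min_precision:
                return cutoff
        return 1.0  # nothing qualifies: flag nothing automatically

    # An auto-revert bot wants high precision; a human review queue can accept much less.
    print(pick_cutoff(0.90))  # -> 0.90
    print(pick_cutoff(0.50))  # -> 0.50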
The project also appeared to be successful in getting built into various editing tools, and possibly inspiring ideas for new editing quality tools:
Many tools for counter-vandalism in Wikipedia were already available when we developed ORES. Some of them made use of machine prediction (e.g. Huggle, STiki, ClueBot NG), but most did not. Soon after we deployed ORES, many developers that had not previously included their own prediction models in their tools were quick to adopt ORES. For example, RealTime Recent Changes includes ORES predictions alongside their realtime interface and FastButtons, a Portuguese Wikipedia gadget, began displaying ORES predictions next to their buttons for quick reviewing and reverting damaging edits.

Other tools that were not targeted at counter-vandalism also found ORES predictions -- specifically that of article quality (wp10) -- useful. For example, RATER, a gadget for supporting the assessment of article quality, began to include ORES predictions to help their users assess the quality of articles and SuggestBot, a robot for suggesting articles to an editor, began including ORES predictions in their tables of recommendations.

Many new tools have been developed since ORES was released that may not have been developed at all otherwise. For example, the Wikimedia Foundation product department developed a complete redesign of MediaWiki's Special:RecentChanges interface that implements a set of powerful filters and highlighting. They took the ORES Review Tool to its logical conclusion with an initiative that they referred to as Edit Review Filters. In this interface, ORES scores are prominently featured at the top of the list of available features, and they have been highlighted as one of the main benefits of the new interface to the editing community.
In a later paper, Halfaker explored, among other things, concerns about how AI systems like ORES might reinforce inherent bias.
A 2016 ProPublica investigation [4] raised serious allegations of racial biases in a ML-based tool sold to criminal courts across the US. The COMPAS system by Northpointe, Inc. produced risk scores for defendants charged with a crime, to be used to assist judges in determining if defendants should be released on bail or held in jail until their trial. This exposé began a wave of academic research, legal challenges, journalism, and organizing about a range of similar commercial software tools that have saturated the criminal justice system. Academic debates followed over what it meant for such a system to be fair or biased. As Mulligan et al. discuss, debates over these essentially contested concepts often focused on competing mathematically-defined criteria, like equality of false positives between groups, etc.

When we examine COMPAS, we must admit that we feel an uneasy comparison between how it operates and how ORES is used for content moderation in Wikipedia. Of course, decisions about what is kept or removed from Wikipedia are of a different kind of social consequence than decisions about who is jailed by the state. However, just as ORES gives Wikipedia's human patrollers a score intended to influence their gatekeeping decisions, so does COMPAS give judges a similarly functioning score. Both are trained on data that assumes a knowable ground truth for the question to be answered by the classifier. Often this data is taken from prior decisions, heavily relying on found traces produced by a multitude of different individuals, who brought quite different assumptions and frameworks to bear when originally making those decisions.


Read more here

posted at: 12:00am on 31-Oct-2020
path: /Policy | permalink | edit (requires password)

0 comments, click here to add the first









RSS (site)  RSS (path)

ATOM (site)  ATOM (path)

Categories
 - blog home

 - Announcements  (1)
 - Annoyances  (0)
 - Career_Advice  (0)
 - Domains  (0)
 - Downloads  (3)
 - Ecommerce  (0)
 - Fitness  (0)
 - Home_and_Garden  (0)
     - Cooking  (0)
     - Tools  (0)
 - Humor  (1)
 - Notices  (0)
 - Observations  (1)
 - Oddities  (2)
 - Online_Marketing  (146)
     - Affiliates  (1)
     - Merchants  (1)
 - Policy  (2167)
 - Programming  (0)
     - Browsers  (1)
     - DHTML  (0)
     - Javascript  (5)
     - PHP  (0)
     - PayPal  (1)
     - Perl  (37)
          - blosxom  (0)
     - Unidata_Universe  (22)
 - Random_Advice  (1)
 - Reading  (0)
     - Books  (0)
     - Ebooks  (1)
     - Magazines  (0)
     - Online_Articles  (4)
 - Resume_or_CV  (1)
 - Reviews  (1)
 - Rhode_Island_USA  (0)
     - Providence  (1)
 - Shop  (0)
 - Sports  (0)
     - Football  (0)
          - Cowboys  (0)
          - Patriots  (0)
     - Futbol  (1)
          - The_Rest  (0)
          - USA  (1)
 - Windows  (1)
 - Woodworking  (0)


Archives
 -2020  December  (1)
 -2020  November  (1)
 -2020  October  (7)
 -2020  August  (3)
 -2020  July  (4)
 -2020  June  (3)
 -2020  May  (3)
 -2020  April  (6)
 -2020  March  (6)
 -2020  February  (3)
 -2020  January  (1)
 -2019  December  (4)


My Sites

 - Millennium3Publishing.com

 - SponsorWorks.net

 - ListBug.com

 - TextEx.net

 - FindAdsHere.com

 - VisitLater.com