e dot dot dot
a mostly about the Internet blog by

Unsecured Data Leak Shows Predictive Policing Is Just Tech-Washed, Old School Biased Policing

Furnished content.


Don't kid yourselves, techbros. Predictive policing is regular policing, only with confirmation bias built in. The only question for citizens is whether or not they want to pay tech companies millions to give them the same racist policing they've been dealing with since policing began.

Gizmodo (working with The Markup) was able to access predictive policing data stored on an unsecured server. The data they obtained reinforces everything that's been reported about this form of "smarter" policing, confirming its utility as a law enforcement echo chamber that allows cops to harass more minorities because that's what they've always done in the past.

Between 2018 and 2021, more than one in 33 U.S. residents were potentially subject to police patrol decisions directed by crime-prediction software called PredPol.

The company that makes it sent more than 5.9 million of these crime predictions to law enforcement agencies across the country—from California to Florida, Texas to New Jersey—and we found those reports on an unsecured server. Gizmodo and The Markup analyzed them and found persistent patterns.

Residents of neighborhoods where PredPol suggested few patrols tended to be Whiter and more middle- to upper-income. Many of these areas went years without a single crime prediction.

By contrast, neighborhoods the software targeted for increased patrols were more likely to be home to Blacks, Latinos, and families that would qualify for the federal free and reduced lunch program.
Targeted more? In some cases, that's an understatement. Predictive policing algorithms compound existing problems. If cops frequently patrolled mostly minority neighborhoods in the past because of biased pre-predictive policing habits, feeding that data into the system returns "predictions" that more crime will be committed in the areas where officers have historically been located most often.

The end result is what you see summarized above: non-white neighborhoods receive the most police attention, producing more data to feed the machine, which produces more outputs telling cops to do what they've been doing for decades, only more often. Run this feedback loop through enough iterations and it results in the continued infliction of misery on certain members of the population.
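A toy simulation makes the lock-in effect concrete. To be clear, this is my own illustration, not PredPol's actual algorithm: the neighborhood names, rates, and the proportional-allocation rule are all assumptions. Three neighborhoods share an identical true crime rate; they differ only in how much biased historical data seeds the system.

```python
TRUE_RATE = 0.1        # same underlying crime rate everywhere
PATROLS = 100          # patrols allocated per year
MIN_PATROLS = 10       # areas predicted "quiet" get no patrols at all

# Biased seed data: A was over-patrolled historically, C barely patrolled.
recorded = {"A": 30.0, "B": 10.0, "C": 2.0}

for year in range(20):
    total = sum(recorded.values())
    for hood in list(recorded):
        # "Prediction": patrols allocated in proportion to past records.
        allocated = PATROLS * recorded[hood] / total
        if allocated < MIN_PATROLS:
            allocated = 0.0    # no patrols -> no new data, ever
        # Patrols "find" incidents at the same true rate everywhere.
        recorded[hood] += allocated * TRUE_RATE

for hood, count in recorded.items():
    print(hood, round(count, 1))
```

Despite identical true crime rates, A keeps receiving three times B's patrols forever (the initial bias never washes out), and C never generates a single new data point, mirroring the neighborhoods that "went years without a single crime prediction."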
These communities weren’t just targeted more—in some cases, they were targeted relentlessly. Crimes were predicted every day, sometimes multiple times a day, sometimes in multiple locations in the same neighborhood: thousands upon thousands of crime predictions over years.
That's the aggregate. This is the personal cost.
Take the 111-unit Buena Vista low-income housing complex in Elgin. Six times as many Black people live in the neighborhood where Buena Vista is located than the city average.

Police made 121 arrests at the complex between Jan. 1, 2018, and Oct. 15, 2020, according to records provided by the city, many for domestic abuse, several for outstanding warrants, and some for minor offenses, including a handful for trespassing by people excluded from the complex.

Those incidents, along with 911 calls, fed the algorithm, according to Schuessler, the Elgin Police Department’s deputy chief.

As a result, PredPol’s software predicted that burglaries, vehicle crimes, robberies, and violent crimes would occur there every day, sometimes multiple times a day—2,900 crime predictions over 29 months.
That's not policing. That's oppression. Both law enforcement and a percentage of the general public still believe cops are capable of preventing crime, even though that has never been a feature of American law enforcement. PredPol software leans into this delusion, building on bad assumptions fueled by biased data to claim that data-based policing can convert police omnipresence into crime reduction. The reality is far more dire: residents in over-policed areas are confronted, detained, or rung up on bullshit charges with alarming frequency. And this data gets fed back into the software to generate more of the same abuse.

None of this seems to matter to the law enforcement agencies paying for this software with federal and local tax dollars. Only one law enforcement official -- Elgin (IL) PD's deputy police chief -- called the software "bias by proxy." For everyone else, it was law enforcement business as usual.

That also goes for the company supplying the software. PredPol -- perhaps recognizing some people might assume the "Pred" stands for "Predatory" -- rebranded to the much more banal "Geolitica" earlier this year. The logo swap doesn't change the underlying algorithms, which have accurately predicted that biased policing will result in more biased policing.

When confronted with the alarming findings of Gizmodo's and The Markup's examination of Geolitica predictive policing data, the company's first move was to claim (hilariously) that data found on unsecured servers couldn't be trusted.
PredPol, which renamed itself Geolitica in March, criticized our analysis as based on reports “found on the internet.”
Finding an unsecured server with data isn't the same thing as finding someone's speculative YouTube video about police patrol habits. What makes this bizarre accusation about the supposed inherent untrustworthiness of the data truly laughable is Geolitica's follow-up:
But the company did not dispute the authenticity of the prediction reports, which we provided, acknowledging that they “appeared to be generated by PredPol.”
Geolitica says everything is good. Its customers aren't so sure. Gizmodo received responses from 13 of the 38 departments listed in the data, and most sent back written statements saying they no longer used PredPol. That includes the Los Angeles Police Department, an early adopter that sent PredPol packing after discovering it was more effective at generating lawsuits and complaints from residents than at actually predicting or preventing crime.

This report -- which is extremely detailed and well worth reading in full -- shows PredPol is just another boondoggle, albeit one that's able to take away people's freedoms along with their tax dollars. Until someone's willing to build a system that doesn't treat all cop data as equally trustworthy, so-called "smart" policing is just putting a shiny tech sheen on old-school cop work that relies on harassing minorities to generate biased busywork for police officers.

Read more here

posted at: 12:00am on 28-Dec-2021
path: /Policy | permalink




California Police Officers' Bigoted Text Messages Have Just Undone Dozens Of Felony Cases



Racism and policing go hand-in-hand. It's been this way ever since police forces were created for the purpose of tracking down escaped slaves and returning them to their owners. Flash forward 150 years and, aside from the end of slavery itself, very little has changed.

Unsurprisingly, the advent of social media platforms and the increase in smartphone use have exposed the racism that still flows through far too many law enforcement agencies. Multiple investigations have been triggered by the exposure of bigoted communications between officers. It hasn't exactly resulted in a nationwide reckoning for racist officers, but it has at least seen a few bad apples tossed from barrels across the country.

If cops aren't worried about what happens to them -- as is evidenced by their carefree deployment of casual racism -- it's doubtful they're too worried about what happens to the general public. They claim to be the thin blue line standing between us and criminal chaos, but their racist words are erasing that line, allowing criminal suspects to return to the streets.

The Torrance Police Department in California is the epicenter of the latest garbage racist cop shitstorm. And rightfully so, given what's been uncovered there. Convictions and pending criminal cases are now in jeopardy because of officers texting each other things like this:

The caption read “hanging with the homies.” The picture above it showed several Black men who had been lynched.

Another photo asked what someone should do if their girlfriend was having an affair with a Black man. The answer, according to the caption, was to break “a tail light on his car so the police will stop him and shoot him.”

Someone else sent a picture of a candy cane, a Christmas tree ornament, a star for the top of the tree and an “enslaved person.” “Which one doesn’t belong?” the caption asked. “You don’t hang the star,” someone wrote back.
Documents obtained by the Los Angeles Times -- which include open investigations into some of these officers -- show the Torrance PD has a racism problem, one it is now forced to confront. It's one thing when it's officers being lousy human beings. That can be swept under the rug. It's quite another when dozens of criminal cases might be tossed because these officers have shown they can't be trusted on the streets, much less in court.
While no officers currently face criminal charges in direct relation to the text messages, the racist exchanges have led to the dismissal of at least 85 criminal cases involving the officers implicated in the scandal. County prosecutors had tossed 35 felony cases as of mid-November, and the Torrance city attorney’s office has dismissed an additional 50, officials said.
The bleeding is unlikely to stop there. Records from the District Attorney's office show the officers implicated in this new scandal are (or were) listed as potential witnesses in nearly 1,400 cases spanning the last decade. The LA County public defender's office has been swamped since this information came to light, receiving nearly 300 letters disclosing possible misconduct by officers during a single week in November.

The officers didn't just target Black people with these texts. They also joked about "gassing" Jews, assaulting (sexually or otherwise) LGBTQ persons, assaulting suspects, and lying during investigations.

It's possible this hatred and misconduct would never have been exposed. But two officers apparently felt untouchable enough to spray-paint a swastika on a vehicle they towed following a report of mail theft. An investigation into the actions of Officers Cody Weldin and Christopher Tomsic uncovered racist messages originating from Tomsic.
District attorney’s records reviewed by The Times showed Tomsic sent a slew of racist images and messages, including a picture of former President Reagan feeding a monkey with a caption stating Reagan “used to babysit [former President] Obama.”

Another picture he sent referred to an “African American baby” as a “Pet Niguana,” according to the records, and he also sent a message mocking the fact that he was the subject of a racial profiling complaint.

“So we totally racially profiled his ass, haha … Shopping at 7/11 while Black, he didn’t know the rules lol,” Tomsic wrote, according to the records.
That led to the exposure of more bigoted messages from cops. There are a total of 18 officers implicated. The names of thirteen of those officers are known and have been published by the LA Times. Several of those officers have been investigated for deploying excessive force or killing citizens. In almost every case, they've been cleared of wrongdoing.

Fortunately, the Torrance PD seems to be taking this seriously. It has given the DA's office 200 gigabytes of data covering officers' text messages. And the DA's office has been ensuring this information is passed on to the public defender's office, so both parties can determine which cases might be affected by these cops and their racist attitudes.

But cops don't just start sending racist texts to each other without feeling comfortable doing it. At some level, the Torrance PD made it clear this sort of behavior was, at minimum, ignored, if not actively tolerated. Now, these self-proclaimed protectors of the innocent have shown they only care about certain people, and are apparently willing to set criminals free rather than rein in their bigoted impulses.

Read more here

posted at: 12:00am on 28-Dec-2021
path: /Policy | permalink




An Unplanned, Ad-Hoc Collaboration Reveals The On-The-Ground Truth About China's Internment Camps For Uyghurs



The US, UK and Australia have all announced a diplomatic boycott of the Beijing Winter Olympics. The reason given for the move is human rights abuses in China, particularly in the Turkic-speaking region of Xinjiang. Techdirt has been writing about the Chinese authorities' use of technology to censor and carry out surveillance on the local Uyghur population, among others, for some years. One of the most controversial aspects of China's policy in the region is the use of huge detention camps. According to the authorities there, these camps are for educational and vocational training. Human rights organizations call them internment camps; some governments speak of "genocide" against the Uyghurs.

Given the highly sensitive nature of the topic, it is naturally hard to ascertain what is really happening in these camps. One solution is to use satellite imagery to peek inside China's tightly-controlled borders. Perhaps the best-researched investigation using this technique appeared on BuzzFeed News last year. The main article, and the four follow-ups, revealed the hitherto unknown scale of the internment camps, but were necessarily limited by their use of an extreme physical viewpoint -- the view from space.

A Chinese travel blogger going by the name of Guanguan decided to investigate some of the camps located by BuzzFeed News on the ground, by driving to them. The remarkable 20-minute video summary of his travels provides unique views of the camps, which complement the satellite imagery used by BuzzFeed News. Specifically, they show in some detail side views of the camps. This allows Guanguan to make reasonable guesses about which camps are indeed for education and training of some kind, and which ones are likely to be high-security internment camps.

The video is well worth watching in its entirety, since it provides probably our best glimpse yet of the reality of China's internment camps for Uyghurs and others (wisely, Guanguan seems to be out of China now).
In fact, the quality of the video images is such that IPVM, which specializes in covering the world of video surveillance, was able to recognize several of the security cameras used at the internment camps. There are a few cameras from the Chinese company Dahua Technology, but the majority identified come from Hikvision. This, Techdirt readers will recall, is the company whose director of cybersecurity and privacy said that IoT devices with backdoors "can't be used to spy on companies, individuals, or nations." IPVM reported that Hikvision "declined to comment" on these latest findings.

Its article noted that the visual evidence of Hikvision cameras being used in multiple internment camps, the result of an interesting unplanned, ad-hoc collaboration between Western journalists and a Chinese video blogger, is likely to make things even worse for a company already blacklisted by the US government.

Follow me @glynmoody on Twitter, Diaspora, or Mastodon.

Read more here


posted at: 12:00am on 10-Dec-2021
path: /Policy | permalink




Controversial Facial Recognition Company Calls Out Clearview, Demands It Ditch Its Database Of 10 Billion Scraped Images



Clearview has burned its bridges inside the facial recognition tech industry. Despite being largely morally malleable itself, the industry as a whole appears to have cut ties with CEO Hoan Ton-That's startup, which relies on more than 10 billion images scraped from the web to generate a database for its customers to match faces against.

The company played it fast and loose upon rollout, handing its product out to whoever seemed interested and inviting them to run searches against photos of friends and family members. The "give it a spin" invitations were handed out to government agencies as well, inviting cops to play image roulette with Clearview's ever-expanding database.

To date, the efficacy of Clearview's AI remains untested. Clearview did finally volunteer to have the National Institute of Standards and Technology (NIST) examine its product for accuracy, but only allowed it to run a one-to-one test -- the kind that would, for instance, allow a phone user to unlock their phone via facial recognition.

This is not the product Clearview sells. Clearview sells one-to-many matching, relying on images and other personal info scraped from the internet. This practice has alienated the internet. It has also gotten Clearview kicked out of two countries and served with multiple lawsuits. Consequently, other facial recognition tech companies are cursing Clearview both over and under their breath as the company somehow manages to remain viable in the face of months of negative press coverage.

Persona non grata status apparently applies even among other controversial facial recognition tech companies. AnyVision spent some time in the media spotlight for being used by the Israeli military to surveil Palestinians. It managed to take down Microsoft with it (albeit temporarily), exposing the tech giant's pinky-swear rejection of enabling abusive surveillance as the lip service it was.
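The one-to-one versus one-to-many distinction matters because the error profiles are completely different. A rough sketch of the two modes, using hypothetical embedding vectors and a made-up threshold (none of this is Clearview's or NIST's actual code):

```python
import math

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # arbitrary illustrative cutoff

def verify(probe, enrolled):
    """One-to-one: the only test Clearview let NIST run.
    Answers 'is this the enrolled person?' (e.g. phone unlock)."""
    return similarity(probe, enrolled) >= THRESHOLD

def identify(probe, database):
    """One-to-many: the product Clearview actually sells.
    Searches the entire database; the chance of a false match grows
    with database size, so a tiny per-pair error rate still produces
    spurious 'hits' against billions of scraped images."""
    hits = [(name, similarity(probe, emb)) for name, emb in database.items()]
    return sorted((h for h in hits if h[1] >= THRESHOLD), key=lambda h: -h[1])
```

A one-to-one benchmark says nothing about how often `identify` fingers the wrong person when the database holds 10 billion entries, which is why the NIST result Clearview volunteered for doesn't validate the product it markets.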
AnyVision doubled down on its first mistake(s) by threatening to sue news agencies that reported on this factual development.

AnyVision resurfaced a couple of years later as the company behind pervasive surveillance systems deployed by Texas public schools. AnyVision appeared to be good at what it did. It matched faces against each school's hot list and let administrators know any time those faces were detected. A little too good, perhaps. The system racked up 164,000 "hits" during its seven-day test run, returning as many as 1,100 matches for a single student.

AnyVision is back. Sort of. It has rebranded as Oosto, divested itself of some of its more problematic deployments, and is now taking shots at Clearview. Back in September, it argued that facial recognition companies selling to government agencies should offer customers a blank database -- one the end users could fill with mugshots and persons of interest. This was a clear shot across Clearview's bow. Clearview's main selling point is its scraped database of 10 billion images. AnyVision also forwarded this suggestion to a number of government bodies, including the UK's Surveillance Commissioner and NIST.

It's always good to be wary of private companies pleading for the government to regulate them more. It often means incumbents are looking for better ways to stave off competition by helping enact rigorous guidelines that upstarts can't afford to implement. And AnyVision's suggested "fixes" for facial recognition tech obviously aim to exclude Clearview from the government market in multiple countries, making it that much easier for the rebranded company to find customers in need of controversial tech that at least won't be as controversial as Clearview's.

The war of words continues, albeit behind a paywall:

Oosto (formerly Anyvision) has called out Clearview AI, stating "that biometrics should be deployed with empty databases" in a recent release, referring to the US company's practice of scraping billions of photos from social media to use in its database.
Clearview has offered a rebuttal, but until that one is scraped from its host, we won't know exactly how it defended itself. It seems clear even from this very truncated exchange of ideas that AnyVision believes Clearview is bad and Clearview believes it is good… making it a minority of one. One thing is certain: if Clearview gets out of the scraping business, it's less likely to make enemies of governments, private citizens, and its competitors in the market. But if Clearview hasn't changed its practices following a year of caustic press coverage, it's unlikely to do so just because a competitor is waving its revamped Gadsden flag in front of any regulatory agency that will listen.

Read more here

posted at: 12:00am on 09-Dec-2021
path: /Policy | permalink




