Ars Technica – “Civil rights activists have filed an official complaint against the Detroit police, alleging the department arrested the wrong man based on a faulty and incorrect match provided by facial recognition software—the first known complaint of this kind. The American Civil Liberties Union filed the complaint (PDF) Wednesday on behalf of Robert Williams, a Michigan man who was arrested in January based on a false positive generated by facial recognition software. “At every step, DPD’s conduct has been improper,” the complaint alleges. “It unthinkingly relied on flawed and racist facial recognition technology without taking reasonable measures to verify the information being provided” as part of a “shoddy and incomplete investigation.”…
The New York Times – “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit…A nationwide debate is raging about racism in law enforcement. Across the country, millions are protesting not just the actions of individual officers, but bias in the systems used to surveil communities and identify people for prosecution.

Facial recognition systems have been used by police forces for more than two decades. Recent studies by M.I.T. and the National Institute of Standards and Technology, or NIST, have found that while the technology works relatively well on white men, the results are less accurate for other demographics, in part because of a lack of diversity in the images used to develop the underlying databases. Last year, during a public hearing about the use of facial recognition in Detroit, an assistant police chief was among those who raised concerns. “On the question of false positives — that is absolutely factual, and it’s well-documented,” James White said. “So that concerns me as an African-American male.”

This month, Amazon, Microsoft and IBM announced they would stop or pause their facial recognition offerings for law enforcement. The gestures were largely symbolic, given that the companies are not big players in the industry. The technology police departments use is supplied by companies that aren’t household names, such as Vigilant Solutions, Cognitec, NEC, Rank One Computing and Clearview AI.

Clare Garvie, a lawyer at Georgetown University’s Center on Privacy and Technology, has written about problems with the government’s use of facial recognition. She argues that low-quality search images — such as a still image from a grainy surveillance video — should be banned, and that the systems currently in use should be tested rigorously for accuracy and bias. “There are mediocre algorithms and there are good ones, and law enforcement should only buy the good ones,” Ms. Garvie said. About Mr. Williams’s experience in Michigan, she added: “I strongly suspect this is not the first case to misidentify someone to arrest them for a crime they didn’t commit. This is just the first time we know about it.”…
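The “false positive” problem the Times and Assistant Chief White describe is usually quantified as a false match rate measured separately for each demographic group. Purely as an illustration (nothing below comes from the article, from NIST, or from any vendor’s system; the group labels, scores, and threshold are all hypothetical), this minimal sketch shows how that per-group rate is computed in a one-to-one verification test: count how many different-person comparisons score above the operating threshold.

```python
# Minimal sketch of measuring a per-group false match rate (FMR) in a
# one-to-one face verification test. All data are invented for illustration;
# group labels, scores, and the threshold are hypothetical assumptions.

from collections import defaultdict

# Each trial: (demographic_group, similarity_score, same_person?)
# Impostor pairs (same_person=False) scoring above the threshold are
# false matches -- the type of error at issue in the Williams case.
trials = [
    ("group_a", 0.91, False), ("group_a", 0.40, False), ("group_a", 0.95, True),
    ("group_b", 0.88, False), ("group_b", 0.86, False), ("group_b", 0.97, True),
    ("group_b", 0.82, False), ("group_a", 0.35, False),
]

THRESHOLD = 0.80  # hypothetical operating threshold

def false_match_rates(trials, threshold):
    """Return {group: false matches / impostor comparisons}."""
    false_matches = defaultdict(int)
    impostor_total = defaultdict(int)
    for group, score, same_person in trials:
        if same_person:
            continue  # only impostor (different-person) pairs count toward FMR
        impostor_total[group] += 1
        if score >= threshold:
            false_matches[group] += 1
    return {g: false_matches[g] / impostor_total[g] for g in impostor_total}

if __name__ == "__main__":
    for group, fmr in false_match_rates(trials, THRESHOLD).items():
        print(f"{group}: false match rate = {fmr:.2f}")
```

Comparing these per-group rates at a fixed threshold is how demographic differentials in facial recognition accuracy are typically reported.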
BuzzFeedNews – Boston Just Banned Its Government From Using Facial Recognition Technology – “Citing “racial bias in facial surveillance,” the Boston City Council voted unanimously to stop the use of facial recognition by public agencies…”