Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Category Archives: Legal Research

GAO Report Shows Government Uses Face Recognition with No Accountability, Transparency, or Training

EFF: “…The government watchdog issued yet another report this month – Facial Recognition Services: Federal Law Enforcement Agencies Should Take Actions to Implement Training, and Policies for Civil Liberties – about the dangerously inadequate and often nonexistent rules for how federal agencies use face recognition, underlining what we’ve already known: the government cannot be trusted with this flawed and dangerous technology. The GAO review covered seven agencies within the Department of Homeland Security (DHS) and the Department of Justice (DOJ), which together account for more than 80 percent of all federal officers and a majority of the face recognition searches conducted by federal agents. Across the agencies, GAO found that most law enforcement officers using face recognition have had no training before being given access to the powerful surveillance tool. No federal laws or regulations mandate specific face recognition training for DHS or DOJ employees, and Homeland Security Investigations (HSI) and the Marshals Service were the only agencies reviewed that now require training specific to face recognition. Though each agency has its own general policies on handling personally identifiable information (PII), such as the facial images used for face recognition, none of the seven agencies included in the GAO review fully complied with them.

Thousands of face recognition searches have been conducted by federal agents without training or policies. In the period GAO studied, at least 63,000 searches were conducted, and that number is a known undercount: a complete count of face recognition use is not possible because some systems used by the Federal Bureau of Investigation (FBI) and Customs and Border Protection (CBP) do not track the number of federal agents with access to face recognition, the number of searches conducted, or the reasons for the searches. Our faces are unique and mostly permanent – people don’t usually just get a new one – and face recognition technology, particularly when used by law enforcement and government, puts many of our important rights in jeopardy. Privacy, free expression, information security, and social justice are all at risk.”

FTC Sues Amazon for Illegally Maintaining Monopoly Power

Tech Crunch: “Attorneys general from 17 states joined the FTC in the lawsuit, alleging that Amazon leverages a “set of interlocking anticompetitive and unfair strategies” to maintain a monopoly. The states that signed onto the FTC’s action are Connecticut, Delaware, Maine, Maryland, Massachusetts, Michigan, Minnesota, New Jersey, New Hampshire, New Mexico, Nevada, New York, Oklahoma,… Continue Reading

Language Models, Plagiarism, and Legal Writing

Smith, Michael L., Language Models, Plagiarism, and Legal Writing (August 16, 2023). University of New Hampshire Law Review, Vol. 22 (forthcoming). Available at SSRN: https://ssrn.com/abstract=4542723. “Language models like ChatGPT are the talk of the town in legal circles. Despite some high-profile stories of fake ChatGPT-generated citations, many practitioners argue that language models are the way… Continue Reading

Can Sensitive Information Be Deleted From LLMs?

Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks. Vaidehi Patil, Peter Hase, Mohit Bansal: “Pretrained language models sometimes possess knowledge that we do not wish them to, including memorized personal information and knowledge that could be used to harm people. They can also output toxic or harmful text. To mitigate… Continue Reading

Is Your AI Model Going Off the Rails?

WSJ – “As generative AI creates new risks for businesses, insurance companies sense an opportunity to cover the ways AI could go wrong… Taking a page from cybersecurity insurance, which saw an uptick in the wake of major breaches several years ago, insurance providers have started taking steps into the AI space by offering financial protection… Continue Reading

September 2023 Issue of LLRX

LLRX Articles and Columns for September 2023: Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method – The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift – Dennis Kennedy; Keeping Up With Generative AI in the Law – Rebecca Fordon; AI in… Continue Reading

Cities Should Act NOW to Ban Predictive Policing

EFF: “Sound Thinking, the company behind ShotSpotter—an acoustic gunshot detection technology that is rife with problems—is reportedly buying Geolitica, the company behind PredPol, a predictive policing technology known to exacerbate inequalities by directing police to already massively surveilled communities. Sound Thinking acquired the other major predictive policing technology—Hunchlab—in 2018. This consolidation of harmful and flawed… Continue Reading

DOJ finally posted that “embarrassing” court doc Google wanted to hide

Ars Technica: “The US Department of Justice has finally posted what Judge Amit Mehta described at the Google search antitrust trial as an “embarrassing” exhibit that Google tried to hide from the public. The document in question contains meeting notes that Google’s vice president for finance, Michael Roszak, “created for a course on communications,” Bloomberg… Continue Reading

Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting

Via LLRX – Adding a ‘Group Advisory Layer’ to Your Use of Generative AI Tools Through Structured Prompting: The G-A-L Method – The emergence of Large Language Models (LLMs) in legal research signifies a transformative shift. Dennis Kennedy asks us to imagine a world where expert advice is at your fingertips, instantly available, tailored just… Continue Reading