Hagan, Margaret, Good AI Legal Help, Bad AI Legal Help: Establishing quality standards for response to people’s legal problem stories (November 21, 2023). Available at SSRN: https://ssrn.com/abstract=4640596
“Much has been made of generative AI models’ ability to perform legal tasks or pass legal exams, but a more important question for public policy is whether AI platforms can help the millions of people who are in need of legal help around their housing, family, domestic violence, debt, criminal records, and other important problems. When a person comes to a well-known, general generative AI platform to ask about their legal problem, what is the quality of the platform’s response? Measuring quality is difficult in the legal domain, because there are few standardized sets of rubrics to judge things like the quality of a professional’s response to a person’s request for advice. This study presents a proposed set of 22 specific evaluation criteria to evaluate the quality of a system’s answers to a person’s request for legal help with a civil justice problem. It also presents the review of these evaluation criteria by legal domain experts such as legal aid lawyers, courthouse self-help center staff, and legal help website administrators. The result is a set of standards, context, and proposals that technologists and policymakers can use to evaluate the quality of this specific legal help task in future benchmark efforts.”