GCN: Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability, a report by the AI Now Institute, a partnership between New York University, the American Civil Liberties Union and the Partnership on AI. [h/t Pete Weiss]
Why: As public agencies increasingly turn to automated processes and algorithms to make decisions, they need accountability frameworks that can address inevitable questions – from software bias to a system's impact on the community. The AI Now Institute's Algorithmic Impact Assessment (AIA) gives public agencies a practical way to assess automated decision systems and to ensure public accountability.
Proposal: Just as an environmental impact statement can increase agencies' sensitivity to environmental values and effectively inform the public of coming changes, an AIA aims to do the same for algorithms before governments put them to use. The process starts with a pre-acquisition review in which the agency, other public officials and the public at large are given a chance to review the proposed technology before the agency enters into any formal agreements. This process would include defining what the agency considers an "automated decision system," disclosing details about the technology and its use, evaluating the potential for bias and inaccuracy, and planning for third-party researchers to study the system after it becomes operational…