“Government leaders and staff who leverage algorithms are facing increasing pressure from the public, the media, and academic institutions to be more transparent and accountable about their use. Every day, stories come out describing the unintended or undesirable consequences of algorithms. Governments have not had the tools they need to understand and manage this new class of risk. GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit that cities can use to understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them.
We developed this because:
- We saw a gap. There are many calls to arms and plenty of policy papers, including a DataSF research paper, but nothing practitioner-facing with a repeatable, manageable process.
- We wanted an approach governments are already familiar with: risk management. By identifying and quantifying levels of risk, we can recommend specific mitigations.
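
To make the risk-management framing concrete, here is a minimal, hypothetical sketch (not part of the toolkit itself): it scores an algorithm on a few assumed risk dimensions, rolls them up conservatively, and maps the result to a suggested mitigation. The dimension names, scoring scale, and mitigations below are illustrative assumptions, not the toolkit's actual criteria.

```python
# Illustrative only: a toy risk-scoring roll-up, not the toolkit's method.
RISK_LEVELS = {1: "low", 2: "medium", 3: "high"}

# Hypothetical mitigations keyed by overall risk level.
MITIGATIONS = {
    "low": "Document the algorithm and review it periodically.",
    "medium": "Add human review of individual decisions and publish the methodology.",
    "high": "Require an external audit and an appeals process before deployment.",
}


def overall_risk(scores: dict) -> str:
    """Roll up dimension scores conservatively by taking the highest one."""
    return RISK_LEVELS[max(scores.values())]


if __name__ == "__main__":
    # Hypothetical assessment of an algorithm on a 1 (minimal) to 3 (significant) scale.
    assessment = {
        "impact_on_individuals": 3,
        "data_bias_exposure": 2,
        "explainability": 2,
    }
    level = overall_risk(assessment)
    print(f"Overall risk: {level}")
    print(f"Suggested mitigation: {MITIGATIONS[level]}")
```

The conservative roll-up (taking the maximum score) is one possible design choice; a real assessment would weigh dimensions against each other and be done as a team conversation rather than a script.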
Our goals for the toolkit are to:
- Elicit conversation.
- Encourage risk evaluation as a team.
- Catalyze proactive mitigation strategy planning.
We assumed:
- Algorithm use in government is inevitable.
- Data collection is typically a separate effort with different intentions from the analysis and use of it.
- All data has bias.
- All algorithms have bias.
- All people have bias. (Thanks #D4GX!)…”