The Register speaks to the folks behind the AI Incident Database:

False images of Donald Trump supported by made-up Black voters, middle-schoolers creating pornographic deepfakes of their female classmates, and Google’s Gemini chatbot failing to generate pictures of White people accurately. These are some of the latest disasters listed on the AI Incident Database – a website keeping tabs on all the different ways the technology goes wrong.

Initially launched as a project under the auspices of the Partnership on AI, a group that tries to ensure AI benefits society, the AI Incident Database is now a non-profit organization funded by Underwriters Laboratories – the largest and oldest (est. 1894) independent testing laboratory in the United States. It tests all sorts of products – from furniture to computer mice – and its website has cataloged over 600 unique automation and AI-related incidents so far.

“There’s a huge information asymmetry between the makers of AI systems and public consumers – and that’s not fair,” argued Patrick Hall, an assistant professor at the George Washington University School of Business who currently serves on the AI Incident Database’s Board of Directors. He told The Register: “We need more transparency, and we feel it’s our job just to share that information.”

The AI Incident Database is modeled on the CVE Program set up by the non-profit MITRE and on the National Highway Traffic Safety Administration’s website, which report publicly disclosed cyber security vulnerabilities and vehicle crashes, respectively. “Any time there’s a plane crash, train crash, or a big cyber security incident, it’s become common practice over decades to record what happened so we can try to understand what went wrong and then not repeat it.”