Vice: The new government guidelines present a framework for mitigating AI harms across a wide swath of society.

For better or worse, artificial intelligence (AI) tools are permeating all aspects of society, and the U.S. government wants to ensure that they don't break it. AI chatbots like ChatGPT are being used to complete school assignments and have even passed exams required to become a doctor or earn a business degree. Automated art tools like DALL-E and Stable Diffusion are upending the art world, to the collective outrage of human artists. Scientists have developed AI methods that can generate new enzymes. Media companies are turning to AI to cheaply generate news articles and quizzes.

Because of these impressive advancements, many of which threaten to put people out of a job, people are worried about the rise of AI. Machine learning programs are known to have baked-in racist and sexist biases, and the institutions that use them aren't any better: multiple innocent Black men have been arrested after being misidentified by facial recognition software. There are concerns about digital redlining and how algorithms might decide the fates of marginalized people in the criminal justice system. And then there are the bigger, existential risks: if AI is ever put in charge of an important sector of society, can we do anything to stop it if it breaks bad?

Now, the National Institute of Standards and Technology (NIST) has decided to take matters into its own hands. On January 26, the agency released a set of guidelines that, according to a press release, are for "voluntary use by organizations designing, developing, deploying or using AI systems to help manage the many risks of AI technologies."
The press release also notes that the framework was developed over 18 months at the direction of Congress, in collaboration with more than 240 organizations from the private and public sectors. The 40-plus-page document acknowledges AI's potential, noting that it can transform people's lives, drive inclusive economic growth, and support scientific advancements that improve conditions around the world.

However, it then pivots to discussing how the technology can also pose risks to society, and how the risks posed by AI systems are unique. For example, AI systems are complex, are often trained on data that can change over time, and are inherently socio-technical in nature. The document also outlines several types of risk: harm to people, harm to an organization, and harm to an ecosystem. On the last point, it specifically cites "harm to the global financial system, supply chain, or interrelated systems."