Bloomberg Opinion [alt free link] – Stable Diffusion’s text-to-image model amplifies stereotypes about race and gender — here’s why that matters:

“The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.

Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes — worse than those found in the real world.

This phenomenon is worth closer examination as image-generation models such as Stability AI’s Stable Diffusion, OpenAI’s Dall-E, and other tools like them rapidly morph from fun, creative outlets for personal expression into the platforms on which the future economy will be built. Text-to-image AI is already being used in applications from visual communication giant Adobe Inc. and chipmaker Nvidia Corp., and is starting to power the ads we watch.

The Republican National Committee used AI to generate images for an anti-Biden political ad in April depicting a group of mostly White border agents apprehending what it called “illegals” trying to cross into the country. The video, which looks real but is no more authentic than an animation, has reached close to a million people on social media. Some experts in generative AI predict that as much as 90% of content on the internet could be artificially generated within a few years.

As these tools proliferate, the biases they reflect aren’t just further perpetuating stereotypes that threaten to stall progress toward greater equality in representation — they could also result in unfair treatment. Take policing, for example. Using biased text-to-image AI to create sketches of suspected offenders could lead to wrongful convictions…”