Bloomberg Technology: The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.

Stable Diffusion generates images using artificial intelligence, in response to written prompts. Like many AI models, what it creates may seem plausible on its face but is actually a distortion of reality. An analysis of more than 5,000 images created with Stable Diffusion found that it takes racial and gender disparities to extremes, worse than those found in the real world.

This phenomenon is worth closer examination as image-generation models such as Stability AI’s Stable Diffusion and OpenAI’s Dall-E rapidly morph from fun, creative outlets for personal expression into the platforms on which the future economy will be built. Text-to-image AI is already being used in applications from visual communication giant Adobe Inc. and chipmaker Nvidia Corp., and is starting to power the ads we watch. The Republican National Committee used AI to generate images for an anti-Biden political ad in April, depicting a group of mostly White border agents apprehending what it called “illegals” trying to cross into the country. The video, which looks real but is no more authentic than an animation, has reached close to a million people on social media. Some experts in generative AI predict that as much as 90% of content on the internet could be artificially generated within a few years.

As these tools proliferate, the biases they reflect aren’t just perpetuating stereotypes that threaten to stall progress toward greater equality in representation; they could also result in unfair treatment. Take policing, for example: using biased text-to-image AI to create sketches of suspected offenders could lead to wrongful convictions.
“We are essentially projecting a single worldview out into the world, instead of representing diverse kinds of cultures or visual identities,” said Sasha Luccioni, a research scientist at AI startup Hugging Face who co-authored a study of bias in text-to-image generative AI models.

To gauge the magnitude of biases in generative AI, Bloomberg used Stable Diffusion to generate thousands of images related to job titles and crime. We prompted the text-to-image model to create representations of workers for 14 jobs: 300 images each for seven jobs that are typically considered “high-paying” in the US and seven that are considered “low-paying,” plus three categories related to crime. We relied on Stable Diffusion for this experiment because its underlying model is free and transparent, unlike Midjourney, Dall-E and other competitors.
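Bloomberg has not published the code behind this experiment, but because the Stable Diffusion weights are open, a run of this shape can be reproduced in a few lines. The following is a minimal sketch using Hugging Face's diffusers library; the model ID, prompt template and job list are illustrative assumptions, not Bloomberg's actual methodology:

# Illustrative sketch only: Bloomberg's generation code is not public.
# Uses the open Stable Diffusion v1.5 weights via Hugging Face's
# diffusers library; prompt wording and job list are assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

jobs = ["doctor", "judge", "CEO", "dishwasher"]  # illustrative subset of the 14
for job in jobs:
    for i in range(300):  # the article reports 300 images per job title
        # Each call returns a pipeline output whose .images list holds PIL images.
        image = pipe(f"a photo of a {job}").images[0]
        image.save(f"{job.replace(' ', '_')}_{i:03d}.png")

The resulting images can then be annotated (by classifiers or human raters) for perceived gender and skin tone, and the distributions compared against real-world labor statistics, which is the kind of analysis the article describes.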