Vox: “…In just a few short years, AI-generated images have come a long way. In a more innocent time (2015), Google released “DeepDream,” which used Google’s artificial neural network programs — that is, artificial intelligence that’s been trained to learn in a way that mimics a human brain’s neural networks — to recognize patterns in images and make new images from them. You’d feed it an image, and it would spit back something that resembled it but with a bunch of new images woven in, often things approximating eyeballs and fish and dogs. It wasn’t meant to create images so much as to show, visually, how the artificial neural networks detected patterns. The results looked like a cross between a Magic Eye drawing and my junior year of college. Not particularly useful in practice, but pretty cool (or creepy) to look at.

These programs got better and better, training on billions of images that were usually scraped from the internet without their original creators’ knowledge or permission. In 2021, OpenAI released DALL-E, which could make photorealistic images from text prompts. It was a “breakthrough,” says Yilun Du, a PhD student at MIT’s Computer Science and Artificial Intelligence Laboratory who studies generative models.

Soon, not only was photorealistic AI-generated art shockingly good, but it was also very much available. OpenAI’s DALL-E 2, Stability AI’s Stable Diffusion, and Midjourney were all released to the general public in the second half of 2022. The expected ethical concerns followed, from copyright issues to allegations of racist or sexist bias to the possibility that these programs could put a lot of artists out of work, to what we’ve seen more recently: convincing deepfakes used to spread disinformation.

And while the images are very good, they still aren’t perfect. But given how quickly this technology has advanced so far, it’s safe to assume that we’ll soon be hitting a point where AI-generated images and real images are nearly impossible to tell apart.”