On Wednesday, OpenAI announced DALL-E 3, the latest version of its AI image synthesis model that features full integration with ChatGPT. DALL-E 3 renders images by closely following complex descriptions and handling in-image text generation (such as labels and signs), which challenged earlier models. Currently in research preview, it will be available to ChatGPT Plus and Enterprise customers in early October.
Like its predecessor, DALL-E 3 is a text-to-image generator that creates novel images based on written descriptions called prompts. Although OpenAI released no technical details about DALL-E 3, the AI model at the heart of previous versions of DALL-E was trained on millions of images created by human artists and photographers, some of them licensed from stock websites like Shutterstock. It’s likely DALL-E 3 follows the same formula, but with new training techniques and more computational training time.
Judging by the samples provided by OpenAI on its promotional blog, DALL-E 3 appears to be a radically more capable image synthesis model than anything else available in terms of following prompts. While OpenAI’s examples have been cherry-picked for their effectiveness, they appear to follow the prompt instructions faithfully and convincingly render objects with minimal deformations. Compared to DALL-E 2, OpenAI says that DALL-E 3 refines small details like hands more effectively, creating engaging images by default with “no hacks or prompt engineering required.”
You are missing the bigger picture: This took SECONDS, no effort on my part, and it was a first try, using technology that is a little less than three years old at this point. I can generate new images on any topic I want, instantly. This stuff is already incredible today and is getting better rapidly.
Meanwhile here are examples of glorious human art:
Human art is full of mistakes. The best of the best human art has “quality and meaning”; the average, not really. Stuff like “Somehow, Palpatine returned” was written by humans. A lot of garbage slips through, even in projects with so much money behind them that there is really no excuse. I’ll take a few extra AI-generated fingers, which are trivial to fix, over that trash.
Here is some of the box art recreated with AI, again zero effort, first try: https://imgur.com/a/kHcwv4j
And you can remix it at will: https://lemmy.kya.moe/imgproxy?src=i.imgur.com%2fy38UPX6.jpg
Netflix is already serving personalized thumbnails, not with AI, but that’s exactly the kind of thing I expect AI to be used for really soon, if it isn’t already in some capacity.
Nobody cares about who makes the art outside of some art historians. Every movie, TV show, or game has dozens or even hundreds of people involved; you have no idea who was responsible for what or what was going on behind the scenes. All you see is the result, and you either like it or you don’t. “The Death of the Author” and all that.