
No elephants: Breakthroughs in image generation

DALL·E 2 currently has a very limited ability to render legible text. When it does, text may sometimes be nonsensical and could be misinterpreted. It’s important to track this capability as it develops, as image generative models may eventually develop novel text generation capabilities via rendering text.
dalle-2-preview/system-card.md at main · openai/dalle-2-preview
Cartoons
I've always wanted to draw cartoons but never had the skill. Now I can quickly prototype visual sequences. While human cartoonists bring unique creativity that AI can't replicate, this tech allows anyone to experiment.
To test continuity, I generated multiple versions of the same cartoon in the same ChatG…
Wonder Tools 🎨 7 ways to use ChatGPT's new image AI
They then start with a randomized background image that looks like old-fashioned television static, and use a process called diffusion to turn random noise into a clear image by gradually refining it over multiple steps. Each step removes a bit more noise based on the text description, until a realistic image emerges.
Ethan Mollick • Co-Intelligence
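The denoising loop Mollick describes can be sketched in a few lines. This is a toy illustration, not a real diffusion model: the learned noise predictor is replaced by a stand-in that knows the target directly, and a simple gradient image stands in for what the text conditioning would describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the image the text prompt describes (a simple gradient).
target = np.linspace(0.0, 1.0, 16).reshape(4, 4)

# Start from pure noise, like old-fashioned television static.
image = rng.normal(size=target.shape)

# Each step removes a bit more noise, nudging the image toward
# what the (here, stand-in) noise predictor says doesn't belong.
steps = 50
for t in range(steps):
    predicted_noise = image - target  # a real model would *learn* this estimate
    image = image - (1.0 / (steps - t)) * predicted_noise

# After the final step the static has been fully refined into the target.
print(np.abs(image - target).max())
```

The key idea survives the simplification: the image is never drawn directly, it is recovered by repeatedly subtracting an estimate of the remaining noise.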
Finally, once these models become faster, you can imagine a truly generative UI, where the model produces the next frame of the app you are using based on events sent to the LLM (which can do all the normal things like using tools, thinking, etc). However, I also believe that diffusion models can do some of this, in a much faster way.
4o Image Generation | Hacker News
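The event-driven loop in that comment can be made concrete with a minimal sketch. Everything here is hypothetical: `next_frame` and the stub model stand in for whatever LLM or diffusion backend would actually render the frame; no vendor API is assumed.

```python
from dataclasses import dataclass


@dataclass
class Event:
    """A UI event forwarded to the model (click, keypress, etc.)."""
    kind: str
    payload: dict


def next_frame(model, event: Event, state: dict) -> str:
    """Ask the model for the next frame of the app, given the latest event.

    `model` is any callable taking a prompt string; in a real system it
    could also use tools, reason over history, and so on.
    """
    prompt = f"state={state} event={event.kind}:{event.payload}"
    return model(prompt)


# A stub model so the loop is runnable without any real API.
def stub_model(prompt: str) -> str:
    return f"<frame rendered for {prompt}>"


frame = next_frame(stub_model, Event("click", {"x": 10, "y": 20}), {"page": "home"})
print(frame)
```

The interesting design question the comment raises is latency: an LLM regenerating the UI per event is slow today, which is why the commenter suggests diffusion models as a faster path to the same loop.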
Visual Electric
visualelectric.com