GitHub - varunshenoy/opendream: An extensible, easy-to-use, and portable diffusion web UI 👨‍🎨
The Illustrated Stable Diffusion – Jay Alammar – Visualizing machine learning one concept at a time.
This is a gentle introduction to how Stable Diffusion works.
How To Master Lighting In Midjourney V5 | by Paul DelSignore | The Generator | Mar, 2023 | Medium
Amazing Lighting Effects Introduced In MidJourney V5.
Stable Diffusion is a really big deal
Stable Diffusion is a new “text-to-image diffusion model” that was released to the public by Stability.ai six days ago, on August 22nd.
It’s similar to models like OpenAI’s DALL-E, but with one crucial difference: they released the whole thing.
You can try it out online at beta.dreamstudio.ai (currently for free). Type in a text prompt and the model will generate an image.
Stable Diffusion Made Copying Artists and Generating Porn Harder - Slashdot
A lot of people commenting think that removing nudity from the training set is fine, as it just makes the model safe for work, but it actually hurts the generation of non-nude humans as well. Significantly. I use the f222 model as my main general-purpose model, because it generates better clothed humans than SD1.5. The f222 model is based on extending SD1.5 to have more knowledge of nudity (so the total opposite of the direction SD2.0 went). This actually makes f222 better at making humans IN GENERAL. The f222 model knows a lot more about the shape of humans. It's not perfect; what f222 needs is just even more body types and ugly folks, but it's not completely lacking in the ability to generate those either. It definitely does have a bias towards pretty people, but it is nowhere near as overwhelming as in some of the other models.
Here's a good one I use a lot:
https://stablediffusion.fr/artists [stablediffusion.fr]
Another example:
https://proximacentaurib.notion.site/e28a4f8d97724f14a784a538b8589e7d?v=ab624266c6a44413b42a6c57a41d828c [notion.site]
4.2 Gigabytes, or: How to Draw Anything - ⌨️🤷🏻‍♂️📷
Later that night, I spent a few hours creating the following image:
How to Generate Images with Stable Diffusion in Seconds, for Pennies
The authors of Stable Diffusion, a latent text-to-image diffusion model, have released the weights of the model, and it runs quite easily and cheaply on standard GPUs. This article shows how you can generate images for pennies (it costs about 65¢ to generate 30–50 images).
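A quick sanity check of the article's "pennies per image" claim, working from its own numbers (roughly 65¢ for a batch of 30–50 images):

```python
# Cost figures quoted in the article: about $0.65 per batch of 30-50 images.
batch_cost = 0.65

for n_images in (30, 50):
    per_image = batch_cost / n_images
    print(f"{n_images} images -> ${per_image:.3f} per image")

# 30 images -> $0.022 per image
# 50 images -> $0.013 per image
```

So the per-image cost lands between roughly 1.3¢ and 2.2¢, i.e. literally pennies.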