Developers of Stable Diffusion Release the AI's Code

The AI's checkpoints were made available for academic research purposes upon request.

Researchers Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer have released the code of Stable Diffusion, the team's text-to-image diffusion model capable of generating detailed images from text prompts and rough sketches, and provided thorough instructions on how to install and use the tool.

In case you missed it, the AI model was trained on 512x512 images from the LAION-5B dataset and uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With an 860M-parameter UNet and a 123M-parameter text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB of VRAM.
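For reference, text-to-image sampling in the released repository is driven by its txt2img script. The sketch below is a minimal example of calling it from Python; it assumes the repository is cloned, its environment is set up, and a checkpoint obtained on request is saved under the placeholder path "model.ckpt". The flag names follow the repo's scripts/txt2img.py, but check the project's README for the exact interface.

# Minimal sketch: invoke Stable Diffusion's text-to-image sampling script.
# Run from the root of the cloned repository with its environment active.
import subprocess

subprocess.run(
    [
        "python", "scripts/txt2img.py",
        "--prompt", "a photograph of an astronaut riding a horse",
        "--plms",                       # PLMS sampler, as suggested in the README
        "--ckpt", "model.ckpt",         # placeholder path to the requested checkpoint
        "--H", "512", "--W", "512",     # the model's native training resolution
        "--n_samples", "1",             # keep VRAM use low on a ~10GB GPU
    ],
    check=True,
)

Generated images are written to the script's output directory (outputs/txt2img-samples by default in the released code).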

At the moment, Stable Diffusion's checkpoints are only available for academic research purposes upon request. According to the team, this precaution was taken to prevent misuse and harm. In the future, however, the team plans to share a public release "with a more permissive license that also incorporates ethical considerations."

You can learn more and request the checkpoints here. Also, don't forget to join our Reddit page and our Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
