Stability AI has launched Stability for Blender, an official Stable Diffusion plug-in that brings a suite of generative AI tools to Blender’s free 3D modeling software, providing a convenient way for 3D artists to experiment with this technology. While third-party plug-ins provide comparable features, Stability AI’s implementation is expected to be more refined, with the company committing to frequent updates.
With the add-on, Blender artists can generate images from text descriptions directly within the software, much like the Stable Diffusion text-to-image generator. The add-on also lets users experiment with different styles for a project by working from existing renders, without fully remodeling the scene.
Similarly, textures can be produced from text prompts combined with reference images. There's even an option to generate animations from pre-existing renders, though the quality of the results is somewhat questionable, as Stability's own examples show. Still, it's a fun way to roughly turn your projects into video.
Stability for Blender is completely free and doesn't require any additional software or a dedicated GPU. To use Stable Diffusion within Blender, you only need the latest version of the software, an internet connection, and a Stability API key, which is available directly from Stability AI. Installing the plug-in is relatively straightforward, and Stability has put together a set of tutorials to walk you through its features.
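Because the add-on runs generation through Stability's cloud API rather than locally, every request boils down to an authenticated HTTP call. The sketch below assembles (but does not send) a text-to-image request in the shape of Stability AI's public REST API; the endpoint path, engine name, and field names here are assumptions based on that public API, not the plug-in's actual internals, and the API key is a placeholder.

```python
import json

API_HOST = "https://api.stability.ai"
ENGINE_ID = "stable-diffusion-v1-5"  # assumed engine identifier


def build_text_to_image_request(prompt: str, api_key: str) -> dict:
    """Assemble a hypothetical text-to-image request payload.

    Returns the URL, headers, and JSON body a client would POST;
    nothing is sent over the network here.
    """
    return {
        "url": f"{API_HOST}/v1/generation/{ENGINE_ID}/text-to-image",
        "headers": {
            # The API key from Stability AI is passed as a bearer token.
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "text_prompts": [{"text": prompt}],
            "width": 512,
            "height": 512,
            "samples": 1,
        }),
    }


req = build_text_to_image_request("a low-poly fox render", "sk-EXAMPLE-KEY")
print(req["url"])
```

Keeping request construction separate from transport like this makes the payload easy to inspect or test without an internet connection or a valid key.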
It’s important to note that Stability for Blender only produces 2D images, not 3D models. While 3D generative AI is still in its early stages, systems such as Google’s DreamFusion, Nvidia’s Get3D, and OpenAI’s Point-E can already generate 3D models.