UK/California-based tech startup Stability AI has launched Stable Diffusion Reimagine, an image-to-image AI that generates brand-new pictures inspired by one uploaded by a user — and it’s going to be open-sourced.
The background: 2022 saw the release of a number of impressive text-to-image AIs — programs that can create images based on text prompts — with one of the most popular examples being Stability AI’s Stable Diffusion.
A major reason for this popularity was that, unlike DALL-E 2 and most other text-to-image AIs, Stable Diffusion was open source — users could access the code and make unique models, such as ones that only generated Pokémon or artwork in their personal style.
What’s new? Stability AI has now announced the release of a new tool called Stable Diffusion Reimagine; instead of generating new images based on text prompts, it creates ones inspired by uploaded images.
Stable Diffusion already had a feature called “img2img” that allowed users to upload images along with a text prompt to guide the AI. Reimagine seems to be a simplification of that feature, eliminating the option of written guidance.
“Stable Diffusion Reimagine…allows users to generate multiple variations of a single image without limits,” writes Stability AI. “No need for complex prompts: Users can simply upload an image into the algorithm to create as many variations as they want.”
Stability AI has already made Stable Diffusion Reimagine available online and says it plans to make the code available on its GitHub page “soon.”
Results may vary: Stability AI lists several use cases for Reimagine, noting that creative agencies might use it to generate options for clients, while web designers might upload a photo to get similar alternatives to use on their sites.
Based on our initial experience with the tool, though, its outputs don’t seem quite ready for such uses — when we uploaded the source image from the example above, the three pictures Reimagine initially generated were far less realistic-looking and had odd proportions.
Stability AI does note the tool’s limitations, letting users know they might get some less impressive results mixed in with the amazing ones, but after a half-dozen attempts with the same source image, we still didn’t get one that looked entirely realistic.
The bottom line: Stable Diffusion Reimagine could be a valuable source of inspiration for people who are already somewhat artistic — they might take one of the outputs above and recreate it without the wonky footboard or overextended curtain rod, for example.
Once the code is released, we may start to see more capable models trained on narrower datasets — if someone created a version that only generated bedroom interiors, for example, it might be better at getting them right.
In the meantime, there will no doubt be countless people who just want to tinker with Reimagine — in which case seeing what sort of mistakes it makes is part of the fun.