New AI can “reimagine” your pictures in infinite ways

Stability AI says it plans to open source the new tool soon.

UK- and California-based tech startup Stability AI has launched Stable Diffusion Reimagine, an image-to-image AI that generates brand-new pictures inspired by one uploaded by a user — and it’s going to be open sourced.

The background: 2022 saw the release of a number of impressive text-to-image AIs — programs that can create images based on text prompts — with one of the most popular examples being Stability AI’s Stable Diffusion.

A major reason for this popularity was that, unlike DALL-E 2 and most other text-to-image AIs, Stable Diffusion was open source — users could access the code and make unique models, such as ones that only generated Pokémon or artwork in their personal style.

What’s new? Stability AI has now announced the release of a new tool called Stable Diffusion Reimagine; instead of generating new images based on text prompts, it creates ones inspired by uploaded images.

Stable Diffusion already had a feature called “img2img” that let users upload an image along with a text prompt to guide the AI. Reimagine appears to be a simplification of that feature, eliminating the option of written guidance.

“Stable Diffusion Reimagine…allows users to generate multiple variations of a single image without limits,” writes Stability AI. “No need for complex prompts: Users can simply upload an image into the algorithm to create as many variations as they want.”

Stability AI has already made Stable Diffusion Reimagine available online and says it plans to make the code available on its GitHub page “soon.”

[Image: a grid of four bedroom images] Stability AI says Reimagine used the source image (upper-left) to generate the other images. Credit: Stability AI

Results may vary: Stability AI lists several use cases for Reimagine, noting that creative agencies might use it to generate options for clients, while web designers might upload a photo to get similar alternatives to use on their sites.

Based on our initial experience with the tool, though, its outputs don’t seem quite ready for such uses — when we uploaded the same source image used in the example above, the three pictures Reimagine initially generated were far less realistic and had odd proportions.

Stability AI does acknowledge the tool’s limitations, warning users that they may get some less impressive results mixed in with the amazing ones — but even after a half-dozen attempts with the same source image, we didn’t get a single output that looked entirely realistic.

[Image: a grid of four bedroom images] After the image in the upper-left was uploaded to Stable Diffusion Reimagine, the AI produced the other three images. Credit: Stability AI / Freethink

The bottom line: Stable Diffusion Reimagine could be a valuable source of inspiration for people who are already somewhat artistic — they might take one of the outputs above and recreate it without the wonky footboard or overextended curtain rod, for example. 

Once the code is released, we may start to see more capable models trained on narrower datasets — if someone created a version that only generated bedroom interiors, for example, it might be better at getting them right.

In the meantime, there will no doubt be countless people who just want to tinker with Reimagine — in which case seeing what sort of mistakes it makes is part of the fun.

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].
