Meta's AI System Make-A-Scene Generates Images from Sketches & Text

Yet another great tool for generative art.

Meta has presented a new AI system, Make-A-Scene, which generates art from your sketch and a text prompt.

Text-to-image systems are no novelty these days, with DALL-E, Midjourney, and Disco Diffusion producing striking results from just a couple of words. Make-A-Scene is closer to Artbreeder's newest tool, Collage, which mixes images and drawings with text prompts.

Meta sees the value of Make-A-Scene in its ability to control composition, object sizes, and other nuances that are otherwise hard to predict.

"But text prompts, like 'a painting of a zebra riding a bike,' generate images with compositions that can be difficult to predict. The zebra might be on the left side of the image or the right, for example, or it might be much bigger than the bicycle or much smaller, or the zebra and bicycle may be facing the camera or facing sideways," the researchers say.

Of course, it can also generate its own scene layout with text-only prompts. According to the research team, the model focuses on learning key aspects of the imagery that are more likely to be important to the creator, such as objects or animals.

Make-A-Scene was trained using publicly available datasets "to help the broader AI community analyze, study, and understand the existing biases of the system."

For now, Meta is sharing access to the Make-A-Scene demo with a handful of AI artists and Meta employees. The tool will be made available to a broader audience later, but there is no public release date yet.

Meanwhile, you can read more about Make-A-Scene in Meta's announcement.
