Photoshop is getting an infusion of generative AI today with a number of Firefly-based features. These let users extend images beyond their borders with Firefly-generated backgrounds, add objects to images with generative AI, and use a new generative fill feature that removes objects far more precisely than the previously available content-aware fill.
For now, these features are only available in the beta version of Photoshop. Adobe is also making some of these capabilities available to Firefly beta users on the web (by the way, Firefly users have now created more than 100 million images on the service).
The cool thing here is that this integration lets Photoshop users describe the kind of image or object they want Firefly to create with natural language text prompts. As with all generative AI tools, the results can be unpredictable. By default, Adobe provides three variations for each prompt, but unlike in the Firefly web app, there's currently no option to iterate on any of these results to see similar variations.
To do all this, Photoshop sends parts of a given image to Firefly – not the whole image, though the company is experimenting with that too – and creates a new layer for the results.
Maria Yap, Vice President of Digital Imaging at Adobe, gave me a demo of these new features ahead of today's announcement. As with anything generative AI, it's often hard to predict what the model will deliver, but some results were surprisingly good. When asked to create a puddle under a running corgi, for example, Firefly seemed to take the overall lighting of the image into account and even generated a realistic reflection. Not every result worked equally well – a bright purple puddle was also an option – but the model seems to do a pretty good job of adding objects and, especially, of extending existing images beyond their frame.
Given that Firefly is trained on the photos available in Adobe Stock (as well as other commercially safe images), it may come as no surprise that it performs particularly well with landscapes. Like most generative image models, though, Firefly struggles with text.
Adobe also worked to ensure that the model returns safe results. This is partly due to the training set used, but Adobe has implemented additional safeguards as well. “We combined that with a series of fast technical things that we know,” explains Yap. “We exclude certain terms, certain words that we think are not safe. And then we’re even looking at a different hierarchy of ‘if Maria selects an area with a lot of skin in it, maybe right now — and you’ll even see warning messages sometimes — we won’t prompt on that one, just because it’s unpredictable.’ We just don’t want to go to a place where we don’t feel comfortable.”
As with all Firefly images, Adobe automatically applies its Content Credentials to any image created with these AI-based features.
Many of these features would also be very useful in Lightroom. Yap agreed, and while she didn't want to commit to a timeline, she did confirm that the company plans to bring Firefly to its photo management tool as well.