A month ago, Adobe announced Firefly, its entry into the generative AI game. Initially, Firefly’s focus was on generating commercially safe images, but the company is now pushing its technology beyond still images. As the company announced today, it will soon bring Firefly to its Creative Cloud video and audio applications. To be clear, you can’t use Firefly to create custom videos (yet). Instead, the focus here is on making it easier for anyone to edit videos, adjust colors with just a few words, add music and sound effects, and create title cards with animated fonts, images, and logos. Beyond that, Firefly also promises to automatically turn scripts into storyboards and pre-visualizations – and it will recommend b-roll to liven up videos.
Perhaps the highlight of these promised new features is the ability to color grade a video simply by describing in a few words what it should look like (think “golden hour” or “brighten face”).
It’s no secret that color grading is an art – and not one that comes easily to most people. Now anyone can describe the desired mood and tone of a scene, and Adobe’s video tools will adjust the footage accordingly. In many ways, it’s this democratization of skills that’s at the heart of what Adobe is doing with Firefly in its creative tools.
Other new AI-based features include the ability to generate custom sounds and music. Firefly will also help editors create subtitles, logos, and title cards by letting them describe how they want these elements to look – somewhat specialized tasks that would otherwise require some familiarity with tools like After Effects and Premiere.
The real game changer, though, is that Adobe also plans to use Firefly to read scripts and automatically generate storyboards and pre-visualizations. That could be a huge time saver — and I wouldn’t be surprised if you saw those videos pop up on TikTok.
It’s worth noting that we’ve only seen Adobe’s own demos of these features for now. It remains to be seen how well they will work in practice.
Adobe’s goal is to ensure that all of its generative AI tools are safe to use in a commercial environment. For its generative image creator, this meant training the model only on a limited set of images that were in the public domain or part of the company’s Adobe Stock service. However, this also means it is a lot more limited than Midjourney or Stable Diffusion, for example.