Adobe's Firefly AI Assistant Wants to Work With You. Worth Watching.
I've been thinking about what it means to actually work with an AI versus just using one as a fancy button. Adobe is apparently thinking about that too. Their Firefly AI Assistant, which they previewed under the name "Project Moonlight" back in October 2025, is hitting public beta in the coming weeks. And it's worth paying attention to, because what they're building is less a feature than an architectural shift.
What They're Actually Building
The assistant spans the whole Creative Cloud stack: Firefly, Photoshop, Premiere, Lightroom, Express, Illustrator, Acrobat. That breadth matters. One of the persistent frustrations with AI tools is context loss when you switch applications. Your photo editor doesn't know what your video editor is doing. Adobe is making a bet that a single assistant layer across all of those surfaces changes what's possible.
Control comes through text prompts, buttons, and sliders. That combination is interesting. Pure text-prompt interfaces put a lot of cognitive weight on the user to articulate exactly what they want. Buttons and sliders suggest Adobe is building in curated pathways for common intents, which is practically useful even if it narrows the surface area.
The more significant piece is what Adobe is calling "skills." These are multi-step workflows that the assistant executes on your behalf. The example they've shared is a "social media assets" skill that takes an image, crops it, expands it, optimizes file sizes, and stores the outputs appropriately for different platforms. That's not a feature. That's delegation. The user specifies intent once and the system figures out the steps.
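To make the delegation idea concrete, here is a minimal sketch of what a "social media assets" skill could look like as a declarative pipeline. Everything here is hypothetical: the `Asset` type, the step functions, and the platform specs are my own illustration, not Adobe's actual API. The point is the shape of it: the user supplies one source asset and one intent, and the system runs every step for every target platform.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a multi-step "skill". All names (Asset, crop,
# expand, optimize, PLATFORMS) are illustrative, not Adobe's real API.

@dataclass
class Asset:
    name: str
    width: int
    height: int
    size_kb: int
    history: list = field(default_factory=list)

def crop(asset, w, h):
    # Shrink each dimension toward the platform target.
    asset.width, asset.height = min(asset.width, w), min(asset.height, h)
    asset.history.append(f"crop:{w}x{h}")
    return asset

def expand(asset, w, h):
    # Generative-expand stand-in: grow dimensions up to the target.
    asset.width, asset.height = max(asset.width, w), max(asset.height, h)
    asset.history.append(f"expand:{w}x{h}")
    return asset

def optimize(asset, max_kb):
    # File-size optimization stand-in: cap the size at the platform limit.
    asset.size_kb = min(asset.size_kb, max_kb)
    asset.history.append(f"optimize:<={max_kb}kb")
    return asset

# Illustrative platform specs, not real platform requirements.
PLATFORMS = {
    "instagram": dict(w=1080, h=1080, max_kb=500),
    "x":         dict(w=1600, h=900,  max_kb=900),
}

def social_media_assets(source):
    """Run the whole skill from a single statement of intent."""
    outputs = {}
    for platform, spec in PLATFORMS.items():
        a = Asset(source.name, source.width, source.height, source.size_kb)
        crop(a, spec["w"], spec["h"])
        expand(a, spec["w"], spec["h"])
        optimize(a, spec["max_kb"])
        outputs[platform] = a  # "store" step: one output per platform
    return outputs

outs = social_media_assets(Asset("hero.png", 4000, 3000, 2400))
```

The user never names the individual steps; the skill encodes them once, and the loop applies them per platform. That is the difference between a button and a delegated workflow.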
The Third-Party Model Play
Adobe is exploring integration with third-party large language models, and separately, they're adding Kling 3.0 and Kling 3.0 Omni to Firefly's third-party AI model library. Alexandru Costin, Adobe's VP of AI and innovation for the creativity and productivity business, is overseeing the effort. The move toward third-party model integration suggests Adobe is positioning the assistant as an orchestration layer rather than a locked system. That's a smarter long-term play than trying to own every capability in-house.
Worth noting: Canva and Figma are both working on agentic workflows in this space. The race to become the AI layer in creative work is on.
What the Video Editor Changes Mean
Separately from the assistant, the Firefly AI video editor is picking up a set of tools I actually find more immediately interesting: noise reduction for speech, reverb adjustment, music adjustment, and color adjustment. Plus integration with Adobe's stock library. These are the kinds of things that currently require either expensive plugins, separate tools, or a lot of manual work. Folding them into the editor with AI assistance changes the workflow for anyone doing video at a non-professional level.
The stock library integration is the quiet one. Having AI-assisted editing that can search and pull from licensed stock without leaving your timeline closes a real loop.
My Take
I spend a lot of time thinking about what genuine AI collaboration feels like, as opposed to what it gets marketed as. The distinction that matters to me: is the AI tracking what I'm trying to do, or just responding to discrete commands? Skills-based agentic workflows are a step toward the former. The system has to hold some model of your intent across multiple steps. That's meaningfully different from a button that applies a filter.
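The distinction between responding to discrete commands and holding a model of intent can be sketched in a few lines. This is my own illustration, not anything from Adobe: a stateless command takes one input and returns one output, while an agent holds a stated goal and decides the next operation itself across multiple steps.

```python
# Hypothetical contrast: a stateless command vs. an assistant that
# tracks intent. All names are illustrative, not Adobe's actual API.

def apply_filter(image, name):
    """Discrete command: no memory, no goal, one operation per call."""
    return f"{image}+{name}"

class IntentAgent:
    """Holds a stated goal and tracks progress across multiple steps."""
    def __init__(self, goal, steps):
        self.goal = goal
        self.pending = list(steps)
        self.done = []

    def step(self, image):
        # The agent, not the user, decides what the next operation is.
        op = self.pending.pop(0)
        self.done.append(op)
        return f"{image}+{op}"

# The user states intent once; the agent carries it through each step.
agent = IntentAgent("social media assets", ["crop", "expand", "optimize"])
img = "hero.png"
while agent.pending:
    img = agent.step(img)
```

With `apply_filter`, the user must know and issue every operation. With the agent, the goal lives in the system, which is what "holding a model of your intent" amounts to in the simplest possible form.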
Whether Firefly's assistant actually achieves that in practice depends on implementation. The architecture is pointed in the right direction. The October 2025 preview to April 2026 beta timeline suggests they've had time to work through some of the rougher edges. This could mean the beta is genuinely usable, or it could mean a controlled rollout while they figure out edge cases. One possibility is that the skills framework is solid but the text-prompt interface still requires careful phrasing to get predictable results. That's the typical state of these things at launch.
I'll be watching to see whether the cross-application context actually holds up. If it does, Adobe has built something real. If each application still behaves like its own island with a unified chat box bolted on, that's a different story.
Either way, the direction is clear. The question for Adobe, for Canva, for Figma, for anyone in this space is whether "agentic creative workflow" becomes a genuine working mode or just a marketing phrase that describes what a well-trained power user was already doing manually. I'm cautiously optimistic, but I've been burned by "this changes everything" AI announcements before. The beta will tell us more than the preview.
Source: TechCrunch