Image: a person wearing a hoodie, altered with Photoshop AI


The Good, Bad, and Ugly of Photoshop AI

You can’t open your eyes or ears these days without crossing paths with the term AI. It’s a hot-button topic, and for good reason. The slope is slippery and steep when it comes to what can (or should) be created with AI tools.

Our team has spent minimal time with tools like Midjourney, but when Adobe introduced Generative AI in Photoshop through its Firefly platform, we began to explore. A lot has changed since those early experiments, and Adobe has been moving fast. Here’s where things stand in 2026, and what we still think about all of it.

What Is Photoshop AI, and How Does It Work?

Adobe’s generative AI lives inside Photoshop through Adobe Firefly, its dedicated AI creative engine. The core idea: describe what you want in plain language, and Firefly generates or modifies imagery to match.

The tools you’ll use most inside Photoshop:

  • Generative Fill: Select any area, type a prompt, and Firefly fills it contextually.
  • Generative Expand: Extend an image beyond its original frame, great for reformatting portrait shots to landscape without cropping.
  • Generative Remove: Eliminate unwanted objects with a click.
  • Generative Upscale: Increase resolution and sharpen detail, now up to 4K with Topaz Labs integration.

One thing worth noting upfront: all of this is built on licensed content and Adobe Stock material, which matters if you’re using these tools for commercial work. That commercial safety is one of Firefly’s real competitive advantages.

Is Photoshop AI Good? Our Team’s Honest Take

Short answer: yes, for the right tasks. But the experience varies a lot depending on what you’re trying to do.

Nathaniel:

Sure, you can use generative AI to add a rainbow to your photo or replace your father-in-law with a juggling clown. But I was more interested in how generative AI could improve production processes and provide flexibility with image assets.

For example, say you snapped a photo or sourced one that allows for alterations, and the photo is in portrait format, but you need it in a square or landscape aspect ratio. With the right steps and prompts, Generative Expand can help you extend the content of the photo, essentially creating the world outside of your original frame.

There are a few ways to get there. You can expand the canvas using the crop tool, then use the marquee tool to select the area you want completed and apply Generative Fill. Or, if you’re only looking to complete the image outside the original frame, Generative Expand is the faster route. Select the crop tool, expand the canvas, click Generative Expand, and let the AI generate variations. You can provide a prompt, or leave the field blank and let it infer the most logical fill.
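The reformatting workflow above boils down to simple canvas geometry: pick a target aspect ratio, grow the canvas to match, and let Generative Expand fill the margins. Here’s a minimal Python sketch of just that math — not an Adobe API, purely illustrative arithmetic (the function name and centered placement are my own choices):

```python
def expand_canvas(width, height, target_ratio):
    """Given an image size and a target aspect ratio (width/height),
    return the new canvas size and the (x, y) offset that keeps the
    original image centered. The surrounding margin is the area
    Generative Expand would be asked to fill."""
    if width / height < target_ratio:
        # Too tall for the target ratio: widen the canvas.
        new_w, new_h = round(height * target_ratio), height
    else:
        # Too wide (or an exact match): add height instead.
        new_w, new_h = width, round(width / target_ratio)
    offset = ((new_w - width) // 2, (new_h - height) // 2)
    return new_w, new_h, offset

# A 1080x1350 portrait shot reformatted to 16:9 landscape:
print(expand_canvas(1080, 1350, 16 / 9))  # (2400, 1350, (660, 0))
```

In Photoshop terms, the returned size is what you’d drag the crop tool out to, and the offset is where the original frame sits inside it.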

The quality of these expansions has improved noticeably. The January 2026 updates to Generative Fill and Generative Expand now output at 2K resolution with sharper detail and fewer seams, a real step up from early versions.

Krysten:

Take Photoshop Generative AI for what it is at this moment. AI right now is great for helping expand backgrounds, creating some beautiful scenery, or adding an occasional element. There’s a lot of fun to be had with adding accessories, or maybe even changing what you’re wearing.

If you take the time to figure out how to prompt the program, it can be a great tool. Use smaller selections to fill in background pieces and overlap your marquee with the current background just enough to give AI something to work with.
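The overlap tip above is really just rectangle math: grow the empty region you want filled by a fixed margin so the marquee includes some existing background for the model to sample. A hypothetical sketch, not a Photoshop API (the function name and 40-pixel default are my own for illustration):

```python
def overlap_marquee(region, overlap=40):
    """Given the empty region to fill as (x, y, width, height), return a
    marquee rectangle grown by `overlap` pixels on every side, so the
    selection overlaps the existing background and gives the AI
    surrounding context to match. Values may extend past the canvas
    edge and should be clamped to the document bounds in practice."""
    x, y, w, h = region
    return (x - overlap, y - overlap, w + 2 * overlap, h + 2 * overlap)

# A 300x800 gap at the right edge of a frame, padded for context:
print(overlap_marquee((500, 0, 300, 800)))  # (460, -40, 380, 880)
```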

Another tip for better results: unless you’re a fan of a few extra phalanges, steer clear of adding humans.

Why Photoshop AI Still Frustrates Designers

Here’s where we get honest.

Soti:

Photoshop Generative AI has me tripping. But it might just be a “me problem.”

After playing around with these images and trying to add realistic elements, I’ve realized that writing successful prompts is a skill to be mastered. Tips for getting better results:

Use the selection tool to be very specific about what area you want to edit. This may seem obvious, but the first time I got in there, I selected the whole canvas, wrote “Add flamethrowers behind the guy in this image,” and got some very wacky results that eliminated the guy and created a whole new scene entirely.

The AI algorithm is designed to analyze composition, framing, and cropping, but it can only do that when a partial area is selected. The trick is specific area selection paired with descriptive prompts that help the AI understand what you want. Adobe even recommends avoiding words like “add,” “fill,” or “change,” and instead describing exactly what you want generated. Doing this helped me get results closer to what I was envisioning.

It isn’t a science, so it takes a lot of experimentation to get the right image and style.

Beyond the prompt learning curve, there are still real limitations to be aware of in 2026: text generation inside images remains inconsistent, highly specific facial features can still go sideways, and results across the same prompt can vary wildly. The good news is that Adobe’s Firefly Image Model 5, released at Adobe MAX in October 2025, brought significant improvements in photorealism and prompt fidelity. But “improved” doesn’t mean “perfect.”

Photoshop AI vs. Midjourney vs. DALL-E: Which Should You Use?

Each tool has a different job. Here’s a practical breakdown:

Adobe Firefly (inside Photoshop)

  • Best for: Editing and extending existing images, production workflows, commercial-safe content
  • Strengths: Seamless Photoshop integration, licensed training data, non-destructive editing
  • Limitations: Weaker at pure image generation than dedicated tools; human anatomy is still hit or miss

Midjourney

  • Best for: High-quality artistic image generation, creative concepting, stylized visuals
  • Strengths: Exceptional aesthetic output, strong community and prompt library
  • Limitations: No direct editing integration, requires Discord to access, outputs aren’t natively layered for post-production

DALL-E (via ChatGPT / OpenAI)

  • Best for: Quick ideation, accessible for non-designers, combining image generation with text-based workflows
  • Strengths: Easy to use, improving rapidly, strong text rendering in images
  • Limitations: Less control over fine-tuned edits, not integrated into professional design tools

The short version: if you already live in Photoshop and need AI to help you edit, reformat, or enhance real assets, Firefly is your tool. If you’re generating concepts from scratch and want the most visually arresting output, Midjourney is hard to beat. If you need something fast and accessible with solid text-in-image capability, DALL-E is worth a look.

Worth noting: Adobe has actually integrated partner models, including options from OpenAI, Google, and Black Forest Labs, directly into Firefly and Photoshop’s Generative Fill. So the lines between these tools are blurring fast. You can dig into the full breakdown of what’s new on Adobe’s Firefly updates page.

Where This Is All Headed

Adobe’s pace of development has picked up considerably. The new AI Assistant in Photoshop (currently in public beta) lets you edit by describing what you want in plain language, with less technical prompt engineering and more conversation. According to Adobe’s MAX 2025 announcements, Firefly now also supports video generation, audio creation, and a browser-based video editor, making it closer to an all-in-one creative AI studio than the single-feature tool it started as.

For marketing teams, this opens up interesting possibilities. AI-assisted content production, when used thoughtfully, can speed up asset creation and extend the flexibility of existing imagery. That said, the human hand still matters. Knowing when to trust the tool and when to correct it is its own skill.

If you’re curious how AI fits into a broader digital marketing strategy, or what it means for how you approach content creation, our team is always up for that conversation. No sales. No B.S. Just good, honest answers.