Testing Five AI Image Generators Until Workflow Chaos Hit Again

1. Image size limits that quietly wreck your Notion embeds

The first time I ran into this, I had a Midjourney-generated PNG I loved — crisp style, perfect tone. I dropped it into a shared Notion board and it just sat there with its little question mark thumbnail. No preview. No explanation. The file uploaded fine, but Notion wouldn’t render it. Turns out Midjourney files sometimes come out at non-standard DPI, which causes resolution mismatches when you paste them into apps using dynamic preview sizing like Notion or Airtable.

It’s not really about file size — it’s about the combination of pixel dimensions and metadata. Exporting the same image through Preview (on macOS) or re-saving it in Photoshop fixed it. But no AI generator warns you about this.

And yes, some of them tack on weird ICC color profiles or uncompressed transparency layers that end up doubling the file size without actually changing visible quality. You’ll only notice once your doc has a 15MB mystery image dragging markdown exports into timeout territory.
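If you’d rather script the fix than round-trip through Preview or Photoshop, here’s a rough sketch of the same re-save using Pillow. The paths and the 72 DPI value are placeholders, adjust for your own setup:

```python
from PIL import Image

def normalize_png(src_path: str, dst_path: str) -> None:
    """Re-save a generated PNG with plain metadata so preview-sized
    embeds (Notion, Airtable, etc.) have a better shot at rendering it."""
    img = Image.open(src_path)
    # icc_profile=None drops the embedded ICC profile, and a fixed dpi
    # value replaces whatever non-standard DPI the generator wrote.
    img.save(dst_path, format="PNG", dpi=(72, 72), icc_profile=None, optimize=True)

normalize_png("midjourney-raw.png", "midjourney-clean.png")
```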

2. DALL-E naming overwrites if two prompts resolve at once

I was batch-generating explainers for a new onboarding sequence — one title card, one variant showing emotional tone. Using a simple OpenAI loop with DALL-E inside a Make.com scenario, each prompt pulled a customized string from an Airtable table, like “onboarding-stage-1-joyful”. Worked fine locally. But then I turned on scheduling.

It turns out that if two image-generation calls happen within the same second and you’re using deterministic naming (like slugifying by title), OpenAI sometimes overwrites the generated URLs in the return payload. One of the image objects just vanishes and the other gets duplicated in its place. No error. Usually the first prompt “wins,” but not reliably.

So you end up with two Airtable rows pointing to identical URLs, even though each input prompt was unique. I now append a random 3-character hash to each slug, even internally. Not visible to users, but it saved me from hunting phantom files every other day.
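For reference, the suffixing is nothing fancy. Something like this (the slug rule is my own convention, not anything OpenAI requires):

```python
import re
import secrets

def unique_slug(title: str) -> str:
    """Slugify a prompt title and tack on a short random suffix so two
    prompts resolving in the same second can't collide on the same name."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{slug}-{secrets.token_hex(2)[:3]}"

unique_slug("Onboarding Stage 1 Joyful")  # e.g. "onboarding-stage-1-joyful-4f2"
```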

3. Midjourney Discord prompts still fail midstream on alt-timezones

It’s mid-afternoon here but 4am somewhere in Midjourney Discord land. I queued up like six stylized product images for someone’s slide deck. Pasted prompts as a neat group — boom, four of them just didn’t resolve. Sat in the Discord chat as plain messages. No bot response.

This only happens when you’re firing prompts in bulk during timezone crossover windows — like right before their server resets render queues. Not documented. Not recoverable unless you’re watching the Discord channel as it happens. Even weirder, the midstream prompts are the most likely to fail silently if you batch them less than 10 seconds apart.

“Uptime looks fine but nothing shows up. We didn’t process it.”

That was the whole message from Midjourney support when I submitted a failed prompt ID. No logs. No fix. Learned to paste three prompts max and add a 20-second delay in between when automating from Airtable through Discord webhooks. Wildly annoying for anything scaled.
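If you’re scripting it rather than wiring every step in Make, the pacing logic is simple enough. This sketch assumes the same Airtable-to-Discord-webhook setup described above; the webhook URL is yours to fill in:

```python
import time
import requests

WEBHOOK_URL = "https://discord.com/api/webhooks/..."  # your channel's webhook

def post_prompts(prompts: list[str], batch_size: int = 3, delay_s: int = 20) -> None:
    """Send prompts in batches of three with a 20-second pause so
    midstream prompts don't get dropped during queue-reset windows."""
    for i, prompt in enumerate(prompts, start=1):
        requests.post(WEBHOOK_URL, json={"content": prompt}, timeout=10)
        if i % batch_size == 0 and i < len(prompts):
            time.sleep(delay_s)
```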

4. Stable Diffusion output folders break if temp files aren’t cleared

I’ve got a local Stable Diffusion rig (AUTOMATIC1111 UI) on an old M1 Mac Mini. Great for stuff I don’t want touching the cloud. But once every 200 runs or so, the temp folder inside `outputs/img-samples` stops clearing properly. When that fills up, new images don’t save — and again, no error. They get generated, they flash briefly in the live preview, then poof. Gone.

If I don’t manually go into the folder and delete the 0-byte temp files, it just silently ignores the “save” call. Found out after wondering why file names like `000862.png` were flat-out missing for entire prompt queues.

What finally fixed it

Edited the `config.json` to write successful images to a new subfolder per batch. Also added a tiny shell script run hourly via `launchd` to delete all empty files inside `outputs/img-samples`. Probably wouldn’t need this if I rebooted regularly, but you know how it goes.
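My actual cleanup is a shell one-liner, but the same thing in Python looks roughly like this (the install path is an assumption, point it at wherever your `outputs/img-samples` actually lives), and you can schedule it with `launchd` or cron:

```python
from pathlib import Path

# Adjust to your own AUTOMATIC1111 install location.
OUTPUT_DIR = Path.home() / "stable-diffusion-webui" / "outputs" / "img-samples"

def clear_empty_temp_files(folder: Path = OUTPUT_DIR) -> int:
    """Delete the 0-byte leftovers that block new images from saving."""
    removed = 0
    for f in folder.rglob("*"):
        if f.is_file() and f.stat().st_size == 0:
            f.unlink()
            removed += 1
    return removed

if __name__ == "__main__":
    print(f"removed {clear_empty_temp_files()} empty files")
```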

5. Firefly’s content consent gets logged inconsistently per workspace

I was generating thumbnails using Adobe Firefly and their Express integration. Worked perfectly on my account. Then handed it off to a teammate with a Business workspace. Half the prompts started auto-failing, no previews, no visible errors. What I missed: Adobe tracks your consent for AI training reuse per account and workspace, but the errors don’t surface unless you’re the original uploader.

Even with shared brand settings, some generated assets never appeared in the team library. I only figured it out after exporting audit logs and seeing:

reason: "Consent mismatch - Asset not permitted for shared reuse"

The teammate hadn’t set their AI usage preferences, and Firefly defaulted to restricted mode. That setting is buried four levels deep in the Adobe Admin Console. The check only fires on the backend: no UI warning, no red error bar.

Also, when embedding Firefly assets into Notion as external embeds, the null images just show a gray bar; the thumbnail won’t load unless you add a caption.

6. Dream by Wombo image ratios fail silently in mobile API calls

Wombo Dream is fast and gorgeous when it works. But when I tried using it in a mobile-facing workflow (for a community idea generator prototype), it kept returning blank payloads. On desktop it was fine, but from mobile API calls (via Flutter + Firebase Functions), the same prompt returned:

{ "success": true, "artUrl": null }

Turns out you can’t use non-standard aspect ratios (like 1.2:1 or 3:2.8) from certain user-agent strings. Mobile headers get filtered into a different rendering farm that only allows square or predefined ratios. Not documented. Not in their API reference. I literally had to spoof a desktop user-agent to get consistent results from within the app build emulator.

I now hardwire all prompt generations through a proxy function that enforces 1:1 outputs unless explicitly overridden. Not the dream I had in mind, but debugging 50 blank thumbnails took something out of me.
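The proxy itself is nothing special. Here’s the shape of it; the endpoint and request fields are placeholders (this isn’t Wombo’s actual API), and only the `artUrl` null check mirrors the payload above:

```python
import requests

DESKTOP_UA = (
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"
)
DREAM_ENDPOINT = "https://example.com/dream-proxy"  # placeholder, not the real endpoint

def generate(prompt: str, ratio: str = "1:1", allow_custom_ratio: bool = False) -> dict:
    """Force a square ratio and a desktop user-agent unless the caller
    explicitly opts out, so mobile traffic stops hitting the filtered farm."""
    if not allow_custom_ratio:
        ratio = "1:1"
    resp = requests.post(
        DREAM_ENDPOINT,
        json={"prompt": prompt, "aspect_ratio": ratio},
        headers={"User-Agent": DESKTOP_UA},
        timeout=30,
    )
    resp.raise_for_status()
    payload = resp.json()
    if payload.get("artUrl") is None:  # the blank-payload case from mobile calls
        raise RuntimeError(f"blank render for prompt: {prompt!r}")
    return payload
```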

7. Clipdrop batch upscales silently throttle after five concurrent jobs

If you’re using Clipdrop’s upscale feature across multiple assets (say, for consistent icon sets), there’s a hard limit built into their user tiering — but instead of erroring out, it just silently de-prioritizes jobs. You send ten upscale calls at once — five come back instantly, the rest hang. Sometimes forever. Other times two randomly fail with HTTP 200 and no content.

Found this while batch-upscaling 60 mono line icons for a SaaS dashboard. I had assumed concurrency throttling would trigger a clear error, or at least a 429. Nope. Instead:

{ "status": "ok", "resultUrl": null }

That’s it. No failure reported in Make.com, so the scenario completed “successfully.” But half the upscales were empty rectangles. You can manually re-try them with throttling, but there’s no smart retry logic built in. Clipdrop’s queued image job system resolves per-account and per-host, so you also get different results if you’re using multiple Make accounts or bouncing through AWS Lambda.

Learned this tip from a random forum post, not docs. I now run a loop with a 4-job burst followed by a 12-second wait, and the fail rate dropped under 10 percent.
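The loop is roughly this. The actual Clipdrop call is passed in as `upscale_fn`, since I’m not going to pretend to remember their exact request shape; the `resultUrl` check mirrors the payload above:

```python
import time

def upscale_in_bursts(jobs, upscale_fn, burst_size: int = 4, wait_s: int = 12, max_retries: int = 2):
    """Run upscales in small bursts with a pause between them, and retry
    anything that comes back HTTP 200 with a null resultUrl."""
    results, pending = {}, list(jobs)  # jobs: asset ids or source URLs
    for _ in range(max_retries + 1):
        failed = []
        for i, job in enumerate(pending, start=1):
            response = upscale_fn(job)
            if response.get("resultUrl"):
                results[job] = response["resultUrl"]
            else:
                failed.append(job)  # "ok" but empty, queue it for retry
            if i % burst_size == 0:
                time.sleep(wait_s)
        if not failed:
            break
        pending = failed
    return results, pending  # pending holds whatever never resolved
```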

8. Dataset hallucination when combining prompts across platforms

Okay, this one hit like a mirage. I had a prompt library stored in Obsidian — a bunch of markdown snippets with structure like:

Generate a cheerful scene with a raccoon and a stack of books, hyperrealistic lighting, 4k resolution

I copied a dozen of these into a Make batch prompt run using Midjourney and Stable Diffusion side-by-side to compare stylization filtering. But a few of the renders from SD showed books labeled “Raccoon Journal Weekly” in floating block text. That wasn’t in the prompt. Traced it back to an internal cache leak — I’d previously run a dataset test for magazine covers, and the vector embedding appeared to have influenced the diffusion seed. Reprompting with identical inputs (but deleting the project folder cache) fixed it.

Which tells me: some workflows do not fully clear latent vector state when reusing virtualenv sessions locally. You end up with blended suggestions from everything else you’ve tried. Like memory bleed in word form. It doesn’t happen in GPU batch mode, only in multi-threaded local CPU runs with previous runs still loaded in memory.
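If you want to script the “delete the project folder cache” step before a side-by-side comparison run, it’s only a couple of lines. The directory names here are placeholders for whatever your local setup actually writes:

```python
import shutil
from pathlib import Path

# Placeholders: point these at the cache folders your project uses.
CACHE_DIRS = [Path("project/.cache"), Path("project/embeddings_cache")]

def clear_run_caches() -> None:
    """Wipe per-project caches so earlier experiments can't bleed into the next batch."""
    for cache in CACHE_DIRS:
        if cache.exists():
            shutil.rmtree(cache)

clear_run_caches()
```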