Designing Prompt Workflows in Make Without Breaking Everything
1. Creating prompt-driven automations without nesting yourself into oblivion
Half the Make scenarios I see online are a nested disaster. Everyone loves a Prompt module now, and yeah, it feels like magic until you try running an OpenAI completion inside an iterator inside a router that’s inside a webhook reply. The moment you try exporting that into another flow: good luck. No context survives the trip.
Last week, I tried pulling form submissions from a Tally form, extracting sentiment via OpenAI, and tagging in Notion. It looked simple enough until I realized the prompt didn’t have any of the form data. Spoiler: you need to explicitly map in the values, but the UI will happily let you overlook that and just pass in a blank prompt. When I checked the run log, the body of the request was just {"prompt": ""}. No error, of course. Just an empty reply.
So, rule one: always preview the JSON body that gets sent into the Prompt module. OpenAI won’t complain if you send nothing. It’ll just hallucinate. And Make won’t warn you either. That single mistake wasted 30 minutes and gave a very strange summary of… itself, somehow.
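If it helps to see the guard spelled out, here’s a minimal sketch in plain Python (not Make syntax, and the field names are made up) of the check I now effectively do by previewing the JSON body: refuse to call the model if the mapped fields came through empty.

```python
import json

def build_prompt(form_data: dict) -> str:
    """Build the sentiment prompt from explicitly mapped form fields.

    The "feedback" key is a placeholder; in Make you still have to map the
    real Tally field into the Prompt module yourself.
    """
    feedback = (form_data.get("feedback") or "").strip()
    if not feedback:
        # This is the failure mode from the run log above: {"prompt": ""}
        raise ValueError("Prompt would be empty; a form field was not mapped in.")
    return f"Classify the sentiment of this feedback:\n\n{feedback}"

payload = {"prompt": build_prompt({"feedback": "Love the new onboarding flow"})}
print(json.dumps(payload, indent=2))  # preview the body before anything gets sent
```

Same idea as previewing the body in Make, just with a hard stop instead of a silently blank prompt.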
2. Managing prompt tokens inside Make without tipping over the character limit
I ran into this while trying to generate summaries of Airtable records. The input was an array of paragraph fields — notes, meetings, assorted chaos. When I passed it into a ChatGPT prompt and set the model to gpt-4, the entire prompt intermittently failed. No error in the Make log. Just… stopped.
Turns out that once the prompt pushes past the model’s context window (roughly 8k tokens for gpt-4), the completion comes back empty. You only realize this when you log the response and it’s just an empty string.
Here’s the workaround:
- Use a Formatter module to trim any input longer than 1000 characters per field (or whatever makes sense).
- Log the final prompt to a Google Sheet during each run — trust me, seeing what you’re actually sending helps unravel weirdness quickly.
- When using text-davinci-003, keep inputs under 3500 characters. GPT-4 can tolerate more, but you still hit the 8k token cap fast.
- If variable fields change structure often (e.g. some entries missing fields), always default them to an empty string in the mapping. Otherwise, you’ll get JSON that looks fine until one turns into null and breaks everything downstream. (There’s a rough sketch of the trim-and-default logic right after this list.)
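That sketch, for reference: plain Python rather than Formatter modules, with invented field names, but it’s the same trim-and-default step.

```python
MAX_FIELD_CHARS = 1000  # same idea as the Formatter trim above

def clean_record(record: dict, fields: list[str]) -> dict:
    """Trim long fields and default missing ones to empty strings."""
    cleaned = {}
    for name in fields:
        value = record.get(name) or ""       # null/missing -> "" so the JSON stays valid
        cleaned[name] = str(value)[:MAX_FIELD_CHARS]
    return cleaned

record = {"notes": "x" * 5000, "meeting_summary": None}
print(clean_record(record, ["notes", "meeting_summary", "action_items"]))
```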
Make won’t alert you to these issues. OpenAI won’t either. It’ll feel like the scenario just stops doing anything — like your automation ghosted you.
3. How routers break prompt context when used too early
I almost gave up reworking a lead qualification flow because my prompt kept losing half the variables. I had a router up front — one path for paid ads, one for referrals, one for unknowns — and each route used a slightly different prompt for OpenAI.
What I didn’t realize: variable mapping into prompts changes depending on router position. If you place the OpenAI module inside a route but base its variables on an earlier HTTP payload, you have to re-map the fields inside that route. Otherwise, the prompt uses stale or null data, depending on which path got activated. It’s not obvious: the test run shows a working value, the real run shows a blank response.
I finally caught it after comparing two consecutive scenario runs — one that returned a perfect summary, the other complete nonsense. Only difference: one had the routing logic evaluated after the data parse, the other before.
“Data structure mismatch: The module seems to receive different fields than expected.” — the least helpful but truest error Make ever gave me.
The fix? Keep the data parsing before the router, put the OpenAI prompts after it, and re-map each route’s variables from the parsed fields. Otherwise you’ll spend hours wondering why only one route ever works.
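If it’s easier to see outside Make: parse once, then branch, and have every branch build its prompt from the parsed fields instead of the raw payload. A rough Python sketch (the lead_source values and field names are invented):

```python
import json

def parse_payload(raw_body: str) -> dict:
    """Parse the webhook body once, before any routing decision."""
    data = json.loads(raw_body)
    return {
        "name": data.get("name", ""),
        "source": data.get("lead_source", "unknown"),
        "message": data.get("message", ""),
    }

def prompt_for(lead: dict) -> str:
    """Each 'route' maps from the parsed fields, never from the raw body."""
    if lead["source"] == "paid_ads":
        return f"Qualify this paid-ads lead:\n{lead['message']}"
    if lead["source"] == "referral":
        return f"Summarize this referral and who sent it:\n{lead['message']}"
    return f"Classify this lead of unknown origin:\n{lead['message']}"

lead = parse_payload('{"name": "Ana", "lead_source": "referral", "message": "Saw your tool via a friend"}')
print(prompt_for(lead))
```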
4. Making reusable text prompts without creating prompt spaghetti
If you’re working with multiple prompt modules — like different summaries, SEO rewrites, tone changes — hardcoding the prompt each time is tempting. It’s also a maintenance nightmare. One tone change and you need to update 12 modules.
I’ve started building reusable prompt templates using a centralized Data Store where each prompt is stored with a key like "summarize-client-call". That way, I just pull in the base text with a simple lookup. Now if I want to tweak the prompt wording, I change it once.
Bonus: I can store multiple prompt variants per process. One for formal summaries, one for casual recaps, one for bullet-style exports into Notion. The hardest part was figuring out how to format the line breaks — Make sometimes double-encodes newline characters in the JSON, so a \n turns into \\n inside the prompt. You won’t notice until ChatGPT starts inserting literal slashes in the output.
To fix it, I format the prompts using plain text, not JSON-encoded fields, and make sure they pass through a Text > Replace module before hitting OpenAI.
Actual line that started working again after days of output weirdness: Text.replace( prompt_text ; "\\n" ; "\n" )
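The same pattern translates to code pretty directly: one store of templates keyed by name, one lookup, and one newline cleanup before anything reaches the model. A sketch, with keys mirroring the Data Store example above and everything else invented:

```python
# One place to edit prompt wording, keyed the same way as the Data Store records.
PROMPT_TEMPLATES = {
    "summarize-client-call": "Summarize this client call in 5 bullet points:\n\n{transcript}",
    "summarize-client-call-casual": "Give me a quick, casual recap of this call:\n\n{transcript}",
}

def build_prompt(key: str, **fields: str) -> str:
    template = PROMPT_TEMPLATES[key]
    prompt = template.format(**fields)
    # Undo double-encoded newlines, same job as the Text > Replace step.
    return prompt.replace("\\n", "\n")

print(build_prompt("summarize-client-call", transcript="We discussed the Q3 launch..."))
```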
5. Capturing OpenAI prompt responses when Make randomly drops data
This happened during a client demo. Of course.
The automation took in customer queries, passed them to OpenAI to generate a support article title, and pushed that into a shared Google Sheet. Simple. But suddenly, rows were missing titles — just blank cells — even though the scenario run was marked as successful.
I poked around the history and saw the OpenAI response field come back NULL. But the runs before and after were fine. Eventually I figured it out: the webhook from our helpdesk system sometimes fired BEFORE the full message payload had arrived. So OpenAI got something like “Hi, I wanted to kno” as the prompt input. Which, predictably, didn’t generate a usable result.
The fix was adding a five-second delay with a Sleep module before the prompt logic. Not ideal, but it gave enough buffer time for the complete payload to come through. This is nowhere in the docs and technically shouldn’t be necessary — but async webhooks leave ugly gaps in reality.
My lesson: if OpenAI returns mysteriously blank outputs in Make, backtrack and log the input. You might be generating text from a half-finished string.
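The Sleep module is what I actually shipped, but the underlying idea is “wait, then re-check whether the input looks complete before prompting.” A hedged Python sketch of that idea; the looks_complete heuristic and the fetch_message callback are entirely made up:

```python
import time

def looks_complete(message: str) -> bool:
    """Crude heuristic: non-empty and ends on sentence punctuation."""
    text = message.strip()
    return len(text) > 20 and text[-1] in ".!?"

def wait_for_full_message(fetch_message, delay_seconds: float = 5.0, attempts: int = 3) -> str:
    """Re-fetch the helpdesk message a few times before giving up."""
    message = fetch_message()
    for _ in range(attempts):
        if looks_complete(message):
            return message
        time.sleep(delay_seconds)  # same role as the Sleep module in the scenario
        message = fetch_message()
    return message  # log it either way, so you can see what you actually prompted with

# Usage (hypothetical helper): wait_for_full_message(lambda: get_ticket_body(ticket_id))
```

It is still a buffer-time hack, just one that checks its own work instead of sleeping blindly.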
6. Using variables and context stacking to shape better completions
You can shove a ton of context into prompt modules if you combine variables thoughtfully. What I’ve started doing: instead of building one giant block of text to send in as a prompt, I build mini variables that each hold part of the context — recent messages, user name, preferred tone, etc. Then I inject them into a sort of prompt scaffold:
Write a reply to {{user_name}} based on their request:
{{last_message_text}}
Use the following style guidelines:
{{tone_rules}}
This makes debugging easier, especially when one injection field ends up blank. You can test them independently in Make by logging each. Also, if you ever hit a blank reply from OpenAI, you can now isolate which piece broke — instead of rerunning the entire prompt. Bonus: if you’re making ChatGPT write markdown, splitting out variables this way forces consistency. ChatGPT is way more cooperative in structured prompts than giant text blobs.
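In code terms, the scaffold is just a template with named slots, which is also what makes each piece testable on its own. A minimal Python sketch using the same placeholder names as above, with invented sample values:

```python
SCAFFOLD = (
    "Write a reply to {user_name} based on their request:\n"
    "{last_message_text}\n"
    "Use the following style guidelines:\n"
    "{tone_rules}\n"
)

context = {
    "user_name": "Dana",
    "last_message_text": "Can you resend the invoice from March?",
    "tone_rules": "Friendly, two short paragraphs, no sign-off.",
}

# Log any blank piece before prompting, so you know exactly which injection broke.
for key, value in context.items():
    if not value.strip():
        print(f"warning: {key} is empty")

prompt = SCAFFOLD.format(**context)
print(prompt)
```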
I heard someone on a podcast say, “Every prompt is a program, not a sentence.” That stuck. Start treating your Make prompts that way and the weird failures start to make more sense.