Fixing Broken Prompt Automations Inside Notion Buttons

1. Triggering Notion Buttons with Prompts That Actually Run

Sometimes you hit the Notion button and get… nothing. No error, not even a flicker. The button turns gray for a second and bounces right back like it changed its mind about running the prompt. I thought I’d misconfigured the openai.run block, but even reverting to a super simple “write a haiku” command didn’t work. Turns out, Notion buttons are picky about field references that aren’t filled in—even if the conditionals point elsewhere.

What fixed it (this round) was switching all dynamic field references in my prompts from multi-select properties to plain text. You’d think it would just skip nulls or empty tokens, but in some cases it silently fails. Notion doesn’t handle undefined properties gracefully in prompt inputs—even if that property isn’t used in the final prompt string. If it’s referenced anywhere in earlier prompt steps, it might tank the whole run.
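
If you still need the multi-select values in the prompt, one workaround is to mirror them into a formula property and reference that instead, so the prompt only ever sees a plain string. A minimal sketch, assuming a multi-select called “Tags” and a formula property called “Tags Text” (both names are mine), using the current formula syntax where multi-selects come through as lists:

if(empty(prop("Tags")), "none", join(prop("Tags"), ", "))

The prompt then references Tags Text, which always resolves to text, so an untouched multi-select no longer feeds an undefined value into the run.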

Also, it helps to know that the “Current page” context only works inside a database—anywhere else, and you’ll be waiting forever. I was testing on a dummy page, not a database row. Rookie mistake.

2. Formatting Prompt Inputs to Avoid API Timeouts

I had an automation that took meeting notes from a synced Slack message, dumped them into a Notion database, and ran a prompt that cleaned up the bullets “in the tone of the sender.” It worked great… until I added a summary function with a character-count condition.

The input was getting too long. Notion doesn’t show you token count or API limits anywhere, and unlike ChatGPT, there’s no built-in token estimation. I started seeing weird non-errors: prompts that half-rendered, output incomplete sentences, or just hung until Notion killed the automation on timeout.

I eventually added a length check with a separate formula property, length(prop("Slack Message")), and had the prompt conditional check “if length is greater than 2000, skip.” Ugly, but it buys time. Honestly, someone should make a token estimator widget for Notion properties. GPT-4 handles long inputs better, but GPT-3.5 will choke badly on long markdown or backticks.
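
If you want something closer to a token budget than a raw character count, the usual rule of thumb is roughly four characters per token for English text. A rough sketch of a “Token Estimate” formula property (name is mine) built on that heuristic:

round(length(prop("Slack Message")) / 4)

The skip condition can then check prop("Token Estimate") > 500, which is the same 2000-character cutoff but easier to reason about against model limits. It’s still an estimate, not a real tokenizer.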

3. Connecting Button Prompts to External Workflows Without Zapier

So here’s the thing: not every Notion button needs to stay inside Notion. For one workflow, I needed it to kick off a Make.com scenario that processed markdown, appended an Airtable record, and returned an updated status back into Notion. If you insert a step inside your button like this:

call webhook -> openai.run -> update property

…you’d expect it to wait for the webhook to return. But it doesn’t. The webhook fires asynchronously, and any follow-up blocks relying on its return value just get null.

Workaround That Actually Behaves Synchronously

You can delay the Notion-side actions by inserting a completion gate. Add a checkbox property called “Webhook Complete” (a formula property won’t work here, since external tools can’t write to computed fields) and have your external workflow check it once it’s done. Then create a second Notion button that only appears when Webhook Complete is true. It’s janky double-confirmation, but at least it’s functional. Trying to run everything in one button leaves you with half-executed prompts and null inputs.
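
To make the gate visible at a glance, it helps to pair the checkbox with a status-style formula so people know when the second button is safe to press. A minimal sketch, assuming the “Webhook Complete” checkbox described above:

if(prop("Webhook Complete"), "✅ Ready for step 2", "⏳ Waiting on webhook")

The second button’s visibility condition then just checks the checkbox itself.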

4. Escaping Quotation Marks Inside User Prompt Fields

I lost an hour the other day because someone wrote the word “it’s” in a Notion property that got passed directly into a GPT prompt. The apostrophe wasn’t escaped, and inside the compound string, it blew up the whole prompt silently. Notion has no built-in escape mechanism when rendering a property inline inside prompt text.

Quick fix: wrap all dynamic field references in triple double quotes to stringify them manually. Instead of:

Summarize the information from {{Summary}}

Use:

Summarize the information from """{{Summary}}"""

This forces the AI to treat it like a string, even if there’s punctuation that would otherwise cause trouble. It’s not well documented, but this quoting style seems to consistently prevent evaluation errors, and it works better than hoping the model will quietly autocorrect a mangled input mid-output.
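
A belt-and-suspenders version is to sanitize the property before the prompt ever sees it. A sketch of a helper formula property (call it “Summary Safe”, name is mine) that swaps double quotes for single quotes and flattens newlines:

replaceAll(replaceAll(prop("Summary"), "\"", "'"), "\\n", " ")

The prompt then references {{Summary Safe}} instead of the raw field, and the triple-quote wrapper above becomes a second line of defense rather than the only one.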

5. Using Multi-Prompt Buttons to Chain Summaries and Titles

I had a client request a workflow that generated both a meeting title and a summary from dumped transcripts. Initially I had it in one mega prompt, but that made the output eat itself—GPT-3.5 kept interpreting the title instructions as part of the summary. Splitting into two prompts inside a single button (multi-openai.run blocks) worked better, but only if the output of the first was saved into a property used downstream.

Sequence looked like this:

  • Prompt 1: Generate title → write to “Temp Title”
  • Prompt 2: Use “Temp Title” and transcript to write a summary

The catch? Prompt 2 would randomly fail if Prompt 1’s property hadn’t finished writing yet. They don’t always execute synchronously. I thought the order of appearance in the button config mattered—it doesn’t. Notion parallelizes some of those actions behind the scenes.

What finally solved it: add a dummy property update step between Prompt 1 and Prompt 2 (e.g. set “Status” to “working…”). That seems to create enough of a delay for Prompt 1’s output to commit to the database. Otherwise you get placeholder or null values mid-flow.
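
For reference, the step order that finally behaved looked like this (labels are mine, not Notion’s):

openai.run (title) -> edit property "Temp Title" -> edit property "Status" = "working…" -> openai.run (summary) -> edit property "Summary"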

6. Cleaning Up Unexpected Markdown Characters in AI Output

One of the more annoying quirks: GPT outputs sloppy markdown when run from Notion. I ran a test where GPT would insert bullet points inside a property, which then got rendered weirdly inside the inline preview—sometimes showing actual asterisks, sometimes formatting them visually, sometimes breaking the database block entirely.

What Helped:

  • Post-process outputs to strip all asterisks unless deliberately configured
  • Use a custom “Clean AI Output” formula to remove rogue formatting marks (sketched after this list)
  • Avoid prompting GPT to return markdown unless rendering outside of Notion
  • Never trust GPT to close out triple backticks without inserting garbage whitespace
  • If using numbered bullets, specify “use plain numbers and periods” in the prompt
  • Unless needed, instruct the model to return in plain text using: Return only unformatted plaintext output
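
The “Clean AI Output” formula from the list above doesn’t need to be fancy. A sketch, assuming the raw model output lands in a text property called “AI Output” (name is mine): strip asterisks, underscores, and backticks, then remove runs of three or more hyphens before Notion turns them into dividers.

replaceAll(replaceAll(prop("AI Output"), "[*_`]+", ""), "-{3,}", "")

It won’t catch everything, but it removes the worst offenders before they hit the inline preview.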

The weirdest one? When AI added a “- – -” separator, Notion turned it into a horizontal line inside the property viewer. Couldn’t click past it unless I edited the entry directly from the database view, not inline.

7. Automatically Archiving Tasks Based on GPT Confidence Score

I got cute and embedded a classifier inside a Notion button (“Is this task done?”) where GPT reviewed the task’s “Notes” field and returned either YES or NO. If YES, it would auto-switch the Status to “Archived.” Felt slick. Looked cool.

But then someone added a sarcastic note like “yeah I totally did this 😬” and GPT said YES.

Turns out, sarcastic confidence looks the same to GPT. I added a second layer—a confidence estimate parse step where GPT had to explain its certainty. Using this prompt:

Based on this note, is the task done? Output:
Answer: [...] 
Confidence: [1-10]
Explain:

Then I added a conditional to only archive if confidence >= 8. That worked… except when the returned confidence was the string “ten.” So I had to strip out the non-digits and run prop("Confidence") through toNumber(replaceAll(...)) just to compare properly (full formula below). Not fun, but it did stop the premature archiving.
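
The comparison ended up looking roughly like this, assuming “Confidence” is captured as plain text from the model’s reply:

if(empty(replaceAll(prop("Confidence"), "[^0-9]", "")), false, toNumber(replaceAll(prop("Confidence"), "[^0-9]", "")) >= 8)

Anything non-numeric, like “ten”, falls through to false, so the task stays put instead of getting archived on a guess.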

8. Prompt Visibility Conditions That Break When Fields Are Empty

Notion lets you make buttons show or hide based on field logic—but it evaluates those conditions before field updates propagate. I had a condition like “Only show this prompt if ‘Type’ is filled in.” Except when someone added a new entry and hadn’t selected a type yet, that whole row looked broken—the button never appeared, and there was no visual reason why not.

Even worse: people added empty strings to get around the condition. That technically filled the field, but broke downstream parsing in API-connected automations where the system was expecting a valid type, not a blank string. I fixed it by updating the condition to:

and(prop("Type") != "", prop("Type") != null)

…and eventually added a default dropdown value of “Unassigned” to give users something harmless to select, which worked a lot better than expecting them to choose manually.