Smart Prompt Automations That Behaved Differently in Notion and Trello
1. Prompt-generated task cards behaving differently in Trello mobile
Okay so this one did my head in for a while: I had a Zap that sent prompts to ChatGPT to generate Trello card titles and descriptions from meeting transcript summaries. It worked cleanly on desktop. Cards showed up as expected, formatting was readable, due dates preserved. But opening the same board on mobile (the iOS Trello app, not the web) turned the AI-formatted checklist into a plain body of text. No bullets, just a blob. Cue my boss calling me from the parking lot, confused about why all the cards looked like someone had pasted them in from Notepad.
So it turns out: Trello’s mobile client strips formatting on AI-generated card descriptions if the update comes via API and not the UI. I confirmed it by manually editing a card description on desktop (bullets preserved) versus letting OpenAI’s output auto-populate through Zapier (format gone).
The hack that worked was appending a single whitespace before and after the markdown bullets within the GPT output. Not on the whole description—literally on each line:
- Thing one
- Thing two
Became this, with a single space added at the start and end of each line (you can't see it here, but that's the whole trick):
 - Thing one 
 - Thing two 
It was enough to trick Trello’s mobile renderer into processing it as actual bullets. Still no idea why that whitespace padding fixes it, but it passed the visual test on four phones.
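If you don't want to count on GPT remembering to pad its own output, a small Code by Zapier (JavaScript) step between the OpenAI step and the Trello step can do it. This is just a sketch; inputData.description is my own mapping name, not anything Zapier provides by default:
// Pad only the bullet lines before the description reaches Trello.
const padded = (inputData.description || '')
  .split('\n')
  .map(line => line.trim().startsWith('- ') ? ` ${line} ` : line) // add the leading/trailing space
  .join('\n');

output = { description: padded };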
2. OpenAI prompt consistency fails across identical Notion databases
I cloned three Notion databases—identical structures, same templates, same GPT content blocks. In one, AI-generated answers were solid. In the second: weird truncation mid-sentence. On the third, it wouldn’t even finish the prompt. All were using the same shared connection to OpenAI’s assistant under my personal workspace.
The problem? Hidden database-level field permissions. Turns out if one of the columns has restricted editing via team roles, it can silently stop the AI block from writing even though it visually shows as allowed. This doesn’t throw a standard error, just a quiet fail. You see the animation like it’s generating, then it stops. No output. People told me to clear cache. Nope.
The big find: inside each database settings panel under Properties → Property Type, look for any formula, relation, or rollup columns. One of mine referenced a team-only view filter. When removed, the assistant instantly worked. No warning from GPT though, and nothing in Notion’s API logs pointed to access issues. It just didn’t talk back because it wasn’t allowed to.
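If you'd rather not click through every settings panel, a small script against the Notion API can at least surface the columns worth checking. A rough sketch using @notionhq/client; the token and database ID are placeholders, and note it only flags the property types, it can't see the permission itself (the API never showed me that either):
// List formula / relation / rollup properties so you know which ones to inspect by hand.
const { Client } = require('@notionhq/client');
const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function auditProperties(databaseId) {
  const db = await notion.databases.retrieve({ database_id: databaseId });
  for (const [name, prop] of Object.entries(db.properties)) {
    if (['formula', 'relation', 'rollup'].includes(prop.type)) {
      console.log(`${name}: ${prop.type} <-- check whether this leans on a restricted view or role`);
    }
  }
}

auditProperties(process.env.NOTION_DATABASE_ID);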
3. Token cutoff mid-response when passing Trello card text into ChatGPT
I was trying a simple setup: take a Trello card description, pass it to GPT-4 using Zapier, and generate a tweet suggestion as a reply. Fine in theory. But longer cards returned only half-completed GPT output. Sometimes a sentence with no punctuation. Sometimes it'd just stop mid-phrase, like "Make sure the retargeting pixel—". That was it.
What hurt more: the token count looked way under the limit. I even put in manual character counters. Still dropped midsentence.
The issue was invisible whitespace from copy-pasted bullets in Trello. Zapier passes these through raw as markdown with escape characters, which bloats the token count. GPT sees all that structure as context-heavy input. So even though the description looked like 600 characters, it was chewing through over 2000 tokens before even getting to the reply.
Fix was dumb but reliable: feed only a stripped plaintext version of descriptions into the prompt. I added a formatter step before the OpenAI call and built this little hack into Zapier:
{{Description.replace(/\*|\-|\r?\n/g, ' ')}}
That alone cut token counts in half. No more cutoff responses. GPT breathed again.
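If you need anything more thorough than that one-liner, the same idea works as a Code by Zapier (JavaScript) step. A sketch, with inputData.description again being whatever you map from the Trello trigger:
// Strip markdown punctuation and line breaks before the text ever reaches the OpenAI step.
const raw = inputData.description || '';
const plain = raw
  .replace(/[\\*_`>#-]/g, ' ') // markdown markers and stray escape characters
  .replace(/\r?\n/g, ' ')      // flatten line breaks
  .replace(/\s+/g, ' ')        // collapse leftover whitespace runs
  .trim();

output = { plain_description: plain };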
4. Notion AI blocks ignoring database filters during inline generation
If you've ever used Notion's AI blocks inside a database view and expected them to respect the filters on the view, you've probably also sat there blinking at generated content that didn't reference any of the rows you were staring at. The AI doesn't care. It can't see which entries are visible. It queries the full table, always.
This hit me in a project status tracker. I had a dynamic filtered view showing only items marked in progress, then asked Notion AI to summarize what was being worked on. Instead it included five completed tickets from last month. Because the AI block runs at the database level, not view level.
There’s no fix inside Notion UI today. My workaround:
- Use a relation field to manually tag the records I want summarized.
- Create a new AI block that runs prompt logic over only those tagged items.
- Use a Checkbox column to manually control visibility into the AI scope.
Sucks to micromanage that, but it’s the only reliable way if you need prompt targeting without the assistant stepping outside its sandbox.
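One upside of that Checkbox column: if you ever build the summary prompt outside Notion instead (a script, a Zap step, whatever), a filtered query gives you exactly the tagged rows and nothing else. A rough sketch with @notionhq/client; "Include in summary" is a made-up property name, and it ignores pagination:
// Pull only the checked rows so the prompt never sees anything outside the sandbox.
const { Client } = require('@notionhq/client');
const notion = new Client({ auth: process.env.NOTION_TOKEN });

async function rowsInScope(databaseId) {
  const res = await notion.databases.query({
    database_id: databaseId,
    filter: { property: 'Include in summary', checkbox: { equals: true } },
  });
  return res.results; // only the rows you explicitly ticked
}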
5. Trello label IDs change silently when duplicating boards via API
This tripped me up because it doesn't happen consistently: I had a Zap that duplicated master Trello boards weekly, then ran prompts against labeled cards ("Review", "Urgent", etc.). I'd picked those labels by name when building the Zap, but Zapier stores the underlying label IDs, and at some point the GPT prompts stopped triggering as expected. Some cards wouldn't be picked up at all.
The actual problem: when Trello duplicates a board via the API, it creates new label IDs even if the names and colors are identical. So when you filter by label in Zapier using static IDs, you’re pointing at labels that don’t exist in the new board. No error, just zero matches.
Double-pain: if you duplicate a board manually inside Trello, sometimes the label IDs persist depending on the method (UI button vs. API call). Zero mention of this in any docs.
Zapier log: “Search returned 0 results for label ID 64d1…9bdc” → but board clearly showed 9 cards labeled “Urgent”
Fix: after duplication, run a second step that queries the label names again and dynamically fetches their IDs before feeding them into any conditional prompt logic. Yeah, one more unnecessary lookup, but better than having silent failures.
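That lookup is trivial against the Trello REST API; roughly this, where the key, token, and board ID are placeholders (Node 18+ for the global fetch):
// Ask the freshly duplicated board for its labels and map names back to whatever IDs it actually has.
async function labelIdsByName(boardId, key, token) {
  const res = await fetch(`https://api.trello.com/1/boards/${boardId}/labels?key=${key}&token=${token}`);
  const labels = await res.json(); // [{ id, name, color }, ...]
  return Object.fromEntries(labels.map(l => [l.name, l.id]));
}

// const ids = await labelIdsByName(newBoardId, key, token);
// ids['Urgent'] is the ID that exists on the duplicate, not the one saved from the master board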
6. OpenAI replies hallucinate Trello list names during prompt chaining
I tried generating workflow suggestions from Trello activity logs using prompt chaining: first prompt summarized card activity, second prompt proposed a structured improvement (“move low-priority cards off backlog within 3 days”). But the second prompt often suggested lists that didn’t exist—like “Stalled”, or “Review Queue”, when nothing like that was on the board.
Turns out my first prompt was too vague and included paraphrased summaries like “some items stuck in middle stacks”. GPT then invented plausible list names based on standard agile terms. Harmless if you’re just tinkering, but confusing af if you’re piping the result straight back into automation.
What worked: including actual Trello list names from the get-go. I updated the first prompt to feed raw JSON of all board lists:
{"lists":["Backlog","In Progress","QA","Done"]}
Then explicitly told GPT: “Only reference existing list names when suggesting changes.” That squashed the hallucination. But the fact that it invented fake list names just from reading phrasing like “stuck” still makes me cautious about chaining anything where output flows back into action.
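Wired together it looks roughly like this: fetch the real list names, then put both the JSON and the constraint into the second prompt. The key, token, and board ID are placeholders, and the instruction wording is just what happened to work for me:
// Build the second prompt from live board data instead of paraphrased summaries.
async function buildSecondPrompt(boardId, key, token, activitySummary) {
  const res = await fetch(`https://api.trello.com/1/boards/${boardId}/lists?key=${key}&token=${token}`);
  const lists = (await res.json()).map(l => l.name); // e.g. ["Backlog","In Progress","QA","Done"]

  return [
    `Board lists: ${JSON.stringify({ lists })}`,
    `Card activity summary: ${activitySummary}`,
    'Only reference existing list names when suggesting changes.',
  ].join('\n');
}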
7. Double-prompted actions request duplicate API calls in Trello Zaps
There’s an invisible trap in Trello+OpenAI Zap setups where a GPT output with two embedded tasks (e.g. “move to Done and assign to Alice”) causes the Zap to run the Trello action twice—even if it’s supposed to interpret the prompt as a single instruction chain.
Like, yes: the AI response only outputs once. But Zapier’s parser splits based on partial match phrases (“move,” “assign,” etc), and if both actions trigger near-simultaneously, the system views them as separate invocations of the same Zap step.
I noticed this because Alice kept getting notified twice for the same card move. Looking at the logs, the two Trello updates came through 700ms apart, but from the same source data. Zapier didn't deduplicate because the structure of the extracted prompt triggers looked different:
Trigger 1:
{"action":"move","card":"#3283","destination":"Done"}
Trigger 2:
{"assign_user":"Alice","card_id":"#3283"}
No overlap in keys = no dedupe. You basically have to parse and group these into a single step upfront using a formatter or custom code block. I now sanitize all GPT outputs into a forced schema before letting Zapier touch Trello:
{
  "move_to": "Done",
  "assign": "Alice"
}
Anything else risks double-firing unless your prompt is squeaky clean and somehow immune to GPT’s liberal phrasing choices.
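For reference, the sanitizing step is nothing fancy. A sketch of a Code by Zapier (JavaScript) block that runs before the Trello action, assuming the raw model output is mapped to inputData.raw (my name, not Zapier's) and the trigger shapes look like the two above:
// Collapse whatever GPT returned into the single schema above, so the Trello step fires exactly once.
let actions = [];
try {
  const parsed = JSON.parse(inputData.raw);
  actions = Array.isArray(parsed) ? parsed : [parsed];
} catch (e) {
  actions = []; // not JSON at all: do nothing rather than guess
}

const merged = { move_to: null, assign: null };
for (const a of actions) {
  if (a.destination || a.move_to) merged.move_to = a.destination || a.move_to;
  if (a.assign_user || a.assign) merged.assign = a.assign_user || a.assign;
}

output = merged; // one object in, one Trello update out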