The Setting I Missed That Broke Everything Silently

1. Using Google Calendar Triggers Without Timezone Headaches

Google Calendar spammed my Zap with duplicate events, and I stared at it for twenty minutes before realizing the organizer’s time zone had changed. Triggered on event start… but which start? Turns out, Zapier uses the calendar owner’s time zone, not the participant’s. This gets brutal when you’re managing calendars for clients or contractors in other countries. If someone in Portugal books at 3pm their time and your calendar is set to Eastern, good luck tracing why the reminder fired six hours early.

The workaround: always normalize timezones inside Google Calendar itself. Create a dummy calendar in the organizer’s target timezone and migrate events there. Alternatively, pass the datetime to Formatter with a hardcoded timezone and convert it there — but this breaks if the original time is already in UTC and Zapier doesn’t realize it.
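
If you go the conversion route, a Code by Zapier step makes the timezone handling explicit instead of trusting Formatter to guess. A minimal sketch, assuming the trigger hands you an ISO-8601 start time in a hypothetical `eventStart` field and that Eastern is your target zone:

```javascript
// Code by Zapier (JavaScript) step. "eventStart" is a hypothetical input field
// mapped from the trigger; swap in whatever your calendar step actually outputs.
const start = new Date(inputData.eventStart);

if (isNaN(start.getTime())) {
  // Failing loudly here beats a reminder firing six hours early downstream.
  throw new Error(`Unparseable start time: ${inputData.eventStart}`);
}

// Render the same instant in one explicit business timezone so no later step
// depends on whichever calendar happened to own the event.
const formatter = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/New_York', // assumption: Eastern is your target zone
  year: 'numeric',
  month: '2-digit',
  day: '2-digit',
  hour: '2-digit',
  minute: '2-digit',
  hour12: false,
});

output = { normalizedStart: formatter.format(start) };
```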

The aha moment? Hidden deep in the data bundle for a sample event, you can see the timezone string. But it isn't exposed at the top level of the trigger step during Zap setup. You have to click into the raw data toggle when testing the trigger. Every time.

2. When Airtable Automations Fail Without Throwing Errors

Airtable never threw an error; it just refused to run a script block I had scheduled to fire on record update. The logs said it ran, but nothing happened. The culprit: a computed field was involved in the trigger condition, and Airtable doesn't re-evaluate computed fields the way you'd expect when the update comes from an automation. It waits for a manual touch. So the watcher never saw the field technically change, and the automation just sat there smiling.

This is especially rough on solo entrepreneurs batching client data. You trigger an automation like “if Contract Status = Ready” and assume Airtable will do its thing. But if “Ready” came from a formula based on another table’s lookup, the automation doesn’t always recognize the calculated output as a valid change.

Checklist to Prevent Silent Airtable Automations:

  • Use only manual-entry or directly-updated fields in trigger logic
  • Confirm with a test record that updates fire the automation
  • Avoid formulas or rollups as first-level triggers — use them as filters inside action steps instead
  • Check the execution log for flags — you’ll often see “Skipped due to condition not met”
  • Double-tap critical updates with a dummy field change if you must trigger from lookup changes (see the script sketch after this list)
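
If you do resort to the dummy-field double-tap, the script action is only a few lines. A sketch, with a hypothetical Contracts table and a "Last Touched" text field that exists purely as a change marker:

```javascript
// Airtable automation "Run a script" action. The "Contracts" table and the
// "Last Touched" text field are hypothetical; the field exists purely so the
// watcher sees a change it actually respects.
const { recordId } = input.config(); // pass the triggering record ID in as an input variable
const table = base.getTable('Contracts');

// Writing a fresh timestamp string into a plain text field registers as a
// direct update, so a second automation keyed on "Last Touched" will fire.
await table.updateRecordAsync(recordId, {
  'Last Touched': new Date().toISOString(),
});
```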

3. Handling AI Prompt Length Limits in Google Sheets Scripts

I was feeding too much text into a GPT prompt via a custom script behind a Google Sheets cell. There's no official error: it just returns an empty string or, worse, reuses part of a prior prompt. Took me a while to realize Google Apps Script doesn't behave consistently when the payload on the fetch() request blows past the API's length limits.

I cracked it by adding a quick Logger.log(prompt.length), and sure enough, some cells were pushing 3000+ characters, too much for the free-tier OpenAI completions API. Instead of trimming dynamically, I started token-counting with gpt-2-tokenizer in a backend Node script and storing estimated token-to-character ratios directly in another column.

Undocumented corner case: multi-line spreadsheet cells with line breaks (Alt + Enter) throw off string-length estimation. The API sees them; Sheets hides them. So you get silently truncated prompts unless you sanitize them first.

The recovered working flow now counts lines, collapses repeated line breaks, and slices to a 4,000-character max before submission. Not elegant. But stable. Mostly.
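
For reference, a rough Apps Script version of that sanitize-and-slice step, treating 4,000 characters as a stand-in for the actual token limit:

```javascript
// Google Apps Script. A sketch of the sanitize-then-slice step; 4,000 characters
// is a rough stand-in for the real token budget, not an exact conversion.
const MAX_CHARS = 4000;

function buildSafePrompt(cellValue) {
  const cleaned = String(cellValue)
    .replace(/\r/g, '')        // normalize Windows-style breaks
    .replace(/\n{2,}/g, '\n')  // collapse the Alt+Enter breaks Sheets hides
    .trim();

  Logger.log('Prompt length: ' + cleaned.length);
  return cleaned.slice(0, MAX_CHARS);
}
```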

4. Why Slack Bot Replies Get Rate Limited Without Notice

The first time I built a workflow to auto-reply in Slack using a custom bot, everything worked — until it didn’t. After maybe twelve people clicked a button in rapid succession, replies stopped landing. No errors in Make. Just… ghost messages.

The issue: Slack enforces weirdly silent rate limits on chat.postMessage when you use `as_user: true` or `thread_ts` inconsistently. Nope, not in the logs. Slack's API only hints at this in a nested error response, which Make swallows unless you explicitly inspect the bundle output.

Toggle off threaded replies and it starts working again, which feels like nonsense. But hey. Also, bots posting through incoming webhooks are on totally different limits than app-based bots. If you're building any solo toolkit for client chat routing, assume the default quota maxes out around one reply per second, per channel.

Best outcome I got was switching to scheduled replies using a staggered iterator. Delayed by 1.5 seconds between rows with a queue table in Airtable. Ugly but reliable. Slack doesn’t tell you when it’s mad, only when it’s broken.
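
If you ever move the queue out of Make, the same staggered pattern is a few lines of Node. The row shape, token variable, and delay are assumptions here, but the ratelimited check mirrors what chat.postMessage actually returns:

```javascript
// Plain Node (18+, for global fetch). Rows are assumed to come from the Airtable
// queue table upstream; SLACK_BOT_TOKEN is an environment variable you provide.
const DELAY_MS = 1500; // roughly Slack's ~1 message per second per channel ceiling

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function drainQueue(rows) {
  for (const row of rows) {
    const res = await fetch('https://slack.com/api/chat.postMessage', {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json; charset=utf-8',
        Authorization: `Bearer ${process.env.SLACK_BOT_TOKEN}`,
      },
      body: JSON.stringify({ channel: row.channel, text: row.text }),
    });
    const data = await res.json();

    // Slack reports rate limiting inside the JSON body ({ ok: false, error: "ratelimited" }),
    // which is exactly the kind of nested response Make swallows.
    if (!data.ok && data.error === 'ratelimited') {
      const retryAfter = Number(res.headers.get('Retry-After') || 30);
      await sleep(retryAfter * 1000);
    }

    await sleep(DELAY_MS);
  }
}
```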

5. Zapier Formatter Steps That Produce Blank Outputs When Nested

I nested a date formatter inside a Paths branch in Zapier. Worked for days, then suddenly all outputs were blank. No error messages, just empty fields. Drove me nuts until I realized: Formatter steps don’t play nice when the input data is missing — even if the logic path skips that branch, Zapier still evaluates the format step silently in the background, which fails.

This happens even if you’re not using the formatter’s output. If the step is there, and a field in it references a missing input, it fails silently upstream and breaks the automation in weird ways downstream.

Hacky fix: add a conditional step before the formatter using Zapier's built-in Filter by Zapier, and only let it run if the input field exists. That gatekeeping trick saved me maybe eight hours of troubleshooting nonexistent-field issues inside branched Zaps.
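
If you'd rather not spend the step on Filter by Zapier, the same gate fits in a Code by Zapier step. A sketch, with `rawDate` as a made-up field name:

```javascript
// Code by Zapier (JavaScript). "rawDate" is a made-up field name; the point is
// to emit an explicit flag instead of letting a blank value reach the formatter.
const raw = (inputData.rawDate || '').trim();

output = {
  hasDate: raw.length > 0, // branch or filter on this downstream
  rawDate: raw,            // passed through untouched
};
```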

Things get worse if your formatter uses Regex. A bad pattern doesn’t throw an error — it just fails with a blank string and quietly marks the step “success.” Zapier logs pretend nothing ever happened.

Still amazed at how often the platform acts like, “We ran it! Nothing to see here.”

6. When Notion Database Webhooks Just Ignore Link-Type Fields

I tried syncing Notion with a Google Sheet every time a new row appeared in a shared database. Everything worked — except half the records had missing fields. Turned out: Notion’s webhook system in Make does not send link-type data in the webhook payload. If a field in Notion is a linked relation to another gallery or database, it just vanishes from the JSON.

No error, no null, no empty bracket. Just… unmentioned. Like the field doesn’t exist. If your automation relies on that link to fetch data (e.g. connect a project to its team entry), you have to drop another API call during execution and explicitly fetch the page content again using the record ID. Adds latency and doubles the Ops count if you’re on Make’s free tier.
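
That extra call is just a page retrieval against the Notion API. A sketch, where NOTION_TOKEN and the "Team" relation property are placeholders for whatever your base actually uses:

```javascript
// Plain Node (18+). NOTION_TOKEN and the "Team" property name are stand-ins for
// whatever your integration and database actually use.
async function fetchRelationIds(pageId) {
  const res = await fetch(`https://api.notion.com/v1/pages/${pageId}`, {
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      'Notion-Version': '2022-06-28',
    },
  });
  if (!res.ok) throw new Error(`Notion page fetch failed: ${res.status}`);

  const page = await res.json();
  // Relation (link-type) properties come back as an array of { id } objects here,
  // even though the Make webhook payload never mentioned them.
  const relation = (page.properties['Team'] && page.properties['Team'].relation) || [];
  return relation.map((r) => r.id);
}
```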

Even worse — if you built the original Notion database inside the desktop app, sometimes the internal field ID gets camel-cased differently in the API metadata vs. the webhook. So “TaskMembers” becomes “taskmembers” and the filter logic fails silently in Make. Cannot make this stuff up.

You can preview how Notion sends webhook data by triggering a data catch and expanding the raw bundles. Still, it’s trial and error anytime you add a field with a special type.

7. Creating Google Docs PDFs on Demand With Correct Page Breaks

Tried doing a PDF export from Google Docs using Zaps to auto-generate contracts. Problem: the output PDFs have scrambled page breaks when there are long tables or split headers. Looks fine inside Docs, awful in the generated file.

Turns out Google Docs doesn't preserve CSS-like print formatting; it tries to auto-flow based on content chunks. To force more stable output, you have to insert hard breaks manually using `Insert > Break > Page break` between sections. But if that break is wrapped inside a hidden conditional segment (like with a Docs add-on for chunk logic), Zapier's export endpoint doesn't interpret it.

Real-world solution involved using placeholder text like [PAGEBREAK] in a template, then doing a find-and-replace on the .docx version using Google Drive API calls before generating the final PDF. It’s janky, but you get consistent output across client contracts.
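
My production flow does the swap on the .docx through Drive API calls, but if you can touch the Doc directly, an Apps Script version of the same placeholder trick is shorter. A sketch, not the exact pipeline:

```javascript
// Google Apps Script. Swaps the [PAGEBREAK] placeholder for a real page break,
// then pulls the PDF rendition of the same Doc.
function replacePlaceholdersAndExport(docId) {
  const doc = DocumentApp.openById(docId);

  doc.getBody().getParagraphs().forEach((p) => {
    if (p.getText().trim() === '[PAGEBREAK]') {
      p.clear();            // drop the placeholder text
      p.appendPageBreak();  // insert a hard break the export will honor
    }
  });

  // Flush pending edits before exporting, otherwise the PDF can miss the breaks.
  doc.saveAndClose();
  return DriveApp.getFileById(docId).getAs(MimeType.PDF);
}
```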

“Text isn’t enough to force layout in generated PDFs — Google wants explicit structure.”

8. Automating Stripe New Charges With Metadata Filters Intact

Connected Stripe to Make to pull in new charges that matched certain criteria (e.g. specific products, clients on retainer). Their webhook sends all charges, and you filter downstream. Simple, right? Not quite. Turns out Stripe sometimes doesn’t include custom metadata in the first webhook fire if the charge is attached to a pending invoice.

We lost two automations because the metadata wasn’t present in the initial payload, even though it showed up in the dashboard. The fix was to use a router to check for the presence of metadata key “tier” — and if missing, wait 5 minutes, then re-fetch the object using the charge ID. Nasty loop, but it gave us consistent data to work with.
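
Outside Make, that check-then-refetch branch is easy to express in Node. A sketch, with the wait time and key names as assumptions apart from the "tier" key described above:

```javascript
// Plain Node (18+). STRIPE_SECRET_KEY and the five-minute wait are assumptions;
// the "tier" metadata key matches the router check described above.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function getChargeWithMetadata(chargeId, waitMs = 5 * 60 * 1000) {
  const retrieve = async () => {
    const res = await fetch(`https://api.stripe.com/v1/charges/${chargeId}`, {
      headers: { Authorization: `Bearer ${process.env.STRIPE_SECRET_KEY}` },
    });
    if (!res.ok) throw new Error(`Stripe charge fetch failed: ${res.status}`);
    return res.json();
  };

  let charge = await retrieve();
  if (!charge.metadata || !charge.metadata.tier) {
    // The webhook fired before the metadata landed; wait, then ask Stripe again.
    await sleep(waitMs);
    charge = await retrieve();
  }
  return charge;
}
```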

Also discovered that test mode in Stripe does not mimic real-world metadata latency. Everything looks instant. Production webhooks can lag on metadata population if anything gets queued.

Capturing new business tiers via Stripe is great for automation — if you treat every webhook like partial data until proven otherwise.