Fixing Automation Errors That Quietly Wreck Your Week
1. Zapier filters stop working after a field is renamed in Airtable
Here’s what happened last Tuesday: I renamed one field in Airtable — just swapped “Client Email” to “Primary Contact” — then forgot about it. Zapier didn’t. My lead routing Zap kept running, but it stopped catching the right records. No error messages, just skipped tasks. Two hours later, a team Slack blew up because a sales lead never got routed. Classic silent failure.
The cause? Zapier filters are tied to internal Airtable field IDs, not just names — but the editor displays those variables by name. So when you rename a field, the step keeps working with the old internal reference, which may now return null. And if your filter logic says, “Only continue if Client Email contains…”, congrats: it’s comparing a missing field.
I confirmed this by clicking into the filter step and re-selecting the input dropdown — which then updated to the new reference. That fixed it instantly, but nowhere does Zapier tell you fields went missing or dropped out.
“Renaming fields should not silently void filters. That’s criminal.”
If you’re dealing with filters post-renaming, open every filter step and re-select variable fields manually. There’s no alert system for this. Even in “Test Trigger” data, the output doesn’t indicate a mismatch — which makes it feel like gaslighting.
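If you want a quick audit before touching filters, Airtable's metadata endpoint can list every field ID next to its current name, which makes it obvious what a rename actually changed. A rough sketch, not a drop-in tool; BASE_ID and AIRTABLE_TOKEN are placeholders, and it assumes a personal access token with schema read access:
// List every field ID next to its current name so you can see what a rename touched.
const res = await fetch(`https://api.airtable.com/v0/meta/bases/${BASE_ID}/tables`, {
  headers: { Authorization: `Bearer ${AIRTABLE_TOKEN}` },
});
const { tables } = await res.json();
for (const table of tables) {
  for (const field of table.fields) {
    console.log(table.name, field.id, field.name);
  }
}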
2. Webhooks firing twice when Google Sheets adds rows via API
This one took out an entire Notion update flow for a while: Google Sheets added a row via the API, and Make.com picked it up twice within the same scenario run. Twice. How? Sheets sends one event for the new row and another for the edit that fills in the rest. Two events, same row, same timestamp. If your trigger just says “New row added,” congrats on duplicating every downstream object.
I fixed it by switching from a Make instant trigger to a polling trigger with a deduplication property based on row ID. But here’s the thing: combining API-based inserts with automations and SpreadsheetApp scripts creates a layering problem. You end up with browser edits colliding with script writes, which then queue up webhook pings like impatient customers at a deli counter.
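If you end up catching the webhook yourself instead of letting Make dedupe it, the same idea is a few lines: key every delivery on the row ID and drop anything you have already seen. A minimal sketch; the /sheets-webhook route, the rowId field, and processRow are all placeholders for your own payload and downstream logic:
// Ignore a webhook delivery if we've already processed that row ID.
const express = require("express");
const app = express();
app.use(express.json());

const seenRows = new Set();

function processRow(row) {
  // downstream Notion / Make / CRM logic goes here
}

app.post("/sheets-webhook", (req, res) => {
  const rowId = String(req.body.rowId);
  if (seenRows.has(rowId)) return res.sendStatus(200); // duplicate delivery, drop it
  seenRows.add(rowId);
  processRow(req.body);
  res.sendStatus(200);
});

app.listen(3000);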
An edge case popped up where adding a row using appendRow() from Apps Script didn’t even trigger the external webhook, but typed edits did. Wild. I had to totally rework the logic to batch changes and write to a JSON store instead, syncing from there. Fragile.
3. OAuth tokens expiring quietly and Make scenarios still running
Make.com’s OAuth behavior does something sinister. When your token expires (Google Sheets is the most common victim), your scenario doesn’t error until it physically tries to run that module. So if you’ve got a 15-step scenario and only hit Sheets at step 12, everything else still runs — even if Sheets throws a 401.
This matters when your earlier steps are writing to CRMs, triggering emails, or posting to Slack. All of which still happen. You don’t know anything’s broken until someone says “Why didn’t the spreadsheet update?”
The workaround? Insert a “dummy” Google Sheets module up top — a quick read or lookup — to force a token check early. That way, if it dies, it dies fast. Not 5 minutes in, mid-flow, after half the damage is already done.
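If part of your flow lives outside Make, the same fail-fast idea is one cheap authenticated read before anything destructive runs. A sketch against the Sheets values endpoint; SPREADSHEET_ID and accessToken are placeholders, not anything Make exposes:
// One cheap read so an expired token dies at step 1, not step 12.
const url = `https://sheets.googleapis.com/v4/spreadsheets/${SPREADSHEET_ID}/values/Sheet1!A1`;
const check = await fetch(url, { headers: { Authorization: `Bearer ${accessToken}` } });
if (check.status === 401) {
  throw new Error("Sheets token is dead; bail before the CRM writes and Slack posts go out");
}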
A recent project I messed up ran for three days before the spreadsheet owner noticed nothing was updating. The logs looked clean because the errors were buried at the bottom and marked as recovered. Felt like watching a security cam of someone stealing your bike three days ago.
4. Unexpected type coercion with JSON paths in n8n
If you’ve used n8n for any non-trivial automation, you’ve seen the weirdness: you pull a dynamic value from JSON like {{$json["amount"]}} and it suddenly becomes a string, not a number. Which matters when you try comparing it to a literal number like 200. The condition fails without errors, and you sit there clicking run over and over, wondering what you missed.
The worst part is that the comparison preview says “True” during testing, but fails silently in the live workflow depending on input type. This bit me hard in a Stripe reconciliation flow. A float came in as a string with extra decimals, and suddenly invoices over a certain amount weren’t being flagged. No error. Just… skipped.
Fix came down to typecasting inside expressions using the Number() wrapper: {{$json["amount"] >= 200}} became {{Number($json["amount"]) >= 200}}. n8n doesn’t coerce types the way JS usually does, or at least not predictably. You have to be aggressive about consistency. Or paranoid.
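What I do now is normalize types in one place, early, instead of sprinkling Number() through every IF node. A minimal sketch in the older Function-node style, assuming the usual items/json shape and an amount field:
// Force amount to a real number once, so every later comparison is number-to-number.
return items.map((item) => {
  item.json.amount = Number(item.json.amount);
  return item;
});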
5. OpenAI function calling leaks prompts across parallel runs
Here’s a cursed little moment: I had a GPT function-calling script running across parallel webhook threads. Each one spun up an OpenAI API call using a distinct payload… or so I thought. But every few runs, one thread returned values meant for another request. Turns out it wasn’t OpenAI — it was how I managed sessions.
I’d spun up Axios requests per trigger inside a single shared Node instance running under Vercel. Without isolating prompt contexts between requests, module-level state leaked across async calls: a shared prompt string got overwritten mid-call by another request. Ridiculous.
Switched to per-request scoping and isolated the payload completely before the call. Also had to enforce deduplication using a hashed input key. That buried the ghosts. But yeah — running GPT calls in busy serverless contexts needs sandboxing. Otherwise one user’s prompt becomes another’s answer.
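The shape of the fix, stripped down: build everything the call needs inside the handler’s own scope, so concurrent invocations can’t see each other’s state. The handler below is illustrative, not my production code; the request field and model name are assumptions:
// Bad: module-level mutable state, shared by every concurrent invocation.
// let sharedPrompt = "";

// Better: the prompt and payload live inside the handler's own scope.
export default async function handler(req, res) {
  const prompt = `Summarize this lead: ${req.body.text}`; // per-request, never shared
  const openaiRes = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  res.status(200).json(await openaiRes.json());
}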
6. Hidden run-time field limits in Airtable automations
It wasn’t documented. It wasn’t logged. Airtable automations just failed to update records randomly after I added a long rich text field. I eventually traced it to the output payload from a “Find Records” step: it stops returning full field data after around 100 fields per record object. Not visible in the builder. Not in the UI. Just vanishes mid-array.
If you reference that record in a later step expecting all fields (especially nested arrays or rich text), you’re in trouble. I only caught it by dumping the full run JSON to a webhook listener and comparing lengths.
My workaround was brutal: I had to refactor all affected tables to split rich content off into smaller tables and relink them. Which completely wrecked the UX for our content reviewers.
Tips if you’re editing large Airtable automations:
- Break complex tables into primary and metadata tables
- Use rollups and lookups instead of direct rich text when possible
- Dump record outputs into webhooks to inspect raw sizes
- Prefer scripting blocks for data updates; they bypass record limits (see the sketch after this list)
- Auto-archive long text fields after syncing them out
- Never rely on the UI’s success screen — always check run logs
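For reference, the scripting route looks like this in an automation “Run a script” step. Table and field names are placeholders, and it assumes “Sync status” is a plain single line text field:
let table = base.getTable("Content");
let query = await table.selectRecordsAsync({ fields: ["Body", "Sync status"] });

// Collect the records that still need syncing.
let updates = query.records
  .filter((record) => record.getCellValueAsString("Sync status") !== "synced")
  .map((record) => ({ id: record.id, fields: { "Sync status": "synced" } }));

// updateRecordsAsync accepts batches of up to 50 records per call.
while (updates.length > 0) {
  await table.updateRecordsAsync(updates.splice(0, 50));
}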
7. Unexpected behavior from Run JavaScript in Zapier code steps
Simple code step — grab a few fields and reformat a date. I used new Date() on an incoming ISO timestamp. Worked great when testing. Broke in production with “undefined” errors.
Main issue was the test data exposed in Zapier had clean dates like 2024-06-01T12:34:00Z. But in the real trigger, we occasionally had empty strings or nulls. Not once did the code step throw a proper error; it just returned undefined silently. And then other steps down the chain treated it as a literal string “undefined”.
Zapier’s code step doesn’t enforce data contracts. If your value is null, "", or a malformed date, it just no-ops silently. My fix was to wrap all date parsing in a check:
// Guard against null, empty, and malformed timestamps before converting
let raw = inputData.timestamp;
let parsed = raw ? new Date(raw) : null;
let valid = parsed && !isNaN(parsed.getTime());
output = { timestamp: valid ? parsed.toISOString() : "MISSING_DATE" };
That gave me reliable downstream behavior and at least made errors visible when logs popped up.
8. Hidden resets on conditional logic in Make scenario branches
Bumped into this one last month while editing a Make scenario at 2AM: I changed a condition block from “greater than” to “not equal to” — and the attached variable reference reset to default. Didn’t realize until two branches started acting identically the next morning.
This happens silently when you change the logic operator on an existing condition. Any dropdown-bound fields (like a mapped value) get replaced with their default field. There’s no undo or alert, not even when testing. If you’re editing fast, you just miss it completely because the UI doesn’t re-highlight the field.
The fix is annoying: after changing a logical operator, you have to re-open the value dropdown and manually re-pick the field. Otherwise, it’ll revert to the first visible key in your JSON object. That’s usually the first field from your initial module, or whatever variable Make decides is “default”.
More than once this created a scenario where two branches that were supposed to diverge ended up using the same condition. Which made it look like execution trees were broken. Whole morning went to logs I should never have had to read.