Why This Time Tracking Bot Overwrote Every Client Entry

1. Misconfigured Calendly Webhook Overwrites Local CRM Data

So I had this Calendly-to-Airtable integration that had been humming along for months—client books a call, data zips into the Airtable CRM, tagged and timestamped. Then one Tuesday, a VA messaged me: “Why do all leads have today’s date?” Sure enough, the whole CRM — two hundred-something records — had the same timestamp, and several client names were overwritten entirely.

The culprit: Calendly’s webhook sends event_type and invitee data, but when the event updates (even for a reschedule), the webhook refires with a fresh payload. My Make scenario assumed invitee.email was unique, but rescheduled appointments reused the same email with a new event_uuid. Airtable’s upsert logic matched on email and overwrote everything.

Here’s the unspoken edge case: If someone reschedules via Calendly, you get a new GUID but no signal distinguishing reschedule vs. new booking, unless you start logging invites in a separate table and compare hashes.

I ended up adding a hidden “reschedule_flag” using a webhook proxy in n8n. Not because I wanted to — but because every native trigger inflated data unexpectedly, especially when a client changed their time twice in one afternoon.
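The reschedule check boils down to logging every invite and comparing before the upsert. Here’s a minimal sketch of that logic as a code step, assuming a payload shape and a lookup store that are illustrative, not Calendly’s exact schema:

```python
# Sketch of the reschedule check run before the Airtable upsert.
# Payload keys and the seen_events store are illustrative names.

def classify_booking(payload, seen_events):
    """Return 'new', 'duplicate', or 'reschedule' for an incoming webhook.

    seen_events maps invitee email -> set of event UUIDs already logged
    (in my setup, rows in a separate Airtable log table).
    """
    email = payload["invitee"]["email"]
    uuid = payload["event_uuid"]

    previous = seen_events.setdefault(email, set())
    if uuid in previous:
        return "duplicate"  # the webhook refired for an event we already know
    # Same email, brand-new UUID: that's the reschedule signature
    status = "reschedule" if previous else "new"
    previous.add(uuid)      # log it so later payloads can compare
    return status
```

Only “new” results go through the normal upsert; “reschedule” sets the flag and patches the existing record instead of matching on email.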

2. Toggl Project Field Fails Silently Without a Workspace Header

This was during a sprint when I was trialing time-based billing for three fractional design gigs. I’d built a little Zap that grabbed new Google Calendar events tagged “[billable]”, ran them through OpenAI to get a title cleanup (“Catch up call — NOT design time” turns into “Client sync only”), then pushed entries into Toggl. The automation seemed solid—until all entries started defaulting to “No project” under my main workspace.

This is the bug: Toggl’s API lets you pass project, but unless you also set the correct workspace_id, it’ll silently ignore that project value. No error. It just logs the time entry, floating in the void.

“Why am I manually cleaning these up? Isn’t this why I built the damn automation?” – Me, every evening that week

The “aha” moment came when I opened Zapier’s output log and saw the input payload was right, but Toggl’s response had no project mapping. Cross-checked the API docs (which are minimal) and found one reference to the expected workspace nesting. Solved it by hardcoding the workspace_id per client workflow. Ugly, but it works.
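Since Toggl won’t error on its own, the cheap defense is to fail loudly on your side before sending. A rough sketch of the payload builder I use now; field names approximate Toggl’s v9 API, so verify against the current docs:

```python
def build_time_entry(description, start_iso, duration_s,
                     project_id=None, workspace_id=None):
    """Build a Toggl time-entry body, refusing to pair a project with a
    missing workspace. Field names approximate Toggl's v9 API."""
    if project_id is not None and workspace_id is None:
        # Toggl won't complain here -- it just drops the project and logs
        # the entry under "No project". Raise instead of cleaning up later.
        raise ValueError(
            "project_id given without workspace_id; Toggl silently ignores it")
    entry = {
        "description": description,
        "start": start_iso,
        "duration": duration_s,
        "created_with": "billable-calendar-zap",  # illustrative client name
    }
    if project_id is not None:
        entry["workspace_id"] = workspace_id
        entry["project_id"] = project_id
    return entry
```

That one guard would have surfaced the bug on day one instead of a week of evening cleanup.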

3. Zapier Delay Until Trigger Bottlenecks When Session Is Empty

Here’s a weird one: I use Zapier’s built-in Delay Until step to push accepted proposals from Proposify into ClickUp, but only after the associated Trello card hits “Ready for Dev.” Zapier polls Proposify every 2–15 minutes, but doesn’t always have that context baked in when the trigger hits.

The problem wasn’t the delay. It was that when using Delay Until with a dynamic timestamp that sometimes gets filled in later (e.g., by a follow-up lookup), Zapier holds the flow in “waiting,” then fails silently after 24 hours if the merged value was null. It never errors. It just silently expires, and you’re left debugging ghosts.

Context lost mid-Zap is a real issue here:

  • Zapier doesn’t retry failed delays if the variable was never valid
  • Dynamic date/time fields that don’t resolve evaluate as empty
  • There’s no way to alert on expired Delay steps without polling

Best workaround I’ve landed on: precondition the Zap with a “Filter” step. It makes the workflow slightly more brittle upstream, but at least ensures only fully-formed input gets to the Delay logic. Otherwise you’ll lose hours on delays that never actually happened.
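The Filter step’s check is simple to express: the merged timestamp has to exist and actually parse before the Delay step ever sees it. A minimal sketch of that precondition, assuming ISO-8601 timestamps:

```python
from datetime import datetime

def delay_input_is_valid(delay_until):
    """Precondition for a Delay Until step: the merged timestamp must
    exist and parse, otherwise Zapier parks the Zap in "waiting" and
    lets it expire silently after 24 hours."""
    if not delay_until:  # None, "", or a lookup that never resolved
        return False
    try:
        # Accept the trailing "Z" form as well as explicit offsets
        datetime.fromisoformat(delay_until.replace("Z", "+00:00"))
    except ValueError:
        return False
    return True
```

Anything that fails this check gets routed to a Slack alert instead of the Delay step, so an expired delay is a visible event rather than a ghost.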

4. OpenAI Summaries Change Format Without Notice in Zapier

A couple of weeks back, I had a Zap that took Loom transcript data, ran it through OpenAI for a concise summary, and then posted that into a client Slack channel. It was fast, clean, and gave PMs just enough context without listening to 7-minute rambles. Then one morning, the summaries came back as nested arrays—instead of the plain bullet list format I’d been getting for weeks.

No code changes. No prompt edits. Literally overnight, the output format changed from:

- Intro to Q3 goals
- Client frustrations around scope creep
- Set new check-in cadence

To:

{
  "summary": ["Intro to Q3 goals", "Client frustrations", "Set new check-in cadence"]
}

Blew up the Slack step. Previous formatter steps expected text. Slack couldn’t send the object. The failure log in Zapier just said “Problem with data sent to Slack.” That’s it.

Turns out, sometimes OpenAI changes how JSON responses resolve depending on usage volume or model version—even in system prompt-isolated environments inside Zapier. It’s not documented, but I hit it twice. Switched the summary extraction over to a regex fallback inside a Python code step to flatten arrays to strings. Stupid fix, but it hasn’t broken again yet.
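The flattening step is roughly this: accept whatever shape comes back, and always hand Slack a plain bullet list. A simplified sketch of that fallback (the real one has a few more regex scrubs for stray brackets):

```python
import json

def flatten_summary(raw):
    """Coerce an OpenAI response into the plain bullet list the Slack
    step expects, whether it arrives as text or as {"summary": [...]}."""
    raw = raw.strip()
    if raw.startswith("{") or raw.startswith("["):
        try:
            data = json.loads(raw)
            if isinstance(data, dict):
                data = data.get("summary", [])
            if isinstance(data, list):
                return "\n".join(f"- {item}" for item in data)
        except json.JSONDecodeError:
            pass  # looked like JSON but wasn't; treat it as text
    return raw  # already plain text, pass it through untouched
```

Because the text path is a no-op, the step is harmless on the weeks when OpenAI behaves.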

5. PDF Parsing in Make Drops Tables When Too Many Breaks Exist

I had a referral partner send a batch of signed contracts in PDF—10 pages each, all exported from DocuSign. I needed to auto-extract the contact name, company, and signature date, and push them into Notion. Simple enough, except Make kept outputting blank results for about half of them.

The issue: Make’s native PDF module parses text from structured sections, but breaks down if the PDF has too many horizontal rule breaks (-----) or inconsistent spacing from e-signed tags. It half-parses tables, then drops to “undefined” values for everything after the first or second table row.

I found that if the PDF’s footers and headers don’t match across pages, the module thinks it’s a separate document per page. No warning, no flag. It just returns partial data for page one and skips the rest.

Only fix: run a preprocessing layer using Make’s custom HTTP module to pass the PDF through a third-party service, then parse the response manually. I’m now using PDF.co’s API to normalize all pages before Make touches anything. Output’s reliable. The price is annoying—it’s around five bucks per 100 pages—but better than debugging null values on signature dates again.
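Before I moved everything to the preprocessing layer, the cheap interim guard was detecting the half-parse signature (real values for the first row or two, then undefined for everything after) and routing those files to a manual queue. A sketch of that check, with row shape and field names as illustrative assumptions rather than Make’s exact output:

```python
def parse_is_partial(rows, required=("name", "company", "signed_date")):
    """Flag a half-parsed contract: the failure mode is real values for
    the first table row or two, then None/'undefined' for the rest.
    Row dicts and field names here are illustrative assumptions."""
    if not rows:
        return True  # blank output counts as a failed parse
    for row in rows:
        for field in required:
            if row.get(field) in (None, "", "undefined"):
                return True
    return False
```

It doesn’t fix anything, but it turns “null signature date discovered in Notion two weeks later” into a same-day flag.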

6. Airtable Lookup Fields Fail Filters in External API Calls

One agency client had this client intake sheet in Airtable with vendor and project fields linked across multiple tables. I built a Zap to auto-kickoff a project launch in ClickUp whenever a new vendor signed up and their docs were marked as complete. Airtable was sending lookup fields — like “Vendor Primary Contact Email” — pulled from a separate table.

Here’s where things got stupid: Zapier reads those lookup values as arrays—even when it’s just one value. So filtering on, say, email contains @ fails, because it sees ["jane@vendor.com"] instead of jane@vendor.com.

The behavior isn’t documented clearly, and worse, it breaks differently depending on where in the Zap you filter. I had one filter step pass, and the next fail, purely because I duplicated the filter instead of cloning it. Turns out Zapier interprets array values lazily unless explicitly flattened.

The fix: use Formatter → Text → Convert to text before applying any filter logic on Airtable linked records. Otherwise your filter step outputs false-positives or just fails silently.

Also worth noting: when pushing data into third-party apps like Dubsado, this leads to hidden fields populating with bracket notations like [Jane Doe] unless string-flattened first. Not obvious until you hunt down why “Client Name” appears with square brackets in the client-facing proposal.
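The flattening itself is trivial once you know it’s needed; the same logic as the Formatter step, sketched as code:

```python
def flatten_lookup(value, sep=", "):
    """Flatten an Airtable lookup/linked-record value before filtering.
    Zapier hands these over as one-element lists, so a 'contains @' check
    runs against the bracketed repr instead of the string itself."""
    if isinstance(value, list):
        return sep.join(str(v) for v in value)
    return str(value)  # already a plain value, normalize to string
```

Run every lookup field through this (or its Formatter equivalent) before any filter or any push to Dubsado, and the bracketed “[Jane Doe]” values disappear.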

7. Gmail Send As Alias Randomly Reverts to Primary Address

I set up a Zap to send onboarding emails from a freelancer alias like support@clientbrand.com for white-labeled onboarding flows. Used Zapier’s Gmail “Send Email” step. For two weeks, it worked. Then suddenly, clients started replying to my main email address. Gmail had reverted the sender to “me@myagency.com” mid-flow. No errors in Zapier. No alerts.

The Gmail API lets you specify a sendAsEmail field. If your alias isn’t _actively verified_ in the sending account, Gmail ignores it. No error is thrown via Zapier. Mine had expired due to SMTP password rotation on the alias, but since aliases don’t notify you when disconnected, the system silently fell back.

Got confirmation from someone at Google eventually that aliases can “decay quietly” in backend config if the verification fails silently. So now I run a weekly function via Google Apps Script to ping all aliases and confirm MX validity—if any fail, I get an alert in Slack. Tedious, but after three clients replied to the wrong domain in two days, I gave in and added the check.
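My actual weekly check is a Google Apps Script, but the core of it is just walking the Gmail API’s sendAs list and flagging anything not verified. The same logic in Python, operating on the response of the users.settings.sendAs.list endpoint (field names per the Gmail API docs; the alert wiring is left out):

```python
def unverified_aliases(send_as_response):
    """Given a users.settings.sendAs.list response, return the alias
    addresses whose verification has lapsed -- the ones Gmail will
    silently fall back from."""
    stale = []
    for alias in send_as_response.get("sendAs", []):
        if alias.get("isPrimary"):
            continue  # the primary address needs no verification
        if alias.get("verificationStatus") != "accepted":
            stale.append(alias.get("sendAsEmail"))
    return stale
```

Anything in the returned list goes straight to the Slack alert, which is how I now hear about a decayed alias before a client does.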