How a Single Tool Broke My Newsletter Workflow Twice

1. Syncing subscriber data between Airtable and MailerLite is unreliable

It should be simple: your subscriber lives in Airtable, you update a tag, that info lands in MailerLite, and the right newsletter goes out. Except half the time, the tag doesn’t push. Or it does, but overwrites a custom field instead. I’ve rebuilt this sync at least four times in Make and once in Zapier. It still ghosts certain updates—usually after I’ve batch-edited a bunch of rows in Airtable.

The biggest facepalm happened when I updated about 120 rows to add a “Beta Waitlist” tag. Everything looked good in Airtable. The scenario ran. MailerLite showed… 47 people tagged. Nothing in Make’s logs explained it. Turns out Airtable throttles batches so hard that anything past roughly 60 updates in quick succession just… doesn’t trigger Make’s webhook correctly.

Undocumented edge case: Airtable views with filters and sorts flip out under load. Try updating through a view with a “not blank” filter → webhook skips rows because Airtable recalculates the view mid-execution.

Temporary workaround:

  • Duplicate the base for bulk tag changes
  • Remove all filters
  • Run the bulk update via script, not the UI (see the sketch below)
  • Manually trigger re-sync jobs afterward

Yes, this breaks the dream of a fully automated flow. But if you’ve got more than 50 records syncing at once, it’s either this or half your audience doesn’t get the sequence.
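
For what it’s worth, here’s a minimal sketch of the scripted bulk update, assuming Airtable’s standard REST API limits (at most 10 records per PATCH, roughly 5 requests per second per base). The “Subscribers” table, the “Tags” field, and the IDs are all placeholders.

import time
import requests

# Sketch: bulk-tag Airtable records in small, throttled batches instead of
# editing through a filtered view. Table and field names are placeholders.
API_TOKEN = "patXXXXXXXX"   # personal access token (fake)
BASE_ID = "appXXXXXXXX"     # base ID (fake)
TABLE = "Subscribers"       # placeholder table name
URL = f"https://api.airtable.com/v0/{BASE_ID}/{TABLE}"
HEADERS = {"Authorization": f"Bearer {API_TOKEN}"}

def add_tag(record_ids, tag="Beta Waitlist"):
    # Airtable accepts at most 10 records per update request; chunk and pause
    # to stay well under the ~5 requests/second per-base limit.
    for i in range(0, len(record_ids), 10):
        chunk = record_ids[i:i + 10]
        payload = {"records": [{"id": rid, "fields": {"Tags": tag}} for rid in chunk]}
        resp = requests.patch(URL, json=payload, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        time.sleep(0.3)
    # Note: this overwrites the Tags field. If it's a multi-select, read the
    # existing values first and merge before patching.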

2. Formatting weirdness ruins custom fields in MailerLite campaigns

You build a snazzy welcome email: “Hey {{name}}, here’s your access code: {{code}}.” Looks great in the preview. Hit send, and suddenly you’ve got emails going out with “Hey , here’s your access code: undefined.” Fun.

The actual issue? MailerLite rejects inserted variables if there’s a trailing space in the field label. Not the value—the label. So a label of “code ” (with a trailing space) fails silently, even though the field is visible in their UI and clickable in the designer.

I only caught this after wasting a solid chunk of time sending test emails to myself. Opened the email source and saw this:

X-MailerLite-Merge-Error: Field 'code ' not found

Had to open DevTools and inspect the variable picker dropdown in MailerLite. The labels with spaces actually show up in the DOM, but fail during injection. It’s not documented anywhere obvious.

Aha moment: Rebuilding the custom fields in MailerLite, this time typing the keys manually (no paste), fixed it. You will never ever notice this until it bites you mid-campaign.
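
If you want to catch it before it bites, one option is to audit the field list over the API. A minimal sketch, assuming the newer connect.mailerlite.com API exposes custom fields at /api/fields (adjust the URL if your account is still on the classic v2 API):

import requests

# Sketch: list MailerLite custom fields and flag any label or key that has
# leading/trailing whitespace. Token is a placeholder.
API_TOKEN = "YOUR_MAILERLITE_TOKEN"
resp = requests.get(
    "https://connect.mailerlite.com/api/fields",
    headers={"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()

for field in resp.json().get("data", []):
    for attr in ("name", "key"):
        value = field.get(attr) or ""
        if value != value.strip():
            print(f"Suspicious {attr}: {value!r} (field id {field.get('id')})")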

3. Zapier’s send email step sometimes fires twice without visible logs

I was using Gmail → Zapier → Gmail again to simulate a delayed follow-up when someone signed up. Classic play: when a new subscriber hits MailerLite, send an intro message, wait three days, send the follow-up.

But every few weeks, the follow-up would go out twice. Not to everyone. Not on specific dates. Just… randomly. And Zap history showed only one run. Spent 20 minutes thinking I’d triggered it via test mode. I hadn’t.

Eventually figured out:

  • If your Gmail account is logged into multiple browsers/devices
  • And there’s an alias involved in the To or From field
  • AND auto-replies are enabled on the inbox (short vacations count!)

Zapier’s Gmail module sometimes retries the send, but the re-send is ghosted in the logs. You won’t see two tasks billed, but your user gets double the emails.

Also, if the Gmail step is nested inside a path or filter-branch, that duplication only happens if one exact field—usually subject—is duplicated by the alias auto-reply thread. That’s nightmare logic, but it holds.

Zapier support couldn’t replicate because their demo boxes didn’t use Gmail aliases. It took me exporting raw inbox logs to even illustrate the bug.
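
Since I couldn’t make Zapier surface the retry, my fallback is an idempotency guard outside Zapier entirely: route the follow-up through a tiny webhook of your own and refuse to send the same recipient/subject/day combination twice. A rough sketch (Flask plus SQLite here, the actual send left as a stub, names hypothetical):

import sqlite3
from datetime import date
from flask import Flask, request, jsonify

# Sketch of a send-once guard: Zapier posts the follow-up here instead of
# using the native Gmail step, and duplicates get dropped on a unique key.
app = Flask(__name__)
db = sqlite3.connect("sent_keys.db", check_same_thread=False)
db.execute("CREATE TABLE IF NOT EXISTS sent (key TEXT PRIMARY KEY)")

@app.route("/send-followup", methods=["POST"])
def send_followup():
    data = request.get_json(force=True)
    key = f"{data['to']}|{data['subject']}|{date.today().isoformat()}"
    try:
        db.execute("INSERT INTO sent (key) VALUES (?)", (key,))
        db.commit()
    except sqlite3.IntegrityError:
        return jsonify({"status": "duplicate, skipped"}), 200
    # send_via_smtp_or_gmail_api(data)  # hypothetical: your actual send call
    return jsonify({"status": "sent"}), 200

if __name__ == "__main__":
    app.run(port=5000)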

4. Google Sheets breaks Mailchimp sync when dates are re-parsed

I don’t use Mailchimp often anymore, but sometimes clients do. Their Sheets-based subscriber capture workflow was pushing data to Mailchimp via Zapier. Randomly, some users didn’t make it in. I figured it was a field mismatch.

Nope. Mailchimp was rejecting entries based on date format. Sheets users had typed “3/5/24” in US mode. Zapier read that as “05 Mar 2024” → all fine. THEN, a second internal Zap re-parsed the same rows—probably triggered by a formula-generated column—and converted all short dates into ISO strings. Zapier tried to send that again, but Mailchimp rejected the subscriber as a duplicate.
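
One way to defuse this is to normalize the date once, at the edge, so a second parse can’t change it. A small sketch, assuming US-style short dates like “3/5/24” coming out of the sheet:

from datetime import datetime

# Sketch: coerce whatever lands in the sheet into ISO 8601 before it syncs
# anywhere, so re-parsing the same row later is a no-op.
def to_iso(raw: str) -> str:
    for fmt in ("%m/%d/%y", "%m/%d/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date: {raw!r}")

print(to_iso("3/5/24"))      # 2024-03-05
print(to_iso("2024-03-05"))  # already ISO, unchanged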

Only clue: Mailchimp’s debug tools show the error “Address already added” but don’t specify that the other fields were different. So Zapier thinks it worked, even though nothing got updated.

Most invisible failure: Zap history says “Task successful.” Mailchimp logs say “Duplicate subscriber, ignored update.” No error in Zapier.

How I finally caught it: I logged the raw payload into a Notion database, timestamped every field, and compared by email ID. You will catch this kind of issue only if you manually log and inspect payloads across time.
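
The Notion database was just where I happened to put the log; anything append-only with timestamps works. A minimal sketch of the same idea with a local JSONL file and a diff of the two most recent payloads per email address (file name and field names are placeholders):

import json
from datetime import datetime, timezone

LOG = "payload_log.jsonl"  # placeholder path

def log_payload(payload: dict) -> None:
    # Append every outgoing payload with a UTC timestamp.
    entry = {"ts": datetime.now(timezone.utc).isoformat(), "payload": payload}
    with open(LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def diff_latest(email: str) -> dict:
    # Compare the two most recent payloads for one subscriber and return
    # only the fields that changed between them.
    entries = [json.loads(line) for line in open(LOG)]
    mine = [e["payload"] for e in entries if e["payload"].get("email") == email]
    if len(mine) < 2:
        return {}
    old, new = mine[-2], mine[-1]
    return {k: (old.get(k), new.get(k))
            for k in set(old) | set(new) if old.get(k) != new.get(k)}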

5. Notion AI fails when used to draft newsletter intros too early

Totally thought I was clever. I’d brainstorm headlines for the weekly newsletter directly into Notion, then type “/ask AI to write preview text.” Worked like 7 times in a row. Great little intro blurbs. Then one morning it spit out a paragraph about product launches… when my note was about PDFs vs HTML email design.

This turns out to be timing-based:

  • If you use Notion AI on an empty block immediately after editing a heading
  • It sometimes picks up a cached suggestion based on earlier content in that database page
  • The autocomplete system prioritizes whatever heading was most recently focused

The fix: Click anywhere outside the block, wait a second, then click back into the block and hit “Ask AI.” Only then does it re-crawl the top 1000 characters before generating text. Won’t find this anywhere in their docs.

I wish I was making that up. I burned three scheduled campaigns assuming the AI had re-read the paragraph I typed five seconds earlier. Instead, it hallucinated based on another page I’d edited that week in the same workspace.

6. Segmenting by click history fails silently in most email tools

I get the intent. Set up a trigger like “If the subscriber didn’t click the intro offer, send follow-up B two days later.” You’d think this would work. But A/B tests, email forwards, plain-text versions that never register renders… all of that torpedoes click chains.

The worst part: forwarded links register as clicks, but not under the original subscriber. That throws off all path-based filtering. So your follow-up to “non-clickers” goes out to the most engaged users instead. Had that happen in Mailchimp, MailerLite, ConvertKit. Doesn’t matter.

Edge case behavior to watch:
If your CTA link redirects via a third-party (like Bitly or share-wise campaign URLs), users who copy-paste it into new windows are flagged by browser fingerprint, not subscriber ID. No email client passes the ID through cleanly in those.

I had one campaign trigger 240 follow-up “reminders” to people who eagerly clicked from forwarded emails. Not one of them was the original recipient. Their friend just forwarded the thing.

If your logic depends on “did not click,” you must assume both false positives and false negatives. Only way to reduce it slightly: limit your segmentation windows to 24 hours and use user-set tags (not inferred open or click behavior) whenever possible.
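
If you do still need a click signal, one alternative (a different technique from anything above, just a sketch) is to own the redirect yourself: embed the subscriber ID in the CTA URL, record the click first-party, and tag from your own record. A forward then still counts as the original subscriber clicking, but at least the failure points in one known direction instead of being randomly distributed. Flask, the ?sid= parameter, and the ESP tagging call here are all hypothetical:

from flask import Flask, request, redirect

# Sketch: a first-party redirect that attributes the click to the subscriber
# ID baked into the link, not to whichever browser happened to open it.
app = Flask(__name__)
DESTINATION = "https://example.com/offer"  # placeholder CTA target
clicked = set()                            # swap for a real datastore

@app.route("/go")
def go():
    sid = request.args.get("sid")  # subscriber ID embedded in the email link
    if sid:
        clicked.add(sid)
        # tag_subscriber(sid, "clicked-intro-offer")  # hypothetical ESP API call
    return redirect(DESTINATION, code=302)

if __name__ == "__main__":
    app.run(port=5001)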

7. OpenAI’s classification accuracy drops on pre-trimmed inputs

If you’re using GPT to classify incoming newsletter replies—like “interested,” “not interested,” or “out of office”—beware anything under about 400 characters. Especially if you’re trimming quoted signatures and thread history. Without context, GPT guesses weirdly.

Had a response that just said “Thanks I’ll look into it.” Classifier tagged that as “out of office.” Another said “No thanks, not this month.” It tagged “interested.”

The bizarre part: when I re-ran those inputs with the original message thread left intact (so both subject line and first email included), it classified them nearly perfectly.

Why this matters:

The tokenizer prioritizes the message body, but classification accuracy improves when the prompt starts with a signal-rich subject line. I added this preamble and got way better results:

{
  "subject": "RE: free onboarding audit request",
  "reply": "Thanks I’ll look into it"
}

This combo restored intent detection to better than 90% in my test cases. GPT-4 needs domain context to tease out tone. Stacking the subject line in with the reply changes everything.
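
For concreteness, here’s how that subject-plus-reply payload can be wired up with the OpenAI Python SDK. The model name, labels, and prompt wording are illustrative, not gospel:

import json
from openai import OpenAI

# Sketch: classify a reply with the subject line included for context.
# Model and label names are placeholders; adjust to your own taxonomy.
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_reply(subject: str, reply: str) -> str:
    payload = json.dumps({"subject": subject, "reply": reply})
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the newsletter reply as exactly one of: "
                        "interested, not_interested, out_of_office. "
                        "Use the subject line for context. Answer with the label only."},
            {"role": "user", "content": payload},
        ],
    )
    return resp.choices[0].message.content.strip()

print(classify_reply("RE: free onboarding audit request", "Thanks I’ll look into it"))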