Nine Make.com Features Beginners Notice Too Late

1. Scenario triggers behave differently depending on data structure

The first time I built a Make.com scenario to catch new rows from a Google Sheet, I stared at the execution history and saw… nothing. No bubble. No data. I’d followed the setup wizard exactly — standard watch rows module with a timestamp column. Turns out, the module quietly skips rows where the timestamp cell exists but is blank. Not null — actually blank. And once I added whitespace trimming in Sheets, it suddenly worked.

Triggers that rely on timestamps or updated values are weirdly sensitive. Make doesn’t throw a meaningful error. The scenario just doesn’t run, and unless you drop in a log module after every trigger, you’re left guessing — or rewriting the Sheet entirely. This shows up with Notion too: if the watched database property isn’t updated, no trigger fires, even when another field was changed by a bot.

Make.com’s triggers are usually polling-based, so “watch” means it compares the current state to the last snapshot. But that snapshot behavior is sticky. If an initial run skips a row due to formatting, that row stays invisible on every later run. You can force a reindex by copying the whole sheet, yes. But that’s not something you’ll find in their onboarding examples.
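
To make that snapshot behavior concrete, here’s a rough mental model in Python. This is not Make’s actual implementation, just a sketch of how a polling trigger that compares timestamps against a stored marker ends up skipping blank-but-present cells forever (the “timestamp” field name is only for illustration):

```python
from datetime import datetime

last_seen = datetime.min  # the trigger's stored snapshot marker

def poll(rows: list[dict]) -> list[dict]:
    """Sketch of a polling 'watch rows' trigger: emit rows newer than the marker."""
    global last_seen
    new_rows = []
    for row in rows:
        raw = (row.get("timestamp") or "").strip()
        if not raw:
            continue  # blank-but-present cell: skipped now, and on every later poll
        ts = datetime.fromisoformat(raw)
        if ts > last_seen:
            new_rows.append(row)
            last_seen = max(last_seen, ts)
    return new_rows

rows = [
    {"timestamp": "2024-03-01T09:00:00", "name": "picked up"},
    {"timestamp": "  ", "name": "never picked up"},  # whitespace only, never fires
]
print([r["name"] for r in poll(rows)])  # ['picked up']
```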

2. Webhooks fire more than once if you leave the listener open

I was debugging a Shopify checkout webhook. It hit my Make scenario twice, back-to-back, with identical payloads — even though the order ID was the same. I thought Shopify was the culprit. Nope: it was me. I’d left the Make scenario in listening mode during testing before hitting “Run once”.

Make’s webhook module autogenerates two URLs: the main endpoint, and an internal listen URL. If you hit both — or if apps retry due to long response times — you’ll get duplicate executions. Even worse if you’re using path-style webhooks for routing. Turns out, the system treats every request as new unless you explicitly dedupe using JSON parse + ID filtering inside the flow.

No warnings, no logs saying “same payload received”. You just see stacked bundles. It wasn’t until I added a Note module to print the incoming body + timestamp that I realized the second one hit three seconds later, probably due to Shopify’s retry logic. Deduping isn’t native. You have to build it yourself every single time.
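
Since deduping isn’t native, you end up rebuilding the same pattern everywhere: parse the payload, pull out a stable ID, and drop anything you’ve already seen. A minimal Python sketch of that logic, assuming a hypothetical “order_id” field and in-memory storage (a real flow would persist the seen IDs somewhere durable, such as a data store or database):

```python
import json

# IDs of payloads already processed; in a real flow this would live in
# persistent storage, not in memory.
seen_order_ids = set()

def handle_webhook(raw_body: str) -> bool:
    """Return True if the payload was processed, False if it was a duplicate."""
    payload = json.loads(raw_body)
    order_id = payload.get("order_id")  # hypothetical field name
    if order_id in seen_order_ids:
        return False                    # retry or double-fire: ignore it
    seen_order_ids.add(order_id)
    # ... downstream processing goes here ...
    return True

# Two identical deliveries a few seconds apart: only the first one runs.
body = '{"order_id": "1001", "total": "49.00"}'
print(handle_webhook(body))  # True
print(handle_webhook(body))  # False
```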

3. Router conditions fail silently unless debug mode reveals the branch

Nested routers are powerful in Make, until they’re not. I had a flow where new Airtable records were routed into three branches: one for priority-flagged rows, one for status=stuck, and one catch-all. But for two days, nothing went to the stuck path. Everything just got picked up by the fallback route.

I triple-checked the conditions. The logic looked right. Turns out: the branch condition failed because the incoming data field was lowercase but the condition expected “Stuck” with a capital. No partials, no type coercion, no logs about failed evaluations. Just total silence. I only spotted it by switching the router to debug mode and comparing bundle values across branches. That finally exposed the field mismatch.

Router branches use strict comparisons. Strings must match exactly, and if you use content from optional modules, missing fields evaluate as null — which also fails silently. This makes router debugging nearly impossible without active monitoring. Your only clue is when a route you expect to run doesn’t — and even then, it might be hidden behind a condition that simply returns false without comment.
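
The fix itself isn’t Make-specific: normalize before you compare. Here’s a small Python sketch contrasting the strict comparison a router branch effectively performs with the trimmed, lowercased comparison the condition should express (the “status” field is just an illustrative name):

```python
def route_strict(status: str | None) -> str:
    # What the router effectively does: exact, case-sensitive match; None fails too.
    if status == "Stuck":
        return "stuck branch"
    return "fallback branch"

def route_normalized(status: str | None) -> str:
    # What the condition should express: trim and lowercase before comparing.
    normalized = (status or "").strip().lower()
    if normalized == "stuck":
        return "stuck branch"
    return "fallback branch"

print(route_strict("stuck"))        # fallback branch (case mismatch)
print(route_strict(None))           # fallback branch (missing field)
print(route_normalized(" stuck "))  # stuck branch
```

Inside Make, the same idea usually comes down to wrapping the mapped value in the built-in lower() and trim() text functions in the router’s condition.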

4. Scheduled runs sometimes skip executions if modules return no data

I hit this in an email parsing scenario. My flow downloaded Gmail attachments daily, then tried to extract PDFs with OCR. Some days, it worked. Other days, the whole thing seemed to skip. I assumed no new emails — but when I checked Gmail directly, there were literally three new threads.

Every time the Search Messages module returned zero results, Make treated the entire execution as successful and short-circuited downstream modules. Meaning: no failure, no red bubble, no logs beyond a timestamp saying “finished in 110ms”. That’s deadly for scheduled automations. If your first module returns nothing, no hint gets passed to the rest of the flow.

There’s a workaround: add a filter after the first module that checks whether any items exist. You can then pass dummy data into a fallback log path when the search comes back empty. Feels clunky, but it’s necessary if you’re building anything that depends on time-based triggers rather than push notifications or webhooks.
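
Outside of Make’s visual filter, the pattern is simply “branch on empty results and log something explicit instead of finishing silently.” A small Python sketch of that guard, with search_messages standing in as a hypothetical version of the Gmail search step:

```python
from datetime import datetime, timezone

def search_messages() -> list[dict]:
    """Stand-in for the Gmail search step; returns zero or more messages."""
    return []  # simulate a day with no matching emails

def process(message: dict) -> None:
    print("processing", message.get("id"))  # OCR / parsing steps would go here

def run_daily_job() -> None:
    messages = search_messages()
    if not messages:
        # Fallback path: emit an explicit record instead of silently finishing.
        print(f"[{datetime.now(timezone.utc).isoformat()}] search returned 0 results")
        return
    for message in messages:
        process(message)

run_daily_job()
```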

5. Scenario history deletes logs when a module is changed after testing

There’s a weird behavior most beginners won’t see until they’re three hours deep. If you test a scenario and then go back to tweak a module (like changing a filter or replacing a field reference), the execution history tries to reload the old run — but all the bundle fields are empty. It looks like the whole thing vanished.

Make doesn’t store hard copies of previous bundles. Instead, it links each execution to the live module specs at the time you reload it. So if you rename a field in the source module (e.g. changing an Airtable field from “Status” to “Phase”), previous executions break retroactively. You can still see timestamps and step status, but all payload previews show “[Field not found]” or just whitespace. There are no restore points.

“Bundle content is not available” — actual message when opening test logs after modifying the scenario.

This bites you hard when you’re debugging conditional logic or looking up why a router path got picked. Always screenshot or export bundle JSON if you think you’ll edit the scenario. Otherwise you’re reverse-engineering your own intentions from half-deleted test logs.
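
If you do copy bundle JSON out of the history panel, anything that timestamps it and writes it somewhere Make can’t touch will do. A throwaway Python sketch of that habit (the folder name and label are arbitrary):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_bundle(bundle_json: str, label: str) -> Path:
    """Save a copied bundle to a timestamped file so later edits can't erase it."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    path = Path("bundle_archive") / f"{label}_{stamp}.json"
    path.parent.mkdir(exist_ok=True)
    path.write_text(json.dumps(json.loads(bundle_json), indent=2))
    return path

# Paste the bundle preview from the execution history and keep a copy on disk.
print(archive_bundle('{"Status": "Stuck", "Priority": true}', "airtable_router_test"))
```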

6. OAuth 2 connections silently expire if the browser tab is left inactive too long

I lost over an hour rebuilding a Notion integration because I clicked “connect account” in Make.com, got distracted by 12 other tabs, came back and saw it fail with a vague browser redirect error. It turns out: if you don’t finish the OAuth handshake in something like five minutes, Make silently dumps the session, but still lets you click through the auth modal.

The result is a blank window, or sometimes a redirect loop that dumps you back into the module settings with an empty connection name. No error; the connection dropdown just stays empty. The only fix is refreshing the entire scenario builder and restarting the flow.

This time decay sounds small, but it wrecks builds where the SaaS tool requires clicking multiple confirmation pages (e.g. Microsoft Graph stuff). Make’s UI doesn’t alert you that the token was dropped between steps. The OAuth state has expired, but the platform doesn’t clarify that — it just fails silently.

7. Inline JSON parsing requires specific formatting or entire flow breaks

I had an HTTP webhook feeding custom JSON directly into a scenario. It looked perfect. Field names lined up, structure validated in Postman, and Make’s data preview even showed clean syntax. But once I tried to reference nested fields using dot notation (like {{1.body.user.email}}), the parser kept returning blanks.

What solved it? Not changing the flow — changing the headers. Make expects application/json *and* a newline-free payload. If your custom webhook sends indented JSON with carriage returns (often the default in some email APIs), Make splits the payload into partial strings and can’t resolve subfields. This shows up nowhere — unless you catch it in the raw mode editor.

Tips for JSON field referencing that avoid execution failures (a minimal sender sketch follows this list):

  • Always set Content-Type to application/json in the sender
  • Double-check payload uses UTF-8 with no BOM markers (some Python libs insert them)
  • Avoid tabs or multi-line formatting in your POST body
  • Test field resolution in the raw editor to catch whitespace issues early
  • Use the built-in JSON module to flatten nonstandard arrays if you’re unsure
  • Don’t assume payloads from Zapier-compatible apps work out of the box in Make
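
For reference, here’s what a clean sender looks like in Python with the requests library. The webhook URL is a placeholder for your scenario’s address; the point is the explicit application/json header and a compact, single-line UTF-8 body:

```python
import json
import requests

# Placeholder: replace with your scenario's custom webhook URL (region varies).
WEBHOOK_URL = "https://hook.eu1.make.com/your-webhook-id"

payload = {"user": {"email": "jane@example.com", "plan": "pro"}}

# Compact separators keep the body on a single line with no tabs or newlines,
# and the explicit header tells the receiver to parse it as JSON, not raw text.
response = requests.post(
    WEBHOOK_URL,
    data=json.dumps(payload, separators=(",", ":")).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    timeout=10,
)
print(response.status_code, response.text)
```

requests’ json= parameter gets you most of the way there on its own; the spelled-out version above just makes it obvious which parts matter when the payload comes from a less cooperative sender.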

8. Aggregators auto-truncate strings without any character limit warning

Ever noticed that when you aggregate Airtable records and try to feed them into an email module, the output randomly cuts off partway through? Yeah, me too. There’s no UI warning. No tooltip that says “this will be trimmed.” The length limit isn’t documented — but it’s real.

When you use the Text Aggregator or Array Aggregator and then dump that content into a Notion block or HTML email, Make abruptly slashes the bundle at a little over 1,000 characters (sometimes closer to 800, depending on the Unicode characters involved). It doesn’t fail. It just slices and continues.

The only clue is when your Slack message or email goes out half-written and someone responds with “wait, is that it?” Debug logs show the full data set pre-aggregation, but once it’s flattened via the handlebars-style mapping, the output is silently cut.

The workaround is to chunk the content manually or use arrays with join logic inside the target platform if it supports it (e.g. Notion bulk block insertions instead of one large string block).
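
When the target platform can take multiple blocks or messages, chunking the aggregated text before it leaves the flow is straightforward. A minimal Python sketch, using 900 characters as an arbitrary safety margin under the cutoff described above:

```python
def chunk_text(text: str, limit: int = 900) -> list[str]:
    """Split text into pieces under `limit` characters, breaking on line
    boundaries so downstream modules never see an oversized string.
    (A single line longer than `limit` would still need a further split.)"""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

# Send each piece as its own Notion block or Slack message instead of one
# giant string that would be silently truncated.
long_report = "\n".join(f"Record {i}: aggregated row content" for i in range(200))
for piece in chunk_text(long_report):
    print(len(piece))  # every piece stays under the limit
```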

9. Duplicate scheduling defaults if you re-enable an inactive scenario

Every time I re-enable a scenario that had been off for a while, I forget this trap: the old scheduler still exists — but the UI lets you add another. That means you might have two schedules hitting the same scenario unless you check the Schedule tab manually.

What happens then? Scenarios trigger twice within the same minute. Modules like Google Drive fall apart because the two runs return conflicting files. One gets an auth ping. The other gets a timeout. And if you’re generating filenames dynamically, forget it — the second run overwrites the first without warning.

There’s no duplicate prevention on schedule triggers. Each one runs independently of the other. Worse, the scenario editing interface doesn’t surface those schedules until you open a dedicated scheduler panel. So most of the time, you won’t spot it until something runs twice and drops half your data.

I’ve resorted to naming schedules aggressively — e.g. “Daily Run 8AM – v2 – only active” — just to avoid duplicates after archive-and-reactivate sessions.