What Breaks When You Rely on Pricing to Choose Tools
1. Why free tiers break the moment you push real data
Airtable’s free tier worked great until the marketing team accidentally imported a CSV with 400 fields per row. It didn’t error—just silently skipped fields and collapsed lookup columns. The base loaded fine even though half the data was gone. It took us hours to realize the views weren’t showing filtered results because the linked records didn’t exist at all.
The issue wasn’t row limits—it was formula evaluation. Fair warning: Airtable’s free plan caps automation runs, but complex rollups and lookups also get throttled at scale, even though that isn’t documented anywhere. And if you add multiple views with filters pointing at dead references, they won’t break; they’ll just stop updating until someone notices something looks off.
Watch for silent throttling
Free tiers almost never fail visibly. They just stop doing things. In Airtable, for example:
- No warning if formulas silently stop updating when reference fields break
- Free plan autolinks fail on large linked record counts—no alert
- Zapier automations triggered off those views get silently skipped or misfire
- Airtable Interfaces don’t show all data if base exceeds some hidden CPU threshold
- Sync between workspaces may drop records without logging when budgets are exceeded
At one point we were syncing a base with fewer than 1,000 records, but one field used a regex formula that looped through linked records. The formulas just stopped evaluating. No alert. I noticed because the dashboard totals were stuck on the last known value for two full days.
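Airtable won’t tell you any of this, so the defense we landed on was an external staleness check. Here’s a rough sketch in Node 18+ (for the global `fetch`); the base ID, table, and the “Last Formula Update” last-modified field are placeholders for whatever you actually use:

```js
// Hypothetical staleness check: read one record's last-modified sentinel via
// the Airtable REST API and warn if it hasn't moved in 24 hours.
// Base ID, table, and field names below are placeholders.
const BASE_ID = "appXXXXXXXXXXXXXX";
const TABLE = "Dashboard";                // placeholder table
const SENTINEL = "Last Formula Update";   // placeholder "last modified time" field

async function checkStaleness() {
  const url =
    `https://api.airtable.com/v0/${BASE_ID}/${encodeURIComponent(TABLE)}` +
    `?maxRecords=1&fields%5B%5D=${encodeURIComponent(SENTINEL)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}` },
  });
  const { records } = await res.json();
  const last = new Date(records?.[0]?.fields?.[SENTINEL] ?? 0);
  const hoursStale = (Date.now() - last.getTime()) / 36e5;
  if (hoursStale > 24) {
    console.warn(`Formulas may have stopped updating: sentinel is ${hoursStale.toFixed(1)}h old`);
  }
}

checkStaleness();
```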
2. Starting with Notion for everything and hitting the database wall
I’d love to say Notion can handle knowledge management at scale. It can’t. I ran an editorial calendar, SOPs, tool documentation, and project timelines in it for three months. Everything looked nice. But by the time our main database had 20+ relational views, each filtering and aggregating different properties, it had slowed to a crawl.
The worst offender? Using relations + rollups over status tags. You can’t filter by a rollup field’s value without a formula workaround. And if you update the original reference, the rollup often takes 5–10 seconds to catch up. In a shared view, that delay means someone edits the wrong task because the rollup still shows outdated status.
Undocumented edge case: Notion rollup fields do NOT guarantee atomic updates across views. They’re asynchronous.
If you stack calculated fields based on other calculated fields, some formulas evaluate using pre-update data for a few moments. And if you’re using automations like Zapier off Notion triggers, that pre-state bleeds into the trigger execution.
It looks like this:
```json
{
  "trigger": {
    "rollup_field": "Waiting",
    "checkbox_field": false
  }
}
```
Which is valid—but if a user clicks a checkbox before the rollup refreshes, you get a false-positive automation trigger because the rollup still says “Waiting.”
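One way to defuse that race, sketched below with placeholder property names: before running the action, re-read the page from the Notion API (ideally after a short delay step) and only proceed if the live rollup value matches what the trigger claimed. Assumes Node 18+ and an integration token with access to the database:

```js
// Hypothetical guard: re-fetch the page that fired the trigger and check the
// live rollup value instead of trusting the trigger snapshot.
// The "Status" property name and the rollup shape below are placeholders.
async function statusHasSettled(pageId, expected) {
  const res = await fetch(`https://api.notion.com/v1/pages/${pageId}`, {
    headers: {
      Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
    },
  });
  const page = await res.json();
  // Rollup values live under properties.<name>.rollup; the exact shape depends
  // on the rollup type, so adjust this path for your own schema.
  const rollup = page.properties?.["Status"]?.rollup;
  const value = rollup?.array?.[0]?.select?.name ?? rollup?.number ?? null;
  return value === expected;
}

// Usage inside your handler: bail out unless the rollup really left "Waiting".
// if (!(await statusHasSettled(pageId, "Done"))) return;
```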
3. When pricing models punish the only thing orgs do well: sharing
Multiple teams at one org wanted to use Coda. Great idea in theory: relational docs, flexible formulas, even buttons and automations built-in. It works well—until you hit the pricing model. You pay per “doc maker.” Guess what everyone wants to be? A viewer who occasionally edits something small—but not enough to warrant being a “maker.”
We had people editing just tables—adding rows, updating tags—and triggering automations indirectly. Things broke because Coda quietly changed their trigger permissions. A user not recognized as a “doc maker” could edit a row, but the automation wouldn’t fire—they weren’t allowed to “trigger” it. Zero indication this happened. Just… nothing fired.
Real behavior:
- Row updated — visible in doc
- Automation depended on field match
- Automation didn’t run if updated by viewer-level user
- No error message
- No audit log
The workaround we used: Have a “proxy” button visible to viewers that triggers a script action as a system-level user. But that only works if you build out every interaction as a manual trigger flow instead of data-driven events. Kinda defeats the point of a reactive doc platform.
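The other escape hatch we sketched but never fully rolled out: move the reactive part outside the doc and poll the Coda API on a schedule with an API token, so it no longer matters who made the edit. A rough sketch, with the doc ID, table name, and timestamp handling as placeholders:

```js
// Hypothetical external poller (Node 18+): fetch rows changed since the last
// run and perform the side effect yourself, instead of relying on in-doc
// automations that skip edits made by viewer-level users.
const DOC_ID = "yourDocId";   // placeholder
const TABLE = "Tasks";        // placeholder table name

async function rowsUpdatedSince(sinceIso) {
  const url =
    `https://coda.io/apis/v1/docs/${DOC_ID}/tables/${encodeURIComponent(TABLE)}` +
    `/rows?useColumnNames=true`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.CODA_TOKEN}` },
  });
  const { items } = await res.json();
  // ISO timestamps in the same format compare correctly as strings.
  return (items ?? []).filter((row) => row.updatedAt > sinceIso);
}

// Usage: run on a schedule, persist the last timestamp you processed.
// const changed = await rowsUpdatedSince("2024-01-01T00:00:00Z");
// changed.forEach((row) => { /* fire your own webhook or notification */ });
```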
4. Accidentally testing your own automation quota in Zapier
Built what seemed like a simple task ingestion flow: add a row in Airtable, Zapier parses the notes with OpenAI via a webhook, then updates a field with the summary. It worked great for about two weeks. Then one day, someone bulk-added 12 tasks. Zapier hit OpenAI’s rate limit and silently began delaying API calls. All further tasks entered a retry loop without a clear failure state. We didn’t know until someone asked why their summaries were missing days later.
Quote from Zap history: “Task delayed due to rate limiting on external service.”
But the rate limiting wasn’t shown in real time inside Zapier. Even the task status looked fine until you dug deep into the Task History logs. You have to actually open each history record, scroll down, and check the Status Detail. It’ll say something like:
Task delayed — retrying in 900 seconds
That’s 15 minutes. Per retry. It eats your task count, time budget, and patience. If the API (say, OpenAI or Notion) keeps responding with 429s, the Zap doesn’t fail fully, it just cycles.
A real fix? Wrap calls in code steps that catch rate limit errors and skip or store the fail state explicitly—so you control whether retrying is worth it.
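Here’s roughly what that looks like in a Code by Zapier (JavaScript) step. The model name, input keys, and output shape are placeholders; the point is that a 429 comes back as data you can route with a Filter or Paths step instead of vanishing into a retry loop:

```js
// Hypothetical Code by Zapier step: call OpenAI directly, catch rate limits,
// and return an explicit status instead of letting the Zap retry blindly.
// inputData.prompt and inputData.openaiKey are mapped in the step's Input Data.
const res = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${inputData.openaiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // placeholder model
    messages: [{ role: "user", content: `Summarize: ${inputData.prompt}` }],
  }),
});

if (res.status === 429) {
  // Rate limited: surface it as data a Filter/Paths step can route on.
  output = { status: "rate_limited", summary: "" };
} else if (!res.ok) {
  output = { status: `error_${res.status}`, summary: "" };
} else {
  const data = await res.json();
  output = { status: "ok", summary: data.choices[0].message.content };
}
```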
5. Miro’s team plan has zero governance for public links
Miro is deceptively open. You can set sharing to “Anyone with the link can view/edit,” which is fine—until you realize public links from private boards don’t appear in any admin panel. A team member, probably while screen-sharing, once copied a public edit link into Slack. That board had internal hiring roadmap sketches. It was quietly editable by anyone with the link for a good three weeks before someone randomly stumbled on it.
We checked the admin UI—it showed zero public boards. Apparently, only if the board was created in a team folder does it inherit org-level permissions. But if a user creates a personal board and invites others using team emails, it gets indexed in their Miro team but not governed by the org’s admin settings. That loophole meant our offboarding process missed a few rogue boards entirely.
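The only way we caught the rest was an API sweep. A rough sketch against the Miro REST API v2 follows; pagination is omitted, and the `policy.sharingPolicy` field path is an assumption you should verify against your own responses before trusting the results:

```js
// Hypothetical audit (Node 18+): list boards the token can see and flag any
// whose link-sharing policy is broader than "private". Pagination is omitted,
// and the policy.sharingPolicy path should be checked against real responses.
async function findOpenBoards() {
  const res = await fetch("https://api.miro.com/v2/boards?limit=50", {
    headers: { Authorization: `Bearer ${process.env.MIRO_TOKEN}` },
  });
  const page = await res.json();
  return (page.data ?? [])
    .map((b) => ({
      name: b.name,
      access: b.policy?.sharingPolicy?.access, // e.g. "private", "view", "edit"
      link: b.viewLink,
    }))
    .filter((b) => b.access && b.access !== "private");
}

findOpenBoards().then((boards) => console.table(boards));
```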
Control fails at the in-board share button
The in-board share button offers options that bypass org-wide link restrictions. You have to go to the Admin Console to enforce settings—but individual boards still offer looser options. Behavior mismatch. People assume the org-level toggle applies globally. It doesn’t.
Once a link gets out, changing it doesn’t revoke the old one unless the entire board is duplicated under a new ID. Regenerating the link issues a new URL, but the old one isn’t cut off right away. We tested this by opening the old link in an incognito tab—it was still editable half an hour after the regen.
6. When clicking a field in Obsidian causes entire workflows to hang
It sounds fake, but a single checkbox-style field in Obsidian’s Dataview plugin locked up our entire automation chain. We had a folder of YAML-tagged markdown notes synced with GitHub, feeding into a Zapier flow. A member accidentally tagged an item with a duplicate field key in frontmatter—something like:
```yaml
---
tags: [project, urgent]
tags: [duplicate]
---
```
Obsidian rendered both, but YAML parsing on commit broke. GitHub showed the markdown file just fine, and even previewed it okay. But Zapier was using a webhook to pull file changes and parse them via a Code by Zapier step using JS + YAML. The duplicate field caused the YAML parser to return invalid structured objects—and all downstream parsing failed. Not visibly. It just returned `undefined` values for fields we expected, and followed the Zap’s default condition path.
The worst part? Obsidian never complained. You had to notice that the note looked slightly off in preview. There’s no error for reusing a frontmatter key. Even Dataview ran both as separate entries, e.g. `tags:: project, urgent` and `tags:: duplicate`, which left us chasing the bug and wondering why filtering by tag didn’t find anything.
Aha moment: the YAML spec technically requires unique mapping keys, but enforcement is left to the parser. Most keep only the last value; `js-yaml` throws a “duplicated mapping key” error by default and only does silent last-wins if you load with `json: true`.
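Knowing that, the fix on our side was to parse frontmatter defensively and surface parse failures as explicit data instead of letting `undefined` leak downstream. A minimal sketch, assuming an environment where `js-yaml` is installable (Code by Zapier can’t pull in npm packages, so this would live in a small webhook service of your own):

```js
// Defensive frontmatter parse: make duplicate keys fail loudly instead of
// silently producing undefined fields downstream. Assumes js-yaml is installed.
const yaml = require("js-yaml");

function parseFrontmatter(markdown) {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return { ok: false, reason: "no frontmatter block" };
  try {
    // With default options, js-yaml throws on duplicated mapping keys,
    // which is exactly the failure we want surfaced.
    return { ok: true, data: yaml.load(match[1]) };
  } catch (err) {
    return { ok: false, reason: err.message }; // e.g. "duplicated mapping key"
  }
}

// Usage: route on result.ok instead of letting undefined fields flow through.
// const result = parseFrontmatter(fileContents);
```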
7. Export formats from paid tools almost always break on import
In theory, switching between knowledge tools should be as easy as export/import. But exporting from tools like Slite, Nuclino, Confluence, even Notion gets weird fast. One real case: we dragged our entire research doc stack into Notion using their import tool, feeding it a series of markdown files exported from Slite. Nearly every embedded image came over as a broken link. Not because Slite broke them—because the export wrapped them in internal CDN URLs that expire a few days after export.
Also, Notion’s importer failed to recognize frontmatter blocks at the top of markdown files—not even wrapping them in code blocks. It merged them into the body text. Chronological meeting notes ended up shuffled into what looked like blog post paragraphs. H4s were rendered as plain bold. It wasn’t wrong—just wrong enough to be useless.
Three things to test before ever trusting an export:
- Do embedded links or images use time-limited or internal asset URLs?
- Are heading levels preserved or flattened?
- Is metadata like date, author, or tags readable by the import target?
The docs never say this. You only find out after the team migrates, clicks on a link, and sees “asset not found” where a diagram should be.
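Since no vendor documents this, a ten-minute script over the export beats finding out post-migration. A rough sketch that flags the first two checklist items; the URL patterns are just the signed-URL styles we happened to run into, not an exhaustive list:

```js
// Hypothetical pre-import scan: flag markdown files whose embedded images use
// signed or internal asset URLs (likely to expire) and files with frontmatter
// the importer may merge into body text. Patterns are examples, not exhaustive.
const fs = require("fs");
const path = require("path");

const EXPIRING_HINTS = [/X-Amz-Expires=/i, /[?&]expires=/i, /[?&]token=/i];

function scanExport(dir) {
  for (const entry of fs.readdirSync(dir)) {
    const full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) {
      scanExport(full);
      continue;
    }
    if (!entry.endsWith(".md")) continue;
    const text = fs.readFileSync(full, "utf8");
    const images = [...text.matchAll(/!\[[^\]]*\]\(([^)]+)\)/g)].map((m) => m[1]);
    const risky = images.filter((url) => EXPIRING_HINTS.some((re) => re.test(url)));
    if (risky.length) console.warn(`${entry}: ${risky.length} possibly expiring asset URL(s)`);
    if (/^---\n/.test(text)) console.warn(`${entry}: frontmatter block the importer may mangle`);
  }
}

scanExport("./export"); // path to the unzipped export
```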