Tagging Systems That Keep Breaking in 2025 and What Still Works

1. Tagging structures fall apart when synced across too many platforms

I had a Notion page tag mysteriously disappear after syncing with my Readwise feed, which was also connected to Obsidian at the time. Checked my backups and… yeah, the tag wasn’t deleted. It was overwritten—silently—because Obsidian’s YAML parser didn’t like the slash in one Quick Add template.

The moment you try to maintain a universal tagging system across more than two note apps, weird stuff happens. The same tag name may render fine in Notion, partially break in Obsidian due to markdown frontmatter quirks, and fully mutate in Airtable if it’s reading the tag as a linked record rather than text. Add Readwise, and you can bet it’ll reformat your underscores into spaces.

The biggest flaw isn’t the software itself—it’s the implicit assumptions each platform makes about tag identifiers. Some treat them as free-form strings, others quote them, others drop casing. A tag like project_Alpha can get turned into Project Alpha in Readwise highlights, project-alpha in Obsidian, and completely stripped in a Notion synced database if the API doesn’t send it correctly.

If you’re syncing tags across multiple platforms, avoid anything with symbols, slashes, or camelCase. Stick to lowercase, dash-separated tags unless you enjoy debugging the invisible.
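
If you enforce that convention with a small script before anything syncs, most of those mutations become no-ops. A minimal sketch in Python (my own normalizer, not something any of these apps ship):

```python
import re

def normalize_tag(raw: str) -> str:
    """Collapse a tag to the lowest common denominator: lowercase, dash-separated."""
    tag = raw.strip().lstrip("#")                 # drop a leading hash if present
    tag = re.sub(r"[ _/]+", "-", tag)             # spaces, underscores, slashes -> dashes
    tag = re.sub(r"[^a-z0-9-]", "", tag.lower())  # strip anything else that breaks a parser
    return re.sub(r"-+", "-", tag).strip("-")     # collapse and trim stray dashes

assert normalize_tag("#project_Alpha") == "project-alpha"
assert normalize_tag("Project Alpha") == "project-alpha"
```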

2. Nested tags behave inconsistently depending on export format and tool

There’s a big difference between hierarchical tags (like #topic/ai/generative) and flat tags stored in buckets (like tags: ["ai", "generative"]). And it’s never obvious which one you’re actually working with.

In a recent export from Notion to markdown (via an unofficial third-party tool), all my tags came out flattened. Tags like #reference/books turned into just #books. Obsidian took this to mean a completely different tag since it’s very literal with slashes. Meanwhile, when I imported the same data into Coda via CSV (don’t ask), the tag column was ignored outright because it thought it was a malformed array.
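
A cheap way to catch that kind of flattening before it poisons a vault is to diff the tag sets before and after an export. A rough sketch, assuming both sides are folders of local markdown (the folder names are placeholders and the regex is deliberately naive):

```python
import re
from pathlib import Path

TAG_RE = re.compile(r"#([\w/-]+)")  # matches #reference/books and #books alike

def collect_tags(folder: str) -> set[str]:
    """Gather every inline #tag found in a folder of markdown files."""
    tags: set[str] = set()
    for path in Path(folder).rglob("*.md"):
        tags.update(TAG_RE.findall(path.read_text(encoding="utf-8")))
    return tags

before = collect_tags("notion_export_source")  # placeholder: pre-export snapshot
after = collect_tags("obsidian_vault")         # placeholder: post-import vault

# A nested tag that survives only as its last segment got flattened somewhere.
for tag in sorted(before - after):
    if "/" in tag and tag.split("/")[-1] in after:
        print(f"flattened: #{tag} -> #{tag.split('/')[-1]}")
```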

None of the tools document this: in Airtable, a multi-select field will automatically split a tag string like category/business/automation on its slashes when the value arrives via CSV import, but not when you paste the same string in manually.

Once, trying to fix this in a Make automation, I noticed the incoming webhook payload had the right tag nesting. But after being routed through a formatter, the slashes were URL-encoded inside the string and suddenly treated as literal text rather than hierarchy separators. My “aha” moment came when I logged a raw payload and saw this mess: "tag":"workflow%2Ffailures".
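
The fix on my side was to decode before splitting on slashes. A minimal sketch using only the standard library, with the payload shape from that log:

```python
from urllib.parse import unquote

def parse_tag(raw: str) -> list[str]:
    """URL-decode a tag string, then split it into its hierarchy segments."""
    return unquote(raw).split("/")

payload = {"tag": "workflow%2Ffailures"}  # the raw webhook body I logged
print(parse_tag(payload["tag"]))          # ['workflow', 'failures']
```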

3. Bulk editing tags from mobile devices still breaks quietly

I got lazy and tried to clean up some tags from my iPad during a conference. Two hours later, my tags were erased in Obsidian—because the mobile Files app mangled the frontmatter YAML blocks and didn’t preserve line indents. No warning, just vanished metadata.

This mainly affects any system that stores tags in plaintext—like local markdown—and depends on formatting to parse them correctly. You may think you’re just renaming #inspiration to #ideas, but in reality you’re corrupting the structure if the editor you’re using auto-wraps the line.
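
One habit that would have saved me here: after any bulk edit, check that the frontmatter still parses before trusting the files. A small sketch assuming PyYAML and standard ----fenced frontmatter ("vault" is a placeholder path):

```python
from pathlib import Path

import yaml  # PyYAML: pip install pyyaml

def frontmatter_tags(path: Path):
    """Return the tags from a note's frontmatter, or None if the block is broken."""
    text = path.read_text(encoding="utf-8")
    if not text.startswith("---"):
        return []  # no frontmatter at all: nothing to validate
    try:
        data = yaml.safe_load(text.split("---", 2)[1])
    except yaml.YAMLError:
        return None  # mangled indentation lands here
    if not isinstance(data, dict):
        return None  # fence exists but the block no longer parses as a mapping
    return data.get("tags", [])

for note in Path("vault").rglob("*.md"):
    if frontmatter_tags(note) is None:
        print(f"broken frontmatter: {note}")
```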

I’ve also hit this in Notion: bulk-tagging from mobile creates a bunch of duplicates instead of consolidating existing tags. If you tap a synced database view from the app and hit edit, the inline tag field changes from dropdown to freeform—without any indication. I ended up with Highlight, highlight, and Highlights all treated as separate tags.

Best case, you waste time cleaning; worst case, your filters silently stop matching anything. This only happens from mobile, specifically iOS—not Android. Verified it three times now.

4. Tags embedded in AI prompts behave unpredictably in different LLM platforms

Try this: embed a tag like #goal/sprint1 inside a prompt sent through Zapier to OpenAI and via Make to Claude. Watch what happens.

In Zapier, that prompt passes through mostly untouched—unless you’re using a formatter step, which may escape the hash and slash characters. Claude, though (especially through Make), seems to apply internal prompt sanitization that drops anything beginning with a hash unless it’s inside quotes. Took me five tries to realize my tag-based references were never making it into the completions.

There’s a silent sanitization layer happening. If your AI automations rely on parsing or echoing input tags, you have to test for hash-stripping behavior. I got stuck on this for most of a day until I changed the tag syntax to angle brackets, e.g. <goal/sprint1>, and noted better reliability across models.
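
If your automations are supposed to echo tags back, a sentinel test per route is cheap. A toy sketch, deliberately vendor-agnostic (send_prompt stands in for whatever Zapier step, Make module, or API call you actually use):

```python
from typing import Callable

def tag_survives(send_prompt: Callable[[str], str], tag: str = "#goal/sprint1") -> bool:
    """Ask the route to echo a sentinel tag and check it comes back intact."""
    prompt = f'Repeat this token exactly, including punctuation: "{tag}"'
    return tag in send_prompt(prompt)

# A fake transport that mimics hash-stripping sanitization:
def lossy_route(prompt: str) -> str:
    return prompt.replace("#", "")  # stands in for the sanitizing middle layer

print(tag_survives(lossy_route))  # False -> this route eats your tags
```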

This affects AI agents, too. If you’re feeding tagged prompts into something like AgentGPT or AutoGen-style flows, repeated or malformed tags bubble up and reveal inconsistencies in whether tags are treated as metadata, instructions, or text samples.

5. Tag-based filters in dashboards often reset without visible reason

Last week, I rebuilt a Notion dashboard for someone’s editorial pipeline—and overnight the tag filters stopped working. At first, I thought someone had renamed a tag, but all the inputs were unchanged. The real issue? The advanced filter had reverted to match Any instead of Exact—which isn’t exposed directly in mobile edit mode.

There’s a UI state bug here. If you load a filtered Notion database on mobile, then switch the filter settings—even if you discard them—they sometimes get baked into the cloud-synced version with zero feedback. Then you open it on desktop, and nothing matches, even though the filter looks correct.

I confirmed this by screen recording both views. Exact-match filters became partial string match under the hood, but visually stayed the same. There’s no diff view for filters, so unless you remember the original logic, you’re stuck hunting blind.

Obsidian has something similar: a file tag query in Dataview stopped returning files once I did a global tag rename. Turns out it renamed the inline tags, but not the block properties, which don’t get scanned by default unless you configure that manually.

If your filtered dashboard stops returning items, try recreating the filter logic from scratch—don’t trust the apparent UI state.

6. Tag migrations between tools lose metadata and casing

During a migration from Bear to Obsidian (yes, still doing those in 2025), I ran into a silent loss of tag casing. Everything went lowercase—so #NextSteps became #nextsteps—which broke some filtered searches but more importantly confused the heck out of icon-based dashboards that used casing to control emojis.

Neither app warned this would happen. Bear exports notes as rich text by default, and during the markdown conversion, their JSON-to-TXT script strips formatting—including emoji tags like #⚡️Quick. To avoid this, you need to export to plain text manually, not via the CLI tool.

There’s no perfect fix, but a partial rescue: before exporting, find and replace your smart tags with bracketed aliases like [tag:Quick]. That way, if your automation or script chokes on emoji or casing, you can always map them back cleanly.
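
The alias swap is easy to script in both directions. Roughly the mapping I used (the [tag:...] syntax is purely my own convention; no tool parses it natively):

```python
import re

ALIASES = {"#⚡️Quick": "[tag:Quick]", "#NextSteps": "[tag:NextSteps]"}
REVERSE = {alias: tag for tag, alias in ALIASES.items()}

def to_aliases(text: str) -> str:
    """Swap fragile tags for bracketed aliases before export."""
    for tag, alias in ALIASES.items():
        text = text.replace(tag, alias)
    return text

def from_aliases(text: str) -> str:
    """Map aliases back to the original tags after migration."""
    return re.sub(
        r"\[tag:([^\]]+)\]",
        lambda m: REVERSE.get(m.group(0), "#" + m.group(1)),  # unknown aliases fall back to a plain hash tag
        text,
    )

assert from_aliases(to_aliases("todo #⚡️Quick")) == "todo #⚡️Quick"
```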

Reminder: Not all tools are case-sensitive, and some—like Airtable—will normalize your tags on import unless you wrap them in quotes. Yes, really. I tested a 500-row import and got 23 casing-related duplicates, all invisible unless you sort the field alphabetically.
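
Before any large import, it’s worth scanning the tag column for values that differ only by casing. A quick sketch assuming a CSV with a comma-separated tags column (the file and column names are placeholders):

```python
import csv
from collections import defaultdict

def casing_collisions(csv_path: str, column: str = "tags") -> dict[str, set[str]]:
    """Group tag values that become identical once lowercased."""
    seen: dict[str, set[str]] = defaultdict(set)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            for tag in (row.get(column) or "").split(","):
                if tag.strip():
                    seen[tag.strip().lower()].add(tag.strip())
    return {key: variants for key, variants in seen.items() if len(variants) > 1}

for variants in casing_collisions("airtable_import.csv").values():
    print("will collide or duplicate on import:", variants)
```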

7. Tips that lowered my blood pressure by at least a little

  • Always test tag-based filters on both desktop and mobile—even if you’re only building for one
  • Stick to lowercase, dash-only naming unless your use case demands nesting
  • Avoid using emoji or complex Unicode inside tag names (even if Obsidian supports it)
  • If building dashboards from tags, screenshot your filter conditions for rollback safety
  • Don’t reuse tag names across different contexts unless you’re absolutely sure casing and delimiter logic will match
  • After any bulk tag edit, test one known-failing filter to confirm behavior before continuing
  • Log raw payloads from any automation step that transforms or forwards tags between tools (see the sketch below)
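
For that last tip, the logging can be as dumb as a decorator on whatever relay function touches the payload. A minimal sketch, assuming a Python script sits somewhere in your chain:

```python
import functools
import json
import time

def log_payload(fn):
    """Append the raw payload to a JSONL file before the step can mutate it."""
    @functools.wraps(fn)
    def wrapper(payload, *args, **kwargs):
        with open("payload_log.jsonl", "a", encoding="utf-8") as f:
            f.write(json.dumps({"ts": time.time(), "raw": payload}) + "\n")
        return fn(payload, *args, **kwargs)
    return wrapper

@log_payload
def forward_tags(payload: dict) -> dict:
    return payload  # placeholder for whatever formatter or forwarder you actually run

forward_tags({"tag": "workflow%2Ffailures"})
```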

8. Silent failures from third-party syncing scripts are still brutal

I used an excellent Python script once to sync Roam Research notes to local markdown files with tag support. Everything seemed perfect—until a week later, I noticed the automation just… stopped syncing new tags.

Disabled API key? No. Changed file permissions? Nope. The actual failure: the author updated the script to skip notes that hadn’t changed—and tags were now stored in metadata blocks outside the diff detection. So my tag updates weren’t counted as changes. Again, no warnings, just skipped updates.

This is the kind of failure where a note appears up-to-date and the tag looks right, but nothing downstream sees it. I only caught it because my Obsidian canvas stopped showing new items in a tag cluster. No errors, just absence.

After enough of this, I now add a version or checksum value to YAML blocks that gets updated every sync, even if nothing else changes. It tricks the automation into resyncing, but it’s better than silent decay.
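
Here’s the shape of that trick as a sketch: write a fresh stamp into the frontmatter on every run, so diff-based sync always sees a change. The sync_stamp field name is my own invention, and "vault" is again a placeholder path:

```python
import re
import time
from pathlib import Path

FENCE = re.compile(r"^---\n(.*?)\n---\n", re.DOTALL)

def stamp_note(path: Path) -> None:
    """Rewrite the note's sync_stamp so diff-based sync treats it as changed."""
    text = path.read_text(encoding="utf-8")
    stamp = f"sync_stamp: {int(time.time())}"
    match = FENCE.match(text)
    if match:
        block = match.group(1)
        if "sync_stamp:" in block:
            block = re.sub(r"sync_stamp: \d+", stamp, block)
        else:
            block += "\n" + stamp
        text = f"---\n{block}\n---\n" + text[match.end():]
    else:
        text = f"---\n{stamp}\n---\n" + text  # no frontmatter yet, so add one
    path.write_text(text, encoding="utf-8")

for note in Path("vault").rglob("*.md"):
    stamp_note(note)
```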