What Actually Works When Repurposing Content With AI Tools
1. ChatGPT is fine until custom instructions quietly vanish
So I had this tidy little workflow using ChatGPT with custom instructions to rewrite transcripts into summary bullets. The same 3-sentence prompt, custom system role set to “You are a detail-focused journalist summarizing raw interviews.” It worked like a dream — for two weeks.
Then one morning (Tuesday, I remember the coffee tasted wrong), the output started including clickbaity titles and SEO fluff that I absolutely did not ask for. Turns out ChatGPT has this behavior where if you switch accounts mid-browser session, your custom instructions can silently fail to load — no errors, no fallback message, just garbage tone shifts.
There’s no audit log. If it happens, you just manually remake the prompt and re-paste every time you log into a different account. The response length also capped out early during those broken sessions — around 800 characters instead of the usual 1200-ish.
I now force-refresh the chat page and re-enter prompt content every time I reopen a tab just to stop the ghost of the wrong tone from haunting my bullets.
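The more durable fix, if your repurposing step doesn’t have to live in the browser, is to send the system role explicitly on every call through the API, so there’s nothing saved server-side that can silently fail to load. A minimal sketch, assuming the openai Python package and an OPENAI_API_KEY in the environment; the model name and prompt wording are placeholders, not a recommendation:

```python
# Minimal sketch: pin the system role on every request instead of relying
# on ChatGPT's saved custom instructions. Assumes the `openai` package and
# OPENAI_API_KEY in the environment; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

SYSTEM_ROLE = "You are a detail-focused journalist summarizing raw interviews."

def summarize(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # swap for whatever model you actually use
        messages=[
            {"role": "system", "content": SYSTEM_ROLE},
            {"role": "user", "content": f"Rewrite this transcript into summary bullets:\n\n{transcript}"},
        ],
    )
    return resp.choices[0].message.content
```

Because the role travels with every request, a dropped account session can’t swap your journalist for a clickbait generator.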
2. Repurpose.io has solid templates but breaks on silent upload errors
A client wanted their weekly LinkedIn livestream chopped into reels, headlines, and quote cards. Repurpose.io was clean, templated, and honestly the UI convinced me. Drag input, tick outputs, done. Except four weeks in, two episodes never showed up on YouTube Shorts.
Everything looked perfect in the dashboard; no errors, nothing in the logs. I eventually found the issue by accident: sometimes it uploads videos too fast after rendering, before the MP4 is fully saved to storage. The platform skips retries if the upload API returns a null response: not even a 400, just empty.
I submitted a ticket, but until they fix that, I added a Make.com delay + retry loop through a dummy Zapier webhook that forces a 30-second gap post-render. Brute force, but it’s held so far.
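If you’d rather not route through a dummy webhook, the same delay-plus-retry idea is easy to sketch in plain Python. The hook URL and the trigger_upload helper below are placeholders for whatever actually pushes the rendered MP4; the parts that matter are the fixed post-render gap and the retry when the response comes back empty:

```python
# Same idea as the Make.com delay + retry, expressed in plain Python.
# trigger_upload() is a stand-in for whatever call kicks off the Shorts upload.
import time
import requests

POST_RENDER_GAP = 30   # seconds to wait after rendering finishes
MAX_ATTEMPTS = 3

def trigger_upload(video_url: str) -> requests.Response:
    # Hypothetical webhook that starts the upload downstream.
    return requests.post("https://example.com/upload-hook", json={"video": video_url}, timeout=60)

def upload_with_retry(video_url: str) -> None:
    time.sleep(POST_RENDER_GAP)            # let the MP4 finish saving to storage
    for attempt in range(1, MAX_ATTEMPTS + 1):
        resp = trigger_upload(video_url)
        if resp.status_code == 200 and resp.text.strip():
            return                         # non-empty 200: treat as confirmed
        time.sleep(10 * attempt)           # empty or failed response: back off and retry
    raise RuntimeError(f"Upload never confirmed for {video_url}")
```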
One trick: Repurpose templates let you tag speakers based on spoken names, but it doesn’t warn you if it’s using cached speaker detection. If you upload multiple episodes within the same hour, it’ll guess speaker labels based on the last file. This means two guests with different voices will both be labeled as “Dave” until you manually reset the session context.
3. Descript’s auto-caption exports are unreliable during batch mode
I like Descript for first-draft captioning because it’s fast, but if you try to export more than three videos in sequence, especially with multiple caption layers (top quote + bottom transcript), the third or fourth MP4 often drops the burned-in caption layer entirely.
No warning. The video just won’t have the captions — and you only notice after uploading. It’s repeatable. Their batch renderer seems to hit a GPU lockout if you try rendering and exporting without a long enough pause between exports.
Now I queue the first three to export, wait, refresh the project, then resume. Bit of a dance, but it avoids failed overlays. No batch error logging exists unless you individually open each export queue item.
Aha moment: If you trigger exports via the keyboard shortcut (Cmd+E), the exports break far less frequently than using the top-right export button — possibly due to a slightly different internal render handler. I haven’t seen full documentation on this, but behavior is consistent.
4. ElevenLabs voice cloning gets weirdly inconsistent on small audio files
Tried cloning myself from old podcast intros — total audio was about a minute, all clean WAVs. The initial clone was surprisingly good. But when I used it to re-narrate a blog post, something went sideways after a few paragraphs. The tone shifted slightly higher. Like I had become my own excited twin.
This seems to happen when the cloned voice model doesn’t have enough baseline variation across sentence types. It gets locked into one cadence. You can prompt different tone by structuring sentences unusually (e.g., ending mid-thought, inserting clauses), but that only goes so far.
Sending in at least three minutes of audio with varied intonation fixes it. I now record five unused takes on purpose just to pad the training set. Ironically, bad audio with background clicks introduced more realistic breathing patterns.
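Before uploading a new training set, I run a quick duration sanity check. A minimal sketch using only the standard library; the folder name and the 180-second floor are my own conventions, not anything ElevenLabs publishes:

```python
# Pre-flight check before sending samples to a voice clone:
# sum the duration of the training WAVs and refuse to proceed under ~3 minutes.
import wave
from pathlib import Path

MIN_SECONDS = 180  # roughly the varied-intonation floor that fixed the cadence lock for me

def total_wav_seconds(folder: str) -> float:
    total = 0.0
    for path in Path(folder).glob("*.wav"):
        with wave.open(str(path), "rb") as w:
            total += w.getnframes() / w.getframerate()
    return total

seconds = total_wav_seconds("clone_training_takes")
if seconds < MIN_SECONDS:
    raise SystemExit(f"Only {seconds:.0f}s of audio; record more varied takes before cloning.")
print(f"{seconds:.0f}s of training audio, OK to upload.")
```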
“Hmm… that version of me is better at podcasting than I am.” — actual Slack comment from a friend who didn’t realize it was a clone
5. Notion’s AI summaries mutate over time even on frozen pages
Here’s something that quietly broke my trust this month: I had used Notion AI to summarize my old blog posts for social outlines. I ran the summarize prompt on a frozen versioned page. Worked fine. Then, 10 days later, I opened one of the AI blocks — and it had changed.
I hadn’t edited the page. Nothing in the history. But the summary block now had different wording. Chatty instead of factual. Turns out Notion silently re-generates AI content when you copy the block into a new page, even if the source block looks static.
This affects templates. It’s especially a problem if you’re auto-publishing these summaries via the API — you’ll see updated AI-generated text without manually triggering anything. I checked with support. They basically said “yeah, that can happen if you re-duplicate AI surfaces.”
Now I convert AI blocks to static text before moving any repurposed summaries. Otherwise, your tone post-to-post might swing from David Attenborough to BuzzFeed quiz dinosaur.
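If you’re publishing via the API, the same “freeze it first” idea can happen in code: read the block’s children once, flatten the rich_text into plain strings, and publish that snapshot instead of duplicating the live AI block. A rough sketch against the documented blocks endpoint; the token variable and how you pick the block ID are assumptions about your setup, and pagination is skipped:

```python
# Rough sketch of snapshotting Notion block text before it travels anywhere.
# Assumes an integration token in NOTION_TOKEN; ignores pagination for brevity.
import os
import requests

HEADERS = {
    "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
    "Notion-Version": "2022-06-28",
}

def snapshot_block_text(block_id: str) -> str:
    url = f"https://api.notion.com/v1/blocks/{block_id}/children"
    blocks = requests.get(url, headers=HEADERS, timeout=30).json().get("results", [])
    lines = []
    for block in blocks:
        content = block.get(block["type"], {})
        rich = content.get("rich_text", [])
        lines.append("".join(part.get("plain_text", "") for part in rich))
    return "\n".join(line for line in lines if line)
```

Publish the returned string, not the block, and the tone stays whatever it was the day you approved it.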
6. Synthesia avatars truncate text if scripts include emoji markup
For a scaled content project, I used Synthesia to generate talking head summaries for blog roundups. Everything went fine until one script — generic mid-length — got totally cut off three sentences in, with no error shown. Just silence.
I finally narrowed it down to the closing line: “That’s it for now! 🚀” If you include an emoji in raw text (not markdown-escaped or set via Synthesia’s UI tool), sometimes the script breaks rendering. There’s evidently a parsing bug with glyph+punctuation combos that causes the TTS engine to hang and silently abort midway.
I now pre-clean inputs through a Make.com scenario that strips emoji and double-checks Google TTS compatibility before I hit render. Bonus discovery: line breaks placed inside double quotes increase stuttering for certain avatars. You have to remove the line breaks or move them outside the quotation marks.
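For anyone who’d rather keep the pre-clean in a script than a Make.com scenario, here’s roughly what mine does. The unicode ranges below catch the common emoji blocks, not every glyph, and the quote handling is a blunt regex, so treat it as a starting point:

```python
# Rough equivalent of the pre-clean step: strip emoji and pull line breaks
# out of quoted speech before a script goes anywhere near the TTS engine.
import re

EMOJI = re.compile(
    "["
    "\U0001F300-\U0001FAFF"   # pictographs, transport, supplemental symbols
    "\U00002600-\U000027BF"   # misc symbols and dingbats
    "\U0001F1E6-\U0001F1FF"   # regional indicators (flags)
    "\uFE0F"                  # variation selector
    "]+"
)

def clean_script(text: str) -> str:
    text = EMOJI.sub("", text)
    # collapse line breaks that fall inside double quotes; they stutter on some avatars
    text = re.sub(r'"[^"]*"', lambda m: m.group(0).replace("\n", " "), text)
    return text.strip()

print(clean_script("That’s it for now! 🚀"))  # -> "That’s it for now!"
```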
7. Tips that survived the chaos of repurposing tools at scale
After running almost a dozen content repurposing tools across nine client accounts, some behaviors just stuck in memory.
- Always trigger final exports (e.g. MP4, PDF) manually after checking previews — triggers break silently in automation flows
- Pre-clean transcript text by stripping smart quotes and dashes; most AI tools mis-render dialogue otherwise
- If auto-generation platforms offer browser and app access, test both — performance often skews hard toward one
- Use Google Sheets as a staging DB for filenames, snippets, and errors — async monitoring is a lifesaver
- Refuse to trust any estimated word counts or token limits published by vendors — real caps vary hourly
- Mute AI voices before distributing draft videos; subjective weirdness only becomes obvious on playback
- Time-stamp every export with the triggering tool’s name in filenames — it’ll save your sanity in shared folders (tiny helper sketched below)
None of these are listed in setup guides. Most resulted from missed client deadlines or redoing 12 video exports at 2am because a zap ran before the assets finished uploading.
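The filename tip from that list is small enough to show as code. A throwaway helper along these lines, with the date format and tool labels being whatever you actually use:

```python
# Tiny helper for the timestamp + tool-name filename convention from the list above.
from datetime import datetime

def export_name(tool: str, stem: str, ext: str = "mp4") -> str:
    stamp = datetime.now().strftime("%Y%m%d-%H%M")
    return f"{stamp}_{tool}_{stem}.{ext}"

print(export_name("descript", "ep41-reel"))  # e.g. 20240507-0214_descript_ep41-reel.mp4
```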
8. Zapier formatter has unspoken limits on JSON character length
This one felt like a quiet betrayal. I built a Zap that takes paragraph text from a CMS update and converts it into a JSON array of lines for feeding into another OpenAI step. Should’ve worked fine.
Except it started failing intermittently whenever the input text got near 1000 characters. No warning, just silent step skips. It turned out Zapier’s Formatter step has strange soft limits when you chain line-item utils or Text → Split in quick succession: nested arrays over a certain length silently truncate when they’re passed in as mapped variables rather than hard-coded values.
The fix was… annoying. Split the paragraph in a Code by Zapier block instead, pre-chunk the array manually, and rebuild using join + line items. Even better, I staged it in Google Sheets and pulled parsed arrays over via Lookup Row instead of calculated fields.
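For reference, the Code by Zapier step ended up roughly like this. It assumes the CMS paragraph is mapped into the step as input_data["paragraph"] and that a later step reads either the line items or the joined string; the 800-character chunk size is just my margin under where truncation showed up:

```python
# Rough shape of the Code by Zapier (Python) step that replaced the Formatter chain.
# Chunks the paragraph into pieces well under the point where truncation appeared,
# then hands back both a list (line items) and a joined string for downstream steps.
CHUNK_SIZE = 800  # stay clear of the ~1000-character zone where steps started skipping

text = input_data.get("paragraph", "")
sentences = [s.strip() for s in text.replace("\n", " ").split(". ") if s.strip()]

lines, current = [], ""
for sentence in sentences:
    if current and len(current) + len(sentence) + 2 > CHUNK_SIZE:
        lines.append(current)
        current = sentence
    else:
        current = f"{current}. {sentence}" if current else sentence
if current:
    lines.append(current)

output = {"lines": lines, "joined": "\n".join(lines)}
```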
They don’t document this, and support had no public-facing page to link. Their rep admitted, “Yeah formatter steps behave inconsistently when chaining output into maps.”