AI-Powered Press Releases That Keep Breaking The Sandbox

1. Setting up a repeatable press release prompt inside ChatGPT

I originally thought this would be simple: throw in a basic prompt template, feed it the product update data, and done. But writing a reusable ChatGPT instruction that sounds like an actual human wrote it? That took way longer than I’d admit to a client.

Here was the initial approach:

"You are an experienced tech PR writer. Use this internal draft doc to write a press release. Maintain a concise, engaging style suitable for trade journalists. Highlight user benefit, quote a fictional executive, and add a CTA at the end. Format with subheadlines, bolded lead sentence."

It worked… once. Next run, ChatGPT bolded every sentence, hallucinated a launch date, and included a real executive quote — from a different company. Classic.

Modifying the tone with system prompts helped a little but didn’t solve the core issue: too much variability in the inputs and not enough guardrails in the prompt.

What finally clicked:

“Only generate content based on the structured input below. Do not add any additional data. If unsure, return [INSUFFICIENT INPUT].”

That line blocked almost all hallucinations. Anytime I forgot it, the AI would get too creative. Turns out it actually needs permission not to invent details, especially when the tone is persuasive.
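For what it’s worth, the reusable version now lives in a small script instead of the ChatGPT UI, so the guardrail line can’t silently go missing between runs. Roughly like this (a simplified sketch, assuming the openai Python SDK; the function and field names are placeholders, not the exact setup):

from openai import OpenAI  # assumes the openai>=1.0 Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL = (
    "Only generate content based on the structured input below. "
    "Do not add any additional data. If unsure, return [INSUFFICIENT INPUT]."
)

SYSTEM_PROMPT = (
    "You are an experienced tech PR writer. Write a press release from the "
    "structured input. Keep a concise, engaging style suitable for trade "
    "journalists. Use subheadlines and a bolded lead sentence. " + GUARDRAIL
)

def build_release_prompt(key_update: str, impact: str, quotes: str) -> str:
    # Placeholder assembly; the real input comes from the Notion export (section 3).
    return f"Key update:\n{key_update}\n\nImpact:\n{impact}\n\nQuotes:\n{quotes}"

def generate_release(key_update: str, impact: str, quotes: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.4,  # lower temperature cut down on run-to-run variability
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": build_release_prompt(key_update, impact, quotes)},
        ],
    )
    return response.choices[0].message.content

Baking the guardrail into the system prompt is the one piece I’d keep even if everything else changes.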

2. Why Google Docs output formatting falls apart on export

If you try to push raw markdown or HTML output into Google Docs via Apps Script or Zapier actions, get ready for misaligned subheads and spacing chaos that looks fine in the Zap preview but melts inside the actual document.

It’s especially bad with AI-generated output that includes bulleted lists nested beneath bolded text. Something in the Google Docs parser just decides to flatten everything or over-indent randomly. This only happened after I added an AI step that generated markdown-like structure.

The workaround that didn’t work: piping the output into Docs and then running a cleanup script. The script nuked the CTA every time because it relied on exact-match formatting. I ended up manually using “Paste without formatting” and reapplying Heading 2/3 styles.

Still better than re-styling an entire document blindly based on whatever GPT labeled as a heading.
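If I automate the cleanup again, the plan is to strip the markdown before it ever reaches Docs rather than repair the formatting afterwards. A rough sketch of that pre-cleaning step (not what’s running in the Zap today):

import re

def strip_markdown_for_docs(text: str) -> str:
    """Rough pre-cleaning pass: flatten the markdown-ish structure GPT emits
    so Google Docs gets plain paragraphs instead of half-parsed markup."""
    # Drop bold/italic markers; Docs styling gets applied manually afterwards.
    text = re.sub(r"\*\*(.+?)\*\*", r"\1", text)
    text = re.sub(r"\*(.+?)\*", r"\1", text)
    # Turn "## Subheadline" into a bare line that can be restyled as Heading 2/3.
    text = re.sub(r"^#{1,6}\s*", "", text, flags=re.MULTILINE)
    # Collapse nested bullets to a single level so the Docs parser can't over-indent.
    text = re.sub(r"^\s+([-*•])\s+", r"\1 ", text, flags=re.MULTILINE)
    return text

It still leaves restyling the headings as a manual step, but at least nothing gets nuked by an exact-match search.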

3. Getting product data from Notion into ChatGPT with fixed structure

I keep the release input data in Notion, split across three tables: ‘Key Update’, ‘Impact’, and ‘Team Quotes’. Pulling that into ChatGPT reliably without garbling the order or collapsing empty cells took forever to stabilize. The key problem was spacing: blank lines in a Notion database export get eaten during the Zapier handoff into OpenAI’s prompt block.

What finally fixed it (weirdly):

{{Key Update}}
[IMPACT_START]
{{Impact}}
[IMPACT_END]

[QUOTES_SECTION_START]
{{Quotes}}
[QUOTES_SECTION_END]

I literally added these marker tokens to force GPT to parse the inputs linearly. Otherwise it would randomly reorder or merge sentences across sections; sometimes it would turn a VP quote into part of the intro paragraph. Starting the prompt with “Process everything between tags. Treat each section discretely.” helped, but wasn’t enough without the physical markup above.
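For anyone doing the assembly in code rather than in Zapier’s prompt block, it’s basically string concatenation around those markers. A sketch (the field names and the [EMPTY] placeholder are mine, not part of the original setup):

def build_sectioned_prompt(key_update: str, impact: str, quotes: str) -> str:
    """Wrap each Notion field in explicit marker tokens so GPT parses the
    sections linearly instead of reordering or merging them."""
    def clean(cell: str) -> str:
        # Empty Notion cells tend to arrive as None or whitespace after the
        # Zapier handoff; make that explicit rather than letting it collapse.
        return cell.strip() if cell and cell.strip() else "[EMPTY]"

    return "\n".join([
        "Process everything between tags. Treat each section discretely.",
        clean(key_update),
        "[IMPACT_START]",
        clean(impact),
        "[IMPACT_END]",
        "",
        "[QUOTES_SECTION_START]",
        clean(quotes),
        "[QUOTES_SECTION_END]",
    ])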

Still testing what happens with multiline cells inside Notion though. That’s where alignment still slips.

4. When Claude starts rewriting quotes with stronger adjectives

Tried out Claude to see if it’d outperform GPT-4 on tone. It’s more consistent for clarity, but it has a chronic habit of inflating executive quotes. I paste in: “This release marks an exciting new chapter for our team,” and it comes back with: “We couldn’t be prouder of this groundbreaking milestone that reshapes our industry.”

There’s no way that’s going in front of our PR manager. She flagged it within seconds: “Did someone fake this quote?”

No prompt tweak seemed to fully fix it. What finally worked was giving it a tone enforcement sample like:

“Only adapt quotes for one-pass polish. Do not amplify tone. Avoid intensifiers like ‘transformative’, ‘revolutionary’, or ‘game-changing’.”

This stopped maybe 90% of the inflation, but it still occasionally turns “smart update” into “cleverly engineered enhancement” — which just screams AI-written.
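Since the prompt only gets it about 90% of the way, I’d rather catch the rest mechanically than by rereading every quote. A crude flagging pass along these lines (the word list is just a starting point, not exhaustive):

# Intensifiers that reliably signal an inflated quote; extend as new ones show up.
BANNED_INTENSIFIERS = [
    "transformative", "revolutionary", "game-changing",
    "groundbreaking", "couldn't be prouder", "reshapes our industry",
]

def flag_inflated_quotes(draft: str) -> list[str]:
    """Return the banned phrases found in a draft so it can be routed
    back for a human pass instead of going straight to review."""
    lowered = draft.lower()
    return [phrase for phrase in BANNED_INTENSIFIERS if phrase in lowered]

It won’t catch “cleverly engineered enhancement”, but it does catch the phrases a PR manager flags on sight.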

5. Scheduling press releases with Airtable forms and webhook triggers

I built a presser submission form in Airtable, tied to a checkbox field: “Ready to Release”. That triggers a webhook in Make (formerly Integromat), which then launches the generation process using OpenAI and routes the final draft into a Notion page for review.

It took a while to figure out that Airtable’s “On Check” logic doesn’t fire reliably if you bulk-edit rows. I learned that the hard way after testing five records in a row: only the last one actually triggered the webhook.

  • Check one row at a time to ensure trigger fires
  • Use a hidden Single Select field as a stage tracker
  • Log input-output pairing in a separate table for debugging
  • Don’t trust the “Last Modified Time” column; it lags during batch changes
  • Confirm data is correctly mapped into the webhook JSON payload

Even then, the webhook sometimes fires twice when editing from mobile. No idea why. Filed that under “don’t edit production Airtable flows at the airport”.
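Since the double-fire from mobile is still unexplained, the receiving end has to be defensive about it instead. A minimal dedupe sketch for whatever sits behind the webhook (assuming the payload carries the Airtable record ID; adjust to whatever your scenario actually sends):

import time

# Record IDs we've already processed, with the time we first saw them.
_seen: dict[str, float] = {}
DEDUPE_WINDOW_SECONDS = 300  # ignore repeat fires within 5 minutes

def should_process(record_id: str) -> bool:
    """Return True the first time a record ID arrives, False for duplicate
    webhook fires inside the dedupe window (e.g. the mobile double-fire)."""
    now = time.time()
    # Drop stale entries so the dict doesn't grow forever.
    for rid, seen_at in list(_seen.items()):
        if now - seen_at > DEDUPE_WINDOW_SECONDS:
            del _seen[rid]
    if record_id in _seen:
        return False
    _seen[record_id] = now
    return True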

6. One trigger in Make silently skipped over daily scheduled runs

I set up a scenario in Make to run every morning at 9am, pulling any new ‘Ready’ press releases and pushing them through generation + Notion steps. For a week, nothing triggered. Finally looked at the logs: No runs initiated. No errors. No alerts.

Turns out… the trigger module had defaulted to “Run once manually” after the initial setup. Not to “Every day at 9am”. I must’ve skipped the confirm step while testing another path. But there was never a banner or alert telling me that it wasn’t scheduled.

It showed green ✅ in the dashboard — even though it hadn’t run at all.

Lesson: check the Schedule tab inside the scenario, not just the main switch. This bug stole an entire week of posts, and I only realized because someone asked, “Hey were we skipping releases or something?”

7. Getting blocked by GPT rate limits while embedding debug logs

Decided to send the full output plus the debug log from Make through ChatGPT to analyze why a few posts kept losing their CTA block. Even mid-sized logs immediately blew past the limits. GPT-4’s 8K context window had felt fine until I realized the log format was mostly massive timestamp padding. Just pure noise.

I cleaned out all non-error lines and retried:

[DEBUG] Generating section CTA...
[PROMPT] "Include call to action at end"
[RESULT] None found
...

Still too long. What actually helped was inserting big header dividers for log segmentation and letting GPT analyze each segment independently (“Review only the [CTA GENERATION] section”).
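The filtering itself is nothing clever. Something like this gets a log down to segments worth sending, one GPT call per segment (the “==== [SECTION] ====” divider format is an assumption here; use whatever divider you actually insert):

def split_log_into_segments(raw_log: str) -> dict[str, list[str]]:
    """Split a debug log on big header dividers (lines like
    '==== [CTA GENERATION] ====') and drop timestamp/noise lines, so each
    segment can be analyzed by GPT independently."""
    segments: dict[str, list[str]] = {}
    current = "UNSEGMENTED"
    for line in raw_log.splitlines():
        stripped = line.strip()
        if stripped.startswith("====") and "[" in stripped:
            # New segment header, e.g. "==== [CTA GENERATION] ===="
            current = stripped.strip("= ")
            segments.setdefault(current, [])
            continue
        # Keep only the lines that carry signal; skip blank/timestamp padding.
        if stripped.startswith(("[DEBUG]", "[PROMPT]", "[RESULT]", "[ERROR]")):
            segments.setdefault(current, []).append(stripped)
    return segments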

Also tried Anthropic for this, but Claude doesn’t like half-broken JSON and starts cleaning it up silently. At one point it “helped” by papering over the very missing field I was trying to catch. Nope.

8. One misplaced quotation mark broke the regeneration loop

In the Make pipeline I had a basic conditional like:

if [Generated Draft] contains “QUOTE_START” and “QUOTE_END” → proceed

But it silently failed. Logs showed the module ran, the draft clearly contained both tokens, and the condition still bailed. Eventually I saw the issue: those weren’t straight quotes. They were smart quotes pasted from a Word doc. Left and right curly characters that looked identical but don’t match plain ASCII quotes character-for-character.

This appears to be an edge case where Make’s logic auto-converts visible quotes when you paste them into the builder — but then compares them exactly at runtime. So what you see isn’t what it runs.

The fix was to open VS Code, type the quote marks fresh, and paste them in from there.
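If any of this ever moves out of the Make builder and into code, normalizing the quotes before comparing is the more durable fix. A small sketch:

# Map curly quotes to their plain ASCII equivalents before any exact-match
# comparison, so tokens pasted from Word still behave.
SMART_QUOTE_MAP = str.maketrans({
    "\u201c": '"',  # left double curly quote
    "\u201d": '"',  # right double curly quote
    "\u2018": "'",  # left single curly quote
    "\u2019": "'",  # right single curly quote
})

def normalize_quotes(text: str) -> str:
    return text.translate(SMART_QUOTE_MAP)

def has_quote_markers(generated_draft: str) -> bool:
    # Equivalent of the Make conditional, but immune to smart quotes.
    cleaned = normalize_quotes(generated_draft)
    return "QUOTE_START" in cleaned and "QUOTE_END" in cleaned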