Fixes and Workarounds for Auto-Generated Weekly Reports

1. Making GPT auto-write summaries that sound like your team

If you’ve ever tried to generate a Slack update that says “here’s what happened this week” without sounding completely off-brand, you’ve probably run into the same GPT trap I did: too much context kills tone. I fed it Notion page exports, chunks of Jira tickets, even Airtable rollups. The results were always too formal, or worse, weirdly cheerful. Our squad doesn’t say words like “wins” at 9 AM on a Tuesday.

The trick ended up being weirdly simple: stop feeding it clean data. Give it a rough mess. I started pulling in unstructured updates from daily check-ins on Slack and weekly project retro docs. GPT reacts better to noise with personality than to sanitized, “structured” bullets. Here’s the system prompt I use in Make every Friday:

Act like a tech lead writing a casual summary for a cross-functional update. Use notes from check-ins. Avoid fake chirpy tone. Keep bullet points short.

This immediately made the tone less LinkedIn motivational and more “here’s what broke on staging again.” GPT won’t magically get your culture right—but you can steer it by deliberately feeding it language your team already uses.
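
If you were doing this against the API directly instead of through Make’s GPT module, the whole step boils down to something like the sketch below. The function and variable names and the model are placeholders, not pulled from my actual scenario:

  import OpenAI from "openai";

  // rawCheckins is the unpolished text pulled straight from Slack daily
  // check-ins and retro docs: typos, half-sentences, and all. That mess is
  // what carries the team's voice.
  async function summarizeWeek(rawCheckins: string): Promise<string> {
    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const response = await client.chat.completions.create({
      model: "gpt-4o", // placeholder model name
      messages: [
        {
          role: "system",
          content:
            "Act like a tech lead writing a casual summary for a cross-functional update. " +
            "Use notes from check-ins. Avoid fake chirpy tone. Keep bullet points short.",
        },
        { role: "user", content: rawCheckins }, // deliberately not cleaned up
      ],
    });

    return response.choices[0].message.content ?? "";
  }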

2. How formatting silently breaks context across prompt chains in Make

This had me chasing shadows for half a day. I built a pretty straightforward Make scenario: Slack threads feed into a GPT summarizer which then pushes to Notion. It worked, until I enabled a Markdown cleaner before sending to GPT. Then everything got robotic again. Turns out whitespace was killing it.

If you send data as a flat, Markdown-stripped string (no line breaks, no bullet dashes), GPT doesn’t really hallucinate—it collapses. It defaults into its fallback summarization mode that sounds like tech journalism from 2014. I rechecked logs and saw:

Context length: 598 tokens (acceptable)
Structure: single block of text with no spacing or bullets

The fix? Re-insert your own structure before sending to ChatGPT. I now add \n\n- bullets manually inside a formatter module. Silly, but it works. GPT needs spatial structure or it panics and starts smiling too much.
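
In Make this is just a text-manipulation step, but for clarity here’s the same idea as code; the sentence split is a crude stand-in for however your updates are actually chunked:

  // Take the flat, Markdown-stripped updates and put line breaks and bullet
  // dashes back before the text reaches GPT.
  function restoreStructure(flatText: string): string {
    const updates = flatText
      .split(/(?<=[.!?])\s+/) // naive sentence split, purely illustrative
      .map((u) => u.trim())
      .filter((u) => u.length > 0);

    // Re-add the "\n\n- " bullets the Markdown cleaner stripped out.
    return updates.map((u) => `\n\n- ${u}`).join("");
  }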

3. Triggering report generation based on project activity, not time

Every automatic report trigger I tried—Friday schedule, fixed deadlines in Airtable, repeating cron jobs—was either too early or too late. People would reply in Slack like: “That shipped Monday; why is it in today’s update?” or worse: “We’re not done, don’t report it yet.”

Using file or record changes as a nudge

I started hacking around with activity-based triggers. My best version so far:

  • When a Notion page’s “Status” moves to “Done” + a new comment is added + no summary exists yet
  • This kicks off a Make webhook to start the summary chain
  • The final Slack message includes contributor names pulled from edit history

This feels way closer to how team brains operate—don’t summarize until work is visibly cooked. But there’s a caveat: Notion’s API batching behavior sometimes collapses multiple edits into one payload. So I had to slow down parts of the Make chain with deliberate sleep blocks to avoid missing recent updates. Took two failed summaries and one passive-aggressive PM comment to figure that out.
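
If you wanted to reproduce that gate outside of Make, it would look something like the sketch below, using the @notionhq/client SDK. The “Status” and “Summarized” property names, the checkbox used to mark “summary exists,” and the 30-second delay are all assumptions about my setup, not anything Notion prescribes:

  import { Client } from "@notionhq/client";

  const notion = new Client({ auth: process.env.NOTION_TOKEN });

  // Gate in front of the summary chain. Assumes a Notion "status"-type property
  // named "Status" (use .select instead for an old select field) and a
  // "Summarized" checkbox; swap in your own property names.
  async function shouldSummarize(pageId: string): Promise<boolean> {
    const page: any = await notion.pages.retrieve({ page_id: pageId });
    const status = page.properties?.["Status"]?.status?.name;

    const comments = await notion.comments.list({ block_id: pageId });
    const hasNewComment = comments.results.length > 0;

    // Placeholder for "a summary already exists yet", e.g. a checkbox property.
    const alreadySummarized = Boolean(page.properties?.["Summarized"]?.checkbox);

    return status === "Done" && hasNewComment && !alreadySummarized;
  }

  const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

  async function handleWebhook(pageId: string) {
    // Notion sometimes batches several edits into one payload, so wait a beat
    // before reading the page; this mirrors the sleep blocks in the Make chain.
    await sleep(30_000);
    if (await shouldSummarize(pageId)) {
      // kick off the GPT summary chain here
    }
  }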

4. The hidden Airtable field that breaks Zapier filters silently

This one’s so obnoxious it’s funny. I had a Zap to filter Airtable records where “Report status” equals “Needs review”. I tested it. It worked. Then it didn’t.

The actual bug: if someone renames a single-select option in Airtable, Zapier doesn’t update stored values internally. You’ll see “Needs Review” in the UI, but Zapier still expects the old string—like “needs_review” or “Review pending.” No error. Just… skipped.

I had to go back into the Zap, clear the filter condition, and re-select the matching value. It started working again. So now, in every weekly automation preflight, I re-click every dropdown in Zap filters just to be sure they haven’t reset under the hood. Zero warnings. Not even a log event.
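
If you want to script that preflight instead of re-clicking dropdowns, a rough sketch against Airtable’s base schema endpoint looks like this; the “Reports” table and “Report status” field names are placeholders for your own base:

  // Checks whether the value a Zap filter expects still exists as a
  // single-select option. AIRTABLE_BASE_ID and AIRTABLE_TOKEN come from the
  // environment; table and field names are placeholders.
  async function optionStillExists(expectedValue: string): Promise<boolean> {
    const res = await fetch(
      `https://api.airtable.com/v0/meta/bases/${process.env.AIRTABLE_BASE_ID}/tables`,
      { headers: { Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}` } }
    );
    const schema = await res.json();

    const table = schema.tables.find((t: any) => t.name === "Reports");
    const field = table?.fields.find((f: any) => f.name === "Report status");

    // Single-select fields expose their current option names under options.choices.
    const names: string[] = field?.options?.choices?.map((c: any) => c.name) ?? [];
    return names.includes(expectedValue);
  }

  // optionStillExists("Needs review") coming back false is the tell that someone
  // renamed the option and the Zap filter is now silently matching nothing.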

5. Why my report summaries ran twice every Friday at 6 am

Timestamps are liars in Google Sheets. I had a scenario set to trigger when a new row was added to a Sheet populated by a Zapier webhook. It worked fine… until I noticed we were getting duplicate Slack reports—only on Fridays before standup.

I tore apart everything: formatters, Zap triggers, even Google’s timezone settings. Nothing pointed to duplicate rows. Then I exported the Sheet’s version history and saw it: Google counts a sheet load followed by a minor formatting change (e.g., cell alignment) as a full edit. Since the Make scenario polls every 15 minutes, it saw column H as modified twice and re-sent the row both times.

The fix isn’t elegant: I had to build a filter that only lets rows through if a separate “last_updated_by” column changed. That information isn’t something Sheets shows you as a normal column, but you can surface it with Apps Script and then feed it into Make as a second condition. Super hacky. But it killed the ghost triggers.
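
One way to do that with Apps Script is an installable onEdit trigger that stamps the last editor into a dedicated column; formatting-only changes don’t fire onEdit, which is exactly what makes it useful as a dedup signal. A minimal sketch, not verbatim what I run (the column index and function name are placeholders; it’s written as TypeScript for clasp, so drop the annotations if you paste it into the script editor):

  // Attach this as an *installable* onEdit trigger. Depending on workspace
  // permissions, Session.getActiveUser() can come back empty, hence the fallback.
  function stampLastUpdatedBy(e: GoogleAppsScript.Events.SheetsOnEdit): void {
    const LAST_UPDATED_BY_COL = 10; // placeholder: wherever "last_updated_by" lives
    const sheet = e.range.getSheet();
    const row = e.range.getRow();

    // Skip edits to the stamp column itself so the trigger doesn't loop.
    if (e.range.getColumn() === LAST_UPDATED_BY_COL) return;

    const editor = Session.getActiveUser().getEmail() || "unknown";
    sheet.getRange(row, LAST_UPDATED_BY_COL).setValue(editor);
  }

Make then only lets a row through when that column’s value actually changed between polls.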

6. Embedding links in AI summaries without getting red-flagged by Slack

Here’s a sneaky one: GPT links pasted into Slack often show up with blank previews or hit Slack’s link filter if you’re using a workspace with sensitive content filtering. When I was auto-generating status updates that included “view in Notion” links, people started reporting them as broken.

Turns out Slack applies some internal heuristics to decide whether a link looks suspicious, and the surrounding verbs seem to be part of it. So if GPT writes “Click here to access this”, Slack might treat it like phishing. If it says “See the docs here → [link]”, it’s fine.

I rewired the GPT prompt to tighten this up by injecting harmless-sounding context. Current prompt ending says:

Append a link with clear context like: [Review update → https://notion.so/...]

Zero false positives since then. Also helps that I added a small team emoji before the link (just a bullet or icon) to make it look less bot-generated.

7. Overloading a single webhook when batching summaries from multiple sources

Thought I was clever having Jira, Notion, and GitHub all send “work done” events into the same webhook in Make. When those fire in short bursts around sprint closeout, Make queues and retries the payloads on a staggered delay, which means a GitHub event can get swallowed by a second, overlapping scenario run that consumed the webhook too fast.

The overlapping executions caused mismatched summaries, like a Notion task summary referring to a Jira issue that wasn’t actually related. I was threading together notes from unrelated sprints. Nobody could work out why until someone spotted a GitHub issue number mentioned in a completely unrelated project wrap-up.

Quick practical tips from that mess:

  • Use different webhooks per source even if they flow into the same scenario
  • Add a short source label string at the top of each JSON payload
  • Rate-limit chunks inside Make using sleep + deliberate delays
  • Tag summaries with date-based batch groups using scenario start time
  • Always log all incoming payloads to a buffer like Google Sheets first

And yeah, I now add a “Processing group ID” to every GPT message block. Might be overkill, but I’d rather label too much than miss again.
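
For reference, here’s the rough shape of that labeling as a TypeScript sketch; the field names and the am/pm grouping are my own convention, not anything Make expects:

  // The labels are nothing fancy; the shape below is just a convention.
  type LabeledEvent = {
    source: "jira" | "notion" | "github";
    processingGroupId: string; // date-based batch group, derived from run start time
    receivedAt: string;
    payload: unknown;
  };

  function labelEvent(source: LabeledEvent["source"], payload: unknown): LabeledEvent {
    const now = new Date();
    // Coarse enough that one sprint-closeout burst lands in a single group,
    // fine enough that separate runs on the same Friday don't collide.
    const half = now.getUTCHours() < 12 ? "am" : "pm";
    return {
      source,
      processingGroupId: `${now.toISOString().slice(0, 10)}-${half}`,
      receivedAt: now.toISOString(),
      payload,
    };
  }

  // Every labeled event gets appended to the Google Sheets buffer before any
  // GPT module touches it, so a mismatched summary can be traced back to its batch.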

8. What breaks when you edit prompts mid-run in Make scenarios

This isn’t documented anywhere obvious, but if you pause a Make scenario mid-execution, edit an HTTP or GPT module’s fields, and then resume, Make might—might—use the new field labels but not apply the new config to cached paths.

In my case, I replaced a GPT prompt mid-run to add two new summary points. When I resumed execution, the old prompt ran. But in the UI, it looked like the new prompt had been saved correctly. Logs showed no error, just the previous prompt text re-used. I had to stop the entire execution and re-fire from scratch to get it to stick.

After that, I stopped trusting Make’s save-and-resume. Now I always copy modules before editing during paused runs—let the new one fire fresh instead of gambling.