Automating Daily Standups with Prompt Chains and Fallback Hacks
1. Collecting updates with stable formatting across multiple platforms
We had people posting standup responses in Slack, Notion, and one guy was replying to calendar invites (as in, actually typing his update into the Notes section). I didn’t even know Google Calendar surfaced that unless you went out of your way to look. So the first battle: gathering all of this into something scrapeable that didn’t break every week.
Pulling in Slack updates was simple enough at first: set up a Zap on messages in a private channel, filtered by a specific emoji (I used :dailycheckin:, or sometimes the cowboy-hat guy). Except one week it just… stopped triggering. No errors, no logs. Zapier had silently disabled the trigger because someone toggled Slack’s message retention settings, and the message metadata Zapier relied on no longer included the timestamp blocks it needed. Not documented anywhere.
Ended up using a webhook to pipe messages through a short Node script hosted on Replit. It parsed usernames and message blocks, and shoved everything into a Notion page with consistent markdown formatting. For Notion replies, I hit the API directly and filtered by the “standup” tag. Calendar invite notes required polling (gross) but worked through Google Apps Script connected to the Calendar API, using `.getDescription()` off the `CalendarEvent` object. I know.
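The polling half, roughly, as an Apps Script sketch; the collector URL is a placeholder for the Replit webhook:

```javascript
// Google Apps Script sketch: poll today's events and forward any non-empty
// Notes/description fields to the collector. The webhook URL is a placeholder.
function pollCalendarNotes() {
  const start = new Date();
  start.setHours(0, 0, 0, 0);
  const end = new Date();
  end.setHours(23, 59, 59, 999);
  const events = CalendarApp.getDefaultCalendar().getEvents(start, end);
  events.forEach(function (event) {
    const notes = event.getDescription(); // the field people typed updates into
    if (notes && notes.trim()) {
      UrlFetchApp.fetch('https://your-collector.example.com/calendar-updates', {
        method: 'post',
        contentType: 'application/json',
        payload: JSON.stringify({ title: event.getTitle(), notes: notes }),
      });
    }
  });
}
```

Run it off a time-driven trigger every 15 minutes or so and it quietly does the gross part for you.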
2. Building a summarization prompt that actually fits inside GPT constraints
The raw updates ballooned to over ten thousand tokens if you didn’t compress them. Every time I piped them into GPT-4 for summarization, it’d fail silently or return generic filler like “Everyone made progress”. Classic model dodge.
I tried chunking updates per teammate, but lost cross-cutting info (e.g., someone blocking someone else). What helped was forcing bullet-point compression before the main prompt stage: literally rephrasing each person’s update via a pre-prompt run of text-davinci-003, capped at `max_tokens: 60`. It felt crude, but it consistently gave 1-2 useful bullets per person without spilling over quota.
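For reference, the pre-compression pass as a Node sketch against the legacy completions endpoint (the prompt wording is illustrative, not what I shipped verbatim):

```javascript
// Node 18+ (global fetch). text-davinci-003 used the legacy /v1/completions
// endpoint, so this is a completion call, not a chat call.
async function compressUpdate(rawUpdate) {
  const res = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'text-davinci-003',
      prompt: `Rewrite this standup update as 1-2 terse bullet points:\n\n${rawUpdate}`,
      max_tokens: 60,   // hard cap keeps each person to 1-2 bullets
      temperature: 0.2, // low temp: compression, not creativity
    }),
  });
  const data = await res.json();
  return data.choices[0].text.trim();
}
```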
Then my actual summarization chain started like this:
```json
{
  "role": "system",
  "content": "You are a project manager summarizing today's team check-ins..."
}
```
The trick wasn’t the prompt style—it was that I had to exclude non-ASCII punctuation. GPT freaked out when someone pasted in em-dashes or fancy quotation marks from macOS Notes. Zero mention of that anywhere in OpenAI’s docs, but once I ran everything through a `normalize-unicode` passthrough, all the misfires stopped cold.
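If you’d rather not pull in a package, the same passthrough works with built-ins. A sketch (how aggressive you get with the final strip is up to you):

```javascript
// Normalization passthrough using only built-ins.
// The last rule is blunt: it drops any remaining non-ASCII outright.
function toPlainAscii(text) {
  return text
    .normalize('NFKD')                 // decompose accented characters
    .replace(/[\u2018\u2019]/g, "'")   // curly single quotes
    .replace(/[\u201C\u201D]/g, '"')   // curly double quotes
    .replace(/[\u2013\u2014]/g, '-')   // en/em dashes
    .replace(/\u2026/g, '...')         // ellipsis character
    .replace(/[^\x00-\x7F]/g, '');     // everything else non-ASCII
}
```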
Undocumented edge case
If any of the updates mentioned URLs with query params, GPT occasionally hallucinated tasks out of them. Like: “Zach is working on ?utm_tracking_campaign=x”. The fix was to strip links before summarization and only reinsert them afterwards.
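The strip-and-reinsert step is small enough to show whole; the `[LINK_n]` placeholder format is just my own convention:

```javascript
// Swap URLs for stable placeholders before summarization...
function stripLinks(text) {
  const links = [];
  const stripped = text.replace(/https?:\/\/\S+/g, (url) => {
    links.push(url);
    return `[LINK_${links.length - 1}]`;
  });
  return { stripped, links };
}

// ...then restore them in the summary afterwards.
function reinsertLinks(summary, links) {
  return summary.replace(/\[LINK_(\d+)\]/g, (match, i) => links[i] ?? match);
}
```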
3. Adding fallback logic when no one submits on time
The real issue was the day no one submitted anything. The automation ran at 10:12am, output a blank summary, and emailed leadership with a beautifully polished report that basically said, “Nothing happened today.” Looked bad.
No errors thrown, because the pipeline ran flawlessly—on empty data.
I patched this by adding a sanity check step before triggering OpenAI. Basically a filter in Zapier that checks if the length of the “collected updates” field is less than 10 characters. If so, it reroutes to a fallback action that pings the team lead in Slack with “Hey, nothing got submitted for today’s standup. Rerun manually?”
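If you’d rather do the check in a Code by Zapier step than a bare Filter, it’s one conditional; here `inputData.updates` is assumed to be mapped from the collection step:

```javascript
// Code by Zapier step: expose a boolean the next Filter step can branch on.
const updates = (inputData.updates || '').trim();
output = { hasUpdates: updates.length >= 10 };
// Filter: continue to OpenAI only when hasUpdates is true; otherwise the
// Zap routes to the Slack fallback ping.
```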
Bonus glitch: if a late submission came in right after the check but before the Slack ping executed (the Zap is async and slow), it could trigger both paths. So if you’re doing this, either set a short delay before fallback kicks in—15 seconds worked for me—or debounce with a timestamp lock in Airtable.
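The delay variant looks like this as plain Node; `fetchCollectedUpdates` and `pingTeamLead` are hypothetical stand-ins for whatever your pipeline actually exposes:

```javascript
// Wait out the race window, then re-check before pinging anyone.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function maybeSendFallback() {
  await sleep(15000); // the 15-second grace period that worked for me
  const updates = await fetchCollectedUpdates(); // hypothetical re-read of the field
  if (updates.trim().length < 10) {
    await pingTeamLead("Hey, nothing got submitted for today's standup. Rerun manually?");
  }
}
```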
4. Logging summaries while avoiding rate limits and token errors
I tried saving all final summaries into a single Notion database with a “date” key as the primary property. Quickly hit problems:
- Notion API hard-throttles if you insert too quickly
- Dates from Google Calendar sometimes had hidden timezone offsets, so my rollups misaligned
- GPT summaries with emojis became unreadable due to broken encoding on re-entry
- Zapier sometimes sent the body as HTML-escaped text even though I gave it Markdown
Swapped things around: each summary triggered a webhook to Make, where I had finer control. Took the raw JSON, reformatted dates to ISO UTC (using JavaScript’s `.toISOString().split("T")[0]`) and delayed inserts by 500ms between each write. That stopped both the throttle and the misformatted date blobs.
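The Make scenario itself doesn’t paste well as text, but the logic it implements is roughly this (`notionInsert` is a hypothetical wrapper for the actual Notion module):

```javascript
// Reformat dates to ISO UTC and space out Notion writes by 500ms.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function writeSummaries(summaries) {
  for (const summary of summaries) {
    // toISOString() normalizes to UTC, which kills the hidden-offset problem.
    const isoDate = new Date(summary.date).toISOString().split('T')[0];
    await notionInsert({ ...summary, date: isoDate }); // hypothetical Notion API wrapper
    await sleep(500); // enough spacing to stay under Notion's throttle
  }
}
```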
Then logged summaries not just to Notion, but also emailed them to a Gmail account backed by a filter that auto-tagged each report by sender. This weirdly helped me debug later, because I could see timestamps from Gmail metadata when Notion’s didn’t line up. Visible race conditions.
5. Dealing with team members who reply at random hours
Someone always updated at 2am. Every day. We’re not a global team—it’s just one guy who likes quiet hours. But the automation stopped collecting updates after 10:30am so his never got picked up.
I adjusted the schedule on Zapier to collect responses every 15 minutes between 9am and 1pm, and set a deduplication key per user per day. That way if someone replied late, we’d still pull it in for tomorrow’s report as a “carryover” task—or if they were fast enough, the same-day Zap caught it.
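The dedup key itself is nothing fancy; the point is that it’s scoped per user per day, so a late reply updates an existing record instead of creating a duplicate:

```javascript
// One record per user per day; a second reply the same day overwrites it.
function dedupKey(userId, submittedAtMs) {
  const day = new Date(submittedAtMs).toISOString().split('T')[0]; // YYYY-MM-DD in UTC
  return `${userId}_${day}`;
}
// e.g. dedupKey('U02ABCDEF', Date.now()) -> something like 'U02ABCDEF_2024-01-15'
```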
But then Make started queuing up overlapping executions whenever a run took more than 9 minutes. I didn’t know it would parallel-run like that. Result: duplicate records. Resolved it by adding a middle step that writes a run status to Airtable with a `status: running` flag and refuses to start a new run while one is already in progress.
“I had to build a locking system in a spreadsheet to stop overlapping Zaps. That’s how we live now.”
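For what it’s worth, the “locking system in a spreadsheet” is maybe twenty lines. A sketch against Airtable’s REST API; the base ID, table, and field names are my own placeholders, and the check-then-create isn’t atomic, so treat it as best-effort rather than a real mutex:

```javascript
// Node 18+ (global fetch). Base ID, table, and field names are placeholders.
const BASE = 'https://api.airtable.com/v0/appXXXXXXXXXXXXXX/Runs';
const HEADERS = {
  Authorization: `Bearer ${process.env.AIRTABLE_TOKEN}`,
  'Content-Type': 'application/json',
};

async function acquireLock() {
  // Is any run already marked as running?
  const formula = encodeURIComponent(`{status} = 'running'`);
  const existing = await fetch(`${BASE}?filterByFormula=${formula}`, { headers: HEADERS })
    .then((r) => r.json());
  if (existing.records.length > 0) return null; // refuse to start a second run

  // Otherwise record this run as running and return its record ID.
  const created = await fetch(BASE, {
    method: 'POST',
    headers: HEADERS,
    body: JSON.stringify({
      fields: { status: 'running', startedAt: new Date().toISOString() },
    }),
  }).then((r) => r.json());
  return created.id;
}

async function releaseLock(recordId) {
  await fetch(`${BASE}/${recordId}`, {
    method: 'PATCH',
    headers: HEADERS,
    body: JSON.stringify({ fields: { status: 'done' } }),
  });
}
```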
6. Skipping updates when someone is out without awkward Slack shaming
Initially, the auto-summary would just say “Alex did not submit an update,” which got weird when Alex was on vacation. Cue accidental guilt trips and a Slack thread about respecting PTO.
I reworked the handling by pulling PTO data from our HR tool (an obscure BambooHR API integration) and combining it with Google Calendar OOO detection. The key was timing—I had to check for absence before collecting updates, or risk flagging people who weren’t even expected to show up.
So now the chain flows like this (sketched in code after the list):
- Check who is OOO today
- Exclude those users from the collection prompt
- If a user is not on the OOO list and submitted nothing, note them as “Pending”
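The OOO check is the fiddly part. A sketch of the BambooHR side, assuming the standard “who’s out” endpoint; the company domain is a placeholder, and auth is the API key as a Basic-auth username with a dummy password, per BambooHR convention:

```javascript
// Node 18+ (global fetch). Returns the set of people off today.
async function getWhosOutToday() {
  const today = new Date().toISOString().split('T')[0];
  const url = `https://api.bamboohr.com/api/gateway.php/yourcompany/v1/time_off/whos_out/?start=${today}&end=${today}`;
  const res = await fetch(url, {
    headers: {
      Authorization:
        'Basic ' + Buffer.from(`${process.env.BAMBOO_API_KEY}:x`).toString('base64'),
      Accept: 'application/json',
    },
  });
  const entries = await res.json();
  // Names come back as BambooHR knows them, not as Slack handles,
  // which is exactly where the mapping problem below comes from.
  return new Set(entries.map((e) => e.name));
}
```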
Bonus: sometimes Slack usernames don’t match HR records. I had to map them manually the first week. There’s zero API field that reliably links the two—had to build a lookup table in Airtable and maintain it, like a caveman.
7. Rediscovering a broken setting that fixed the GPT prompt timeout
I spent two hours trying to figure out why GPT-4 was returning “This prompt seems incomplete” errors—despite the prompt JSON being fine. Revalidated it via curl, parsed it in jq, everything looked good.
Turns out I had forgotten that Zapier’s OpenAI integration defaults to GPT-3.5 unless you create a new OpenAI account connection after 2023. I was still using one from last summer. Swapped it out, recreated the connection, and suddenly everything worked. No timeout. No cutoff.
The wildcard was a leftover setting on the OpenAI app within Zapier, not visible unless I triggered a fresh account authorization flow. I wouldn’t have found it if I hadn’t spotted this in the debug logs:
```json
{
  "model": "text-davinci-003",
  "error": "Token context length exceeded"
}
```
Except I wasn’t using DaVinci. Or so I thought. Apparently defaults still override platform-side configs if you don’t explicitly set the model every time. Now all my calls include a hard-coded model in the API launch step, just out of spite.
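Concretely, every call now goes out looking like some version of this, with `model` set explicitly regardless of what the platform claims its default is:

```javascript
// Node 18+ (global fetch). Standard chat completions call with the model
// pinned; messages is the array starting with the system prompt above.
async function summarize(messages) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4', // never rely on a platform-side default again
      messages,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```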