How I Set Up AI Meeting Notes That Actually Made Sense
1. Connecting calendar events to AI summaries without missing metadata
First time I tried syncing my Google Calendar to an OpenAI-powered meeting note generator, it cheerfully ignored 40% of my events. Turns out, if an event doesn’t explicitly have any guests (even if it’s just me hopping into a Zoom call), most AI tools shrug and assume it’s not a real meeting. I had to add myself as a participant to my own meetings just to get them processed.
The plugin I was using (Fireflies.ai) depended on attendees + location data to even recognize an event as a meeting worth transcribing. MAJOR missing feature: nothing warned me this was happening. It just silently skipped those events. No log, no alert, no note in the interface, just missing notes.
What worked best long-term was using Zapier to trigger an OpenAI call whenever a calendar event started. I had to base it on a calendar filter and make sure the event had the keyword “Zoom” or “Teams” in its location or description. I also experimented with using Google Meet’s automatic recording as a trigger, but that only works if your org has it turned on; mine didn’t.
Key workaround: inject a small note like “Send to AI” into the event description. You can then filter for just those events.
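If you’re rolling your own filter instead of relying on the tool’s heuristics, the logic is simple. A minimal sketch, assuming events come in as dicts shaped like the Google Calendar API’s (where description and location are optional string fields, and “Send to AI” is the marker from above):

```python
def should_process(event: dict) -> bool:
    """Decide whether a calendar event is worth sending to the summarizer."""
    description = event.get("description", "")
    location = event.get("location", "")

    # The explicit opt-in marker from above beats any heuristic
    if "Send to AI" in description:
        return True

    # Fallback: a video-call keyword in location or description
    # usually means it's a real meeting
    return any(kw in f"{description} {location}" for kw in ("Zoom", "Teams"))
```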
2. Finding the right balance between transcript noise and useful summaries
Let’s talk about what the AI actually generates, because most summaries are either too shallow (“Meeting about pricing”) or bloated with useless chatter (“Everyone said ‘Hi’ and waited 2 minutes for someone to share screen”). I tried Otter, Fireflies, Rewatch, and even hacked together a Whisper + GPT-3.5 chain on Make. All of them struggled with the same thing: assuming that repetition means importance.
In a sales call, if someone repeats “We really care about compliance” four times, the AI goes into overdrive and lists it under “Key Objectives” AND “Objections” AND “Next Steps.” It’s not smart about context unless you give it guidance. That’s where a block of system instructions makes an actual difference. I started sending my transcripts into GPT-4-Turbo with a prompt that looked roughly like this:
You are a meeting summary assistant. Summarize this transcript.
Include:
- One short summary of what decisions were made
- Any action items (bullet points)
- Participants who spoke (names only)
Skip greetings, small talk, and repeated filler.
Once I added those lines, the output stopped hallucinating extra speakers and stopped including weird timestamps like “At 00:29:02 someone said… ” which weren’t actually meaningful. That instruction to “skip greetings” does more work than you’d expect — GPT is too polite by default.
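If you’re calling the API directly rather than going through Make or Zapier, the same prompt drops in as a system message. A minimal sketch using the OpenAI Python SDK (the function shape and variable names are my choices):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = """You are a meeting summary assistant. Summarize this transcript.
Include:
- One short summary of what decisions were made
- Any action items (bullet points)
- Participants who spoke (names only)
Skip greetings, small talk, and repeated filler."""

def summarize(transcript: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```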
3. Auto-start recordings or transcripts when the meeting actually begins
This one was harder than I thought: getting automation to trigger the transcript only when the meeting actually starts — not when it’s on your calendar. Calendars run on hope. People delay and reschedule. I had a week where my AI notes kept summarizing placeholder calls that never happened, because the system assumed anything in the calendar was real.
Eventually, I turned to Zoom’s meeting.started webhook. Here’s the weird bug: if a host launches the meeting 10 minutes early and no one joins, Zapier still processes it as a real meeting. I had to add a second filter that checked whether the transcript had at least one change in speaker name. It’s incredibly hacky, but it mostly worked:
if len(transcript.speakers) > 1:   # more than one distinct speaker joined
    continue_pipeline()
else:
    halt_summary_generation()
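If you’d rather receive the webhook yourself instead of going through Zapier, a minimal Flask sketch looks like this. The endpoint path and the downstream helper are mine; the event name and payload shape follow Zoom’s webhook docs:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/zoom-events", methods=["POST"])
def zoom_events():
    body = request.get_json()
    # Zoom sends {"event": "meeting.started", "payload": {"object": {...}}}
    # (a production endpoint also has to answer Zoom's URL-validation handshake)
    if body.get("event") == "meeting.started":
        meeting = body["payload"]["object"]
        start_transcription(meeting["id"])  # hypothetical downstream step
    return "", 200
```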
Google Meet gives you nothing here. Unless you use a bot account to auto-join (which brings a whole other set of platform issues), you can’t reliably get real-time triggers. So I spent a week logging into all meetings on one device while using a second one to capture videos manually via OBS — just to simulate “bot presence.” Not proud of it. It worked?
4. Training GPT to recognize common phrases and skip repeated fluff
If you spend enough time listening to internal team meetings, you’ll start to compile a Greatest Hits album of fluff. My team says “we can circle back on that” in maybe 80% of our standups. If you feed that into an AI every time, your notes are just a graveyard of vague future promises.
The fix? I started compiling a blocklist of phrases I didn’t want showing up in summaries. Then I injected that list directly into the system prompt. Something like this:
If the following phrases appear, discard the sentence:
["let's circle back", "we'll ping them later", "put a pin in it"]
Some models still sneak a few through, but GPT-4 is shockingly good at chucking empty phrases once it’s told not to glorify them. There’s also an unexpected bonus here: doing this forces you to notice how your team actually communicates. After reading the same non-progress statements in 12 meetings, I finally just brought it up to everyone. Now our meetings are shorter and less passive.
None of the off-the-shelf AI summarizers let you do this cleanly. They assume your job is to filter after — but once the summary is polluted, it’s too late.
5. Capturing speaker names without needing a transcription service login
There’s always someone on the team who doesn’t want to create a new account for yet another transcription tool. The workaround I found was to skip name-training and instead assign pseudo-names via speaker diarization (Whisper alone doesn’t separate speakers, so this needs a diarization layer like pyannote on top). The sneakier part is embedding metadata in the calendar event and matching against timestamps.
What I did:
- Add “Alex is speaking for Slide 3 to 6” in the event description.
- Use time ranges to match against speaker timestamp segments from the transcript.
- Rename speakers accordingly before feeding into GPT for summary generation.
Yeah, it’s brittle. Yeah, it breaks if people go off-script or someone takes over a slide. But it’s better than “Speaker 1” and “Speaker 2” alternating in a room of five people. Fireflies and Rewatch try to solve this via user accounts, but unless your team’s centralized and compliant — good luck.
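Here’s roughly what the renaming step looks like in code. A sketch only: the segment shape and the parsed name windows are illustrative, and real diarization output varies by tool.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # diarization label, e.g. "SPEAKER_00"
    start: float   # seconds from meeting start
    end: float
    text: str

# Windows parsed from the event description, e.g.
# "Alex is speaking for Slide 3 to 6" -> ("Alex", 600.0, 1200.0)
name_windows = [("Alex", 600.0, 1200.0), ("Sam", 1200.0, 1800.0)]

def rename_speakers(segments: list[Segment]) -> list[Segment]:
    for seg in segments:
        for name, start, end in name_windows:
            # If the segment overlaps a window, assume it's the named speaker
            if seg.start < end and seg.end > start:
                seg.speaker = name
                break
    return segments
```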
6. Splitting one long transcript into useful summaries per agenda section
Most AI summarizers treat the meeting as a single blob of conversation. But if you’ve got a structured agenda (e.g. “Hiring > Budget > Launch Plan”), you want your notes split up accordingly. I initially tried doing this with heading detection in GPT via prompt hacks, but it kept guessing wrong whenever people jumped around the agenda order.
The “aha” moment was when I started injecting section markers into the transcript using a Google Doc template + live typed headings from a shared notetaker. So as someone typed “## Hiring Update,” those cues made it into the transcript. Then later, GPT could chunk the content more reliably.
Basically: get your signals into the transcript while people are still talking — not after.
This worked best in Zoom, with real-time captions pulled via OBS into Whisper. Google Meet gave me no API to inject anything, so I set up a trigger that posted the heading into the shared doc whenever the agenda changed, then merged those timestamped cues into the transcript afterward.
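Once the markers are in the transcript text, the chunking itself is mechanical. A minimal sketch, assuming the “## ” heading convention from above:

```python
import re

def split_by_agenda(transcript: str) -> dict[str, str]:
    """Split transcript text into sections on lines like '## Hiring Update'."""
    sections: dict[str, str] = {"Preamble": ""}
    current = "Preamble"
    for line in transcript.splitlines():
        heading = re.match(r"^##\s+(.+)", line)
        if heading:
            current = heading.group(1).strip()
            sections.setdefault(current, "")
        else:
            sections[current] += line + "\n"
    return sections

# Each chunk can then be summarized on its own:
# for title, text in split_by_agenda(raw_transcript).items():
#     summarize(f"Agenda section: {title}\n{text}")
```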
7. The missed setting that fixed my disappearing summaries in Notion
Three weeks in, I suddenly noticed that some meetings were fully transcribed but no summary was ever logged in Notion. The Zap log showed it got triggered. OpenAI returned a response. Then poof: the record vanished somewhere downstream. No error, no 429, just… nothing.
Buried deep in the Notion integration settings is a detail I completely overlooked: if the Notion page is created via automation, but the database template applies specific property constraints (like a required multi-select tag that wasn’t populated), the page is created but doesn’t surface in views. It exists, but it’s ghosted from your interface.
The stupid fix was adding a dummy tag like “unsorted” in every Zap by default. Once I did that, all the missing summaries reappeared in my tracking dashboard.
I probably erased five working summaries just because I assumed they didn’t generate. They were right there. Just invisible.
Also worth noting: my Notion Zap really didn’t like receiving single quotes in AI-generated titles. I had to strip or double-escape quotes in names like “Client’s Budget Review,” or the whole entry would fail quietly. (Almost certainly a JSON-escaping problem in the hand-built request body rather than Notion itself.)
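For anyone hitting Notion’s API directly, here’s a sketch of a page-creation payload with both fixes baked in: the default “unsorted” tag and proper JSON escaping. The property names “Name” and “Tags” are whatever your database actually uses; the payload shape follows Notion’s public API.

```python
import json

def notion_page_payload(database_id: str, title: str) -> str:
    payload = {
        "parent": {"database_id": database_id},
        "properties": {
            "Name": {"title": [{"text": {"content": title}}]},
            # Always populate the required multi-select so the page
            # shows up in filtered views (the fix from above)
            "Tags": {"multi_select": [{"name": "unsorted"}]},
        },
    }
    # json.dumps escapes quotes correctly; titles like "Client's Budget
    # Review" only break if you template the request body by hand
    return json.dumps(payload)
```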
8. Building in a human review step without slowing down automation
Everyone wants the AI to do the meeting notes until someone’s name is spelled wrong or a client pitch sounds like it was written by Clippy. Best solution I’ve found: automate 90%, but route the summary draft to a Slack channel for quick human review before it gets pushed to the final hub.
This is what that looked like:
- Transcript triggers an OpenAI summary
- Result posted as a thread in #meeting-notes-review
- Custom emoji reaction triggers next Zapier step (e.g. ✅ = approve, 🛑 = cancel)
The clever bit: only managers and PMs could react with the ✅. Everyone else could comment to suggest a change. Once approved, Notion gets populated, emails go out, whatever you want. It added maybe 30 seconds of latency but saved me from a few awkward calls where the AI summary included “[Indistinct speaker jokes about layoffs].”
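The reaction gate is easy to reproduce outside Zapier too: subscribe to the Slack Events API’s reaction_added event and check the reactor against an allowlist. A minimal sketch (the user IDs and downstream helpers are placeholders):

```python
APPROVERS = {"U01MANAGER", "U02PMLEAD"}  # Slack user IDs allowed to approve

def handle_reaction(event: dict) -> None:
    """Handle a Slack Events API 'reaction_added' payload."""
    reaction = event["reaction"]      # emoji name without colons
    message_ts = event["item"]["ts"]  # identifies the summary thread

    if reaction == "white_check_mark" and event["user"] in APPROVERS:
        push_summary_to_notion(message_ts)  # hypothetical downstream step
    elif reaction == "octagonal_sign":
        cancel_summary(message_ts)          # hypothetical downstream step
```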