How AI Checklists Miss Steps and Trigger at the Wrong Time
1. Rebuilding onboarding flows when the trigger barely responds
First attempt: I set up a Notion database, created a new team member record with all the usual fields—name, role, manager, start date—and built a make.com scenario to watch for new entries. Tested fine on Tuesday. Wednesday morning, the webhook wouldn’t trigger, and I hadn’t touched a thing.
Turns out that if your Notion source database is filtered with a custom view, and that view is what your Make webhook is watching, it can sometimes stop catching new entries because Notion’s sync API lags behind the base table update. Removing the view filter helped, but now the webhook fires for archived draft employees, too. Not great.
The ugly fix: I added an extra boolean column called “Trigger Ready” and linked the Make trigger to that column being true. Manual, but visible. You tick a box when a teammate’s onboarding is confirmed. It delays automation by a click, but at least it runs consistently.
I still get phantom triggers about once a week: Make says it received the data, but there’s no visible task in my Airtable. Dug through the execution logs and once saw the variable employee.email was actually embedded in a nested array I didn’t expect. Notion schema drift strikes again.
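The defensive fix on my end was to stop assuming the shape of that field at all. Here’s a minimal sketch in Python of the kind of extraction guard I mean; the helper name is mine, and the payload shapes are just the two variants I’ve actually seen:

```python
from typing import Any, Optional

def extract_email(value: Any) -> Optional[str]:
    """Dig an email string out of whatever shape the payload delivers:
    a bare string, a dict with an 'email' key, or a nested array."""
    if isinstance(value, str):
        return value if "@" in value else None
    if isinstance(value, dict):
        # Notion-style property objects keep the payload under a key
        # matching their type, e.g. {"type": "email", "email": "a@b.com"}
        return extract_email(value.get("email"))
    if isinstance(value, list):
        for item in value:
            found = extract_email(item)
            if found:
                return found
    return None

# The flat shape I tested against, and the nested array shape
# that actually showed up in the execution logs:
assert extract_email("jane@company.com") == "jane@company.com"
assert extract_email([{"type": "email", "email": "jane@company.com"}]) == "jane@company.com"
```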
2. Injecting AI into routines without remembering who clicked what

This one burns me regularly: an AI-generated checklist via OpenAI gets dropped into the onboarding doc, but no one remembers which box they already clicked. Middleware saves state—but the team updates a shared Google Doc, so the bot has no idea what’s been changed unless someone re-triggers it manually.
I tried using a persistent store in n8n, mapped to each new hire, with a unique hash ID based on their Slack ID + start date. Every time someone clicks a button labeled “Mark this complete” in Slack, it POSTs to n8n, updates the record, and sends a follow-up response summarizing which steps are done. Problem is, the hash IDs don’t always match—Slack can change user IDs if someone gets re-activated, and the trigger skips over the right person.
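To make the failure mode concrete, here’s roughly what that key computation looks like. A sketch, not the actual n8n function node; the takeaway is that any input that can silently change (like the Slack ID) poisons the key:

```python
import hashlib

def checklist_key(slack_user_id: str, start_date: str) -> str:
    # The key the persistent store is indexed by. If Slack hands a
    # re-activated member a fresh user ID, this stops matching and the
    # lookup silently misses: exactly the skip described above.
    raw = f"{slack_user_id}:{start_date}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Same human, different Slack ID after re-activation: two different keys.
print(checklist_key("U02ABCDEF", "2024-07-01"))
print(checklist_key("U09ZYXWVU", "2024-07-01"))
```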
Worst case: duplicate checklists appear in ClickUp with slightly different AI-generated language. It actually matters. Managers glance at the checklist, see two similar tasks saying “Notify IT” and “Let IT know”, and assume both have been handled. Nope. Same action, twice, half-done.
Workaround: I started appending a checklistContextToken to each list. It’s literally a JSON blob I insert at the bottom of the comment thread, so the AI can parse it if re-triggered and avoid confusion. It looks dumb, but it means the bot can detect list reuse without external state.
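For reference, this is the shape of that blob. The field names are just what I use; the only contract is that the bot can regex it back out of the thread before generating anything new:

```python
import json
import re

TOKEN_PREFIX = "checklistContextToken:"

def render_token(checklist_id: str, generated_at: str, steps_done: list[str]) -> str:
    """The blob dropped at the bottom of the comment thread."""
    return TOKEN_PREFIX + json.dumps(
        {"checklistId": checklist_id, "generatedAt": generated_at, "stepsDone": steps_done}
    )

def find_token(comment_thread: str) -> dict | None:
    """On re-trigger, scan the thread; if a token exists, this list was
    already generated and we can diff instead of duplicating."""
    match = re.search(re.escape(TOKEN_PREFIX) + r"(\{.*\})", comment_thread)
    return json.loads(match.group(1)) if match else None

thread = "Notify IT: done\n" + render_token("chk-jane-001", "2024-06-17", ["notify-it"])
assert find_token(thread)["checklistId"] == "chk-jane-001"
```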
3. Detecting duplicate checklist creation across apps with no memory
Originally I had one Zap that created a checklist in Coda, and another Make script that emailed a welcome note. The problem? Both relied on the same trigger – a new Google Sheets row labeled “New Hire.” When someone fixed a typo later (like changing “Andrw” to “Andrew”), it fired again.
Coda didn’t have deduplication at the time. So it happily stacked two onboarding checklists per hire. Each with its own AI-generated content, which meant the steps weren’t identical. IT once provisioned a MacBook and an iPad for someone who just needed the laptop—because the second list included an extra bullet the other didn’t.
I eventually added a timestamp-based event memory using Data Stores in Zapier. It stores a hash of the full row plus a “last sent” value. If the hash matches and it’s been less than 30 minutes, the Zap cancels itself. Still not foolproof—Google Sheets logs invisible edits as actual updates, like switching number formatting—but close enough that it stopped the duplicate chaos.
Quote from my log that saved me: trigger.context.previous_values.first_name = null. So yeah, that “update” was literally a no-op. Zapier triggered anyway.
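Here’s the dedupe check itself, as a minimal in-memory sketch. The real version keeps the store in Zapier rather than a Python dict, but the logic is the same hash-plus-window comparison:

```python
import hashlib
import json
import time

WINDOW_SECONDS = 30 * 60  # the 30-minute window from above

store: dict[str, dict] = {}  # stand-in for the Zapier data store

def should_send(row_id: str, row: dict) -> bool:
    """Return False (cancel the Zap) when the row content is unchanged
    AND we already sent within the window."""
    row_hash = hashlib.sha256(
        json.dumps(row, sort_keys=True).encode("utf-8")
    ).hexdigest()
    prev = store.get(row_id)
    now = time.time()
    if prev and prev["hash"] == row_hash and now - prev["last_sent"] < WINDOW_SECONDS:
        return False
    store[row_id] = {"hash": row_hash, "last_sent": now}
    return True

row = {"first_name": "Andrew", "role": "Designer"}
assert should_send("row-42", row) is True    # first time: fire
assert should_send("row-42", row) is False   # same content, same window: cancel
```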
4. Prompting OpenAI to adapt when team roles shift halfway
Mid-onboarding, our content hire suddenly got converted into a contractor. The ClickUp checklist was still framed like onboarding a full-time employee—benefits setup, IT badge request, internal buddy assignment.
Immediate request from HR: can the AI-generated checklist adjust based on role? Sure. But I hadn’t passed role-specific context into the prompt before. Everything ran off a static template in Make.
Update: I patched the OpenAI call to include a conditional JSON block like this:
```json
{
  "role": "contractor",
  "accessNeeds": ["email", "Slack", "clickup"],
  "skipSections": ["healthBenefits", "securityBadge"]
}
```
Then in the prompt: “Only include checklist steps applicable to the provided accessNeeds. Ignore sections in skipSections.”
Caveat: OpenAI sometimes still hallucinated “contractor welcome call with HR” even though that’s explicitly skipped. Best win was prompting with stronger language like “exclude entirely” instead of “ignore.” GPT-4 sticks closer to JSON structure if you spell it out with “strictly follow this data structure.”
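Putting the pieces together, the call ends up looking something like this. A sketch assuming the current openai Python client and an API key in the environment; the exact prompt wording is the part doing the work:

```python
import json
from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

role_context = {
    "role": "contractor",
    "accessNeeds": ["email", "Slack", "clickup"],
    "skipSections": ["healthBenefits", "securityBadge"],
}

# The wording that finally stuck: "strictly follow" + "exclude entirely",
# instead of the softer "ignore" that still leaked skipped sections.
prompt = (
    "Generate an onboarding checklist. Strictly follow this data structure:\n"
    + json.dumps(role_context, indent=2)
    + "\nOnly include checklist steps applicable to the provided accessNeeds. "
    "Exclude entirely any section listed in skipSections."
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```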
Accidental discovery: when I switched from English to German to test locale sensitivity, role handling improved. Prompt said “Vermeiden Sie alle…” (“Avoid all…”), and it got it right. Something about OpenAI’s non-English parsing must lock structure harder, possibly due to fewer idiomatic terms. Wild, but it worked.
5. Aligning task ownership when automations fight over assignments
This is where things get loud. One of the Make scenarios assigns onboarding steps to managers in ClickUp. Another assigns the same tasks based on department via Zapier. Both watched the same hiring webhook. No priority logic—just whoever got there first owned it. Sometimes Jane the new hire got all her onboarding tasks assigned to a PM she never met.
ClickUp’s audit log wasn’t helpful. All it said was that the task “was updated,” no clues on which automation did it. I had to bake in a comment thread with every assignment log like:
```
Assignment triggered by Make → Manager: {{managerEmail}}
```
and made Zapier do the same. Now at least we can see which system took a shot at assigning.
But then came the ugly edge case where both systems watched the same Slack command too. HR types “/onboard @jane_doe”, and both automations respond with “Success” — double work. I added a conflict detector using Integromat’s (now Make) flow control module: if a checklist already exists for this user ID in the last 2 hours, fail with a visible Slack alert. Only Zapier sends the success message now; Make waits for a green light from a router path.
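Stripped of the Make-specific modules, the guard amounts to this. A sketch with an in-memory dict standing in for the shared data store and a placeholder webhook URL:

```python
import time
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder
CONFLICT_WINDOW = 2 * 60 * 60  # the 2-hour window

recent_checklists: dict[str, float] = {}  # stand-in for the shared data store

def try_create_checklist(user_id: str) -> bool:
    """Gate in front of checklist creation: if any automation already
    created one for this user inside the window, alert and abort."""
    created_at = recent_checklists.get(user_id)
    if created_at is not None and time.time() - created_at < CONFLICT_WINDOW:
        requests.post(SLACK_WEBHOOK_URL, json={
            "text": f":warning: Duplicate onboarding trigger for <@{user_id}>, skipping."
        })
        return False
    recent_checklists[user_id] = time.time()
    return True
```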
A small bug I hit last week: Make’s Slack module doesn’t properly escape underscores in user emails when using them as buttons. E.g., jane_doe@company.com gets eaten by Slack’s markdown parser and turns into italic text. The button fails silently. Button IDs must be hex only; I learned that after 30 minutes of wondering where my modal went.
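My workaround, sketched below: derive the action ID from a hex digest of the email and stash the raw address in the button’s value, which Slack treats as an opaque string. Block shape per Slack’s Block Kit; the label is mine:

```python
import hashlib
import json

def onboarding_button(email: str) -> dict:
    """Slack Block Kit button whose action_id is a pure-hex digest of
    the email; the raw address (underscores and all) rides along in
    `value`, where no markdown parsing happens."""
    return {
        "type": "button",
        "text": {"type": "plain_text", "text": "Mark this complete"},
        "action_id": hashlib.md5(email.encode("utf-8")).hexdigest(),
        "value": email,
    }

print(json.dumps(onboarding_button("jane_doe@company.com"), indent=2))
```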
6. Syncing AI-generated documents with manual edits from managers
Back in Notion again. I had AI generating onboarding guides using the employee’s name, department, and a few project-specific context tags (e.g., AI/UX, product marketing, DevOps). Sent straight into a Notion page.
Very slick at first—until managers started editing them. Some cut out sections they didn’t think applied. Some added lines that the AI didn’t recognize the next time it regenerated. When retriggered, the old edits got wiped.
Solution was messy but repeatable: at the bottom of every AI-generated doc, I inserted a hidden comment block:
```html
<!-- ONBOARD-GUIDE: templateId=basic-v2, lastGen=2024-06-17T13:33Z -->
```
Then the system could parse the page and check: if there’s a newer version than the one requested, it halts. Instead of overwriting, it drops a Slack message to the manager like “Conflicting edits found. Want to regenerate from scratch or patch on top?”
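The version check itself is only a few lines once the marker is parseable. A sketch against the marker format above; the decision strings are placeholders for the two Slack-prompt branches:

```python
import re
from datetime import datetime, timezone

MARKER_RE = re.compile(
    r"<!-- ONBOARD-GUIDE: templateId=(?P<template>[\w-]+), "
    r"lastGen=(?P<last_gen>\d{4}-\d{2}-\d{2}T\d{2}:\d{2})Z -->"
)

def decide(page_body: str, this_run_gen: datetime) -> str:
    """Parse the hidden marker; if the page already carries a newer
    lastGen than the run we're servicing, halt instead of overwriting."""
    match = MARKER_RE.search(page_body)
    if not match:
        return "generate"  # no marker yet: first generation, safe to write
    last_gen = datetime.strptime(match["last_gen"], "%Y-%m-%dT%H:%M").replace(
        tzinfo=timezone.utc
    )
    # A newer marker means a later generation (plus possible manual
    # edits on top of it) already landed, so ask the manager instead.
    return "ask_manager" if last_gen > this_run_gen else "generate"

page = "guide text\n<!-- ONBOARD-GUIDE: templateId=basic-v2, lastGen=2024-06-17T13:33Z -->"
print(decide(page, datetime(2024, 6, 1, tzinfo=timezone.utc)))  # ask_manager
```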
The part that surprised me: Notion’s API doesn’t surface comment contents unless you’ve authenticated as the person who left them. So my automation bot account had to be the one who adds the comment. If anyone else edited it, we’d lose context. Now we run all page comment inserts through a GroupTools bot user that owns all metadata injection duties.
7. Delaying onboarding steps based on absence status in calendars
One week we onboarded three people who were all technically signed but not starting for ten days. The HR tool marked them as “Active”, which meant all the onboarding tasks and AI checklists kicked off. IT was provisioning laptops two weeks early. Workspace accounts expired before day one.
I built a proof-of-concept that watches the company calendar—if the start date is in the future and it’s more than 2 days away, pause the onboarding steps. That worked well, except when the employee’s calendar wasn’t created yet. Pulling data from Outlook just gives you a 404 in those cases. No fallback.
We added a pre-check: if the calendar isn’t found, delay onboarding by 4 hours and retry. But we had to cap retries—one guy never got onboarded because he didn’t set up his calendar until day three. After attempt number five, checklist creation was just silently skipped.
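The fixed version of that retry loop, sketched with stand-in callables (the real steps live in Make and Notion); the point is the cap plus the visible failure record described below:

```python
import time

MAX_ATTEMPTS = 5        # the cap from above
RETRY_DELAY = 4 * 3600  # 4 hours between attempts

def onboard_with_calendar_check(employee_id, fetch_calendar, create_checklist, log_blocked):
    """Capped-retry pre-check. fetch_calendar returns None on a 404;
    the final failure is logged instead of silently skipped."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        calendar = fetch_calendar(employee_id)
        if calendar is not None:
            return create_checklist(employee_id, calendar)
        if attempt < MAX_ATTEMPTS:
            time.sleep(RETRY_DELAY)
    # Attempt five failed: record it on the blocked-flows board rather
    # than letting checklist creation vanish without a trace.
    log_blocked(employee_id, cause="calendar")
    return None
```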
Current state: we log all failed onboarding attempts in a Notion Kanban board called “Blocked Onboarding Flows,” with columns based on failure cause—calendar, email bounce, auth issue. Weird trend: multiple employees had their first name spelled slightly differently across systems (“Steven” vs “Steve”), throwing off all identity-based dedupe logic downstream. Had to normalize name mapping in a separate Airtable table now.