Prompts That Actually Work for Client Onboarding Automation
1. Using ChatGPT to Draft Client Welcome Emails on the Fly
There’s a moment, usually five minutes before a Zoom kickoff, when I need a halfway-decent welcome email that sounds like we’re organized adults. I started using ChatGPT for this by keeping a base prompt in a Notion page. Something like:
Generate a friendly onboarding email for a web design client. Use a warm but professional tone. Mention that someone from our team will reach out within 2 business days to schedule kickoff. Include our Calendly link and a short FAQ.
Initially, it worked about 60% of the time. The other 40%? I’d get weird robotic phrasings like “We are most delighted to onboard you.” It took adding very specific style instructions to the prompt (“write in a US-business-casual tone, avoid passive voice, use contractions”) to get something usable consistently.
One glitch: sometimes ChatGPT would pull in “sample data” as if we were a law firm. Entire paragraphs about contract negotiations and clauses — no idea where that came from. Deleted the prompt, rewrote from scratch, kept happening until I switched to GPT-4.
If you want quick auto-generated onboarding messages with a human feel, include the following in your base prompt:
- Explicit instruction on writing style (e.g. tone, sentence length, level of formality)
- Key phrases you do want included (team member name, scheduling link, next-step summary)
- A short example email at the bottom so GPT learns the pattern
Bonus: if you paste the prompt into Zapier’s OpenAI action and feed in client name, project type, and kickoff date variables, you can auto-generate intro emails every time a new Airtable record is created.
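For what it’s worth, here’s a rough Python sketch of the same thing outside Zapier, just to show where the variables land in the prompt. The client name, project type, and kickoff date are made up, and it assumes the openai package’s v1-style client:

```python
from openai import OpenAI  # assumes openai>=1.0; reads OPENAI_API_KEY from the environment

client = OpenAI()

BASE_PROMPT = """Generate a friendly onboarding email for a web design client.
Write in a US-business-casual tone, avoid passive voice, use contractions.
Mention that someone from our team will reach out within 2 business days to schedule kickoff.
Include our Calendly link and a short FAQ.

Client name: {client_name}
Project type: {project_type}
Kickoff date: {kickoff_date}
"""

def draft_welcome_email(client_name: str, project_type: str, kickoff_date: str) -> str:
    # Fill the same variables Zapier would map in from the new Airtable record
    prompt = BASE_PROMPT.format(
        client_name=client_name,
        project_type=project_type,
        kickoff_date=kickoff_date,
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
    )
    return response.choices[0].message.content

# Example values only; in the real flow these come from the Airtable record
print(draft_welcome_email("Dana Reyes", "Shopify redesign", "June 12"))
```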
2. Drafting Prompt Templates for Generic Versus High Touch Clients
High-touch clients will reply to a bot-crafted email with “can we hop on a call instead?” But generic ones (like Shopify store owners or quick-turn SaaS contracts) barely read the onboarding message. I ended up with two separate prompt templates:
- Generic version: tighter, bullet-pointed, link-heavy
- High-touch version: reads more like a narrative, with personality cues that imply we “customize everything” even when we don’t
The trick is generating tone variation from the same data. I originally tried using one giant prompt with conditional logic (“If project_tier is enterprise…”), but GPT would ignore half the conditions randomly. Best fix I found was splitting into two separate Zapier OpenAI actions and routing each with Paths.
Aha moment: adding a one-line variable like `sentiment = luxury` or `sentiment = efficient` gave GPT enough context to change phrasing without repeating the whole request prompt. That’s what got it from “your onboarding packet is ready” to “we’ve hand-prepared your personalized onboarding journey.”
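If you want to see the shape of it, here’s a minimal Python sketch of that setup. The tier names and dictionary are just how I’d label things for illustration; the only part that matters is prepending the one-line sentiment cue to a shared base prompt:

```python
# Minimal sketch: one shared base prompt, tone switched by a single variable.
# Tier names and sentiment values are illustrative, not anything GPT requires.

BASE_REQUEST = (
    "Write an onboarding email for {client_name}. "
    "Include the scheduling link and a short next-steps summary."
)

SENTIMENT_BY_TIER = {
    "generic": "sentiment = efficient",   # tighter, bullet-pointed, link-heavy
    "high_touch": "sentiment = luxury",   # narrative, personalized phrasing
}

def build_prompt(client_name: str, tier: str) -> str:
    # Prepend the one-line sentiment cue instead of duplicating the whole prompt
    return SENTIMENT_BY_TIER[tier] + "\n\n" + BASE_REQUEST.format(client_name=client_name)

print(build_prompt("Acme Stores", "high_touch"))
```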
It’s also worth noting: when using Make.com, GPT prompts behave more predictably if you use the JSON body option, rather than form fields. The plain-text form would strip quote characters from nested instructions — broke things silently.
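For reference, a JSON body for the chat completions endpoint looks roughly like this. The `{{client_name}}` placeholder stands in for whatever variable Make maps in; the point is that a raw JSON body keeps nested quotes and newlines properly escaped instead of silently stripping them:

```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "system", "content": "You write onboarding emails in a US-business-casual tone." },
    { "role": "user", "content": "sentiment = luxury\n\nWrite an onboarding email for {{client_name}}." }
  ],
  "temperature": 0.7
}
```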
3. Getting Accurate Info Into Prompts Without Breaking Every Time
This is where I burned an hour trying (and failing) to send Notion checkbox values into a GPT prompt through Zapier. Boolean fields came through as “true” even when the box wasn’t checked — turns out Notion’s API doesn’t send absent values at all if fields aren’t enabled on the page. So GPT would hallucinate based on faulty assumptions.
Solved it by preemptively filling every new Notion row with default values via Make, before the form data arrived. That gave me something consistent to pass into the prompt. I was caching things like:
```json
{
  "has_budget": "false",
  "needs_nda": "true",
  "priority": "medium"
}
```
Then in the GPT prompt, I could write:
When has_budget is false, do not mention payment timeline. When needs_nda is true, append an NDA template link at the end.
Feels simple in hindsight, but the false positives I got before that fix were embarrassing. At least two clients got onboarding messages saying “please sign the NDA again” — before we’d sent it once.
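Here’s the default-merge idea as a small Python sketch. The real thing lives in Make, so treat this as an illustration of the logic only: defaults first, then whatever the form or Notion actually sent, so no flag is ever simply missing.

```python
# Defaults cover anything Notion's API leaves out of the payload;
# incoming values override them when present.

DEFAULTS = {"has_budget": "false", "needs_nda": "true", "priority": "medium"}

def build_onboarding_prompt(record: dict) -> str:
    fields = {**DEFAULTS, **record}  # merge order matters: record wins

    rules = [
        "Write an onboarding email for this client.",
        f"Client data: {fields}",
        "When has_budget is false, do not mention payment timeline.",
        "When needs_nda is true, append an NDA template link at the end.",
    ]
    return "\n".join(rules)

# An unchecked Notion box simply doesn't show up; the default fills the gap.
print(build_onboarding_prompt({"priority": "high"}))
```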
4. Auto Summarizing Pre-Sales Notes Into Onboarding Tasks
During sales calls, I drop notes into Apple Notes or Slack voice memos. I finally rigged a system in Notion: a table for every active client with tags like “needs delivery ETA” or “unclear pricing expectations.” I assumed I could just throw these into GPT with “summarize into onboarding checklist” and be done.
It worked… until it didn’t. If the input note ended with something like “also, he might want design help later,” GPT would generate fake subtasks like “Add timeline for design support – priority high.” It invented confidence out of uncertainty.
The fix was to prepend the input note with:
Summarize the following into actionable onboarding tasks. If uncertainty or assumptions are present in the text, flag them as pending or tentative.
Still not perfect, but now at least my Notion board doesn’t include action items with imaginary due dates.
Tips for more accurate results when summarizing (pulled together in a sketch after this list):
- Add date context to the beginning (e.g. “This note was from our discovery call on May 5”)
- Strip filler language before feeding text in (honestly, “he was probably curious about…” just adds confusion)
- Ask GPT to include uncertainty tokens (e.g. “pending clarification” or “uncertain”)
- Use GPT-4, not 3.5 — a hallucinated checklist from 3.5 made it look like I’d ignored three client requests
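Putting those tips together, the assembled prompt ends up looking roughly like this. The note text and call date are invented; the uncertainty wording is the part that stopped the fake subtasks:

```python
SUMMARY_INSTRUCTION = (
    "Summarize the following into actionable onboarding tasks. "
    "If uncertainty or assumptions are present in the text, "
    "flag them as pending or tentative."
)

def build_summary_prompt(note_text: str, call_date: str) -> str:
    # Date context first, then the instruction, then the (pre-cleaned) note
    return (
        f"This note was from our discovery call on {call_date}.\n\n"
        f"{SUMMARY_INSTRUCTION}\n\n"
        f"Note:\n{note_text}"
    )

print(build_summary_prompt(
    "Wants launch before Q3. Also, he might want design help later.",
    "May 5",
))
```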
5. Generating Project Status Briefs From Multiple Sources
I tried stitching together three sources: a Notion timeline view, a Slack thread archive, and client feedback from Typeform. Without prep, GPT-4 just panicked. The merged blob would give a status summary that said things like “No blockers at the moment,” even when the Slack thread clearly said “blocked until we hear back on credentials.”
The cause? Order of inputs. If the last input included a sentence with “things look good,” GPT always favored it. It was essentially weighted toward recency, not accuracy.
Workaround: in Zapier, I used this sequence:
- Combine logs into structured JSON with numbered fields (note1, note2, note3)
- Prefix each entry with date and category
- Feed full JSON string into GPT prompt with instruction: “Prioritize older warnings if not resolved in later notes”
Once I started structuring things, the summaries got solid. No more made-up “green status.” The nice surprise: GPT even highlighted contradictory content. In one case, it added: “Note: Slack thread on March 4 and form entry from March 6 disagree on timeline.”
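Here’s roughly what that structuring step looks like if you sketch it in Python instead of Zapier formatter steps. The two notes are stand-ins, with the dates borrowed from the example above:

```python
import json

# Numbered fields, a date + category prefix on each entry, and an explicit
# instruction about older warnings: that combination is what stopped the
# recency bias in the summaries.

notes = [
    ("2024-03-04", "slack",    "Blocked until we hear back on credentials."),
    ("2024-03-06", "typeform", "Things look good, excited for launch."),
]

structured = {
    f"note{i}": f"[{date}] [{category}] {text}"
    for i, (date, category, text) in enumerate(notes, start=1)
}

prompt = (
    "Write a project status brief from the notes below. "
    "Prioritize older warnings if not resolved in later notes.\n\n"
    + json.dumps(structured, indent=2)
)

print(prompt)
```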
6. Auto Generating Internal Onboarding Checklists With Hidden Variables
I needed a shared, internal checklist that wasn’t visible to the client but still triggered automatically once a deal was closed. The trick was getting GPT to output markdown-formatted tasks based on things we didn’t display publicly — e.g., did we upsell them on analytics, or are we auto-creating an Airtable base for them behind the scenes.
The weird part? GPT flat-out ignored hidden fields unless they were phrased as variables in the instruction prompt. Saying “include these if value is true” wasn’t enough. It only responded reliably when each hidden setting was written like: “If enable_airtable_base = true, then include: ‘Create initial Airtable structure using template X’.” Literal logic statements, even in natural language, got better results than fuzzy cues.
Also — for whatever reason — including the word “internal-only” in the prompt reduced hallucinations by half.
Here’s the exact snippet format I now use for internal use cases:
Generate a markdown checklist for our internal onboarding team. Do not include client information or presentation language. Instructions below:
If upsell_analytics = true, add:
- [ ] Set up GA4 and tag manager events
If enable_airtable_base = true, add:
- [ ] Duplicate Airtable from onboarding-template-v2
Format all tasks as standalone items with no grouping.
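And here’s one way to assemble that snippet from the hidden flags. The flag names and rule text match the snippet above; the original doesn’t show exactly where the actual values get injected, so the assignment lines are my guess at it:

```python
# Turn hidden boolean flags into literal logic statements for the prompt.
# Everything except the flag names and rule text is illustrative.

RULES = {
    "upsell_analytics":     "- [ ] Set up GA4 and tag manager events",
    "enable_airtable_base": "- [ ] Duplicate Airtable from onboarding-template-v2",
}

def build_internal_prompt(flags: dict) -> str:
    lines = [
        "Generate a markdown checklist for our internal onboarding team.",
        "Do not include client information or presentation language.",
        "Instructions below:",
        "",
    ]
    # State the current values as literal assignments...
    lines += [f"{name} = {str(value).lower()}" for name, value in flags.items()]
    lines.append("")
    # ...then the rules, phrased as explicit if/then statements.
    for flag, task in RULES.items():
        lines.append(f"If {flag} = true, add:")
        lines.append(task)
    lines.append("Format all tasks as standalone items with no grouping.")
    return "\n".join(lines)

print(build_internal_prompt({"upsell_analytics": True, "enable_airtable_base": False}))
```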
One day I’ll plug this into Retool for actual task display. But for now, markdown in a Slack code block works fine.
7. Prompt Failures When Using Multiple Automations in Parallel
Here’s where it all broke: I had one Make scenario running onboarding doc prep, another generating Slack messages, and a third hitting the CRM. Didn’t realize all three were firing their OpenAI modules at the same time, and GPT started rate-limiting. That wasn’t the worst part — the bigger issue was prompt bleed.
I was using a shared OpenAI account across scenarios. So OpenAI’s memory (you know, that pseudo-memory that isn’t really memory but sometimes acts like it) must’ve started co-mingling outputs. One client got a checklist for another company. Literal line items from a different thread showed up because Make retried messages after delays — and apparently with partial duplicated input.
The only solution was to stagger the scenarios. I added buffer waits and used scenario-level flags written back to Airtable. Once the Airtable field “onboarding_step_status” updated to “sent,” the next scenario would proceed.
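As a sketch of the gating logic (the real version is a Make filter plus a wait module, and the base, table, and record IDs here are placeholders):

```python
import os
import time
import requests

# Poll the Airtable record's onboarding_step_status and only continue once the
# previous scenario marked it "sent". Placeholder base/table IDs; in practice
# this logic lives inside Make rather than a script.

AIRTABLE_URL = "https://api.airtable.com/v0/appXXXXXXXX/Onboarding"
HEADERS = {"Authorization": f"Bearer {os.environ['AIRTABLE_API_KEY']}"}

def wait_until_sent(record_id: str, poll_seconds: int = 30) -> None:
    while True:
        record = requests.get(f"{AIRTABLE_URL}/{record_id}", headers=HEADERS).json()
        status = record.get("fields", {}).get("onboarding_step_status")
        if status == "sent":
            return  # previous scenario finished; safe to fire the next OpenAI call
        time.sleep(poll_seconds)  # buffer wait instead of firing in parallel
```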
The best hidden fix came from an offhand tip in a Make support thread:
Add a module after OpenAI to base64 encode and then decode the output — this collapses invisible characters that can cause duplicate triggers.
Didn’t believe it at first, but after adding that encode-decode step, the nonsense outputs stopped. I never found that trick documented anywhere else.
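To be fair, a strict base64 round trip is lossless on its own, so I suspect it’s Make’s text handling around the conversion that actually cleans things up. If you wanted the same effect in Python, you’d have to strip the control and zero-width characters explicitly, something like:

```python
import base64
import unicodedata

def scrub_output(text: str) -> str:
    # Round trip through base64 (lossless in Python), then drop control and
    # zero-width/format characters, keeping newlines and tabs.
    round_tripped = base64.b64decode(base64.b64encode(text.encode("utf-8"))).decode("utf-8")
    return "".join(
        ch for ch in round_tripped
        if unicodedata.category(ch) not in ("Cc", "Cf") or ch in "\n\t"
    )

# The invisible characters here are examples of what was causing duplicate triggers
print(scrub_output("Create Airtable base\u200b\nInvite client\u2063"))
```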