How I’m Using AI Generated Prompts for Daily Goal Tracking

1. Setting up base goals inside Notion for AI to reference

I keep a table in Notion called daily wins. It’s dumb simple: date, goal, result, notes. I made it mostly to try to guilt myself into being productive via checkbox shame. Originally I tracked manually, then added a Quick Capture form in a separate area, but I’d forget to check it and ended up with weeks of blank logs. So I started feeding the table into an AI prompt generator that nudges me with customized entries based on what I said I wanted to achieve this week.

What’s odd is that AI prompt generation is weirdly good when you give it structure (like a database column), but flaky when it pulls content from varying formats. I tried pulling the data from Notion using the official Zapier integration—worked fine, until I filtered past-week entries using a “date is within the past 7 days” filter. For whatever reason, when the Zap ran in the early morning (before 7am), the date filter would start excluding the current date. It’s like the internal timezone shifted behind my back. I fixed it with a rollup property and a manual ISO date comparison in JavaScript via Code by Zapier, which is two steps more than it should be.

Here’s the short snippet that worked better than Notion’s own filters:

// ISO date strings (YYYY-MM-DD) compare correctly as plain strings
const today = new Date().toISOString().split('T')[0];
const todayMinus7 = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000).toISOString().split('T')[0];
const recentItems = items.filter(item => item.date >= todayMinus7 && item.date <= today);

The custom prompt now starts with “This week, your goals included…” and dynamically lists items based on the table. Works surprisingly well—except when I leave a goal blank. That causes the AI to invent one. One time it said, “You committed to eating no sugar after 3pm,” which, no I did not, but maybe I should?
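The assembly step is simple enough to sketch. The function name and row shape below are illustrative, not the actual Zap code, and skipping blank goals is the obvious guard against the invented ones:

```javascript
// Illustrative sketch: build the weekly intro from the goal-table rows.
// Row shape ({ goal: string }) is assumed, not the real Notion payload.
function buildWeeklyIntro(rows) {
  const goals = rows
    .filter(r => r.goal && r.goal.trim()) // drop blank goals so GPT can't invent one
    .map(r => `- ${r.goal}`);
  return `This week, your goals included:\n${goals.join('\n')}`;
}
```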

2. Matching AI enthusiasm to actual daily capacity with variable system prompts

If you ever plug AI-generated prompts directly into a morning reminder, you already know what happens: it becomes either a coddling robot or a motivational drill sergeant. I burned about three weeks trying to get GPT-4 to sound like an overcaffeinated productivity coach without turning into a Hallmark card.

What actually worked was storing three alternative system prompts in a database field, keyed by energy level: low, normal, and high. I toggle between them via a front-end widget (built in Softr).

There’s this undocumented edge case in Make.com: when you dynamically inject a variable into the OpenAI system field, it sometimes cuts off the prompt string at around 950 characters if the variable contains newlines or soft returns. No error, just chopped message, and GPT then generates whatever random message it wants using an empty system role. Filed a bug. No reply.
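My workaround, pending a fix, is to flatten the variable before it hits the system field. A minimal sketch, where the function name and the 900-character safety margin are my own choices and the ~950 cutoff is just what I observed:

```javascript
// Sketch: collapse newlines/soft returns and cap length before the
// variable is injected into the OpenAI system field in Make.com.
function sanitizeSystemPrompt(prompt, maxLen = 900) {
  const flattened = prompt
    .replace(/\r\n|\r|\n/g, ' ') // collapse hard and soft line breaks
    .replace(/\s{2,}/g, ' ')     // squeeze repeated whitespace
    .trim();
  // Stay under the observed ~950-character cutoff with some margin
  return flattened.length > maxLen ? flattened.slice(0, maxLen) : flattened;
}
```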

An actual aha moment

Instead of generating the whole message in the same OpenAI module, I now split it up:

  1. One GPT call to generate a single affirmative statement per goal
  2. One GPT call to wrap those into a tone-matched prompt

This lets me reuse the tone wrappers (“In a calm tone, remind them…” vs “With mild urgency…”) without regenerating goal statements. Faster, and less chaos.
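In pseudo-structure, the split looks like this. `callGPT` is a stand-in for whichever OpenAI module or client your scenario uses (prompt in, text out), not a real API name:

```javascript
// Sketch of the two-call split. `callGPT` is a placeholder for the
// actual OpenAI call (prompt string in, generated text out).
async function buildMorningMessage(goals, toneWrapper, callGPT) {
  // Call 1: one affirmative statement per goal (tone-agnostic, reusable)
  const statements = await Promise.all(
    goals.map(g => callGPT(`Write one short affirmative statement for: ${g}`))
  );
  // Call 2: wrap the statements in the selected tone
  return callGPT(`${toneWrapper}\n${statements.join('\n')}`);
}
```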

3. Avoiding prompt repetition by de-duping identical phrasing in short-term goals

By week three I noticed GPT kept phrasing the same goals the same way. “You committed to improving your focus by reducing distractions.” Over and over. It turns out that if your daily goal titles don’t change much, the paraphrasing starts to collapse, especially if you give GPT too many examples or show it prior JSON-call syntax.

To break the loop, I started injecting a random element drawn from a Notion-linked table of verbs. Not traditional goal verbs, just weirdly specific actions—like “tame notifications” or “shovel through backlog.” They don’t mean much but they shake up the phrasing engine enough.
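The injection itself is one line of randomness. The `{verb}` placeholder is my own template convention, not anything the tools require:

```javascript
// Sketch: splice a random verb phrase from the Notion-linked table
// into the prompt template. `{verb}` is just a placeholder convention.
function injectRandomVerb(template, verbs) {
  const verb = verbs[Math.floor(Math.random() * verbs.length)];
  return template.replace('{verb}', verb);
}

const verbs = ['tame notifications', 'shovel through backlog'];
injectRandomVerb('Today, {verb} before lunch.', verbs);
```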

Also, GPT gets weirdly stuck on certain verbs. For me it kept choosing “optimize” and “declutter” no matter how many synonyms I gave it. Changing the word format—putting verbs inside parentheses or using underscores—seemed to break the pattern. This feels like an encoding thing inside GPT’s token weighting, but no one talks about this that directly.

How I solved it without overengineering

I inserted a simple cache check mid-Zap: compare yesterday’s AI output to today’s. If overlap > 65% (using a similarity metric in a Code block), rewrite with a new verb injection. Otherwise pass-through. Doesn’t always work—GPT still loves “focus on priorities”—but at least it keeps the text fresh without sounding like a copy-paste.
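The similarity check doesn’t need to be fancy; a simple word-overlap ratio catches near-identical phrasing. A sketch of the 65% gate (the exact metric in my Code block may differ slightly):

```javascript
// Sketch: word-overlap ratio between yesterday's and today's AI output.
// Above 0.65, trigger a rewrite with a fresh verb injection.
function overlapRatio(a, b) {
  const wordsA = new Set(a.toLowerCase().split(/\s+/));
  const wordsB = new Set(b.toLowerCase().split(/\s+/));
  const shared = [...wordsA].filter(w => wordsB.has(w)).length;
  return shared / Math.max(wordsA.size, wordsB.size);
}

const needsRewrite = overlapRatio(
  'Focus on priorities and declutter your inbox',
  'Focus on priorities and declutter your desk'
) > 0.65;
```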

4. Connecting Airtable to Zapier failed silently on multi-select goal tags

One workflow involved using Airtable as the input for goal generation instead of Notion—partially because of Airtable’s better support for multi-relational fields. But Airtable’s Zapier trigger fails in a super specific way. If you’re using a multi-select field and one of the choices includes an ampersand (like “Health & Wellness”), the output object sometimes just omits the key entirely when mapped into a webhook.

No error, no warning, just turns into a blank object. Took me two days and an unnecessary retry loop to even notice—because it only failed when that specific tag showed up. Airtable wouldn’t render it malformed in their UI, but Zapier’s internal webhook schema skipped the whole segment, which meant GPT got fewer context tags and shifted tone mid-message, occasionally switching from focused productivity to health advice out of nowhere.

“To support your healthy lifestyle, let’s revisit your nutrition goals today.”

Completely unrelated to the actual tasks for the day. I now strip all special characters from goal tags using a RegEx parse step early in the Zap:

const cleanTag = goal_tag.replace(/[^a-zA-Z0-9 -]/g, '');

Much more stable now. But yeah, I wasted time building a fallback GPT summarizer for when that failure happened—turns out the failure was in the integration layer. Not AI at all.

5. Setting scheduled AI reminders with offset based on previous execution

The goal was simple: send AI-generated nudges every morning around when I start work. My schedule shifts depending on whether or not I remember breakfast. Zapier’s built-in Scheduler can only do fixed triggers. Make.com lets you schedule relative triggers, but you still have to know what they’re relative to.

I set it up using a hybrid approach:

  • Log timestamp of first Slack message of the day (via Zap triggered by Slack message post)
  • Store that in an Airtable log with a date stamp
  • Use Webhook trigger from Make.com that generates AI reminder ~75 mins later

Works decently unless I send two early messages. In that case, I get two remixed versions of AI messages stacked an hour later. Zapier doesn’t debounce Slack triggers natively unless you build a delay-and-filter combo, which gets messy if your delay hits after a second trigger already queued.

So now I use Key-based Storage in Make.com to memoize the last trigger timestamp. If a new trigger comes in within 30 mins of an existing stored key, it aborts. There’s no built-in TTL, so I cron a midnight cleanout manually using another scenario. Extremely clunky, but it works.
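The gate logic itself is trivial. In the sketch below, a Map stands in for Make.com’s key-based storage (whose real API is different); it just shows the 30-minute window check:

```javascript
// Debounce sketch: the Map stands in for Make.com key-based storage.
// A trigger within 30 minutes of the stored timestamp gets aborted.
const store = new Map();

function shouldProcess(key, nowMs, windowMs = 30 * 60 * 1000) {
  const last = store.get(key);
  if (last !== undefined && nowMs - last < windowMs) {
    return false; // duplicate within the window: abort this run
  }
  store.set(key, nowMs); // memoize the new trigger timestamp
  return true;
}
```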

I’m guessing this entire patchwork becomes irrelevant once Slack adds trigger throttling natively, but until then I live in this weird multi-platform debounce limbo.

6. Using OpenAI temperature and presence penalty to reduce morning gibberish

I had a few days where the morning prompt said something like “Conquer your inbox like a samurai slicing through kelp.” Which, I guess, A for effort? But not helpful. The cause was temperature set too high and no penalty variables set—just the default { temperature: 0.8 } and max_tokens: 80. Turns out GPT will get weird fast under those conditions, especially with short prompts.

By dropping temp down to 0.3 and adding a presence_penalty of 0.6, the outputs became tighter and less metaphor-heavy. I couldn’t find this pattern in any of the docs—usually people tweak top_p, but that didn’t help in this context. It’s repeat phrasing that needed suppression, not word randomness.

Example of temp vs. penalty tuning

// Before
"Today, stare into the savoring winds of possibility."

// After
"Today, follow through on your top two priorities."

// Settings
{ temperature: 0.3, presence_penalty: 0.6, frequency_penalty: 0 }

If you’re using Make.com or Zapier’s OpenAI modules, you have to manually unlock advanced params to input penalties. It’s not surfaced by default. Took me 20 minutes of clicking around a collapsed panel to even see where to do it.

Still debating if I’ll ever let temp back above 0.5.