Prompting GPT for Website UX Copy That Users Actually Click

When I first tried using GPT-4 to write website UX copy for a new landing page, I thought, “Okay, clear prompts, define tone, specify length—easy.” But then it spat out button text like “Engage Feature Now” and breadcrumb links that just said “Parent Category.” All caps, too. And that’s when I realized this isn’t just about setting the right temperature or giving a structured prompt. You have to think like a user *and* a writer *and* someone debugging a chatbot that just doesn’t understand sarcasm.

So here’s what I’ve learned the long, annoying way.

1. Stop expecting GPT to guess the emotional stakes

You can’t say “Make this sound friendly and helpful” and expect the output to compete with something like: “New? Start here.” GPT needs you to specify why someone is emotionally invested. For example:

Prompt: “Write microcopy for a button that completes sign-up. The user is nervous about giving their email. Reassure them.”

Response (bad):
“Submit Information”

Response (decent):
“Create Account – No Spam, Ever”

GPT doesn’t know that users are worried unless you *state their hesitation*.

One time I asked it to write a tooltip for a disabled button in a checkout flow. It gave me, “This feature is currently unavailable.” That’s copy you’d find in an error log, not a shopping cart. When I rewrote the prompt to say: “The user is confused about why they can’t click ‘Pay now.’ Explain that they need to select a shipping method,” the output made way more sense:

“Almost there – choose a shipping option to continue.”

Real people don’t freeze because of broken logic—they freeze because they don’t know what counts as “done.” GPT won’t fix that unless you make it feel like a person with anxiety is going to read the sentence.
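If you’re hitting the API directly instead of the chat window, the same rule applies: the hesitation goes in the prompt itself. Here’s a minimal sketch, assuming the current openai Python SDK; the model name, temperature, and exact wording are just placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The key move: say who is reading and what they're worried about,
# not just "make it friendly and helpful."
prompt = (
    "Write microcopy for a button that completes sign-up. "
    "The user is nervous about giving their email. "
    "Reassure them in six words or fewer."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; any GPT-4-class model
    temperature=0.7,
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```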

2. Use fake screens to inject real-world friction into prompts

Sometimes you have to basically LARP your own product in the prompt. I literally prompt like this now:

“Imagine a modal appears after the user clicks ‘Upgrade.’ The modal title is ‘Choose your plan.’ There are two buttons: ‘Monthly’ and ‘Yearly.’ Above this is a one-line explanation that helps the user decide. Write copy for that line.”

Without the screen structure, GPT gives me slogans. With it, it gives me:

“Pay monthly for flexibility or yearly to save big.”

It started improving right away when I typed out imagined UI states, like:

– “A disabled checkbox appears above the Submit button if the user checks ‘I’m a developer’ on the previous step.”
– “Write helper text under a dropdown where the only current value is ‘No team assigned.’”

By describing what the user actually clicks or hovers, you give GPT the context it *doesn’t* see during training. GPT models don’t have windowed UIs—they can’t see disabled inputs, small grey text, or what’s two levels above the current widget.
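Because I kept retyping these imagined screens, I eventually templated them. This is just a sketch of a hypothetical helper; the field names are mine, but the point is that every prompt carries the on-screen structure:

```python
def screen_prompt(component: str, title: str, elements: list[str], ask: str) -> str:
    """Build a UX-copy prompt that spells out the imagined UI state."""
    return " ".join([
        f"Imagine a {component}.",
        f"Its title is '{title}'.",
        "It contains: " + ", ".join(f"'{e}'" for e in elements) + ".",
        ask,
    ])

print(screen_prompt(
    component="modal that appears after the user clicks 'Upgrade'",
    title="Choose your plan",
    elements=["a 'Monthly' button", "a 'Yearly' button"],
    ask="Write the one-line explanation above the buttons that helps the user decide.",
))
```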

3. Treat content variants as actual A and B flows

This is where I went wrong for months. I would write prompts like, “Give me 4 versions of signup page copy,” and all the options would be samey. Like:

– “Get started fast”
– “Start your journey now”
– “Sign up quick”
– “Join us in seconds”

That’s word salad, not UX copy testing.

Now I force it to run through different mental frames:

Prompt: “Give me 2 versions of headline copy. Version A is for people who work at agencies. Version B is for people automating solo projects. They’re both seeing the same page.”

That got me:

Version A: “Client work without the chaos? Automate your intake today.”

Version B: “Wasting evenings on manual follow-ups? Automate emails while you sleep.”

One time I even structured a prompt like an editor’s memo:

“Imagine you’re the creative lead. You’re testing two homepage concepts. A is minimalist and trust-focused. B is playful and copy-heavy. Write subheaders that fit both themes. A should sound like Stripe; B should sound like Mailchimp.”

It nailed the difference in tone—way better than trying to prompt adjectives.
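Scripted, the same trick looks like giving each variant its own audience frame instead of asking for “4 versions.” The frames below are just examples of how I phrase mine:

```python
frames = {
    "A": "people who work at agencies and juggle many clients",
    "B": "people automating solo side projects in their spare time",
}

base = (
    "Write one headline for a landing page about automating client intake. "
    "The reader is {audience}. One sentence, no exclamation points."
)

# One prompt per frame, so each version starts from a different mental model.
for version, audience in frames.items():
    print(f"Version {version}: {base.format(audience=audience)}")
```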

4. Never say something like ‘make it engaging’ or ‘compelling’

Every time I try a vague adjective in a GPT-4 UX copy prompt, I get stock phrases in return. “Make this CTA more compelling” becomes:

“Get started now and unlock possibilities”

GPT has no built-in throttle for semantic cheese.

Instead, try referencing known design patterns or reading levels:

– “Write this error message in the tone of Slack help text.”
– “Assume a 10th grade reading level. Avoid any four-syllable words.”
– “Tone should match casual SaaS onboarding flows, like Notion or Loom.”

Or tell the model what’s *bad* rather than what’s good:

Prompt: “Fix this tooltip. Right now it sounds robotic. Also avoid wording that reminds users of failure.”

That nudges GPT into a diagnostic mode. It’ll critique the original first (“Could be more human”) and usually rewrite it in a more grounded tone.
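If you’re working through the API, the reference tones and the “what’s bad” constraints sit naturally in a system message. A rough sketch; the wording is entirely up for grabs:

```python
messages = [
    {
        "role": "system",
        "content": (
            "You write UX microcopy. Match the tone of casual SaaS onboarding "
            "flows, like Notion or Loom. Assume a 10th grade reading level. "
            "Avoid four-syllable words and wording that reminds users of failure."
        ),
    },
    {
        "role": "user",
        "content": (
            "Fix this tooltip. Right now it sounds robotic: "
            "'This feature is currently unavailable.'"
        ),
    },
]
# Pass `messages` to a chat completion call as in the earlier sketch.
```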

5. Mention platform conventions to trigger copy awareness

Models trained on web text patterns do actually know brand conventions—but only when you call them out.

Prompt: “Write copy for a 404 page in the style of GitHub.”

Prompt: “Design a mobile confirmation toast like Duolingo might use.”

They pull up latent tone knowledge when you name real platforms. GPT seems to know that Stripe doesn’t use exclamation points and that Trello uses emoji sparingly; these models aren’t just trained on generic data. If nothing else, it’s a lot more useful than saying “Make this feel clean and modern.”

Also—and this is a big one—don’t test mobile prompt results just on your laptop. I did that for a week straight and wondered why everything kept sounding longer and weirder. Turns out the toast notifications GPT suggested looked fine on desktop mockups, but fully covered the “Continue” button on mobile. 🙁
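The cheap guard I added after that week: give the prompt a hard character budget and then check the output length in code, because the model will happily blow past whatever limit you state. The 40-character budget below is just a number I picked, not a real guideline:

```python
MAX_TOAST_CHARS = 40  # rough budget so the toast doesn't cover the Continue button

prompt = (
    "Design a mobile confirmation toast like Duolingo might use, "
    f"under {MAX_TOAST_CHARS} characters."
)

def fits_mobile(copy: str, limit: int = MAX_TOAST_CHARS) -> bool:
    """Verify the length yourself; the model often ignores stated limits."""
    return len(copy.strip()) <= limit

print(prompt)
print(fits_mobile("Nice! Lesson saved."))  # True: 19 characters
```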

6. Work around the worst bug in GPT UX copy prompts

Let me spare you the slow discovery loop: whenever GPT generates a call-to-action *after* showing details about a product, it forgets the action part. I mean that literally. If your prompt includes:

“You’ve just explained that this database lets users connect Notion, Slack, and Outlook faster than other tools” and then you say, “What CTA follows this?” it will often write:

“Experience faster connectivity with confidence”

It sounds like a tagline. It’s not clickable copy.

You have to add a line like: “Now write something that would physically go on a button.” That tiny scrap of reality—what’s *on-screen*—returns better stuff:

“Connect my tools” or even “Link Notion and Slack now.”

GPT has prompt amnesia specifically with buttons and form labels. It will derail into slogans unless dragged by the nose.
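The workaround is mechanical enough that I just append the grounding line to whatever product context comes first. A sketch:

```python
product_context = (
    "You've just explained that this database lets users connect "
    "Notion, Slack, and Outlook faster than other tools."
)

cta_prompt = (
    f"{product_context}\n\n"
    "Now write something that would physically go on a button. "
    "Two to four words, starts with a verb, no taglines."
)

print(cta_prompt)
```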

7. Interrogate failure messages like you’re testing airline alerts

I once asked GPT to write error text for a failed password reset. It wrote:

“Something went wrong. Try again later.”

Zero helpful signals. I had to learn to prompt like:

“Write an alert message after the user clicks reset password and the backend 500s. They’re on mobile, it was their 2nd attempt, and they’re not sure whether the email actually sent.”

GPT spat out:

“Oops! Something broke on our end. You can safely try again in a few minutes – if your email hasn’t arrived yet, check spam or contact support.”

So much better. Not just because of the words, but because of the internal logic. GPT needed:

– Action taken (clicked password reset)
– Observed outcome (no email received)
– User mindset (confused or uncertain vs angry)
– Tech context (mobile, so loading feedback may be slower)

After that, I never again wrote prompts without specifying context and likely user mood.
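Those four ingredients turn into a reusable scaffold pretty naturally. The field names below are my own shorthand, nothing official:

```python
from dataclasses import dataclass

@dataclass
class ErrorContext:
    action: str   # what the user just did
    outcome: str  # what they observed
    mindset: str  # how they probably feel
    tech: str     # device, attempt count, anything environmental

    def to_prompt(self) -> str:
        return (
            f"Write an alert message. The user {self.action}. "
            f"They observed: {self.outcome}. They are probably {self.mindset}. "
            f"Context: {self.tech}. Tell them what happened and what to do next."
        )

print(ErrorContext(
    action="clicked 'Reset password' and the backend returned a 500",
    outcome="no reset email has arrived",
    mindset="confused and a little impatient (second attempt)",
    tech="on mobile, possibly a slow connection",
).to_prompt())
```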

Weirdest part: sometimes just saying “the user is annoyed” causes GPT to write clearer, shorter copy. Apparently, fear of bad UX draws out better language. 😛