Writing API Calls with Prompt Help That Actually Works
1. When it makes sense to let prompts write the first draft
There’s a weird relief in not having to write boilerplate fetch wrappers again. I was building a quick Notion database update API call from within Make, and instead of digging through docs for the sixth time this month, I just tossed a system message at GPT-4: “Write an API call in JavaScript to patch a Notion page.” Output was technically correct, structurally fine… but the headers were weirdly ordered, and the JSON payload had an extra wrapping object for no reason. Still faster than writing from scratch.
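For reference, a minimal sketch of the cleaned-up call, with `properties` at the top level and no extra wrapper object. The `Done` checkbox property is a hypothetical placeholder; swap in whatever your database schema actually uses.

```js
// Sketch of a Notion page update via fetch().
// "Done" is a hypothetical checkbox property -- replace with your own schema.
const NOTION_TOKEN = process.env.NOTION_TOKEN;

async function patchNotionPage(pageId) {
  const res = await fetch(`https://api.notion.com/v1/pages/${pageId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${NOTION_TOKEN}`,
      "Notion-Version": "2022-06-28",
      "Content-Type": "application/json",
    },
    // "properties" sits at the top level -- no extra wrapping object
    body: JSON.stringify({
      properties: { Done: { checkbox: true } },
    }),
  });
  if (!res.ok) throw new Error(`Notion PATCH failed: ${res.status}`);
  return res.json();
}
```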
The win here is using the model like a brutalist starting point. Treat it like those AI-generated meal recipes where it forgets cooking times — you still have to edit. Prompting works best when:
- You clearly know the API structure but forget syntax constantly
- You want to skip copy-pasting bearer token logic for the fiftieth time
- You’re debugging a parameter mismatch and want a fresh pair of eyes (even virtual ones)
That said, I’ve had GPT hallucinate endpoint paths that do not exist in the documented API. For example, with the Webflow API, it returned `/sites/[site_id]/collections/[collection_id]` when the actual structure was completely different. Stick to prompting when you either already know the path or when you’re willing to click through the official docs anyway to double-check.
2. Prompt formats that reliably produce usable curl and fetch code
I’ve gone through maybe a hundred variations of prompt formats over the past few months trying to reduce editing time. Here are the ones that actually produce reliably usable outputs:
```
Write a curl command to make a POST request to [endpoint] with headers:
- Authorization: Bearer [token]
- Content-Type: application/json
Include this JSON body:
{
  "name": "Sample Task",
  "due": "2024-06-10"
}
```
This works almost 100% of the time — especially if you use line breaks and bullet headers. The key is formatting the request like you’re writing documentation. Then, once the curl is solid, I’ll ask:
```
Now rewrite this in JavaScript using fetch()
```
The fetch version usually comes out fine, though sometimes you’ll get `await fetch()` without the `async` wrapper, or it’ll randomly throw in a `data` variable that’s undefined. You’ll fix that in 3 seconds, but it happens randomly and with no discernible trigger other than model latency or token count.
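For comparison, the shape I’m actually after looks something like this: `await` lives inside an `async` function and the parsed response is assigned to a variable that exists.

```js
// The fetch() version of the curl prompt above: async wrapper included,
// response parsed into a variable that is actually defined.
async function createTask(endpoint, token) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ name: "Sample Task", due: "2024-06-10" }),
  });
  const data = await res.json(); // defined here, not conjured mid-snippet
  return data;
}
```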
3. Edge cases where AI-generated endpoints silently fail with 200 OK
Here’s where things start getting weird. I had a GPT-generated API call to the Airtable API that looked totally legit. Request body made sense. Status return: 200 OK. But nothing updated.
I traced it and found the model had included a { fields: { ... } }
wrapper but had capitalized “Fields” — which the Airtable API just silently ignored. Yeah, they gave me a 200, no errors, no warnings, just… nothing happened. I only noticed when another automation that depended on the updated record didn’t trigger.
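For the record, the lowercase key is the only one Airtable reads. A quick sketch with placeholder IDs and a hypothetical `Status` field:

```js
// Airtable only reads the lowercase "fields" key; "Fields" is silently dropped.
// URL segments and the Status field are placeholders.
const AIRTABLE_TOKEN = process.env.AIRTABLE_TOKEN;
const url = "https://api.airtable.com/v0/BASE_ID/TABLE_NAME/RECORD_ID";

async function updateRecord() {
  const res = await fetch(url, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${AIRTABLE_TOKEN}`,
      "Content-Type": "application/json",
    },
    // body: JSON.stringify({ Fields: { Status: "Done" } })  <- 200 OK, nothing updates
    body: JSON.stringify({ fields: { Status: "Done" } }),
  });
  return res.json();
}
```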
This happens more often with permissive APIs — even something like ClickUp will sometimes return success on malformed data as long as required fields aren’t broken. The fix here is to copy the exact successful body from your API console/testing tool (I’ll usually log one manually in a Postman run) and paste it into GPT. Then ask it to reshape that into a code snippet, because the AI seems more accurate when transforming than generating from scratch.
4. Function call wrapping issues inside prompt-engineered outputs
If you’re asking for reusable wrappers — like encapsulating API calls into a helper function — GPT has a nasty habit of overly nesting the logic, or hiding critical info inside arguments you didn’t mean to abstract away.
“Write a reusable JavaScript function to create new tasks in Todoist via API.”
The output was clean… until you realized it auto-inlined the `project_id` and `due_date` into the parameters list, but added no validation inside. Worse, it skipped the `Content-Type` header entirely on the assumption you were using an SDK. False. I wasn’t. The SDK docs weren’t even linked in the response.
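Something along these lines is what I’d want back instead, a sketch assuming the Todoist REST v2 endpoint, with basic validation and the `Content-Type` header spelled out rather than left to an SDK:

```js
// Reusable wrapper with the bits GPT left out: input validation and an
// explicit Content-Type header. Assumes the Todoist REST v2 tasks endpoint.
async function createTodoistTask({ token, content, projectId, dueDate }) {
  if (!token) throw new Error("Missing Todoist API token");
  if (!content) throw new Error("Task content is required");

  const body = { content };
  if (projectId) body.project_id = projectId;
  if (dueDate) body.due_date = dueDate; // YYYY-MM-DD

  const res = await fetch("https://api.todoist.com/rest/v2/tasks", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Todoist returned ${res.status}`);
  return res.json();
}
```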
This becomes more painful when the code is embedded inside larger workflows — like when using Make.com where you can’t always restructure the module flow cleanly. If a function hardcodes in a token or expects a global variable that doesn’t exist, the whole flow silently skips instead of throwing an error.
5. Avoiding inconsistent parameter key naming across autogenerated snippets
One subtle but infuriating problem: inconsistent parameter key casing. I once prompted GPT for two different variations of a Monday.com mutation — one to create tasks and one to update status. The create call had variables like `column_values`; the update call returned `columnValues`. Same API. Same session. No consistency.
This becomes a failure vector when wiring into tools like Zapier or n8n that expect specific key names to map across data layers. I had to audit all parameter fields to check what was camelCase, what used underscores, and what the model decided to change mid-output. The model itself doesn’t remember previous naming decisions unless you’re copying long contextual chains into every prompt, which burns tokens fast.
Now I pre-paste two things into every GPT session before asking for help with API samples:
- Actual docs snippet showing parameter formats
- My own naming preferences (e.g., “Use snake_case style keys unless told otherwise”)
It’s brittle. But better than rebuilding an entire flow when a status field didn’t sync because it was sent as `Status` instead of `status`.
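Another band-aid that helps: normalizing key casing yourself before the payload leaves your code, so `columnValues` and `column_values` can’t coexist. A minimal sketch (top-level keys only):

```js
// Force every top-level key to snake_case before sending an
// autogenerated payload downstream.
function toSnakeCaseKeys(obj) {
  return Object.fromEntries(
    Object.entries(obj).map(([key, value]) => [
      key.replace(/([a-z0-9])([A-Z])/g, "$1_$2").toLowerCase(),
      value,
    ])
  );
}

// { columnValues: {...}, Status: "Done" } -> { column_values: {...}, status: "done"-cased keys }
```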
6. Unexpected bugs when chaining AI-generated API calls in n8n or Make
This one burned me for half a day. I had a Make scenario that chained three HTTP modules: first to fetch a Notion page, second to grab its children blocks, third to update the content. I used GPT to help write all three JSON payloads. Separately, each one worked. Together, the second step randomly failed with a 404 — but only in live runs, not test mode.
Eventually figured it out: the model had inserted a trailing slash in just one of the URLs (`/blocks/[id]/` instead of `/blocks/[id]`), and Notion’s API routed that differently when using GET. Didn’t throw an error, just returned no data. Make test runs cached the successful result from earlier runs, masking the break. Brutal.
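The cheap guard is normalizing the endpoint string before it ever goes into an HTTP module. A tiny sketch:

```js
// Strip trailing slashes so /blocks/[id]/ and /blocks/[id]
// can't quietly behave differently.
function normalizeEndpoint(url) {
  return url.replace(/\/+$/, "");
}

normalizeEndpoint("https://api.notion.com/v1/blocks/[id]/");
// -> "https://api.notion.com/v1/blocks/[id]"
```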
So now I’ve got a pre-run checklist inside each module:
- Manually check every endpoint string for trailing slashes
- Log intermediate responses to Data Stores so I can inspect raw JSON later
- Add retries but only for 404 or 500-series errors, not 200 with no data
Also, GPT-style generated JSON often defaults to quoting every key — which breaks some Make modules that expect unquoted keys. You won’t catch that until the flow either stalls silently or inserts a string version of your JSON into the next module.
7. Quick-win prompt tweaks that reduce hallucination in API responses
Minute-level difference-maker: rewriting prompts to strip ambiguity. Take this example:
“Create Python code to send data to the Slack API.”
Too open-ended. You’ll get weird stuff: it could be using `requests`, `http.client`, or even an unmentioned Slack helper lib. Change it to:
“Use Python requests to POST a message to a Slack channel using a webhook URL. Don’t use external libraries.”
Much cleaner result. Bonus points if you paste in a sample working webhook payload first — then ask GPT to parameterize it step by step. Prompt ordering matters here: give concrete data first, then ask transformation questions. Not the other way around.
Passing actual test output (like a 403 error body) also helps it debug better. One version of a Make webhook call was failing with “Invalid prop: attachments must be an array” — GPT instantly spotted I had used a single object instead of a one-item array, which I had missed because the API accepted both in test mode. No clue why. But the fix only made sense once I phrased the prompt as “Why is this 403 happening with this payload?”
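For clarity, the difference was just this shape change (field contents made up for illustration):

```js
// Same webhook payload, two shapes: the one that threw
// "Invalid prop: attachments must be an array" and the one-item-array fix.
const broken = {
  text: "Build finished",
  attachments: { color: "#36a64f", text: "All checks passed" }, // single object
};

const fixed = {
  text: "Build finished",
  attachments: [{ color: "#36a64f", text: "All checks passed" }], // one-item array
};
```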
8. What actually sticks when validating prompt-generated code with Postman
When validating AI output, trusting your eyes is not enough — especially with JSON formatting. Postman still saves me multiple times a week. Most commonly, I paste sniff-tested GPT results directly into a Postman POST request tab and tweak as needed. The big win is that Postman shows tiny mistakes that would otherwise pass silently:
- Incorrect header nesting, like `headers: { Authorization: { Bearer: token } }`
- Unquoted boolean keys — works once, breaks later
- Using commas instead of semicolons in header values
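That first one is worth spelling out, because both versions look plausible in a snippet and neither throws an error; the nested one just serializes as garbage (token name here is a placeholder):

```js
// Flat string vs. nested object -- the nested version ends up sent as
// "[object Object]" instead of a usable Authorization header.
const token = process.env.API_TOKEN;

const wrong = { headers: { Authorization: { Bearer: token } } };
const right = { headers: { Authorization: `Bearer ${token}` } };
```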
Also, it reveals when you’re accidentally sending an extra data field that the AI inserted mid-output — like I saw with a Zendesk integration where the JSON had a random `version: 1` line that broke it completely.
One odd win — sometimes if you get stuck on escaping issues (like double quotes inside a JSON string), pasting the payload into Postman handles it better than trying to debug it in a stringified fetch wrapper. Save your sanity there.
9. Mapping frequent AI API output failures back to their root cause
I built a reference doc (locally, not public) where I literally just jot down every recurring failure when trying to generate API call helpers with prompts. Here’s a snapshot from last week:
- Status 200 with no changes: Usually malformed fields that were accepted but ignored
- OAuth failures: AI output skipped `grant_type=refresh_token` in the POST body (see the sketch after this list)
- Bad endpoint paths: Hallucinated REST paths for SDK-only APIs
- Invalid header syntax: Bearer tokens placed inside quotes or as objects
- Wrong Content-Type: Defaults to form data when JSON was needed
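The OAuth one comes up often enough that I keep a generic refresh sketch around. The token URL and credential handling here are placeholders, since every provider differs:

```js
// Generic OAuth 2.0 refresh-grant sketch. Token URL is a placeholder, and
// some providers want Basic auth for client credentials instead of body params.
const CLIENT_ID = process.env.CLIENT_ID;
const CLIENT_SECRET = process.env.CLIENT_SECRET;
const REFRESH_TOKEN = process.env.REFRESH_TOKEN;

async function refreshAccessToken() {
  const res = await fetch("https://example.com/oauth/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "refresh_token",
      refresh_token: REFRESH_TOKEN,
      client_id: CLIENT_ID,
      client_secret: CLIENT_SECRET,
    }),
  });
  if (!res.ok) throw new Error(`Token refresh failed: ${res.status}`);
  return res.json();
}
```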
I reference it now before prompting just to know what kind of mistakes to expect. It’s dumb… but weirdly calming to know that the AI isn’t smarter than basic trial-and-error. Just faster and less annoying than reading SDK changelogs.