Jaypore Labs
Engineering

Prompts are recipes, not spells

Spells belong to the caster. Recipes belong to the team. Here's what changes when your prompts move from chat histories into files.

Yash Shah · April 18, 2026 · 4 min read

Two kinds of people write prompts.

The first writes them like spells: carefully chosen incantations, imitated from a thread online, guarded like a family secret. The spell worked once, so you don't change it.

The second writes them like recipes: ingredient lists, measured steps, variants tried and noted, passed from cook to cook. The recipe doesn't always work, but when it fails, you can tell what went wrong.

Every team that takes AI in engineering seriously eventually lands on the second pattern. Here's why — and what changes when you get there.

Why spells don't scale

The spell metaphor is how the industry started. Reddit threads, Twitter screenshots, "what I ask GPT to write my essays." The mysticism was part of the charm. Carefully worded incantations. A precise order of instructions. "Act as a senior staff engineer and…" invocations that someone's aunt saw work once.

It also felt good. If prompts were spells, then your ability to write them was a magic skill. You could say you were "good at prompting" the way someone claims they're good at Tarot. This made for a great few months of Twitter content and precisely zero production systems.

Because spells don't scale. They don't compose. They have no version control, no tests, no variants. When the spell stops working — because the model updated, or the context grew, or the domain shifted — you have nothing to debug. Just a disappointed caster.

What recipes and prompts share

Four properties that recipes have and spells don't:

Ingredient lists. Recipes tell you what goes in, in what amounts, in what order. Good prompts do the same: system message, user intent, reference material, constraints, output format. Each section declared, named, measurable. A reader (or the next prompt engineer at your company) can see what's present and what's missing.
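As a minimal sketch of that idea: declare the sections by name and refuse to assemble a prompt with a missing ingredient. The section names and heading style here are illustrative, not a standard.

```python
# Named prompt sections, declared up front like a recipe's ingredient list.
# These names are illustrative; use whatever your team agrees on.
SECTIONS = ("system", "intent", "reference", "constraints", "output_format")

def build_prompt(parts: dict) -> str:
    """Assemble a prompt from named sections, failing loudly on gaps."""
    missing = [name for name in SECTIONS if not parts.get(name)]
    if missing:
        raise ValueError(f"missing prompt sections: {missing}")
    # Emit each section under a labelled heading so a reader can audit it.
    return "\n\n".join(f"## {name}\n{parts[name]}" for name in SECTIONS)
```

The point of the loud failure is the same as the recipe's: a reader can see at a glance what's present and what's missing.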

Reproducibility. A recipe gives consistent results because it specifies the measurable variables. A prompt that produces consistent outputs does the same: temperature pinned, model named, seed set, system prompt versioned. Spells tell you to "trust the process." Recipes tell you what the process is.
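One way to make "what the process is" concrete: pin every variable that affects output in a single frozen config. The field values below (model name, seed) are examples, not recommendations.

```python
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RunConfig:
    """Every variable that affects output, pinned and versioned together."""
    model: str           # an exact dated model name, never "latest"
    temperature: float   # pinned explicitly, not left to a default
    seed: int            # fixed seed, where the provider supports one
    prompt_version: str  # tag or commit hash of the versioned prompt file

# Example values only; `frozen=True` means a run can't mutate its config.
config = RunConfig(
    model="gpt-4o-2024-08-06",
    temperature=0.0,
    seed=42,
    prompt_version="v3",
)
```

Logging `asdict(config)` alongside each run means that when an output changes, you can say which variable moved.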

Variants and notes. Every real recipe has margin notes: "used 2 tbsp, not 1." "Swapped brown sugar for maple." "Forgot the salt, no one noticed." Good prompt engineering keeps the same notes: this variant for summarisation, this one for extraction, this one when the input is noisy. You write them in a file your team can read.

Cultural transfer. Recipes cross cooks. They leave the original kitchen. Spells belong to the caster. If your prompts are spells, your AI features depend on one person being around. If they're recipes, they belong to the team.

Four shifts when you make the move

Prompts live in files, not chat histories. Not a pasted snippet in someone's notes. A versioned file, in the repo, with an owner. When it changes, the commit says why.
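In practice this can be as small as a loader that reads prompts from a directory in the repo. The `prompts/` path is a hypothetical layout, not a convention the post prescribes.

```python
from pathlib import Path

# Hypothetical layout: one versioned prompt file per task, in the repo.
PROMPT_DIR = Path("prompts")

def load_prompt(name: str) -> str:
    """Read a prompt from its file; git history records who changed it and why."""
    return (PROMPT_DIR / f"{name}.md").read_text(encoding="utf-8")
```

Because the file lives in the repo, the commit message carries the "why" that a pasted chat snippet never does.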

Evals are how you taste the dish. You wouldn't serve a recipe without tasting it. You shouldn't ship a prompt without an eval. A small set of inputs plus expected outputs, run on every change. If the eval passes but the reviewer's palate disagrees, update the eval — then change the prompt.
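A tasting pass can be this small: fixed inputs, expected outputs, run on every change. `call_model` is a stand-in for your real client, and the cases are invented examples.

```python
# A minimal prompt eval: each case pairs an input with a substring
# the output must contain. Cases here are illustrative.
EVAL_CASES = [
    {"input": "Refund issued for order #812.", "expect": "refund"},
    {"input": "Package lost in transit.", "expect": "shipping"},
]

def run_evals(call_model, cases=EVAL_CASES) -> list:
    """Return the inputs whose model output misses the expected substring."""
    failures = []
    for case in cases:
        output = call_model(case["input"])
        if case["expect"] not in output.lower():
            failures.append(case["input"])
    return failures
```

A substring check is the crudest possible palate; the point is that it runs on every change, so regressions surface before a reviewer has to taste anything.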

The variant list is the point. "Our summarisation prompt" is almost never one prompt. It's three or five: short input, long input, noisy input, technical input. Name them. Pick between them in code. Don't build a mega-prompt that tries to handle all cases; that's the software equivalent of one recipe for every occasion.
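Picking between named variants in code might look like this. The variant names, templates, and the length threshold are all illustrative assumptions.

```python
# Named variants instead of one mega-prompt. Templates are illustrative.
VARIANTS = {
    "short": "Summarise in one sentence:\n{text}",
    "long": "Summarise section by section:\n{text}",
    "noisy": "Ignore boilerplate and transcription errors, then summarise:\n{text}",
}

def pick_variant(text: str, noisy: bool = False) -> str:
    """Choose a named variant based on the input, then fill it in."""
    if noisy:
        return VARIANTS["noisy"].format(text=text)
    key = "long" if len(text) > 2000 else "short"  # threshold is a guess; tune it
    return VARIANTS[key].format(text=text)
```

Because the variants are named, an eval failure tells you which recipe broke, not just that "the summariser" did.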

Write it down so a new cook could follow it. When someone new joins your team, they should be able to read your prompts and understand what each one is for, without asking. That's the final test of "is it a recipe?"

Open recipes feed more people

Every cuisine has its closely-guarded recipes and its open ones. The open ones feed more people.

If your team's AI features are still locked in someone's chat history, it's time to write them down. Put them in a file. Let them be edited. Let them be improved. That's how you get from AI demos to AI products.

We build AI-enabled software and help teams put AI to work. If you're systematising prompts in your codebase, we'd love to hear about it. Get in touch.

Tagged
Prompt Engineering · LLM · AI Development · Team Practices