Prompt engineering glossary
Short definitions + a copy-paste example per term. Pick one and open it.
Start here
This glossary covers concepts that commonly break prompts: ambiguity, missing delimiters, inconsistent output, or hallucinations. Each term includes a short definition and a copy-ready example.
If you need strict outputs, start with strict JSON. If you want to tailor prompts to each AI, check prompts by model.
System prompt
The highest-priority instruction layer for a model.
View term →
Few-shot prompting
Giving 1–5 examples to lock format and behavior.
View term →
Delimiters
Markers that separate instructions from context.
View term →
Constraints
Limits that steer the model: length, tone, forbidden claims, stack.
View term →
Temperature
A setting that affects randomness and creativity.
View term →
Hallucinations
When a model outputs plausible but incorrect information.
View term →
Prompt injection
Malicious instructions hidden inside user-provided content.
View term →
JSON schema
A contract for structured output: keys, types, constraints.
View term →
Role prompting
Assigning a role to set expectations (editor, tutor, auditor).
View term →
Evaluation
Testing prompts with fixed inputs and acceptance criteria.
View term →