Prompt Engineering

Prompt engineering is the practice of crafting model inputs to elicit better outputs without changing the model itself. Techniques include role assignment, structured formatting, few-shot examples, chain-of-thought triggering, and output format constraints.

Specific techniques worth knowing: few-shot prompting (include 2-5 example input/output pairs); chain-of-thought ("let's think step by step"); role assignment ("you are a senior security reviewer"); output schema ("respond with valid JSON matching this schema"); decomposition (break a complex task into sequential prompts).
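Several of these techniques compose naturally in a single prompt. The sketch below is illustrative, not tied to any particular API: `build_prompt` is a hypothetical helper that assembles a role line, few-shot examples, a chain-of-thought trigger, and a JSON schema constraint into one prompt string.

```python
def build_prompt(task, examples, schema=None, role=None, cot=False):
    """Assemble a prompt from role, few-shot examples, and format constraints.

    examples: list of (input, output) pairs -- few-shot prompting.
    schema:   JSON schema string -- output format constraint.
    role:     persona description -- role assignment.
    cot:      append a chain-of-thought trigger phrase.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    for inp, out in examples:  # 2-5 pairs is the usual sweet spot
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {task}")
    if cot:
        parts.append("Let's think step by step.")
    if schema:
        parts.append(f"Respond with valid JSON matching this schema: {schema}")
    return "\n\n".join(parts)


prompt = build_prompt(
    task="Classify the sentiment of: 'The update broke everything.'",
    examples=[
        ("'Great release!'", '{"sentiment": "positive"}'),
        ("'Meh, it works.'", '{"sentiment": "neutral"}'),
    ],
    schema='{"sentiment": "positive|neutral|negative"}',
    role="a senior product analyst",
    cot=True,
)
print(prompt)
```

The resulting string is what gets sent as the model input; nothing here depends on which model or SDK receives it.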

The same techniques work on local models, but smaller models (3B-7B) benefit MORE from explicit prompt structure than frontier cloud models. A well-prompted Qwen 3 8B often beats a poorly-prompted GPT-4 on narrow tasks. As models improve, prompt engineering shifts from "tricks" to "specification" — the prompt IS the program.
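Decomposition, the last technique listed above, is worth a concrete sketch because it is the one that turns a prompt into something program-like. The example below is hypothetical: `call_model` is a stand-in stub (a real version would wrap an LLM API call), and the three-stage code-review pipeline is an invented illustration, not a prescribed workflow.

```python
def call_model(prompt):
    # Stand-in for a real LLM API call; echoes the prompt's first line
    # so the pipeline can be exercised without network access.
    return f"[model output for: {prompt.splitlines()[0]}]"


def review_diff(diff):
    """Decomposition: three sequential prompts, each feeding the next."""
    summary = call_model("Summarize this code diff:\n" + diff)
    risks = call_model("List security risks implied by this summary:\n" + summary)
    return call_model("Write a review verdict given these risks:\n" + risks)


verdict = review_diff("- password = input()\n+ password = getpass()")
print(verdict)
```

Each stage is a small, checkable prompt, which is usually easier to debug than one monolithic instruction, especially on smaller local models.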

Reviewed by Fredoline Eruo.