Chain-of-Thought (CoT)

Chain-of-thought (CoT) prompting asks a model to lay out its reasoning step by step before committing to a final answer. It markedly improves accuracy on math, logic, and multi-step problems: on hard benchmarks such as GSM8K, enabling CoT can add tens of percentage points.
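In practice a CoT transcript contains intermediate steps followed by a final answer that downstream code must extract. A minimal sketch; the transcript text and the `Answer:` delimiter are illustrative, not any particular model's format:

```python
# Hypothetical CoT transcript: reasoning steps, then a delimited final answer.
transcript = (
    "Let the ball cost x; the bat costs x + 1.00, so 2x + 1.00 = 1.10.\n"
    "Then 2x = 0.10, so x = 0.05.\n"
    "Answer: $0.05"
)

def final_answer(text: str, marker: str = "Answer:") -> str:
    """Return everything after the last occurrence of the marker, trimmed."""
    return text.rsplit(marker, 1)[-1].strip()

final_answer(transcript)  # "$0.05"
```

Extracting only the text after the last delimiter is the usual convention, since the reasoning itself may mention intermediate candidate answers.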

CoT comes in two flavors: prompted CoT ("let's think step by step"), which works on any sufficiently large model, and trained CoT, in which the model is RL-trained to produce visible reasoning by default; models of the latter kind are called "reasoning models." DeepSeek R1, OpenAI's o1, QwQ, and Phi-4 Reasoning are all reasoning models.
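Prompted CoT requires no special API support: the zero-shot cue is simply appended to an ordinary prompt. A minimal sketch, with a hypothetical helper and question:

```python
def build_prompt(question: str, cot: bool = False) -> str:
    """Build a plain QA prompt; with cot=True, append the zero-shot CoT cue."""
    prompt = f"Q: {question}\nA:"
    if cot:
        # The classic zero-shot chain-of-thought trigger phrase.
        prompt += " Let's think step by step."
    return prompt

question = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")
direct = build_prompt(question)
reasoned = build_prompt(question, cot=True)
```

The only difference between the two prompts is the trailing cue; everything else about the request stays the same.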

Tradeoffs: CoT outputs are 5-10× longer than direct answers, increasing latency and cost. For tasks that don't need reasoning (code completion, simple lookups), CoT just adds overhead. For tasks that do (math, planning, debugging), it's the difference between right and wrong.
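The cost side of that tradeoff is easy to estimate from the 5-10× length multiplier above. A back-of-envelope sketch; the token counts and per-token price below are hypothetical, not real pricing:

```python
def cost_usd(output_tokens: int, usd_per_mtok: float) -> float:
    """Output-token cost at a given price per million tokens."""
    return output_tokens * usd_per_mtok / 1_000_000

direct_tokens = 50               # hypothetical short direct answer
cot_tokens = direct_tokens * 8   # CoT outputs run roughly 5-10x longer
price = 10.0                     # hypothetical $ per 1M output tokens

overhead = cost_usd(cot_tokens, price) - cost_usd(direct_tokens, price)
```

At an 8× multiplier the per-request cost scales by the same factor, which is why routing easy queries away from CoT is a common cost control.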

Reviewed by Fredoline Eruo.