Chain of Thought (CoT)
A prompting technique that elicits step-by-step reasoning from an LLM before it produces a final answer.
Definition
Chain of Thought (CoT) is a prompting technique that instructs a large language model to produce intermediate reasoning steps before arriving at a final answer. By generating its thought process explicitly, the model often achieves significantly higher accuracy on tasks requiring logic, arithmetic, multi-step reasoning, and complex analysis.
Key characteristics of Chain of Thought prompting include:
- Explicit Reasoning Traces: Rather than jumping directly to an answer, the model writes out each step of its reasoning, making the logic visible and auditable. This transparency helps developers identify where reasoning goes wrong.
- Zero-Shot and Few-Shot Variants: CoT can be triggered with a simple instruction like "Think step by step" (zero-shot CoT) or by providing worked examples that demonstrate the desired reasoning format (few-shot CoT).
- Measurable Accuracy Gains: Research has shown that CoT prompting dramatically improves performance on math, logic, and common-sense reasoning benchmarks, especially with larger models.
- Foundation for Advanced Techniques: CoT is the building block for more sophisticated methods such as Tree of Thoughts, Self-Consistency, and the reasoning patterns used in models like o1 and DeepSeek-R1.
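The zero-shot and few-shot variants above differ only in how the prompt is assembled. A minimal sketch, assuming hypothetical helper names and illustrative prompt wording (the exact trigger phrase and worked example vary in practice):

```python
# Sketch of zero-shot vs. few-shot CoT prompt construction.
# Function names, prompt wording, and the worked example are
# illustrative assumptions, not a specific model's required format.

def zero_shot_cot(question: str) -> str:
    """Append a reasoning trigger so the model writes out its steps."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot_cot(question: str, worked_examples: list[tuple[str, str]]) -> str:
    """Prepend worked examples whose answers demonstrate the desired
    reasoning format, then pose the new question."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in worked_examples)
    return f"{demos}\n\nQ: {question}\nA:"

# A worked example whose answer spells out intermediate arithmetic steps.
example = (
    "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?",
    "Roger starts with 5 balls. 2 cans of 3 balls is 6 more balls. "
    "5 + 6 = 11. The answer is 11.",
)

print(zero_shot_cot("What is 17 * 24?"))
print(few_shot_cot("What is 17 * 24?", [example]))
```

Either prompt is then sent to the model as-is; the few-shot version trades longer input for tighter control over the reasoning format the model imitates.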
Chain of Thought has become standard practice in prompt engineering and is one of the most reliable techniques for improving LLM output quality on reasoning-heavy tasks.