
Give the Model Time to Think

In chain-of-thought prompting, you encourage the AI to “show its work” as it arrives at a conclusion. Rather than jumping straight to an answer, the model follows a series of logical steps or considerations, as if it’s walking you through its thought process.

Chain-of-thought prompting results in responses that are more thorough, logical, and coherent.[1] This method is particularly useful for complex tasks, like solving multi-step problems or analyzing nuanced topics, where a structured approach is essential.

Benefits of Chain-of-Thought Prompting

  • Enhanced Clarity: Breaking down complex questions into smaller steps allows the model to consider each part carefully, reducing the chance of misunderstanding or glossing over details.
  • Improved Accuracy: With a step-by-step approach, the AI can “double-check” each stage, which often produces more reliable results on calculations and logic-heavy tasks and can help reduce hallucinations.[2]
  • Transparency: Following the model’s reasoning can be as valuable as the final answer. Chain-of-thought responses make the process visible and understandable.

How to Create a Chain-of-Thought Prompt

  1. Ask for Step-by-Step Reasoning: Start by explicitly requesting that the model explain its thinking, e.g., “Walk through each step” or “Explain your reasoning step-by-step.”[3]
  2. Break Down Complex Tasks: When dealing with multi-layered problems, specify individual components the model should address, like analyzing cause and effect, listing pros and cons, or calculating intermediate values.
  3. Encourage Reflective Thinking: If appropriate, ask the model to evaluate each stage before moving to the next, encouraging it to consider multiple perspectives or possibilities.

Example without Chain-of-Thought prompt

🙋
How many apples does Sarah have left if she starts with 12 apples and gives 7 to her friend?
🤖
If Sarah starts with 12 apples and gives 7 apples to her friend, she has 5 apples left. 🍎

Example with Chain-of-Thought prompt

🙋
How many apples does Sarah have left if she starts with 12 apples and gives 7 to her friend? Explain your reasoning step-by-step.
🤖
To determine how many apples Sarah has left, we follow these steps:
1. Start with the total number of apples Sarah has initially: Sarah starts with 12 apples.
2. Subtract the number of apples Sarah gives away to her friend: Sarah gives 7 apples to her friend. To find out how many apples remain, subtract 7 from 12.
3. State the result: After giving away 7 apples, Sarah has 5 apples left.
Final Answer: Sarah has 5 apples left.

References & Footnotes

  1. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q. V., & Zhou, D. (2022). Chain-of-thought prompting elicits reasoning in large language models. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, & A. Oh (Eds.), Advances in Neural Information Processing Systems (Vol. 35, pp. 24824–24837). Curran Associates, Inc. https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf

  2. Ji, Z., Yu, T., Xu, Y., Lee, N., Ishii, E., & Fung, P. (2023). Towards mitigating LLM hallucination via self reflection. Findings of the Association for Computational Linguistics: EMNLP 2023, 1827–1843. https://doi.org/10.18653/v1/2023.findings-emnlp.123

  3. Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2023). Large language models are zero-shot reasoners. arXiv. https://doi.org/10.48550/arXiv.2205.11916