Large Language Models are Contrastive Reasoners
CoRR (2024)
Abstract
Prompting methods play a crucial role in enhancing the capabilities of
pre-trained large language models (LLMs). We explore how contrastive prompting
(CP) significantly improves the ability of large language models to perform
complex reasoning. We demonstrate that LLMs are decent contrastive reasoners by
simply adding "Let's give a correct and a wrong answer." before LLMs provide
answers. Experiments on two large language models show that zero-shot
contrastive prompting improves performance on a range of arithmetic,
commonsense, and symbolic reasoning tasks without any hand-crafted few-shot
examples, improving accuracy on GSM8K (from a 35.9% baseline) and on
AQUA-RAT (from 41.3%). Our method
not only surpasses zero-shot CoT and few-shot CoT in most arithmetic and
commonsense reasoning tasks but also can seamlessly integrate with existing
prompting methods, resulting in improved or comparable results when compared to
state-of-the-art methods. Our code is available at
https://github.com/yao8839836/cp
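The zero-shot CP setup described above can be sketched in a few lines: the trigger sentence is prepended to the answer slot of a standard question prompt. This is a minimal illustration, not the authors' implementation; `call_llm` is a hypothetical placeholder for whatever LLM API is in use.

```python
# Minimal sketch of zero-shot contrastive prompting (CP).
# The trigger phrase is taken verbatim from the paper; everything else
# (prompt layout, call_llm) is an illustrative assumption.

CP_TRIGGER = "Let's give a correct and a wrong answer."


def build_cp_prompt(question: str) -> str:
    """Place the CP trigger at the start of the answer, so the model
    produces both a correct and a wrong answer before concluding."""
    return f"Q: {question}\nA: {CP_TRIGGER}"


def call_llm(prompt: str) -> str:
    # Hypothetical stand-in: replace with a real completion call
    # (OpenAI SDK, a local model, etc.).
    return "(model output here)"


if __name__ == "__main__":
    prompt = build_cp_prompt(
        "A robe takes 2 bolts of blue fiber and half that much white fiber. "
        "How many bolts in total does it take?"
    )
    print(prompt)
    print(call_llm(prompt))
```

In a second extraction step (as in zero-shot CoT pipelines), the final answer would typically be pulled from the model's output with a follow-up prompt such as "Therefore, the correct answer is".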