Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided
arXiv (2024)
Abstract
Large language models (LLMs) have offered new opportunities for emotional
support, and recent work has shown that they can produce empathic responses to
people in distress. However, long-term mental well-being requires emotional
self-regulation, where a one-time empathic response falls short. This work
takes a first step by engaging with cognitive reappraisal, a strategy from
psychology practitioners that uses language to change, in a targeted way, the
negative appraisals that an individual makes of a situation; such appraisals
are known to sit at the root of human emotional experience. We hypothesize that
psychologically grounded principles could enable such advanced psychology
capabilities in LLMs, and design RESORT which consists of a series of
reappraisal constitutions across multiple dimensions that can be used as LLM
instructions. We conduct a first-of-its-kind expert evaluation (by clinical
psychologists with M.S. or Ph.D. degrees) of an LLM's zero-shot ability to
generate cognitive reappraisal responses to medium-length social media messages
asking for support. This fine-grained evaluation showed that even LLMs at the
7B scale guided by RESORT are capable of generating empathic responses that can
help users reappraise their situations.