Learning with Logical Constraints but without Shortcut Satisfaction
ICLR (2024)
Abstract
Recent studies in neuro-symbolic learning have explored the integration of
logical knowledge into deep learning via encoding logical constraints as an
additional loss function. However, existing approaches tend to vacuously
satisfy logical constraints through shortcuts, failing to fully exploit the
knowledge. In this paper, we present a new framework for learning with logical
constraints. Specifically, we address the shortcut satisfaction issue by
introducing dual variables for logical connectives, encoding how the constraint
is satisfied. We further propose a variational framework where the encoded
logical constraint is expressed as a distributional loss that is compatible
with the model's original training loss. The theoretical analysis shows that
the proposed approach bears salient properties, and the experimental
evaluations demonstrate its superior performance in both model generalizability
and constraint satisfaction.
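The "shortcut satisfaction" problem the abstract describes can be illustrated with a minimal sketch. Note this is our own toy example using a product fuzzy-logic relaxation of an implication, not the paper's dual-variable encoding: when a constraint A → B is turned into a differentiable loss, the loss can be driven to zero by making A false, vacuously satisfying the constraint without the model ever learning that B should hold.

```python
# Toy sketch (assumption: product fuzzy logic, not the paper's method).
# The implication A -> B has fuzzy truth value 1 - pA * (1 - pB),
# so the corresponding constraint loss is pA * (1 - pB).
def implication_loss(p_a: float, p_b: float) -> float:
    """Loss for the constraint A -> B; zero means 'satisfied'."""
    return p_a * (1.0 - p_b)

# Genuine satisfaction: A true and B true gives zero loss.
print(implication_loss(1.0, 1.0))  # 0.0

# Shortcut satisfaction: driving pA to 0 also gives zero loss,
# regardless of pB -- the constraint is met vacuously, and the
# knowledge "A implies B" is never actually exploited.
print(implication_loss(0.0, 0.0))  # 0.0
```

An optimizer minimizing only this loss has no reason to prefer the genuine solution over the vacuous one; the paper's dual variables for logical connectives are introduced precisely to record and control *how* a constraint is satisfied.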