Neuro-Symbolic Architectures for Combinatorial Problems in Structured Output Spaces

Frontiers in Artificial Intelligence and Applications (2023)

Abstract
Recently, there has been substantial interest in developing deep learning models for symbolic reasoning tasks. One such task is solving combinatorial problems, which can be viewed as instances of a constraint satisfaction problem, albeit with unknown constraints. The task of the neural model is then to discover the unknown constraints from training data consisting of many solved instances. There are broadly two approaches to learning the unknown constraints: the first creates a purely neural model that maps an input puzzle directly to its solution, thereby representing the constraints implicitly in the model's weights [1, 2, 3]; the second invokes a symbolic reasoner, such as an Integer Linear Program (ILP) solver, and learns the constraints explicitly in the solver's language, e.g., linear inequalities for an ILP solver [4, 5]. In this chapter, we discuss both the implicit and explicit approaches. For the implicit approach, we review three neural architectures for solving combinatorial problems, viz., Neural Logic Machines (NLM) [2], Recurrent Relational Networks (RRN) [1], and the extension of RRN proposed by Nandwani et al. [3] to achieve output-space invariance (e.g., solving 16x16 sudoku after training on 9x9 sudokus). Under the second approach, with its explicit representation of constraints, we present two methods (CombOptNet [5] and ILP-Loss [4]) for end-to-end training of a Neural-ILP architecture: a neuro-symbolic model in which a neural perception layer is followed by an ILP layer that performs the reasoning. CombOptNet, proposed by Paulus et al. [5], trains slowly because it calls an ILP solver in every learning iteration. ILP-Loss, proposed by Nandwani et al. [4], is solver-free during training and thus much more scalable. Finally, we present experiments from [3] and [4] that compare the methods discussed in this chapter.
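To make the Neural-ILP architecture concrete, the sketch below shows a minimal forward pass in the style of CombOptNet [5]: a neural perception layer predicts an ILP specification (c, A, b), and an off-the-shelf ILP solver produces the discrete solution. All module names, shapes, and the use of SciPy's milp solver are illustrative assumptions, not the authors' implementation; in particular, backpropagating through the solver requires the gradient technique of [5] or the solver-free loss of [4].

```python
# A minimal, illustrative sketch of a Neural-ILP forward pass in the style of
# CombOptNet [5]. Everything below (PerceptionNet, the layer sizes, the use of
# SciPy's milp) is an assumption for illustration, not the authors' code.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import milp, LinearConstraint, Bounds

class PerceptionNet(nn.Module):
    """Neural perception layer: maps a raw puzzle encoding to an ILP
    specification (cost vector c, constraint matrix A, right-hand side b)."""
    def __init__(self, in_dim: int, n_vars: int, n_cons: int):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
        )
        self.c_head = nn.Linear(128, n_vars)           # objective coefficients
        self.A_head = nn.Linear(128, n_cons * n_vars)  # learned constraints
        self.b_head = nn.Linear(128, n_cons)
        self.n_vars, self.n_cons = n_vars, n_cons

    def forward(self, x):
        h = self.backbone(x)
        c = self.c_head(h)
        A = self.A_head(h).view(-1, self.n_cons, self.n_vars)
        b = self.b_head(h)
        return c, A, b

def ilp_layer(c, A, b):
    """Reasoning layer: solve min c.x subject to A x <= b over binary x.
    This call is non-differentiable; end-to-end training needs the gradient
    estimator of [5] or the solver-free ILP-Loss of [4]."""
    res = milp(
        c=c.detach().numpy(),
        constraints=LinearConstraint(A.detach().numpy(), -np.inf,
                                     b.detach().numpy()),
        integrality=np.ones(c.shape[0]),  # all variables integer
        bounds=Bounds(0, 1),              # binary variables
    )
    # Fall back to the zero vector if the learned constraints are infeasible.
    return torch.zeros(c.shape[0]) if res.x is None else torch.as_tensor(res.x)

# Usage on a toy 9x9-sudoku-sized input (81 cells flattened; sizes arbitrary).
net = PerceptionNet(in_dim=81, n_vars=81, n_cons=27)
puzzle = torch.randn(1, 81)
c, A, b = net(puzzle)
solution = ilp_layer(c[0], A[0], b[0])
```

In this sketch the solver sits on the forward path only; CombOptNet's contribution is precisely a way to propagate gradients through this discrete argmin, while ILP-Loss removes the solver from the training loop altogether.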