Improving Generalization in Semantic Parsing by Increasing Natural Language Variation
CoRR (2024)
Abstract
Text-to-SQL semantic parsing has made significant progress in recent years,
with various models demonstrating impressive performance on the challenging
Spider benchmark. However, it has also been shown that these models often
struggle to generalize even when faced with small perturbations of previously
(accurately) parsed expressions. This is mainly due to the linguistic form of
questions in Spider, which are overly specific, unnatural, and display limited
variation. In this work, we use data augmentation to enhance the robustness of
text-to-SQL parsers against natural language variations. Existing approaches
generate question reformulations either via models trained on Spider or only
introduce local changes. In contrast, we leverage the capabilities of large
language models to generate more realistic and diverse questions. Using only a
few prompts, we achieve a two-fold increase in the number of questions in
Spider. Training on this augmented dataset yields substantial improvements on a
range of evaluation sets, including robustness benchmarks and out-of-domain
data.
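The augmentation described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual pipeline: the prompt wording, the `generate` callable, and the example data are all assumptions; the only idea taken from the abstract is pairing each LLM paraphrase with the original gold SQL to roughly double the training set.

```python
# Hypothetical sketch of LLM-based question augmentation for text-to-SQL.
# A real run would plug an LLM API call in as `generate`.

PARAPHRASE_PROMPT = (
    "Rewrite the following database question in natural, everyday language, "
    "preserving its meaning.\n\nQuestion: {question}\nRewrite:"
)

def build_prompt(question: str) -> str:
    """Fill the paraphrase template for one Spider-style question."""
    return PARAPHRASE_PROMPT.format(question=question)

def augment(dataset, generate):
    """Pair each (question, sql) example with an LLM paraphrase.

    `generate` is any callable mapping a prompt string to a paraphrase;
    the gold SQL is reused unchanged, doubling the dataset size.
    """
    augmented = []
    for question, sql in dataset:
        augmented.append((question, sql))
        augmented.append((generate(build_prompt(question)), sql))
    return augmented

# Usage with a stand-in generator instead of a real LLM:
data = [("What is the id of the oldest student?",
         "SELECT id FROM student ORDER BY age DESC LIMIT 1")]
fake_llm = lambda prompt: "Which student is the oldest? Give me their id."
doubled = augment(data, fake_llm)
print(len(doubled))  # → 2
```

Because each paraphrase keeps the original SQL annotation, no new labeling is needed; quality then depends entirely on the paraphrases preserving the question's meaning.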