On the Effect of (Near) Duplicate Subwords in Language Modelling
arXiv (2024)
Abstract
Tokenisation is a core part of language models (LMs). It involves splitting a
character sequence into subwords which are assigned arbitrary indices before
being served to the LM. While typically lossless, this process may
lead to less sample efficient LM training: as it removes character-level
information, it could make it harder for LMs to generalise across similar
subwords, such as now and Now. We refer to such subwords as near duplicates. In
this paper, we study the impact of near duplicate subwords on LM training
efficiency. First, we design an experiment that gives us an upper bound to how
much we should expect a model to improve if we could perfectly generalise
across near duplicates. We do this by duplicating each subword in our LM's
vocabulary, creating perfectly equivalent classes of subwords. Experimentally,
we find that LMs need roughly 17% more data when trained in this fully
duplicated setting. Second, we investigate the impact of naturally occurring near
duplicates on LMs. Here, we see that merging them considerably hurts LM
performance. Therefore, although subword duplication negatively impacts LM
training efficiency, naturally occurring near duplicates may not be as similar
as anticipated, limiting the potential for performance improvements.
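The duplication experiment described above can be illustrated with a small sketch. The function names and vocabulary below are hypothetical, not taken from the paper's code: each subword is paired with a duplicate id, and at encoding time one member of the pair is sampled uniformly at random, so the two copies form a perfectly equivalent class appearing in statistically identical contexts.

```python
import random

def duplicate_vocab(vocab):
    """Map each subword to a pair of interchangeable token ids:
    its original id and a duplicate id offset by the vocab size."""
    size = len(vocab)
    return {w: (i, size + i) for i, w in enumerate(vocab)}

def encode_duplicated(subwords, pairs, rng=random):
    """Encode a subword sequence, sampling one member of each
    equivalence class uniformly at random per occurrence."""
    return [rng.choice(pairs[w]) for w in subwords]

# Toy vocabulary; "now" and "Now" are naturally occurring near duplicates,
# while the artificial duplicates (ids 4-7) are perfect duplicates.
vocab = ["now", "Now", "the", "ing"]
pairs = duplicate_vocab(vocab)
ids = encode_duplicated(["now", "the", "now"], pairs)
```

An LM trained on such doubled vocabularies can only match the original model's performance if it generalises across the two copies, which is what makes the setup an upper bound on the cost of duplication.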