Transformers as Transducers
arXiv (2024)
Abstract
We study the sequence-to-sequence mapping capacity of transformers by
relating them to finite transducers, and find that they can express
surprisingly large classes of transductions. We do so using variants of RASP, a
programming language designed to help people "think like transformers," as an
intermediate representation. We extend the existing Boolean variant B-RASP to
sequence-to-sequence functions and show that it computes exactly the
first-order rational functions (such as string rotation). Then, we introduce
two new extensions. B-RASP[pos] enables calculations on positions (such as
copying the first half of a string) and contains all first-order regular
functions. S-RASP adds prefix sum, which enables additional arithmetic
operations (such as squaring a string) and contains all first-order polyregular
functions. Finally, we show that masked average-hard attention transformers can
simulate S-RASP. A corollary of our results is a new proof that transformer
decoders are Turing-complete.
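The abstract names three example transductions at increasing levels of the hierarchy: string rotation (first-order rational), copying the first half of a string (requires position arithmetic), and squaring a string (polyregular, since the output length grows quadratically). A minimal sketch of these maps as plain Python functions, under the common interpretation that "squaring" repeats the input |w| times:

```python
def rotate(w: str) -> str:
    # Left rotation: move the first symbol to the end.
    # A first-order rational function (computable in B-RASP).
    return w[1:] + w[:1] if w else w

def copy_first_half(w: str) -> str:
    # Output the first half of the input; needs arithmetic on
    # positions, as enabled by B-RASP[pos].
    return w[: len(w) // 2]

def square(w: str) -> str:
    # "Squaring" (one common formulation): repeat the input |w|
    # times, so the output has length |w|**2. A polyregular
    # function, enabled by the prefix-sum operation of S-RASP.
    return w * len(w)
```

These functions only illustrate the input/output behavior of the transductions; the paper's contribution is expressing such maps inside the RASP variants and, ultimately, in masked average-hard attention transformers.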