Enhancing Transformer RNNs with Multiple Temporal Perspectives
CoRR (2024)
Abstract
We introduce the concept of multiple temporal perspectives, a novel approach
applicable to Recurrent Neural Network (RNN) architectures for enhancing their
understanding of sequential data. This method involves maintaining diverse
temporal views of previously encountered text, significantly enriching the
language models' capacity to interpret context. To show the efficacy of this
approach, we incorporate it into the Receptance Weighted Key Value (RWKV)
architecture, addressing its inherent challenge of retaining all historical
information within a single hidden state. Notably, this improvement is achieved
with a minimal increase in the number of parameters, as little as
0.04% of the original parameter count. Further, the additional
parameters necessary for the multiple temporal perspectives are fine-tuned with
minimal computational overhead, avoiding the need for a full pre-training. The
resulting model maintains linear computational complexity during prompt
inference, ensuring consistent efficiency across various sequence lengths. The
empirical results and ablation studies included in our research validate the
effectiveness of our approach, showcasing improved performance across multiple
benchmarks. The code, model weights and datasets are open-sourced at:
https://github.com/RazvanDu/TemporalRNNs.
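
The abstract does not spell out the exact mechanism, so the following is only a minimal sketch of the general idea of "multiple temporal perspectives" in a recurrent layer: it keeps several exponentially decayed views of the hidden state, each with a different decay rate, and mixes them with a tiny learned weight vector. The class name MultiPerspectiveState, the decay schedule, and the softmax mixing are all assumptions for illustration, not the paper's actual RWKV integration.

```python
import torch
import torch.nn as nn

class MultiPerspectiveState(nn.Module):
    """Illustrative sketch (not the paper's implementation): maintain K
    temporal views of the history, each an exponential moving average
    with its own decay rate, and combine them with learned weights."""

    def __init__(self, hidden_size: int, num_perspectives: int = 4):
        super().__init__()
        # Assumed decay schedule: slow decays retain distant context,
        # fast decays track recent tokens.
        self.register_buffer(
            "decays", torch.linspace(0.5, 0.95, num_perspectives)
        )
        # Very few extra parameters: one mixing logit per perspective,
        # in the spirit of the abstract's ~0.04% parameter overhead.
        self.mix_logits = nn.Parameter(torch.zeros(num_perspectives))
        self.hidden_size = hidden_size
        self.num_perspectives = num_perspectives

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden_size)
        batch, seq_len, _ = x.shape
        states = x.new_zeros(batch, self.num_perspectives, self.hidden_size)
        weights = torch.softmax(self.mix_logits, dim=0)  # (K,)
        decays = self.decays.view(1, -1, 1)              # (1, K, 1)
        outputs = []
        for t in range(seq_len):
            # O(1) update per token for each perspective, so the whole
            # pass stays linear in sequence length, consistent with the
            # complexity claim in the abstract.
            states = decays * states + (1 - decays) * x[:, t, :].unsqueeze(1)
            # Collapse the K temporal views into one output vector.
            outputs.append(torch.einsum("k,bkh->bh", weights, states))
        return torch.stack(outputs, dim=1)  # (batch, seq_len, hidden_size)

if __name__ == "__main__":
    layer = MultiPerspectiveState(hidden_size=64, num_perspectives=4)
    y = layer(torch.randn(2, 10, 64))
    print(y.shape)  # torch.Size([2, 10, 64])
```

Because only the mixing logits are new parameters, such a layer could be fine-tuned on top of a frozen backbone with negligible overhead, which matches the abstract's claim of avoiding a full pre-training; the actual design choices are documented in the linked repository.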