SpeechGPT-Gen: Scaling Chain-of-Information Speech Generation
CoRR (2024)
Abstract
Benefiting from effective speech modeling, current Speech Large Language
Models (SLLMs) have demonstrated exceptional capabilities in in-context speech
generation and efficient generalization to unseen speakers. However, the
prevailing information modeling process is encumbered by certain redundancies,
leading to inefficiencies in speech generation. We propose Chain-of-Information
Generation (CoIG), a method for decoupling semantic and perceptual information
in large-scale speech generation. Building on this, we develop SpeechGPT-Gen,
an 8-billion-parameter SLLM efficient in semantic and perceptual information
modeling. It comprises an autoregressive model based on an LLM for semantic
information modeling and a non-autoregressive model employing flow matching for
perceptual information modeling. Additionally, we introduce the novel approach
of infusing semantic information into the prior distribution to enhance the
efficiency of flow matching. Extensive experimental results demonstrate that
SpeechGPT-Gen markedly excels in zero-shot text-to-speech, zero-shot voice
conversion, and speech-to-speech dialogue, underscoring CoIG's remarkable
proficiency in capturing and modeling speech's semantic and perceptual
dimensions. Code and models are available at
https://github.com/0nutation/SpeechGPT.
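The abstract's core idea for the perceptual stage is conditional flow matching whose prior is infused with semantic information rather than being a standard Gaussian. The sketch below illustrates how a single flow-matching training target could be constructed under that idea; the function name, shapes, and noise scale are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def coig_flow_matching_target(x1, semantic, noise_scale=0.1, rng=rng):
    """Build one conditional flow-matching training example.

    Instead of drawing the start point x0 from a standard Gaussian,
    it is the semantic representation perturbed with noise -- a toy
    version of "infusing semantic information into the prior".
    x1 is the target perceptual representation.
    """
    x0 = semantic + noise_scale * rng.standard_normal(x1.shape)
    t = rng.uniform()                 # interpolation time in [0, 1]
    xt = (1.0 - t) * x0 + t * x1      # point on the straight-line path
    v_target = x1 - x0                # constant velocity along that path
    return t, xt, v_target

# Toy example with 4-dimensional "speech" vectors; in practice a network
# v_theta(xt, t, semantic) would be regressed onto v_target with L2 loss.
semantic = rng.standard_normal(4)
x1 = rng.standard_normal(4)
t, xt, v_target = coig_flow_matching_target(x1, semantic)
```

Because the prior already carries the semantic content, the vector field only has to model the residual perceptual detail, which is the efficiency argument the abstract makes for CoIG.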