SDSR: Optimizing Metaverse Video Streaming via Saliency-Driven Dynamic Super-Resolution

IEEE Journal on Selected Areas in Communications (2024)

Abstract
Metaverse (especially 360-degree) video streaming allows virtual events in the metaverse to be broadcast to a broad audience. To reduce the huge bandwidth consumption, quite a few super-resolution (SR)-enhanced 360-degree video streaming systems have been proposed. However, little work has investigated how the granularity of SR models affects system performance, or how to choose a proper SR model for different video contents under diverse environmental conditions. In this paper, we first conduct a dedicated measurement study to unveil the impact of SR models of different granularities. We find that the scene of a video largely determines the effectiveness of SR models at different granularities. Based on these observations, we propose SDSR, a novel 360-degree video streaming framework with saliency-driven dynamic super-resolution. To maximize user QoE, we formulate an optimization problem and adopt model predictive control (MPC) for bitrate adaptation and SR model selection. To improve the effectiveness of the SR models, we leverage saliency information, which reflects users' viewing interests, for model training. In addition, we reuse an SR model for similar chunks based on the temporal redundancy of a video. Finally, extensive experiments on real traces show that SDSR outperforms state-of-the-art algorithms, improving average QoE by up to 32.78%.
Keywords
Metaverse,360-degree video streaming,quality of experience,super-resolution,bitrate adaptation
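The MPC-based bitrate adaptation mentioned in the abstract can be illustrated with a minimal lookahead sketch: over a short horizon, enumerate candidate bitrate sequences, score each with a simple QoE objective (quality minus rebuffering and smoothness penalties), and commit only the first step. All names, constants, and the QoE form below are illustrative assumptions, not the paper's actual formulation, which additionally selects SR models jointly with bitrates.

```python
# Hypothetical MPC-style bitrate adaptation sketch (not the paper's code).
from itertools import product

BITRATES = [1.0, 2.5, 5.0, 8.0]  # assumed bitrate ladder, Mbps
CHUNK_SEC = 2.0                  # assumed seconds of video per chunk
REBUF_W, SMOOTH_W = 4.0, 1.0     # assumed penalty weights

def qoe_step(rate, prev_rate, buffer_s, bw_est):
    """QoE of downloading one chunk at `rate`, plus the resulting buffer."""
    download_s = rate * CHUNK_SEC / bw_est           # fetch time at estimated bandwidth
    rebuf = max(0.0, download_s - buffer_s)          # stall if buffer drains first
    new_buf = max(0.0, buffer_s - download_s) + CHUNK_SEC
    q = rate - REBUF_W * rebuf - SMOOTH_W * abs(rate - prev_rate)
    return q, new_buf

def mpc_select(prev_rate, buffer_s, bw_est, horizon=3):
    """Pick the next chunk's bitrate by exhaustive lookahead over the horizon."""
    best, best_first = float("-inf"), BITRATES[0]
    for plan in product(BITRATES, repeat=horizon):
        total, buf, prev = 0.0, buffer_s, prev_rate
        for r in plan:
            q, buf = qoe_step(r, prev, buf, bw_est)
            total += q
            prev = r
        if total > best:
            best, best_first = total, plan[0]
    return best_first
```

With a full buffer and ample estimated bandwidth, the lookahead climbs to the top of the ladder despite the smoothness penalty; with a nearly empty buffer and poor bandwidth, the rebuffering term dominates and a low rate wins. Exhaustive enumeration costs |BITRATES|^horizon evaluations, which is why practical MPC controllers keep the horizon short.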