Data Augmentation for Environment Perception with Unmanned Aerial Vehicles

Vivian Chiciudean, Horatiu Florea, Bianca-Cerasela-Zelia Blaga, Radu Beche, Florin Oniga, Sergiu Nedevschi

IEEE Transactions on Intelligent Vehicles (2024)

Abstract
Large, high-quality training datasets are of critical importance for deep learning. In the context of semantic segmentation for UAV aerial images, we propose a data augmentation strategy that can significantly reduce the effort of manually annotating a large number of images. The result is a set of semantic, depth, and RGB images that can be used to improve the performance of neural networks. The main focus of the method is the generation of semantic images, with depth and texture images also being generated in the process. The proposed method for semantic image generation relies on a 3D semantic mesh representation of the real-world environment. First, we propagate the existing semantic information from a reduced set of manually labeled images into the mesh representation. To deal with errors in the manually labeled images, we propose a specific weighted voting mechanism for the propagation process. Second, we use the semantic mesh to create new images. Both steps use the perspective projection mechanism and the Depth Buffer algorithm. The images can be generated using different camera orientations, allowing novel view perspectives. Our approach is conceptually general and can be used to improve various existing datasets. Experiments with existing datasets (UAVid and WildUAV), augmented with the proposed method, are performed with HRNet. An overall improvement of the inference results of up to 5.5% mIoU is obtained. The augmented datasets are publicly available on GitHub.
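To make the rendering step concrete, the sketch below illustrates the general mechanism the abstract names (perspective projection of labeled 3D geometry into a virtual camera, with a depth buffer keeping only the closest surface per pixel). It is not the authors' pipeline, which operates on a full semantic mesh; this toy version projects a labeled point set, and the camera intrinsics, pose, and scene are purely illustrative assumptions.

```python
# Minimal sketch of Z-Buffer (Depth Buffer) semantic/depth image generation.
# Not the paper's implementation: it projects labeled 3D points instead of a
# mesh, and every parameter below is an illustrative assumption.
import numpy as np

def render_semantic_depth(points, labels, K, R, t, width, height):
    """Project labeled 3D points; keep the nearest label per pixel (Z-buffer)."""
    depth = np.full((height, width), np.inf, dtype=np.float64)
    semantic = np.zeros((height, width), dtype=np.int32)     # 0 = unlabeled

    cam = (R @ points.T + t.reshape(3, 1)).T                  # world -> camera
    in_front = cam[:, 2] > 1e-6                                # drop points behind camera
    cam, labels = cam[in_front], labels[in_front]

    proj = (K @ cam.T).T                                       # perspective projection
    u = np.round(proj[:, 0] / proj[:, 2]).astype(int)          # pixel column
    v = np.round(proj[:, 1] / proj[:, 2]).astype(int)          # pixel row
    z = cam[:, 2]                                              # depth along optical axis

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi, li in zip(u[inside], v[inside], z[inside], labels[inside]):
        if zi < depth[vi, ui]:                                 # depth test
            depth[vi, ui] = zi
            semantic[vi, ui] = li
    return semantic, depth

if __name__ == "__main__":
    # Toy scene: a "ground" patch (label 1) and a "building" block (label 2).
    rng = np.random.default_rng(0)
    ground = np.c_[rng.uniform(-5, 5, 2000), rng.uniform(-5, 5, 2000), np.zeros(2000)]
    building = np.c_[rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500), rng.uniform(0, 3, 500)]
    pts = np.vstack([ground, building])
    lbl = np.r_[np.ones(2000, int), np.full(500, 2)]

    K = np.array([[200.0, 0, 160], [0, 200.0, 120], [0, 0, 1]])  # illustrative intrinsics
    R = np.diag([1.0, -1.0, -1.0])            # nadir view: camera looks straight down
    t = np.array([0.0, 0.0, 30.0])            # camera 30 m above the scene
    sem, dep = render_semantic_depth(pts, lbl, K, R, t, 320, 240)
    print("labeled pixels:", int((sem > 0).sum()))
```

Changing R and t yields the novel viewpoints mentioned in the abstract; in the paper's setting the same projection and depth test are applied to the semantic mesh rather than to individual points.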
Keywords
semantic images,UAV,data augmentation,image generation,aerial images,Z-Buffer,Depth Buffer,perspective projection,virtual camera