3D Ground Truth for Simulating Rover Imagery for 3D Vision Testing

Gerhard Paar, Christoph Traxler, Piluca Caballo Perucha, Arnold Bauer, Manfred Klopschitz, Laura Fritz, Rebecca Nowak, Jorge Ocón Alonso

Semantic Scholar (2021)

Abstract

1 Introduction & Scope

3D vision (mapping, localization, navigation, science target recognition, etc.) using planetary rover imaging requires high-quality test assets, including a "ground truth" against which the processing results (rover locations, 3D maps) can be compared. Whilst end-to-end simulation for functional testing is realized by visualization of a drone-based DTM (Digital Terrain Model), the accuracy and robustness of vision-based navigation and 3D mapping can only be verified with high-fidelity data sets. The approach followed in the EU Horizon 2020 project ADE [3] used a terrestrially captured image data set for high- and medium-resolution (2 mm / 3 dm grid size) DTM generation of a representative Mars-analog environment, followed by batch rendering of images to be presented to the respective 3D vision components (Visual Odometry, VO, and stereovision-based point cloud generation):

  • Capturing terrestrial & drone-based images for photogrammetric reconstruction using ground control points (GCPs)
  • COTS (Commercial-Off-The-Shelf) compilation of 3D textured models in different resolutions using Structure-from-Motion (SfM)
  • Fusion of the gained textured point clouds in the visualization component PRo3D, and batch rendering of simulated stereo images at poses along a Rover trajectory
  • Using these images to validate / evaluate vision-based navigation and mapping frameworks.
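For the last step above, a common way to quantify how well vision-based navigation reproduces the known rover trajectory is the absolute trajectory error (ATE), i.e. the RMSE between ground-truth and estimated rover positions. The following is a minimal sketch, assuming both trajectories are already expressed in the same coordinate frame and indexed pose-by-pose; the function name and the example numbers are illustrative, not taken from the paper:

```python
import numpy as np

def absolute_trajectory_error(gt_xyz: np.ndarray, vo_xyz: np.ndarray) -> float:
    """RMSE between ground-truth and VO-estimated rover positions (N x 3 arrays).

    Assumes both trajectories share one coordinate frame and pose index;
    a full evaluation would first align them (e.g. with a Horn/Umeyama fit).
    """
    diffs = gt_xyz - vo_xyz  # per-pose position error vectors
    return float(np.sqrt(np.mean(np.sum(diffs**2, axis=1))))  # RMSE over poses

# Hypothetical example: a short straight-line drive with small lateral VO drift
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vo = np.array([[0.0, 0.0, 0.0], [1.0, 0.02, 0.0], [2.0, 0.05, 0.0]])
ate = absolute_trajectory_error(gt, vo)
```

Because the simulated stereo images are rendered at known poses along the trajectory, the ground-truth positions are exact by construction, so the ATE isolates the error contributed by the VO pipeline itself.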
