3D Ground Truth For Simulating Rover Imagery For 3D Vision Testing
(2021)
1 Introduction & Scope
3D vision (mapping, localization, navigation, science target recognition, etc.) using planetary rover imaging requires high-level test assets, including a “ground truth” against which the processing results (rover locations, 3D maps) can be compared. Whilst end-to-end simulation for functional testing is realized by visualization of a drone-based DTM (Digital Terrain Model), the accuracy and robustness of vision-based navigation and 3D mapping can only be verified with high-fidelity data sets. The approach followed in the EU Horizon 2020 project ADE [3] used a terrestrially captured image data set for high- and medium-resolution (2 mm / 3 dm grid size) DTM generation of a representative Mars-analog environment, followed by batch rendering to feed the respective 3D vision components (Visual Odometry, VO, and stereovision-based point cloud generation):
- Capturing terrestrial & drone-based images for photogrammetric reconstruction using ground control points (GCPs)
- Compilation of textured 3D models at different resolutions with COTS (Commercial Off-The-Shelf) Structure-from-Motion (SfM) software
- Fusion of the resulting textured point clouds in the visualization component PRo3D, and batch rendering of simulated stereo images at poses along a rover trajectory
- Using these images to validate and evaluate vision-based navigation and mapping frameworks.
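The batch-rendering step above requires a pair of virtual camera poses for every waypoint of the rover trajectory. A minimal sketch of that pose generation is given below; the geometry (offsetting the two camera centres by half the stereo baseline, perpendicular to the heading) is generic, while `BASELINE`, `CAM_HEIGHT`, and the waypoint values are illustrative placeholders, not parameters from the ADE project or PRo3D.

```python
import math

# Assumed illustrative values, not ADE/PRo3D parameters.
BASELINE = 0.2    # stereo baseline in metres
CAM_HEIGHT = 1.0  # camera height above the terrain in metres

def stereo_poses(trajectory):
    """For each rover waypoint (x, y, heading_rad), return the left and
    right camera centres, offset by half the baseline along the axis
    perpendicular to the viewing direction."""
    poses = []
    for x, y, heading in trajectory:
        # Unit vector perpendicular to the heading (the baseline axis).
        px, py = -math.sin(heading), math.cos(heading)
        half = BASELINE / 2.0
        left = (x - half * px, y - half * py, CAM_HEIGHT)
        right = (x + half * px, y + half * py, CAM_HEIGHT)
        poses.append({"left": left, "right": right, "heading": heading})
    return poses

# Example: three waypoints of a straight drive in the +x direction.
waypoints = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 0.0, 0.0)]
for p in stereo_poses(waypoints):
    print(p["left"], p["right"])
```

In a full pipeline, each pose pair would then be handed to the renderer to produce one simulated stereo image pair against the fused DTM, and the known pose serves as the ground-truth reference for the VO and mapping results.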