On-device Self-supervised Learning of Visual Perception Tasks aboard Hardware-limited Nano-quadrotors
CoRR (2024)
Abstract
Sub-50 g nano-drones are gaining momentum in both academia and
industry. Their most compelling applications rely on onboard deep learning
models for perception despite severe hardware constraints (sub-100 mW processor). When deployed in unknown environments not
represented in the training data, these models often underperform due to domain
shift. To cope with this fundamental problem, we propose, for the first time,
on-device learning aboard nano-drones, where the first part of the in-field
mission is dedicated to self-supervised fine-tuning of a pre-trained
convolutional neural network (CNN). Leveraging a real-world vision-based
regression task, we thoroughly explore performance-cost trade-offs of the
fine-tuning phase along three axes: i) dataset size (more data
increases the regression performance but requires more memory and longer
computation); ii) methodologies (fine-tuning all model parameters
vs. only a subset); and iii) self-supervision strategy. Our approach
demonstrates an improvement in mean absolute error of up to 30% compared to the
pre-trained baseline, requiring only 22 s of fine-tuning on an
ultra-low-power GWT GAP9 System-on-Chip. Addressing the domain shift problem
via on-device learning aboard nano-drones not only marks a novel result for
hardware-limited robots but also lays the groundwork for broader advancements
across the entire robotics community.
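The fine-tuning-methodology axis above contrasts updating all CNN parameters with updating only a subset. A minimal NumPy sketch of the subset strategy is below; all names, shapes, and the synthetic pseudo-labels are illustrative assumptions, not the paper's actual model or self-supervision signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pre-trained" two-layer regressor: W1 is the frozen
# backbone, W2 is the small head fine-tuned in-field.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))
W1_init = W1.copy()

# Synthetic in-field data: features X and self-supervised pseudo-labels y.
# Here y is just a noisy function of the frozen features, for illustration;
# a real mission would derive it from the drone's own sensing.
X = rng.normal(size=(64, 16))
true_w = rng.normal(size=(8, 1))
y = np.tanh(X @ W1) @ true_w + 0.1 * rng.normal(size=(64, 1))

def forward(X):
    h = np.tanh(X @ W1)       # frozen feature extractor
    return h, h @ W2          # trainable linear regression head

_, pred = forward(X)
mae_before = np.abs(pred - y).mean()

lr = 0.1
for _ in range(200):          # gradient descent on MSE, head only
    h, pred = forward(X)
    grad_W2 = 2.0 / len(X) * h.T @ (pred - y)
    W2 -= lr * grad_W2        # only the head is updated; W1 never changes

_, pred = forward(X)
mae_after = np.abs(pred - y).mean()
print(f"MAE before: {mae_before:.3f}, after: {mae_after:.3f}")
```

Updating only the head keeps both the memory footprint (no stored activations or gradients for frozen layers) and the compute time low, which is the trade-off the fine-tuning axes above explore on the GAP9-class hardware.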