Datasets, Models, and Algorithms for Multi-Sensor, Multi-agent Autonomy Using AVstack
CoRR (2023)
Abstract
Recent advancements in assured autonomy have brought autonomous vehicles
(AVs) closer to fruition. Despite strong evidence that multi-sensor,
multi-agent (MSMA) systems can yield substantial improvements in the safety and
security of AVs, there exists no unified framework for developing and testing
representative MSMA configurations. Using the recently-released autonomy
platform, AVstack, this work proposes a new framework for datasets, models, and
algorithms in MSMA autonomy. Instead of releasing a single dataset, we deploy a
dataset generation pipeline capable of generating unlimited volumes of
ground-truth-labeled MSMA perception data. The data derive from cameras
(semantic segmentation, RGB, depth), LiDAR, and radar, and are sourced from
ground vehicles and, for the first time, infrastructure platforms. Combining the
pipelined generation of labeled MSMA data with AVstack's third-party
integrations defines a model training framework for training multi-sensor
perception for vehicle and infrastructure applications. We release the
framework and pretrained models as open source. Finally, the dataset and model training
pipelines culminate in insightful multi-agent case studies. While previous
works used specific ego-centric multi-agent designs, our framework considers
the collaborative autonomy space as a network of noisy, time-correlated
sensors. Within this environment, we quantify the impact of the network
topology and data fusion pipeline on an agent's situational awareness.
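The "network of noisy, time-correlated sensors" framing can be illustrated with a minimal sketch. The snippet below is not from the paper's codebase; it simply models each agent's measurement error as an AR(1) (first-order autoregressive) process, a standard model for time-correlated sensor noise, and compares a single agent's error against a naive average-fusion of several independent agents. All names (`ar1_noise`, `simulate`) and parameter values are hypothetical.

```python
import random

def ar1_noise(n, rho=0.8, sigma=1.0, seed=None):
    """Time-correlated noise via AR(1): e_t = rho * e_{t-1} + w_t.

    w_t is scaled so the stationary standard deviation is sigma.
    """
    rng = random.Random(seed)
    e, out = 0.0, []
    w_scale = sigma * (1 - rho ** 2) ** 0.5
    for _ in range(n):
        e = rho * e + rng.gauss(0.0, w_scale)
        out.append(e)
    return out

def simulate(num_agents=5, steps=200, rho=0.8):
    """Compare single-sensor vs. averaged multi-agent measurement error.

    Each agent observes the same target; its observation error equals its
    noise stream, so mean absolute noise is the mean absolute error.
    """
    noises = [ar1_noise(steps, rho=rho, seed=k) for k in range(num_agents)]
    single_err = sum(abs(noises[0][t]) for t in range(steps)) / steps
    fused_err = sum(
        abs(sum(noises[k][t] for k in range(num_agents)) / num_agents)
        for t in range(steps)
    ) / steps
    return single_err, fused_err
```

With independent agents, the fused error is substantially smaller than any single agent's; if the agents' noise streams were correlated with each other (e.g., shared environmental conditions), the benefit would shrink, which is one reason network topology and the fusion pipeline matter for situational awareness.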