GraspXL: Generating Grasping Motions for Diverse Objects at Scale
CoRR (2024)
Abstract
Human hands possess the dexterity to interact with diverse objects in varied
ways, such as grasping specific parts of an object or approaching it from a
desired direction. More importantly, humans can grasp objects of any shape
without object-specific skills.
object-specific skills. Recent works synthesize grasping motions following
single objectives such as a desired approach heading direction or a grasping
area. Moreover, they usually rely on expensive 3D hand-object data during
training and inference, which limits their capability to synthesize grasping
motions for unseen objects at scale. In this paper, we unify the generation of
hand-object grasping motions across multiple motion objectives, diverse object
shapes and dexterous hand morphologies in a policy learning framework GraspXL.
The objectives are composed of the graspable area, heading direction during
approach, wrist rotation, and hand position. Without requiring any 3D
hand-object interaction data, our policy trained with 58 objects can robustly
synthesize diverse grasping motions for more than 500k unseen objects with a
success rate of 82.2%. At the same time, the policy adheres to the specified
objectives, which enables the generation of diverse grasps per object.
that our framework can be deployed to different dexterous hands and works with
reconstructed or generated objects. We evaluate our method quantitatively and
qualitatively to demonstrate its efficacy. Our model and code will be made
available.
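The abstract names four components that make up a motion objective: the graspable area, the heading direction during approach, the wrist rotation, and the hand position. As a minimal sketch of how such an objective specification might look, the following hypothetical container bundles the four fields (all names and types here are illustrative assumptions, not from the paper's code):

```python
from dataclasses import dataclass
import math


@dataclass
class GraspObjective:
    """Hypothetical container for the four motion objectives named in the
    abstract. Field names and representations are illustrative assumptions."""
    graspable_area: set       # object vertex ids the hand should contact
    approach_direction: tuple # desired heading direction (unit 3-vector)
    wrist_rotation: float     # wrist roll about the approach axis (radians)
    hand_position: tuple      # target wrist position in the object frame

    def __post_init__(self):
        # Normalize the approach direction so a downstream reward could
        # compare it against the actual hand heading via a dot product.
        n = math.sqrt(sum(c * c for c in self.approach_direction))
        if n == 0.0:
            raise ValueError("approach_direction must be nonzero")
        self.approach_direction = tuple(c / n for c in self.approach_direction)


# Example: approach the marked region from above with a quarter-turn wrist roll.
obj = GraspObjective(
    graspable_area={12, 13, 47},
    approach_direction=(0.0, 0.0, 2.0),  # normalized to (0, 0, 1) on init
    wrist_rotation=0.5 * math.pi,
    hand_position=(0.0, 0.0, 0.15),
)
```

A specification like this would let one policy be conditioned on many different objectives for the same object, which is how per-object grasp diversity could be exposed to a caller.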