Reconstructing Hand-Held Objects in 3D
arXiv (2024)
Abstract
Objects manipulated by the hand (i.e., manipulanda) are particularly
challenging to reconstruct from in-the-wild RGB images or videos. Not only does
the hand occlude much of the object, but also the object is often only visible
in a small number of image pixels. At the same time, two strong anchors emerge
in this setting: (1) estimated 3D hands help disambiguate the location and
scale of the object, and (2) the set of manipulanda is small relative to all
possible objects. With these insights in mind, we present a scalable paradigm
for handheld object reconstruction that builds on recent breakthroughs in large
language/vision models and 3D object datasets. Our model, MCC-Hand-Object
(MCC-HO), jointly reconstructs hand and object geometry given a single RGB
image and inferred 3D hand as inputs. Subsequently, we use GPT-4(V) to retrieve
a 3D object model that matches the object in the image and rigidly align the
model to the network-inferred geometry; we call this alignment
Retrieval-Augmented Reconstruction (RAR). Experiments demonstrate that MCC-HO
achieves state-of-the-art performance on lab and Internet datasets, and we show
how RAR can be used to automatically obtain 3D labels for in-the-wild images of
hand-object interactions.
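The abstract states only that the retrieved 3D model is "rigidly aligned" to the network-inferred geometry; the paper does not specify the algorithm here. The sketch below shows one plausible way such an alignment step could be implemented: a standard ICP loop with an SVD-based Procrustes update between the retrieved object's points and MCC-HO's inferred object points (scale assumed already resolved by the estimated 3D hand). All function names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: rigid alignment of a retrieved object model to inferred geometry
# via ICP with a Procrustes (SVD) update. This is an assumed instantiation of the
# "rigidly align" step described in the abstract, not the paper's actual code.
import numpy as np
from scipy.spatial import cKDTree


def rigid_procrustes(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping src onto dst (both Nx3, row-matched)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


def icp_align(retrieved_pts: np.ndarray, inferred_pts: np.ndarray, iters: int = 50):
    """Iteratively align points sampled from the retrieved model to the inferred geometry."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(inferred_pts)
    src = retrieved_pts.copy()
    for _ in range(iters):
        _, idx = tree.query(src)  # nearest inferred point for each source point
        R, t = rigid_procrustes(src, inferred_pts[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total


if __name__ == "__main__":
    # Toy usage: recover a known rigid transform between two synthetic point sets.
    rng = np.random.default_rng(0)
    retrieved = rng.normal(size=(500, 3))
    angle = 0.3
    R_gt = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                     [np.sin(angle),  np.cos(angle), 0.0],
                     [0.0, 0.0, 1.0]])
    inferred = retrieved @ R_gt.T + np.array([0.1, -0.2, 0.05])
    R, t = icp_align(retrieved, inferred)
    print("recovered rotation:\n", R)
    print("recovered translation:", t)
```

In practice the alignment would be initialized from the network-inferred object location and scale (anchored by the estimated hand), which is what makes a purely rigid fit sufficient.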