Comparing Apples to Oranges: LLM-powered Multimodal Intention Prediction in an Object Categorization Task
arXiv (2024)

Abstract
Intention-based Human-Robot Interaction (HRI) systems allow robots to
perceive and interpret user actions to proactively interact with humans and
adapt to their behavior. Therefore, intention prediction is pivotal in creating
a natural interactive collaboration between humans and robots. In this paper,
we examine the use of Large Language Models (LLMs) for inferring human
intention during a collaborative object categorization task with a physical
robot. We introduce a hierarchical approach for interpreting user non-verbal
cues, such as hand gestures, body poses, and facial expressions, and combining
them with environment states and user verbal cues captured by an existing
Automatic Speech Recognition (ASR) system. Our evaluation demonstrates the
potential of LLMs to interpret non-verbal cues and to combine them with their
context-understanding capabilities and real-world knowledge to support
intention prediction during human-robot interaction.
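The fusion step described above can be illustrated with a minimal sketch: perceived non-verbal cues, environment state, and an ASR transcript are serialized into a single prompt, which an LLM then uses to infer the user's intention. All function and field names here are hypothetical illustrations, not the paper's actual implementation.

```python
# Hypothetical sketch of multimodal prompt construction for
# LLM-based intention prediction. Not the paper's actual code.

def build_intention_prompt(nonverbal, environment, transcript):
    """Serialize observed cues into one prompt asking an LLM to
    infer the user's intention in an object categorization task."""
    lines = [
        "You are assisting a robot in an object categorization task.",
        "Observed non-verbal cues:",
    ]
    for cue_type, value in nonverbal.items():
        lines.append(f"- {cue_type}: {value}")
    lines.append("Environment state:")
    for obj, state in environment.items():
        lines.append(f"- {obj}: {state}")
    lines.append(f'User said (ASR transcript): "{transcript}"')
    lines.append("What is the user's most likely intention? Answer briefly.")
    return "\n".join(lines)


prompt = build_intention_prompt(
    nonverbal={
        "hand gesture": "pointing at the apple",
        "facial expression": "neutral",
    },
    environment={
        "apple": "on table, uncategorized",
        "orange": "in fruit bin",
    },
    transcript="put that one with the others",
)
print(prompt)
```

The resulting string would then be sent to an LLM (e.g., via a chat-completion API); the hierarchical part of the paper's approach would correspond to interpreting each cue modality before this final fusion step.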