I Can See it in Your Eyes: Gaze towards a Robot as an Implicit Cue of Uncanniness and Task Performance in Long-term Interactions

Long-term human-robot interaction has been widely researched; however, methods for evaluating people's perception of and engagement with such systems are lacking. Questionnaires and interviews can introduce bias and disrupt the interaction itself. A recent study therefore investigates whether gaze patterns can serve as an implicit alternative.

Image credit: Makia Minich via Wikimedia (CC BY-SA 3.0)

In the experiment, participants wearing eye-tracking glasses took part in interaction sessions with a robot and self-reported their perception of it and their engagement with it. The results show that mutual gaze towards the robot was a negative predictor of uncanniness during a social chat.

During joint tasks involving tangible artifacts, the best predictor of engagement was gaze focused on the object of shared attention rather than on the robot itself. These findings suggest that gaze can serve as an implicit indicator of people's perception of robots and of their engagement.

Over the past years, extensive research has been dedicated to developing robust platforms and data-driven dialogue models to support long-term human-robot interactions. However, little is known about how people’s perception of robots and engagement with them develop over time and how these can be accurately assessed through implicit and continuous measurement techniques. In this paper, we investigate this by involving participants in three interaction sessions with multiple days of zero exposure in between. Each session consists of a joint task with a robot as well as two short social chats with it before and after the task. We measure participants’ gaze patterns with a wearable eye-tracker and gauge their perception of the robot and engagement with it and the joint task using questionnaires. Results disclose that aversion of gaze in a social chat is an indicator of a robot’s uncanniness and that the more people gaze at the robot in a joint task, the worse they perform. In contrast with most HRI literature, our results show that gaze towards an object of shared attention, rather than gaze towards a robotic partner, is the most meaningful predictor of engagement in a joint task. Furthermore, the analyses of long-term gaze patterns disclose that people’s mutual gaze in a social chat develops congruently with their perceptions of the robot over time. These are key findings for the HRI community as they entail that gaze behavior can be used as an implicit measure of people’s perception of robots in a social chat and of their engagement and task performance in a joint task.
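As a rough illustration of how a gaze-based predictor analysis of this kind might look, the minimal sketch below fits an ordinary least-squares regression of self-reported uncanniness on the proportion of mutual gaze during a social chat. The variable names and toy data are hypothetical and do not come from the paper; a negative slope would be consistent with the reported finding that mutual gaze negatively predicts uncanniness.

```python
# Hypothetical sketch: does the proportion of time spent in mutual gaze with
# the robot predict self-reported uncanniness? Toy data for illustration only;
# this is not the authors' dataset or analysis pipeline.
import numpy as np
from scipy import stats

# Fraction of a social chat spent gazing at the robot's face (per participant).
mutual_gaze_ratio = np.array([0.12, 0.25, 0.31, 0.44, 0.52, 0.63, 0.71, 0.80])
# Self-reported uncanniness on a 1-5 scale (per participant).
uncanniness_score = np.array([4.1, 3.8, 3.5, 3.1, 2.9, 2.4, 2.2, 1.9])

# Ordinary least-squares fit: a negative slope with a small p-value would be
# consistent with mutual gaze acting as a negative predictor of uncanniness.
result = stats.linregress(mutual_gaze_ratio, uncanniness_score)
print(f"slope={result.slope:.2f}, r={result.rvalue:.2f}, p={result.pvalue:.4f}")
```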

Link: https://arxiv.org/abs/2101.05028
