
When Shall I Be Empathetic? The Utility of Empathetic Parameter Estimation in Multi-Agent Interactions

Robots interact with humans when performing tasks in daily assistance, healthcare, or defense. These interactions can be modeled as dynamic games with incomplete information, since robots cannot fully observe human intentions.

One approach to this problem is to have each agent maintain a common belief about all agents' parameters. If agents assume that their own parameters are known to the others, they are considered non-empathetic. If, instead, a possible inconsistency between the common belief and the true parameters is allowed for, the agents are empathetic.
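The distinction above can be sketched as two Bayesian belief updates. This is a hypothetical illustration, not the paper's algorithm: parameters, likelihood values, and function names are all made up, with each agent's private reward parameter taken to be 0 (non-aggressive) or 1 (aggressive).

```python
# Hypothetical sketch: non-empathetic vs. empathetic parameter estimation
# between two agents. All likelihood numbers are illustrative only.

# P(other agent yields | other's parameter, my parameter) -- made-up values.
YIELD_PROB = {(0, 0): 0.7, (0, 1): 0.9, (1, 0): 0.3, (1, 1): 0.5}

def likelihood(action, theta_actor, theta_other):
    p = YIELD_PROB[(theta_actor, theta_other)]
    return p if action == "yield" else 1.0 - p

def non_empathetic_update(belief_other, action, my_theta):
    """Non-empathetic: I assume the other agent knows my true parameter,
    so I only maintain a belief over the other agent's parameter."""
    post = [belief_other[t] * likelihood(action, t, my_theta) for t in (0, 1)]
    z = sum(post)
    return [p / z for p in post]

def empathetic_update(common_belief, action):
    """Empathetic: I maintain a common belief over BOTH parameters
    (mine indexed by i, the other's by j), acknowledging that the other
    agent is also uncertain about my parameter."""
    post = [[common_belief[i][j] * likelihood(action, j, i) for j in (0, 1)]
            for i in (0, 1)]
    z = sum(sum(row) for row in post)
    return [[p / z for p in row] for row in post]
```

For example, after observing the other car "go" instead of yielding, both updates shift probability mass toward the aggressive hypothesis, but only the empathetic update also keeps tracking what the other agent may believe about me.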

Image credit: Franklin Heijnen via Wikimedia (CC BY-SA 2.0)

A recent study shows that the empathetic approach leads to more effective parameter estimation. The authors explore a case study of two cars interacting at an uncontrolled intersection. When the common belief is wrong, empathy leads to higher social values; when the belief is consistent with the true parameters, social values are still preserved.

Human-robot interactions (HRI) can be modeled as dynamic or differential games with incomplete information, where each agent holds private reward parameters. Due to the open challenge in finding perfect Bayesian equilibria of such games, existing studies often consider approximated solutions composed of parameter estimation and motion planning steps, in order to decouple the belief and physical dynamics. In parameter estimation, current approaches often assume that the reward parameters of the robot are known by the humans. We argue that by falsely conditioning on this assumption, the robot performs non-empathetic estimation of the humans’ parameters, leading to undesirable values even in the simplest interactions. We test this argument by studying a two-vehicle uncontrolled intersection case with short reaction time. Results show that when both agents are unknowingly aggressive (or non-aggressive), empathy leads to more effective parameter estimation and higher reward values, suggesting that empathy is necessary when the true parameters of agents mismatch with their common belief. The proposed estimation and planning algorithms are therefore more robust than the existing approaches, by fully acknowledging the nature of information asymmetry in HRI. Lastly, we introduce value approximation techniques for real-time execution of the proposed algorithms.
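The decoupling the abstract describes, alternating a parameter-estimation step with a motion-planning step, can be sketched as follows. Everything here is a toy placeholder rather than the paper's algorithms: the likelihood values and the braking control law are invented for illustration.

```python
# Hypothetical sketch of the decoupled "estimate, then plan" approximation:
# at each step the robot first updates its belief about the human driver's
# reward parameter, then plans against the updated belief.

def update_belief(p_aggressive, other_braked):
    """Toy Bayes update: braking is (illustratively) more likely to come
    from a non-aggressive driver."""
    like_aggr = 0.2 if other_braked else 0.8
    like_non = 1.0 - like_aggr
    post = p_aggressive * like_aggr
    return post / (post + (1.0 - p_aggressive) * like_non)

def plan(p_aggressive):
    """Toy planner: brake harder the more likely the other driver is
    believed to be aggressive (illustrative control law)."""
    return -2.0 * p_aggressive  # commanded acceleration

belief = 0.5  # prior probability that the other driver is aggressive
for other_braked in (False, False, True):  # observed actions over time
    belief = update_belief(belief, other_braked)
    accel = plan(belief)
```

Decoupling belief dynamics from physical dynamics in this way avoids solving for a perfect Bayesian equilibrium at every step, which is what makes real-time execution feasible.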

Link: https://arxiv.org/abs/2011.02047
