Mimicking human facial expressions can encourage stronger engagement in human-robot interaction. Most current methods rely on a fixed set of pre-programmed facial expressions from which the robot selects one. Such approaches fall short in real situations, where human expressions vary widely.
A recent paper on arXiv.org proposes a general learning-based framework that learns facial mimicry from visual observations alone, without relying on human supervision.
Emotions. Image credit: RyanMcGuire via Pixabay, CC0 Public Domain
First, a generative model synthesizes a robot self-image with the same facial expression as the observed human face. Then, an inverse network maps this self-image to the set of motor commands that reproduce it. The authors built an animatronic robotic face with soft skin and flexible control mechanisms to implement the framework. The method generates appropriate facial expressions when presented with diverse human subjects, enables real-time planning, and opens new opportunities for practical applications.
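The two-stage pipeline described above can be sketched as follows. This is a minimal illustration of the data flow only, not the paper's actual models: the feature sizes, motor count, and the plain linear maps standing in for the deep generative and inverse networks are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper).
FEAT_DIM = 64    # size of a flattened face-image feature vector
N_MOTORS = 12    # number of actuators driving the soft skin

# Stand-ins for the two learned networks: in the paper these are deep
# models; simple linear maps are used here only to show the pipeline.
W_gen = rng.normal(size=(FEAT_DIM, FEAT_DIM))   # "generative model" weights
W_inv = rng.normal(size=(N_MOTORS, FEAT_DIM))   # "inverse model" weights

def generative_model(human_feat):
    """Map a human expression feature to a robot self-image feature."""
    return W_gen @ human_feat

def inverse_model(robot_feat):
    """Map a robot self-image feature to a motor command vector."""
    return np.tanh(W_inv @ robot_feat)  # commands bounded to [-1, 1]

def mimic(human_feat):
    """Full pipeline: human expression -> self-image -> motor commands."""
    self_image_feat = generative_model(human_feat)
    return inverse_model(self_image_feat)

commands = mimic(rng.normal(size=FEAT_DIM))
print(commands.shape)  # one command per actuator: (12,)
```

Splitting the problem this way means neither stage needs to reason about the other's domain: the generative model works purely in image space, and the inverse model purely maps images to actuation.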
Ability to generate intelligent and generalizable facial expressions is essential for building human-like social robots. At present, progress in this field is hindered by the fact that each facial expression needs to be programmed by humans. In order to adapt robot behavior in real time to different situations that arise when interacting with human subjects, robots need to be able to train themselves without requiring human labels, as well as make fast action decisions and generalize the acquired knowledge to diverse and new contexts. We addressed this challenge by designing a physical animatronic robotic face with soft skin and by developing a vision-based self-supervised learning framework for facial mimicry. Our algorithm does not require any knowledge of the robot’s kinematic model, camera calibration or predefined expression set. By decomposing the learning process into a generative model and an inverse model, our framework can be trained using a single motor babbling dataset. Comprehensive evaluations show that our method enables accurate and diverse face mimicry across diverse human subjects. The project website is at this http URL
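The abstract notes that the framework can be trained from a single motor babbling dataset. A hedged sketch of what collecting such a dataset might look like is below; the random projection standing in for "execute the command, photograph the face, embed the image" is a placeholder assumption, since the real signal comes from the physical robot and its camera.

```python
import numpy as np

rng = np.random.default_rng(1)
N_MOTORS = 12      # illustrative actuator count
FEAT_DIM = 64      # illustrative image-feature size

# Fixed random map standing in for the robot + camera + feature
# extractor; only the (command, image) pairing structure matters here.
P = rng.normal(size=(FEAT_DIM, N_MOTORS))

def babble_dataset(n_samples):
    """Sample random motor commands and record the resulting self-images.

    Each (command, image) pair can supervise both models: the inverse
    model learns image -> command, and the generative model learns to
    synthesize the self-image, with no human labels involved.
    """
    commands = rng.uniform(-1.0, 1.0, size=(n_samples, N_MOTORS))
    images = commands @ P.T          # observed self-image features
    return commands, images

cmds, imgs = babble_dataset(1000)
print(cmds.shape, imgs.shape)   # (1000, 12) (1000, 64)
```

Because the robot generates its own supervision by observing the consequences of its random actions, no kinematic model, camera calibration, or predefined expression set is required, matching the claim in the abstract.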
Research paper: Chen, B., Hu, Y., Li, L., Cummings, S., and Lipson, H., “Smile Like You Mean It: Driving Animatronic Robotic Face with Learned Models”, 2021. Link: https://arxiv.org/abs/2105.12724