Continual Learning from Synthetic Data for a Humanoid Exercise Robot

Despite the obvious benefits of physical exercise, the help of a fitness professional is not always available. In such cases, a humanoid robot could correct improper technique and engage with its user.

A recent paper proposes employing the humanoid robot Pepper as a motivator and source of feedback.

A robot. Image credit: Alex Knight via Unsplash (Unsplash licence)

The robot first learns an exercise's poses and movement patterns. It then detects the user's pose and compares it with the exercise recalled from memory. A novel variant of the Grow-When-Required (GWR) network is developed so that the robot can adapt to many body shapes.
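The "grow when required" idea can be illustrated with a minimal sketch of the classic GWR update rule (the paper extends this with recurrent connections, episodic memory and subnodes, none of which are shown here). The class name, thresholds and learning rate below are illustrative assumptions, not the paper's values:

```python
import numpy as np

class MiniGWR:
    """Minimal sketch of a Grow-When-Required network (illustrative only)."""

    def __init__(self, dim, activity_thresh=0.8, hab_thresh=0.3, eps_b=0.1):
        # start with two random prototype nodes
        self.weights = [np.random.rand(dim), np.random.rand(dim)]
        self.habituation = [1.0, 1.0]  # 1.0 = fully "fresh" node
        self.activity_thresh = activity_thresh
        self.hab_thresh = hab_thresh
        self.eps_b = eps_b

    def step(self, x):
        # find the best-matching node for the input vector x
        dists = [np.linalg.norm(x - w) for w in self.weights]
        b = int(np.argmin(dists))
        activity = np.exp(-dists[b])
        if activity < self.activity_thresh and self.habituation[b] < self.hab_thresh:
            # input is poorly represented AND the winner is already
            # well trained: the network "grows when required" by
            # inserting a new node between input and winner
            self.weights.append((x + self.weights[b]) / 2.0)
            self.habituation.append(1.0)
        else:
            # otherwise adapt the winner toward the input
            self.weights[b] += self.eps_b * self.habituation[b] * (x - self.weights[b])
            self.habituation[b] *= 0.9  # winner habituates over time
        return b
```

Because nodes are only inserted where existing, already-trained nodes represent the input poorly, the network can keep growing as new body shapes appear without overwriting what it learned before.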

Engaging and easy-to-understand feedback is given to the user. The robot’s tablet mirrors real-time video from a camera, and incorrect joint positions are marked in red. Experiments with virtual avatars showed that the suggested approach outperforms other GWR variants and is robust to perturbations such as rotation and translation.
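The joint-marking feedback can be sketched as a simple per-joint comparison between the detected pose and the reference pose recalled from memory; any joint deviating beyond a tolerance would be drawn in red on the tablet. The function name, joint set and tolerance below are hypothetical, not taken from the paper:

```python
import numpy as np

def flag_wrong_joints(user_pose, reference_pose, joint_names, tol=0.15):
    """Return names of joints deviating more than `tol` from the reference.

    Both poses are (N, 3) arrays of 3D joint positions in metres.
    """
    deviations = np.linalg.norm(user_pose - reference_pose, axis=1)
    return [name for name, d in zip(joint_names, deviations) if d > tol]

joints = ["head", "left_knee", "right_knee"]
reference = np.array([[0.0, 1.6, 0.0], [-0.2, 0.5, 0.0], [0.2, 0.5, 0.0]])
user = np.array([[0.0, 1.6, 0.0], [-0.2, 0.5, 0.0], [0.2, 0.2, 0.3]])
print(flag_wrong_joints(user, reference, joints))  # ['right_knee']
```

In the actual system the reference comes from the GWR's prediction for that frame, so the same comparison also covers the velocity of the motion, not just static poses.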

To detect and correct physical exercises, a Grow-When-Required (GWR) network with recurrent connections, episodic memory and a novel subnode mechanism is developed to learn the spatiotemporal relationships of body movements and poses. Once an exercise is performed, the pose and movement information for each frame is stored in the GWR. For every frame, the current pose and motion pair is compared against the predicted output of the GWR, allowing feedback not only on the pose but also on the velocity of the motion. In a practical scenario, a physical exercise is performed by an expert, such as a physiotherapist, and then used as a reference for a humanoid robot like Pepper to give feedback on a patient’s execution of the same exercise.

This approach, however, comes with two challenges. First, the user’s distance from the humanoid robot and position in the robot’s camera view must also be considered by the GWR, requiring robustness against the user’s positioning in the robot’s field of view. Second, since both pose and motion depend on the body measurements of the original performer, the expert’s exercise cannot easily be used as a reference.

This paper tackles the first challenge by designing an architecture that tolerates translation and rotation relative to the center of the field of view. For the second challenge, we allow the GWR to grow online on incremental data. For evaluation, we created a novel exercise dataset with virtual avatars, called the Virtual-Squat dataset. Overall, we claim that our novel GWR-based architecture can use a learned exercise reference for different body variations through continual online learning, while preventing catastrophic forgetting, enabling an engaging long-term human-robot interaction with a humanoid robot.
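One common way to obtain the kind of translation and rotation tolerance the first challenge requires is to normalize each detected skeleton before it reaches the network: express all joints relative to a body centre and rotate the skeleton into a canonical orientation. This is a sketch under that assumption, not necessarily the paper's exact mechanism (the paper builds the tolerance into the architecture itself); the function and index names are hypothetical:

```python
import numpy as np

def normalize_pose(joints_xy, hip_center_idx, left_hip_idx, right_hip_idx):
    """Normalize a 2D skeleton: (N, 2) array of joint positions.

    Removes translation by centering on the hip, and removes in-plane
    rotation by aligning the left-to-right hip vector with the x-axis.
    """
    p = joints_xy - joints_xy[hip_center_idx]   # remove translation
    hip_vec = p[right_hip_idx] - p[left_hip_idx]
    angle = np.arctan2(hip_vec[1], hip_vec[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])           # undo in-plane rotation
    return p @ rot.T
```

After this step, the same exercise performed anywhere in the camera's field of view, at any in-plane orientation, maps to the same normalized coordinates, so the comparison against the stored reference is unaffected by where the user stands.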

Research paper: Duczek, N., Kerzel, M., and Wermter, S., “Continual Learning from Synthetic Data for a Humanoid Exercise Robot”, 2021.

