Sample-efficient Reinforcement Learning in Robotic Table Tennis

Recent advances in reinforcement learning make it possible to control real robots and enable them to perform tasks that were previously out of reach. A recent paper presents an industrial robot taught to play table tennis.

In order to perform a stroke, the robot needs to know the position, velocity, and orientation of its bat at the moment of impact. Instead of working from raw camera images to determine the position of the ball, a tracking system predicts the ball's trajectory up to the moment it meets the bat. The bat's velocity and orientation are then learned with reinforcement learning, using a deterministic actor-critic algorithm.
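The idea of predicting the ball state at the moment of impact can be illustrated with a toy ballistic extrapolation. This is a minimal sketch under simplifying assumptions (no air drag, no spin); the function name and interface are illustrative, not the paper's actual tracking system:

```python
# Toy ballistic extrapolation of a tracked ball state to a fixed
# hitting plane. Drag and spin are ignored; in the real system the
# tracker fits the observed trajectory instead.

G = 9.81  # gravitational acceleration (m/s^2)

def predict_ball_at_plane(pos, vel, x_hit):
    """Predict the ball's position and velocity when it crosses x = x_hit.

    pos, vel: (x, y, z) position in m and velocity in m/s from the tracker.
    Returns the predicted (position, velocity) at the hitting plane.
    """
    px, py, pz = pos
    vx, vy, vz = vel
    if vx == 0.0:
        raise ValueError("ball is not moving toward the hitting plane")
    t = (x_hit - px) / vx               # time of flight to the plane
    y = py + vy * t
    z = pz + vz * t - 0.5 * G * t * t   # gravity acts along z
    return (x_hit, y, z), (vx, vy, vz - G * t)
```

Given such a prediction, the learner only has to choose the racket state for that single predicted impact, rather than reason about the full flight of the ball.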


The approach was tested both in simulation and on the real robot, including a very noisy environment. The robot learned to return balls successfully within fewer than 200 training episodes. It outperforms previous table tennis robots, but further improvement is needed before it can rally with human players.

Reinforcement learning (RL) has recently shown impressive success in various computer games and simulations. Most of these successes are based on numerous episodes to be learned from. For typical robotic applications, however, the number of feasible attempts is very limited. In this paper we present a sample-efficient RL algorithm applied to the example of a table tennis robot. In table tennis every stroke is different, of varying placement, speed and spin. Therefore, an accurate return has to be found depending on a high-dimensional continuous state space. To make learning in few trials possible, the method is embedded into our robot system. In this way we can use a one-step environment. The state space depends on the ball at hitting time (position, velocity, spin) and the action is the racket state (orientation, velocity) at hitting. An actor-critic based deterministic policy gradient algorithm was developed for accelerated learning. Our approach shows competitive performance both in simulation and on the real robot in different challenging scenarios. Accurate results are always obtained within fewer than 200 episodes of training. A demonstration video is provided as supplementary material.
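The one-step formulation in the abstract — state is the ball at hitting time, action is the racket state, and the reward arrives immediately after the stroke — can be sketched with a minimal deterministic actor-critic toy. The scalar state and action, the quadratic reward, and all hyperparameters below are illustrative assumptions, not the authors' actual setup:

```python
import random

# Toy one-step actor-critic in the spirit of the paper: every episode is
# a single state -> action -> reward interaction, so the critic regresses
# directly on the immediate reward (no bootstrapping is needed).

TRUE_SLOPE, TRUE_BIAS = 0.8, 0.1  # hidden target, unknown to the learner

def reward(s, a):
    # Reward peaks when the action matches a hidden linear function
    # of the state (stand-in for "an accurate return").
    return -(a - (TRUE_SLOPE * s + TRUE_BIAS)) ** 2

random.seed(0)
theta1, theta0 = 0.0, 0.0  # deterministic linear actor: a = theta1*s + theta0
w1, w0 = 0.0, 0.0          # critic: Q(s, a) = -(a - (w1*s + w0))**2
LR_CRITIC, LR_ACTOR, SIGMA = 0.02, 0.05, 0.3

for episode in range(10000):
    s = random.random()                    # ball state at hitting time
    a_det = theta1 * s + theta0            # deterministic action
    a = a_det + random.gauss(0.0, SIGMA)   # Gaussian exploration noise
    r = reward(s, a)

    # Critic update: gradient step on the squared error (q - r)^2.
    m = w1 * s + w0
    q = -(a - m) ** 2
    grad_m = 2.0 * (q - r) * 2.0 * (a - m)  # d/dm of (q - r)^2
    w1 -= LR_CRITIC * grad_m * s
    w0 -= LR_CRITIC * grad_m

    # Actor update: deterministic policy gradient, follow dQ/da at a_det.
    dq_da = -2.0 * (a_det - (w1 * s + w0))
    theta1 += LR_ACTOR * dq_da * s
    theta0 += LR_ACTOR * dq_da
```

Because the environment is one-step, the critic's target is simply the observed reward, which is a large part of what makes learning in so few real-world trials feasible.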