From Motor Control to Team Play in Simulated Humanoid Football

If you want to build a robotic football team, you first need to simulate it.

Football poses a great challenge for the robotics community. The game requires decisions at multiple levels of abstraction: from fast, low-level control of the body to scoring goals as a team. A recent paper by DeepMind proposes a simulated football environment that focuses on this problem of movement coordination.

Overview of the simulated humanoid football environment. Image credit: Siqi Liu et al., arXiv:2105.12196

The environment contains teams of fully articulated humanoid football players moving in a realistically simulated physics environment. Training follows a three-stage procedure in which learning progresses from imitation learning for low-level movement, through reinforcement learning for mid-level football skills, to multi-agent reinforcement learning for full game play.
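The staged curriculum can be pictured as a sequence of training phases that each build on the skills acquired in the previous one. The sketch below is purely illustrative: the `Agent` class, the stage functions, and the skill names are hypothetical stand-ins, not DeepMind's actual training code or API.

```python
# Illustrative sketch of a three-stage training curriculum, loosely modelled
# on the paper's description. All names here are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Agent:
    """Toy agent that records which skills it has acquired at each stage."""
    skills: list = field(default_factory=list)


def stage_1_imitation(agent, mocap_clips):
    # Stage 1: low-level motor control learned by imitating human
    # motion-capture data (e.g. running and turning).
    agent.skills.append(f"locomotion ({len(mocap_clips)} mocap clips)")


def stage_2_drills(agent):
    # Stage 2: mid-level football skills acquired with reinforcement
    # learning on isolated drills such as dribbling and shooting.
    agent.skills.extend(["dribbling", "shooting"])


def stage_3_team_play(team):
    # Stage 3: multi-agent reinforcement learning in full matches,
    # where agents develop awareness of teammates and opponents.
    for agent in team:
        agent.skills.append("team play")


def train_team(n_players=2, mocap_clips=("run", "turn")):
    """Run each agent through the curriculum, then train them together."""
    team = [Agent() for _ in range(n_players)]
    for agent in team:
        stage_1_imitation(agent, mocap_clips)
        stage_2_drills(agent)
    stage_3_team_play(team)
    return team
```

The key design idea this mirrors is that each stage reuses the behaviour learned earlier as a building block, so the final multi-agent stage does not have to rediscover basic locomotion from scratch.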

The study demonstrates that artificial agents can learn to coordinate complex movements, interact with objects, and achieve long-horizon goals in cooperation with others. The underlying principles are applicable to other domains, including other team sports and collaborative work scenarios.

From the paper's abstract:

Intelligent behaviour in the physical world exhibits structure at multiple spatial and temporal scales. Although movements are ultimately executed at the level of instantaneous muscle tensions or joint torques, they must be selected to serve goals defined on much longer timescales, and in terms of relations that extend far beyond the body itself, ultimately involving coordination with other agents. Recent research in artificial intelligence has shown the promise of learning-based approaches to the respective problems of complex movement, longer-term planning and multi-agent coordination. However, there is limited research aimed at their integration. We study this problem by training teams of physically simulated humanoid avatars to play football in a realistic virtual environment. We develop a method that combines imitation learning, single- and multi-agent reinforcement learning and population-based training, and makes use of transferable representations of behaviour for decision making at different levels of abstraction. In a sequence of stages, players first learn to control a fully articulated body to perform realistic, human-like movements such as running and turning; they then acquire mid-level football skills such as dribbling and shooting; finally, they develop awareness of others and play as a team, bridging the gap between low-level motor control at a timescale of milliseconds, and coordinated goal-directed behaviour as a team at the timescale of tens of seconds. We investigate the emergence of behaviours at different levels of abstraction, as well as the representations that underlie these behaviours using several analysis techniques, including statistics from real-world sports analytics. Our work constitutes a complete demonstration of integrated decision-making at multiple scales in a physically embodied multi-agent setting. See project video at

Research paper: Liu, S., et al., “From Motor Control to Team Play in Simulated Humanoid Football”, arXiv:2105.12196, 2021. Link:
