A recent study proposes a framework that enables practitioners to compose goal-directed mobile robot navigation systems from simple learned behaviours.
Autonomous robot navigation algorithms have many existing and emerging industrial applications, from factory or healthcare robots to self-driving cars.
These algorithms enable robots to drive to any given goal position without human assistance, using their sensors (e.g. cameras, laser rangefinders) to observe the world and make decisions.
Image credit: Neurotechnology
Current commercial-grade autonomous navigation systems usually rely on traditional engineering-based approaches. They geometrically model the surrounding environment, estimate where the robot is within it, and use planning algorithms to compute trajectories between the current and a given target location. These systems, however, must be programmed explicitly to handle different situations, so scaling them can prove expensive.
On the other hand, research on learning-based approaches has also been on the rise in both academia and industry. Their advantage is that they learn from data, and collecting data is far cheaper than developing software.
One such approach is called imitation learning. The main idea of imitation learning is to demonstrate the required behaviour by example (e.g. manually driving a robot), and then to learn a model of the association between what the robot senses and how it should act. Afterwards, the robot is expected to repeat the demonstrated behaviour using this learned association model, which translates sensor readings into motor commands.
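To make this idea concrete, here is a minimal sketch of the learning step. The sensor features, motor commands, and the linear least-squares model are all illustrative assumptions, not the method from the study; real imitation learning systems typically use far richer models (e.g. neural networks) and raw camera input.

```python
import numpy as np

# Hypothetical demonstration data: each row pairs a sensor reading
# (two made-up range features) with the motor command the human
# driver issued at that moment (linear velocity, angular velocity).
observations = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]])
commands = np.array([[0.1, 0.4], [0.3, 0.0], [0.1, -0.4]])

# Fit a linear map W from observations to commands by least squares:
# the simplest possible "association model" learned from demonstrations.
W, *_ = np.linalg.lstsq(observations, commands, rcond=None)

def policy(sensor_reading):
    """Translate a sensor reading into a motor command."""
    return np.asarray(sensor_reading) @ W

# At deployment time the robot replays the demonstrated behaviour
# by applying the learned policy to each new sensor reading.
cmd = policy([0.5, 0.5])
```

Note that such a policy only reproduces the one behaviour it was trained on; nothing in it encodes a goal, which is exactly the limitation discussed next.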
However, imitation learning algorithms are usually not goal-directed, since it is practically infeasible to demonstrate how to reach each and every possible goal.
Topological Navigation Graph framework
A recent study conducted by robotics researchers from Neurotechnology suggests a solution to this problem. It proposes a framework which organises simple trajectory-following behaviours, learned by imitation learning algorithms, into a special structure called a topological navigation graph (TNG). Each such behaviour corresponds to a trajectory in the environment. Given a visually specified goal, TNG computes a sequence of trajectories towards the goal and provides a mechanism for deciding when to switch between the corresponding trajectory-following behaviours, so that the goal is progressively reached.
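The planning step above can be sketched as a graph search. In this toy illustration (the node names and graph are invented for the example, and the paper's actual switching mechanism is visual, not symbolic), nodes are learned trajectory-following behaviours and a directed edge means the robot can switch from one behaviour to another where their trajectories meet:

```python
from collections import deque

# Hypothetical TNG: each node is a learned trajectory-following
# behaviour; an edge u -> v means the robot can switch from u to v
# at a point where the two trajectories intersect.
graph = {
    "loop_A": ["corridor"],
    "corridor": ["loop_A", "loop_B"],
    "loop_B": ["corridor", "dock"],
    "dock": [],
}

def plan(graph, start, goal):
    """Breadth-first search for a shortest sequence of behaviours."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path  # ordered behaviours to execute
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable from start

sequence = plan(graph, "loop_A", "dock")
# The robot would execute each behaviour in turn, switching to the
# next one at the intersection linking consecutive trajectories.
```

The essential point is that goal-directedness comes from the graph, while each individual behaviour remains a simple, non-goal-directed imitation policy.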
Hence, TNG allows practitioners to utilise existing non-goal-directed imitation learning methods for goal-directed navigation in mobile robotics.
Experiments with both real and simulated robots show that the TNG framework can compose the aforementioned behaviours into a goal-directed navigation system capable of reaching visually specified goals.