World models represent an agent's knowledge about its surroundings, allowing the agent to predict the consequences of proposed actions by 'imagining' future observations. However, world models that generate high-dimensional visual observations have so far been restricted to relatively simple environments.
An example of a robotic platform that could be used for indoor navigation. Image credit: Neurotechnology
A recent paper introduces Pathdreamer, a generic visual world model for agents navigating in indoor environments.
Given one or more visual observations of an indoor scene, the model synthesizes high-resolution visual observations along a specified trajectory of future viewpoints. It operates in two stages: first, depth maps and semantic segmentations are generated for the target viewpoint; then, the segmentations are rendered as realistic RGB images.
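The two-stage pipeline described above can be sketched schematically as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the `structure_generator` and `image_generator` functions are simple stand-ins for Pathdreamer's learned networks, and the autoregressive rollout loop shows only the overall control flow.

```python
import numpy as np

def structure_generator(prev_seg, prev_depth, target_pose, rng):
    """Stage 1 (schematic stand-in): predict semantic segmentation and
    depth at the target viewpoint. The real model reprojects observed
    geometry and hallucinates unseen regions; here we just perturb the
    previous depth map to keep the sketch runnable."""
    noise = rng.normal(scale=0.01, size=prev_depth.shape)
    pred_depth = np.clip(prev_depth + noise, 0.1, None)
    pred_seg = prev_seg.copy()  # placeholder for predicted segmentation
    return pred_seg, pred_depth

def image_generator(pred_seg, pred_depth):
    """Stage 2 (schematic stand-in): render an RGB image from the
    predicted structure. A toy colouring replaces the learned renderer."""
    h, w = pred_depth.shape
    rgb = np.zeros((h, w, 3), dtype=np.float32)
    rgb[..., 0] = pred_seg / max(pred_seg.max(), 1)  # shade by class id
    rgb[..., 1] = 1.0 / (1.0 + pred_depth)           # shade by inverse depth
    return rgb

def rollout(seg, depth, trajectory, seed=0):
    """Autoregressively 'dream' one observation per pose in a trajectory,
    feeding each predicted structure back in as the next input."""
    rng = np.random.default_rng(seed)
    frames = []
    for pose in trajectory:
        seg, depth = structure_generator(seg, depth, pose, rng)
        frames.append(image_generator(seg, depth))
    return frames
```

The key design point the sketch preserves is that geometry and semantics are predicted before appearance, so the (easier) structure prediction constrains the (harder) photorealistic rendering.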
The model can generate plausible views for unseen scenes under large viewpoint changes. It also shows strong promise in improving performance on downstream tasks such as Vision-and-Language Navigation (VLN).
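One way such a world model can help a navigation agent is look-ahead planning: imagine observations along each candidate path and follow the path whose imagined observations best match the instruction. The sketch below is a hypothetical illustration of that idea; `imagined_features` is a stand-in for running the world model and encoding its outputs, and the cosine-similarity scorer is an assumption, not the paper's VLN agent.

```python
import numpy as np

def imagined_features(trajectory, rng):
    """Hypothetical stand-in: encode the world model's imagined
    observations along a candidate path into a feature vector.
    A real agent would run the world model here."""
    return rng.normal(size=16)

def plan_with_lookahead(candidates, instruction_vec, seed=0):
    """Score each candidate path by cosine similarity between its
    imagined-observation features and the instruction embedding,
    then return the index of the best-scoring path."""
    rng = np.random.default_rng(seed)
    best_idx, best_score = -1, -np.inf
    for i, traj in enumerate(candidates):
        feats = imagined_features(traj, rng)
        score = feats @ instruction_vec / (
            np.linalg.norm(feats) * np.linalg.norm(instruction_vec))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

Because regions of high uncertainty admit multiple plausible futures, an agent could also sample several rollouts per candidate path (different seeds) and aggregate their scores rather than trusting a single imagined outcome.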
People navigating in unfamiliar buildings take advantage of myriad visual, spatial and semantic cues to efficiently achieve their navigation goals. Towards equipping computational agents with similar capabilities, we introduce Pathdreamer, a visual world model for agents navigating in novel indoor environments. Given one or more previous visual observations, Pathdreamer generates plausible high-resolution 360° visual observations (RGB, semantic segmentation and depth) for viewpoints that have not been visited, in buildings not seen during training. In regions of high uncertainty (e.g. predicting around corners, imagining the contents of an unseen room), Pathdreamer can predict diverse scenes, allowing an agent to sample multiple realistic outcomes for a given trajectory. We demonstrate that Pathdreamer encodes useful and accessible visual, spatial and semantic knowledge about human environments by using it in the downstream task of Vision-and-Language Navigation (VLN). Specifically, we show that planning ahead with Pathdreamer brings about half the benefit of looking ahead at actual observations from unobserved parts of the environment. We hope that Pathdreamer will help unlock model-based approaches to challenging embodied navigation tasks such as navigating to specified objects and VLN.
Research paper: Koh, J. Y., Lee, H., Yang, Y., Baldridge, J., and Anderson, P., “Pathdreamer: A World Model for Indoor Navigation”, 2021. Link: https://arxiv.org/abs/2105.08756