Capturing a long take with a moving camera is an indispensable task in journalism and documentary filmmaking; nevertheless, it is difficult to perform because the operator has to both adapt to obstacles on the set and fulfill an artistic vision. A recent paper on arXiv.org proposes an interactive digital assistant for filming long takes.
Image credit: pixnio.com, CC0
It consists of software and 3D-printed hardware that supplement a camera gimbal connected to a backpack computer. The user defines their intentions through a dedicated GUI during pre-production. On set, the operator holds the camera gimbal in one hand and converses with the controller via a microphone. The controller recognizes spoken utterances, relates them to the script, and provides audio feedback. A visual tracking system detects and tracks actors in real time. The system was positively evaluated by filmmakers who had hands-on experience with it.
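The matching of recognized speech to the pre-made script could be sketched as follows. This is a minimal illustration, not the authors' published code; the cue list and the function `match_utterance` are hypothetical, and the fuzzy string matching stands in for whatever recognizer the real system uses.

```python
# Hypothetical sketch: match a recognized utterance to a scripted cue.
# The cues, threshold, and matching strategy are illustrative assumptions.
from difflib import SequenceMatcher

SCRIPT_CUES = ["frame both actors", "follow actor one", "hold wide shot"]

def match_utterance(utterance, cues=SCRIPT_CUES, threshold=0.6):
    """Return the scripted cue best matching a recognized utterance,
    or None if nothing is similar enough to trigger a transition."""
    best, best_score = None, 0.0
    for cue in cues:
        score = SequenceMatcher(None, utterance.lower(), cue).ratio()
        if score > best_score:
            best, best_score = cue, score
    return best if best_score >= threshold else None
```

A threshold keeps noisy or off-script speech from accidentally triggering a camera-behavior transition mid-take.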
The job of a camera operator is more challenging, and potentially dangerous, when filming long moving camera shots. Broadly, the operator must keep the actors in-frame while safely navigating around obstacles, and while fulfilling an artistic vision. We propose a unified hardware and software system that distributes some of the camera operator’s burden, freeing them up to focus on safety and aesthetics during a take. Our real-time system provides a solo operator with end-to-end control, so they can balance on-set responsiveness to action vs planned storyboards and framing, while looking where they’re going. By default, we film without a field monitor.
Our LookOut system is built around a lightweight commodity camera gimbal mechanism, with heavy modifications to the controller, which would normally just provide active stabilization. Our control algorithm reacts to speech commands, video, and a pre-made script. Specifically, our automatic monitoring of the live video feed saves the operator from distractions. In pre-production, an artist uses our GUI to design a sequence of high-level camera “behaviors.” Those can be specific, based on a storyboard, or looser objectives, such as “frame both actors.” Then, during filming, a machine-readable script exported from the GUI is combined with sensor readings to drive the gimbal. To validate our algorithm, we compared tracking strategies, interfaces, and hardware protocols, and collected impressions from a) filmmakers who used all aspects of our system, and b) filmmakers who watched footage filmed using LookOut.
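To make the "sensor readings drive the gimbal" step concrete, here is a minimal sketch of one plausible piece: a proportional controller that turns a tracked actor's horizontal offset in the frame into a pan-rate command. This is not the authors' implementation; the gain, rate limit, and function name are assumptions for illustration.

```python
# Illustrative proportional pan controller (assumed values, not LookOut's).
FRAME_WIDTH = 1920      # pixels
MAX_PAN_RATE = 60.0     # deg/s, assumed gimbal limit
KP = 0.05               # proportional gain, deg/s per pixel of error

def pan_rate_for(actor_center_x, target_x=FRAME_WIDTH / 2):
    """Pan rate (deg/s) that steers the tracked actor toward target_x.
    Positive error (actor right of target) yields a positive pan rate."""
    error_px = actor_center_x - target_x
    rate = KP * error_px
    # Clamp to the gimbal's physical rate limit.
    return max(-MAX_PAN_RATE, min(MAX_PAN_RATE, rate))
```

In practice a behavior such as "frame both actors" would set `target_x` from the midpoint of the tracked actor boxes rather than the frame center.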