PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning

Reasoning about and answering questions on visual scenes is an important task for artificial intelligence. However, current datasets lack part-level understanding. A recent paper on arXiv.org presents PTR, a large-scale PaRT Reasoning dataset and benchmark for part-based conceptual, relational, and physical reasoning.

A robot. Image credit: Pxfuel, free licence

This benchmark includes ~70k scenes constructed from five object categories with rich geometric and structural variations. The scenes are paired with five types of questions: concept, geometry, analogy, arithmetic, and physics. Scene graph annotations include the ground-truth locations, segmentations, and properties for all objects and parts.
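To make the structure of such annotations concrete, here is a minimal Python sketch of loading and inspecting one PTR-style scene and its paired question. The file names and field names used below (`PTR_train_scenes.json`, `objects`, `parts`, `type`, and so on) are assumptions for illustration only; the actual annotation schema is documented on the project site.

```python
import json
from pathlib import Path

# A minimal sketch of inspecting a PTR-style sample. All file and field
# names here are hypothetical; consult https://ptr.csail.mit.edu/ for the
# real annotation schema.

DATA_ROOT = Path("data/ptr")  # hypothetical local download location

def load_json(name: str) -> dict:
    with open(DATA_ROOT / name) as f:
        return json.load(f)

scenes = load_json("PTR_train_scenes.json")        # hypothetical file name
questions = load_json("PTR_train_questions.json")  # hypothetical file name

# Each scene is assumed to list its objects, each carrying part-level
# annotations: object category, parts, per-part properties, and physical
# flags such as stability.
scene = scenes["scenes"][0]
for obj in scene["objects"]:
    print(obj["category"])  # one of the five object categories
    for part in obj["parts"]:
        print("  ", part["name"], part["color"])  # part-level concepts

# Each question is assumed to reference a scene and record its reasoning
# type: concept, geometry, analogy, arithmetic, or physics.
q = questions["questions"][0]
print(q["question"], "->", q["answer"], f"[{q['type']}]")
```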

Several state-of-the-art visual reasoning models were analyzed. All of them struggle with the dataset, especially on relational, analogical, and physical reasoning, and fall short of human performance by a large margin.

A critical aspect of human visual perception is the ability to parse visual scenes into individual objects and further into object parts, forming part-whole hierarchies. Such composite structures can induce a rich set of semantic concepts and relations, and thus play an important role in the interpretation and organization of visual signals as well as in the generalization of visual perception and reasoning. However, existing visual reasoning benchmarks mostly focus on objects rather than parts. Visual reasoning based on the full part-whole hierarchy is much more challenging than object-centric reasoning due to finer-grained concepts, richer geometric relations, and more complex physics. Therefore, to better serve part-based conceptual, relational, and physical reasoning, we introduce a new large-scale diagnostic visual reasoning dataset named PTR. PTR contains around 70k RGBD synthetic images with ground-truth object- and part-level annotations regarding semantic instance segmentation, color attributes, spatial and geometric relationships, and certain physical properties such as stability. These images are paired with 700k machine-generated questions covering various reasoning types, making them a good testbed for visual reasoning models. We examine several state-of-the-art visual reasoning models on this dataset and observe that they still make many surprising mistakes in situations where humans can easily infer the correct answer. We believe this dataset will open up new opportunities for part-based reasoning.

Research paper: Hong, Y., Yi, L., Tenenbaum, J. B., Torralba, A., and Gan, C., “PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning”, 2021. Link to the article: https://arxiv.org/abs/2112.05136
Link to the project site: https://ptr.csail.mit.edu/

