Luis Piloto, a DeepMind researcher, and his collaborators developed PLATO, an AI model built on the theory that objects play a central role in how the physical world around us is represented and predicted.
The researchers used video training data to show PLATO simple scenes and to improve its performance.
When shown an impossible event, PLATO reacted much as a baby would. Learning effects were observed after 28 hours of training video, according to the study, published in Nature Human Behaviour.
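This "surprise at the impossible" can be illustrated with a toy sketch in which surprise is quantified as prediction error: the gap between what a model expects an object to do next and what it actually observes. The function and values below are hypothetical and stand in for PLATO's learned metric, which is far richer.

```python
import numpy as np

def surprise(predicted, observed):
    """Toy surprise signal: mean squared error between the model's
    predicted object state and the observed one. (Illustrative only;
    PLATO's actual surprise measure is learned, not hand-coded.)"""
    return float(np.mean((np.asarray(predicted) - np.asarray(observed)) ** 2))

# A physically possible event closely matches the prediction; an
# impossible one (say, an object teleporting behind an occluder) does not.
possible = surprise(predicted=[1.0, 2.0], observed=[1.05, 1.95])
impossible = surprise(predicted=[1.0, 2.0], observed=[1.0, 5.0])
assert impossible > possible  # greater surprise at the impossible event
```

The point of the sketch is only the asymmetry: a model that has learned ordinary object behaviour registers a much larger error, and hence more "surprise", when an event violates physics.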
Piloto explained that PLATO makes use of objects at every stage of processing: visual inputs are represented as a collection of objects, reasoning is carried out over interactions between those objects, and predictions are produced on a per-object basis.
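The per-object pipeline Piloto describes can be sketched minimally: a scene is a list of object states, and the model emits one prediction per object rather than one prediction for the whole frame. The `ObjectState` fields and the constant-velocity step below are stand-ins, not PLATO's actual learned representations or dynamics.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectState:
    # Hypothetical per-object features; PLATO's learned object codes are richer.
    position: Tuple[float, float]
    velocity: Tuple[float, float]

def predict_next(objects: List[ObjectState]) -> List[ObjectState]:
    """Emit a prediction for each object in the scene. A trivial
    constant-velocity update stands in for learned object dynamics."""
    return [
        ObjectState(
            position=(o.position[0] + o.velocity[0],
                      o.position[1] + o.velocity[1]),
            velocity=o.velocity,
        )
        for o in objects
    ]

# A scene represented as a collection of objects, predicted per object.
scene = [ObjectState((0.0, 0.0), (1.0, 0.0)),
         ObjectState((2.0, 2.0), (0.0, -1.0))]
next_scene = predict_next(scene)  # one ObjectState prediction per object
```

The contrast with the "flat" models mentioned below is that a flat model would map the whole frame to a whole-frame prediction, with no explicit per-object structure.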
He said that PLATO passed the physical-concepts tests. “But when we trained flat models as large as or larger than PLATO, but without object-based representations, we found that they failed all our tests. This suggests that objects are an important part of physical understanding.”
Susan Hespos, a professor of psychology at Northwestern University, and Apoorva Shivaram, of Western Sydney University in Australia, said the findings confirm our understanding of how perception develops in humans.
The results indicate that visual animations can account for some intuitive-physics learning, but may not be sufficient to explain how infants learn.
They stated that computational models need some principled, built-in knowledge of how objects behave and interact in order to match infant learning.
A wealth of AI applications could benefit from an understanding of real-world physics (self-driving cars, perhaps?), but the authors stress that their findings are meant to inform other AI research.
Piloto told journalists that physical understanding is pervasive, which makes it difficult to point to specific applications; the team believes its relevance is broader than any single use case. Much depends on what researchers choose to do with it. This work aims to establish a benchmark that helps people assess how well their models comprehend the physical world.