Scientists have identified how the human brain can determine the properties of a particular object using purely statistical information: a result that suggests there is an "inner pickpocket" in all of us.
The researchers, from the University of Cambridge, Central European University and Columbia University, found that one of the reasons successful pickpockets are so effective is that they can identify objects they have never seen before just by touching them. Similarly, we can predict what an object in a shop window will feel like just by looking at it.
In both scenarios, we rely on the brain's ability to break the continuous stream of information arriving at our senses into distinct pieces. The pickpocket can interpret the sequence of small depressions on their fingers as a series of well-defined objects in a pocket or purse, while the window shopper's visual system can interpret photons as light reflected from the objects in the window.
Our ability to extract distinct objects from cluttered scenes by touch or sight alone, and to accurately predict how they will feel based on how they look, or how they look based on how they feel, is crucial to how we interact with the world.
By performing clever statistical analyses of past experience, the brain can immediately identify objects without the need for explicit boundaries or other specialised cues, and can anticipate the unknown properties of new objects. The results are reported in the open-access journal eLife.
"We look at how the brain takes in the continuous flow of information it receives and segments it into objects," said Professor Máté Lengyel of the Cambridge Department of Engineering, who led the research. "The common sight is that the brain gets specialized signals: like edges or occlusions, if one thing ends and another thing begins, but we have found that the brain is a really smart statistical machine: it searches for patterns and finds building blocks to building objects. "
Lengyel and his colleagues designed scenes of several abstract shapes with no visible boundaries between them, and asked participants either to observe the shapes on a screen or to "pull" them apart along a tear line that passed either through or between the objects.
The participants were then tested on their ability to predict the visual properties of these puzzle pieces (how familiar real pieces appeared compared with abstract pieces constructed from parts of two different pieces) and their haptic properties (how hard it would be to physically pull apart new scenes in different directions).
The researchers found that participants could form the correct mental model of the puzzle pieces from either visual or haptic (touch) experience alone, and could immediately predict haptic properties from visual ones and vice versa.
"These results challenge classic views on how we extract and learn about objects in our environment," says Lengyel. "Instead, we have shown that general purpose statistical calculations known to work even if the youngest infants are powerful enough for to achieve such cognitive achievements. The participants in our study did not particularly choose to be professional pocket locks ̵
The research was partly funded by the Wellcome Trust and the European Research Council.