New research may have upended more than 100 years of thinking about how our brains process visual information.
Until now, scientists have never reached a consensus on how our brains keep track of objects as our eyes move several times every second, with remarkable coordination and seemingly minimal effort.
A new paper, led by cognitive neuroscientist Dr. Will Harrison from the University of the Sunshine Coast, might have an answer. The work is published in the journal Proceedings of the National Academy of Sciences.
“Just like a driverless car must coordinate its movement on the road with regard to objects around it, the brain has to coordinate movement of the eyes, head and body, while also maintaining a coherent understanding of the visual world,” Dr. Harrison said.
“The prevailing hypothesis for more than 100 years has been that the brain achieves this by continually predicting what the world would look like if it executed a particular movement. However, such predictions would require a tremendous amount of computing power.
“Our research shows the answer might be far simpler,” Dr. Harrison said.
Instead, it could be that the brain computes the real-world locations of objects simply by combining information about where the eyes are pointing with where visual information falls on the retinas.
In monkeys, whose visual system is similar to our own, the parts of the brain that first receive visual signals from the eyes also receive information about where the eyes are pointing.
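To make that combination rule concrete, here is a minimal sketch in Python. The function name, variable names and numbers are all illustrative assumptions, not the authors' published model; the only claim taken from the article is that an object's real-world direction can be recovered by adding where the eyes are pointing to where its image falls on the retina.

```python
# Minimal sketch (assumed names and numbers, not the authors' model) of the
# proposed combination rule: an object's real-world direction is the sum of
# gaze direction and retinal position, both in degrees of visual angle.

import numpy as np

def world_position(gaze_direction, retinal_position):
    """Combine gaze direction with retinal position to estimate an
    object's location in head-centered world coordinates."""
    return np.asarray(gaze_direction, float) + np.asarray(retinal_position, float)

# Example: the eyes point 5 degrees left of straight ahead, and the object's
# image lands 15 degrees to the right of the fovea. Adding the two recovers
# the object's true direction: 10 degrees to the right.
print(world_position(gaze_direction=[-5.0, 0.0],
                     retinal_position=[15.0, 0.0]))   # -> [10.  0.]
```

On this account, the gain-field signals described above would supply exactly the two inputs the addition needs, with no prediction step involved.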
To test this idea, Dr. Harrison and his colleagues conducted an experiment in which participants performed a difficult visual discrimination task while moving their eyes around a display. Using a high-speed eye tracker that measures where a person is looking 1,000 times per second, Dr. Harrison found that people tracked the location of objects across eye movements with far greater accuracy and temporal resolution than previously thought.
“We found no evidence that the brain formulated a prediction with each eye movement, but we did find that the speed with which people could track objects across eye movements was very similar to the timing of activity previously observed in the monkey brain,” Dr. Harrison said.
The researchers then developed a mathematical model to simulate how the brain could calculate an object’s real-world location. The model’s success suggests that visual stability relies on far simpler calculations than previously thought.
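A toy version of such a simulation gives a flavor of why the calculation is stable. In the sketch below, a fixed object is viewed across a sequence of simulated saccades: gaze and retinal position both change with every eye movement, but their sum does not. This is an illustrative assumption-laden toy, not the published model; the gaze sequence and the noise-free signals are invented for the example.

```python
# Toy simulation (not the published model): a fixed object viewed across a
# sequence of saccades. Gaze and retinal position change with every eye
# movement, but their sum -- the world-coordinate estimate -- stays constant.

import numpy as np

true_object = np.array([10.0, 5.0])   # fixed world location, in degrees

# An arbitrary, assumed sequence of gaze directions (one per saccade).
gaze_sequence = [np.array(g, float) for g in ([0, 0], [8, -2], [12, 6], [10, 5])]

for gaze in gaze_sequence:
    retinal = true_object - gaze   # where the object's image falls on the retina
    estimate = gaze + retinal      # combined world-coordinate estimate
    print(f"gaze={gaze}, retinal={retinal}, estimate={estimate}")

# The estimate is [10. 5.] on every iteration. Given both signals, stability
# falls out of simple addition, with no predictive remapping required.
```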
So, does this mean we’ll see car manufacturers adopting this new visual tracking model in their driverless cars?
One of the primary challenges preventing these cars from entering the mainstream is that engineers have difficulty processing huge volumes of data within the timeframe required for a moving vehicle to operate safely.
“It’s hard to say, but our findings may demonstrate an inefficiency in the way their computers are trying to process visual data,” Dr. Harrison said.
“What this research does change is decades of conventional wisdom about how our brain processes visual information. We are hopeful that our revised theory could help explain how the brain coordinates other complex actions across many different senses.”
More information:
William J. Harrison et al., A computational account of transsaccadic attentional allocation based on visual gain fields, Proceedings of the National Academy of Sciences (2024). DOI: 10.1073/pnas.2316608121