Scientists have successfully taught a small group of blind and sighted people how to navigate their surroundings using echolocation – the sonar-like sensing technique used by dolphins and bats.
Using sound created by tongue clicks, the group learned how to detect the size of virtual rooms with surprising accuracy – something that researchers had not expected in people who were born with sight.
While blind people have proven successful at echolocation in the past, it’s been unclear if sighted people can develop the same ability, given their almost total dependence on visual perception.
“We thought, ‘If it’s sighted people, it’s not going to be something we’ve ever learned to do, so probably we’re really bad at it,’” one of the team, Virginia Flanagin from the Ludwig Maximilian University of Munich, Germany, told Veronique Greenwood at The Atlantic.
But the results showed the opposite – in an experiment involving 11 sighted people and one blind person, the best-performing sighted person could use echolocation to detect a mere 4 percent difference in the size of a virtual room.
“Even the people who did less well could still often tell apart differences of 6 to 8 percent, with the least skilled bottoming out at a 16 percent difference,” Greenwood reports.
“Overall, that actually is about the same level of acuity – ability to distinguish differences – that you find in some visual tests, says Flanagin,” Greenwood continues.
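To put those percentages in perspective, here is a rough back-of-the-envelope sketch (ours, not the researchers'): if you treat the echo off a single far wall as the only cue – a big simplification compared with the reverberant virtual chapel used in the study – a 4 percent change in room size shifts the echo delay by well under a millisecond. The 4-metre wall distance below is purely illustrative.

```python
# Back-of-the-envelope sketch (not from the study): how much does a 4 percent
# change in room size shift the echo delay a listener has to pick up on?
# Assumes the simplest possible cue -- the round-trip time of a click bouncing
# off a single wall -- whereas the actual experiment used the full
# reverberation of a virtual chapel.

SPEED_OF_SOUND = 343.0  # metres per second, in air at roughly 20 degrees C

def echo_delay_ms(wall_distance_m: float) -> float:
    """Round-trip travel time of a click to a wall and back, in milliseconds."""
    return 2 * wall_distance_m / SPEED_OF_SOUND * 1000

baseline = 4.0              # hypothetical distance to the far wall, in metres
enlarged = baseline * 1.04  # the same room made 4 percent bigger

print(f"baseline echo: {echo_delay_ms(baseline):.2f} ms")  # ~23.32 ms
print(f"enlarged echo: {echo_delay_ms(enlarged):.2f} ms")  # ~24.26 ms
print(f"difference:    {echo_delay_ms(enlarged) - echo_delay_ms(baseline):.2f} ms")  # ~0.93 ms
```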
To figure this out, the team first trained their subjects in echolocation by placing them in a heavily padded anechoic chamber and playing them recordings of clicks made previously in real buildings.
By running the exercise in a room that produces no echoes of its own, the researchers could tell the participants which sounds corresponded to larger or smaller rooms, giving them the chance to learn the subtle differences between the two.
Once the volunteers had gotten through the initial training, the team placed them in an MRI scanner that was connected to a virtual 3D model of a nearby chapel.
The volunteers would either click their tongues to make a sound or have the machine make the sound for them – referred to as “active” and “passive” echolocation, respectively – and then listen to how those sounds echoed through the virtual room.
Based on these echoes, the volunteers would have to judge the size of the virtual room.
The researchers found that the volunteers all did significantly better when they were performing active echolocation – meaning their own clicks were a far more effective tool for them to navigate their virtual surroundings.
That makes sense, seeing as the volunteers were more actively engaged in the echolocation when they were the ones performing it. What the researchers found strange, though, was that the sound of the echoes activated the sighted volunteers’ motor cortex – the region of the brain responsible for movement.
Even when the team compared MRI scans from active and passive echolocation – allowing them to isolate and remove the brain activity involved in physically making the clicking sound – that part of the brain remained active.
In fact, the motor cortex was more active with larger versions of the chapel than with smaller ones, Greenwood reports, which suggests a connection between virtually navigating a space and physically navigating it.
“It seems like the motor cortex is somehow involved in the sensory processing,” Flanagin told her.
In the blind subject, the echoes activated the unused visual cortex instead, suggesting that they were visualising the echoes as they bounced between the virtual walls.
We should note that the study is extremely small, with a limited sample size of both blind and sighted people, so we can’t read too much into the results until they have been replicated in a much larger, more diverse group.
But combined with what we already know about human echolocation, the study suggests that sighted people do have the capacity for this purely sound-based form of navigation.
One of the most famous human echolocation experts, Daniel Kish, has demonstrated just how far the skill can go by using it to ride a bike.
The research has been published in the Journal of Neuroscience.
Source: Science Alert