Wednesday, July 22, 2009

Virtual Reality for Navigation Skills: Vision Researchers Test Theory on Visual Orientation

Vision researchers suspect that people who do not need maps to find their way may be remembering visual landmarks. To test this theory, the scientists are having volunteers navigate through a virtual forest to a specific tree. When peripheral vision is reduced, poor navigators use only what they currently see to guide the way, but good navigators use both their memory of the environment and what they see at the moment.

Are you one of those people who need a map and a compass to travel, but still manage to get lost? Or can you find your way around easily, with little help to guide the way? Now, vision researchers at Johns Hopkins University in Baltimore want to find out why some people are better at navigating than others.

"The hypothesis is that the good navigators are using information that they have stored in their brain to help guide them in their navigation," Kathleen Turano, a vision researcher at Johns Hopkins, tells DBIS.

To put this theory to the test, volunteers first navigate their way through a virtual forest to a specific tree. Then their side, or peripheral, vision is reduced. Researchers found that when visual information is taken away, poor navigators use only what they currently see to guide the way, but good navigators use both their memory of the environment and what they see at the moment to get from one point to another more efficiently.

Turano says, "If you start paying attention to different landmarks in the environment, they can actually help you in your navigation skills." Developing and using good mental pictures of the world around you could bring you one step closer to finding your way around.

Researchers will use the information from the virtual reality test to help people with vision-impairing diseases such as glaucoma, training patients to make better use of stored memories of the environment to guide their way as they start losing vision.

BACKGROUND: Researchers from the Lions Vision Center at the Wilmer Eye Institute at Johns Hopkins University used a "virtual forest" to identify study participants as either good or poor navigators. The results suggest that poor navigators rely on visual information to solve the task, while good navigators are able to use visual information together with a mental picture of the environment.

HOW IT WORKS: By simulating the loss of peripheral vision during navigation, the researchers were able to create a way to control the amount of external visual information available to participants. This means they could directly test how much the participants relied on this type of information to learn about their environments. Knowing what types of information individuals use when navigating, and how performance gets worse when that information is removed, can not only help us understand human navigation in general, but also lead to the development of rehabilitation protocols for people with impaired vision.

ABOUT PERIPHERAL VISION: Peripheral vision refers to what we can see out of the corners of our eyes. The retina contains light-sensitive cells called rods and cones. The cones sense color and are found mostly in the central region of the retina. When you see something out of the corner of your eye, the image focuses on the periphery of the retina, where there are very few cones, so it's difficult to distinguish the colors of objects. Rods also become less densely packed toward the outer edges of the retina, reducing your ability to resolve the shapes of objects at the periphery. But our peripheral vision is highly sensitive to motion, probably because it was a useful adaptation to spot potential predators in the earlier stages of human evolution.

WHAT IS VIRTUAL REALITY: The term "virtual reality" is often used to describe interactive software programs in which the user responds to visual and auditory cues while navigating a 3D environment on a graphics monitor. But originally, it referred to total virtual environments, in which the user would be immersed in an artificial, three-dimensional, computer-generated world involving not just sight and sound, but touch as well. Devices that simulate the touch experience are called haptic devices. Touch is vital for directing and guiding human movement, and the use of haptics in virtual environments simulates how objects and actions feel to the user. The user has a variety of input devices to navigate that world and interact with virtual objects, all of which must be linked together with the rest of the system to produce a fully immersive experience.
