Wednesday, July 22, 2009

Virtual Reality for Navigation Skills: Vision Researchers Test Theory on Visual Orientation

Vision researchers suspect that people who do not need maps to find their way may be remembering visual landmarks. To test this theory, the scientists are having volunteers navigate through a virtual forest to a specific tree. When their peripheral vision is reduced, poor navigators use only what they currently see to guide the way, while good navigators draw on both their memory of the environment and what they see at the moment.

Are you one of those people who need a map and a compass to travel, but still manage to get lost? Or can you find your way around easily, with little help to guide the way? Now, vision researchers at Johns Hopkins University in Baltimore want to find out why some people are better at navigating than others.

"The hypothesis is that the good navigators are using information that they have stored in their brain to help guide them in their navigation," Kathleen Turano, a vision researcher at Johns Hopkins, tells DBIS.

To put this theory to the test, volunteers first navigate their way through a virtual forest to a specific tree. Then their side vision, or peripheral vision, is reduced. The researchers found that when visual information is taken away, poor navigators use only what they currently see to guide the way, while good navigators use both their memory of the environment and what they see at the moment to get from one point to another more efficiently.

Turano says, "If you start paying attention to different landmarks in the environment, they can actually help you in your navigation skills." Developing and using good mental pictures of the world around you could bring you one step closer to finding your way around more easily.

Researchers will use the results of the virtual reality test to help people with vision-impairing diseases such as glaucoma, training patients to make better use of stored memories of the environment to guide their way as their vision declines.

BACKGROUND: Researchers from the Lions Vision Center at the Wilmer Eye Institute at Johns Hopkins University used a "virtual forest" to identify study participants as either good or poor navigators. The results suggest that poor navigators rely on visual information to solve the task, while good navigators are able to use visual information together with a mental picture of the environment.

HOW IT WORKS: By simulating the loss of peripheral vision during navigation, the researchers were able to create a way to control the amount of external visual information available to participants. This means they could directly test how much the participants relied on this type of information to learn about their environments. Knowing what types of information individuals use when navigating, and how performance gets worse when that information is removed, can not only help us understand human navigation in general, but also lead to the development of rehabilitation protocols for people with impaired vision.

ABOUT PERIPHERAL VISION: Peripheral vision refers to what we can see out of the corners of our eyes. The retina contains light-sensitive cells called rods and cones. The cones sense color and are found mostly in the central region of the retina. When you see something out of the corner of your eye, the image focuses on the periphery of the retina, where there are very few cones, so it's difficult to distinguish the colors of objects. Rods also become less densely packed toward the outer edges of the retina, reducing your ability to resolve the shapes of objects at the periphery. But our peripheral vision is highly sensitive to motion, probably because it was a useful adaptation to spot potential predators in the earlier stages of human evolution.

WHAT IS VIRTUAL REALITY: The term "virtual reality" is often used to describe interactive software programs in which the user responds to visual and auditory cues as he or she navigates a 3D environment on a graphics monitor. But originally, it referred to total virtual environments, in which the user would be immersed in an artificial, three-dimensional computer-generated world, involving not just sight and sound, but touch as well. Devices that simulate the touch experience are called haptic devices. Touch is vital to direct and guide human movement, and the use of haptics in virtual environments simulates how objects and actions feel to the user. The user has a variety of input devices to navigate that world and interact with virtual objects, all of which must be linked together with the rest of the system to produce a fully immersive experience.

Hybrid System Of Human-Machine Interaction Created


ScienceDaily (June 17, 2009) — Scientists at FAU have created a "hybrid" system to examine real-time interactions between humans and machines (virtual partners). By pitting human against machine, they open up the possibility of exploring and understanding a wide variety of interactions between minds and machines, a first step toward a much friendlier union of man and machine, and perhaps even toward creating a different kind of machine altogether.

For more than 25 years, scientists in the Center for Complex Systems and Brain Sciences (CCSBS) in Florida Atlantic University’s Charles E. Schmidt College of Science, and others around the world, have been trying to decipher the laws of coordinated behavior called “coordination dynamics”. Unlike the laws of motion of physical bodies, the equations of coordination dynamics describe how the coordination states of a system evolve over time, as observed through special quantities called collective variables. These collective variables typically span the interaction of organism and environment. Imagine a machine whose behavior is based on the very equations that are supposed to govern human coordination. Then imagine a human interacting with such a machine whereby the human can modify the behavior of the machine and the machine can modify the behavior of the human.
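As a concrete illustration, the best-known equation of coordination dynamics is the Haken-Kelso-Bunz (HKB) model for the relative phase between two rhythmically moving limbs. The sketch below is a minimal illustration, not the authors' code, and the parameter values are invented for the example; it integrates the HKB equation and shows that a small perturbation away from in-phase coordination decays back to zero:

```python
import math

# HKB relative-phase equation: dphi/dt = delta_omega - a*sin(phi) - 2*b*sin(2*phi)
# phi is the relative phase between the two limbs; a and b set the coupling
# strengths and delta_omega the difference in natural frequencies
# (all values here are illustrative).

def hkb_step(phi, dt=0.01, delta_omega=0.0, a=1.0, b=1.0):
    dphi = delta_omega - a * math.sin(phi) - 2 * b * math.sin(2 * phi)
    return phi + dt * dphi

def settle(phi0, steps=5000):
    """Integrate the relative phase forward (simple Euler) until it settles."""
    phi = phi0
    for _ in range(steps):
        phi = hkb_step(phi)
    return phi

print(round(settle(0.3), 3))   # perturbed in-phase state returns to 0: prints 0.0
print(round(settle(3.0), 3))   # near anti-phase, settles at phi ~ pi: prints 3.142
```

With these parameters both in-phase (phi = 0) and anti-phase (phi = pi) coordination are stable attractors, matching the classic finding that people can sustain both patterns at low movement rates.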

In a groundbreaking study published in the June 3 issue of PLoS One and titled “Virtual Partner Interaction (VPI): exploring novel behaviors via coordination dynamics,” an interdisciplinary group of scientists in the CCSBS created VPI, a hybrid system of a human interacting with a machine. These scientists placed the equations of human coordination dynamics into the machine and studied real-time interactions between the human and virtual partners. Their findings open up the possibility of exploring and understanding a wide variety of interactions between minds and machines. VPI may be the first step toward establishing a much friendlier union of man and machine, and perhaps even creating a different kind of machine altogether.

“With VPI, a human and a ‘virtual partner’ are reciprocally coupled in real-time,” said Dr. J. A. Scott Kelso, the Glenwood and Martha Creech Eminent Scholar in Science at FAU and the lead author of the study. “The human acquires information about his partner’s behavior through perception, and the virtual partner continuously detects the human’s behavior through the input of sensors. Our approach is analogous to the dynamic clamp used to study the dynamics of interactions between neurons, but now scaled up to the level of behaving humans.”

In this first-ever study of VPI, machine and human behaviors were chosen to be quite simple. Both partners were tasked to coordinate finger movements with one another. The human executed the task with the intention of performing in-phase coordination with the machine, trying to synchronize his or her flexion and extension movements with those of the virtual partner.

The machine, on the other hand, executed the task with the competing goal of performing anti-phase coordination with the human, thereby trying to extend its finger when the human flexed and vice versa. Pitting machine against human through opposing task demands was a way the scientists chose to enhance the formation of emergent behavior, and also allowed them to examine each partner’s individual contribution to the coupled behavior. An intriguing outcome of the experiments was that human subjects ascribed intentions to the machine, reporting that it was “messing” with them.
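The effect of these opposing task demands can be made concrete with a toy simulation. In the Kuramoto-style sketch below (a simplification for illustration, not the published VPI equations; the coupling gains k_h and k_m are invented), each partner is a phase oscillator pulled toward its own goal, and the more strongly coupled partner ends up dominating the relative phase:

```python
import math

# Toy model (not the study's actual equations): a "human" oscillator
# pulled toward in-phase coordination and a "machine" oscillator pulled
# toward anti-phase, reciprocally coupled in real time.

def simulate(k_h=1.0, k_m=0.4, phi0=2.0, dt=0.01, steps=20000):
    theta_h, theta_m = phi0, 0.0          # start out of phase
    for _ in range(steps):
        d_h = k_h * math.sin(theta_m - theta_h)            # human seeks in-phase
        d_m = k_m * math.sin(theta_h - theta_m + math.pi)  # machine seeks anti-phase
        theta_h += dt * d_h
        theta_m += dt * d_m
    # relative phase, wrapped to [-pi, pi)
    return (theta_h - theta_m + math.pi) % (2 * math.pi) - math.pi

print(round(simulate(), 3))  # human couples more strongly and "wins": prints 0.0
```

With equal gains the opposing pulls cancel exactly and the relative phase simply freezes between the two goals; with unequal gains the stronger partner wins, a crude analogue of the tug of war the human subjects experienced.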

“The symmetry between the human and the machine, and the fact that they carry the same laws of coordination dynamics, is a key to this novel scientific framework,” said co-author Dr. Gonzalo de Guzman, a physicist and research associate professor at the FAU center. “The design of the virtual partner mirrors the equations of motion of the human neurobehavioral system. The laws obtained from accumulated studies describe how the parts of the human body and brain self-organize, and address the issue of self-reference, a condition leading to complexity.”

One ready application of VPI is the study of the dynamics of complex brain processes such as those involved in social behavior. The extended parameter range opens up the possibility of systematically driving functional processes of the brain (neuromarkers) to better understand their roles. The scientists in this study anticipate that, just as many human skills are acquired by observing other human beings, human and machine will learn novel patterns of behavior by interacting with each other.

“Interactions with ever proliferating technological devices often place high skill demands on users who have little time to develop these skills,” said Kelso. “The opportunity presented through VPI is that equally useful and informative new behaviors may be uncovered despite the built-in asymmetry of the human-machine interaction.”

While stable and intermittent coordination behaviors emerged that had previously been observed in ordinary human social interactions, the scientists also discovered novel behaviors or strategies that have never previously been observed in human social behavior. The emergence of such novel behaviors demonstrates the scientific potential of the VPI human-machine framework.

Modifying the dynamics of the virtual partner to induce a desired human behavior, whether for learning a new skill or as a tool for therapy and rehabilitation, is among several applications of VPI.

“The integration of complexity into the behavioral and neural sciences has just begun,” said Dr. Emmanuelle Tognoli, research assistant professor in FAU’s CCSBS and co-author of the study. “VPI is a move away from simple protocols in which systems are ‘poked’ by virtue of ‘stimuli’ to understanding more complex, reciprocally connected systems where meaningful interactions occur.”

Research for this study was supported by the National Science Foundation program “Human and Social Dynamics,” the National Institute of Mental Health’s “Innovations Award,” “Basic and Translational Research Opportunities in the Social Neuroscience of Mental Health,” and the Office of Naval Research Code 30. Kelso’s research is also supported by the Pierre de Fermat Chaire d’Excellence and Tognoli’s research is supported by the Davimos Family Endowment for Excellence in Science.

Monday, July 20, 2009

Why Japan’s Cellphones Haven’t Gone Global


At first glance, Japanese cellphones are a gadget lover’s dream: ready for Internet and e-mail, they double as credit cards, boarding passes and even body-fat calculators. But it is hard to find anyone in Chicago or London using a Japanese phone like a Panasonic, a Sharp or an NEC. Despite years of dabbling in overseas markets, Japan’s handset makers have little presence beyond the country’s shores.

“Japan is years ahead in any innovation. But it hasn’t been able to get business out of it,” said Gerhard Fasol, president of the Tokyo-based IT consulting firm, Eurotechnology Japan.

The Japanese have a name for their problem: Galápagos syndrome.

Japan’s cellphones are like the endemic species that Darwin encountered on the Galápagos Islands — fantastically evolved and divergent from their mainland cousins — explains Takeshi Natsuno, who teaches at Tokyo’s Keio University.

This year, Mr. Natsuno, who developed a popular wireless Internet service called i-Mode, assembled some of the best minds in the field to debate how Japanese cellphones can go global.

“The most amazing thing about Japan is that even the average person out there will have a superadvanced phone,” said Mr. Natsuno. “So we’re asking, can’t Japan build on that advantage?”

The only Japanese handset maker with any meaningful global share is Sony Ericsson, and that company is a London-based joint venture between a Japanese electronics maker and a Swedish telecommunications firm.

And Sony Ericsson has been hit by big losses. Its market share was just 6.3 percent in the first quarter of 2009, behind Nokia of Finland, Samsung Electronics and LG of South Korea, and Motorola of Illinois.

Yet Japan’s lack of global clout is all the more surprising because its cellphones set the pace in almost every industry innovation: e-mail capabilities in 1999, camera phones in 2000, third-generation networks in 2001, full music downloads in 2002, electronic payments in 2004 and digital TV in 2005.

Japan has 100 million users of advanced third-generation smartphones, twice the number used in the United States, a much larger market. Many Japanese rely on their phones, not a PC, for Internet access.

Indeed, Japanese makers thought they had positioned themselves to dominate the age of digital data. But Japanese cellphone makers were a little too clever. The industry turned increasingly inward. In the 1990s, they set a standard for the second-generation network that was rejected everywhere else. Carriers created fenced-in Web services, like i-Mode. Those mobile Web universes fostered huge e-commerce and content markets within Japan, but they have also increased the country’s isolation from the global market.

Then Japan quickly adopted a third-generation standard in 2001. The rest of the world dallied, essentially making Japanese phones too advanced for most markets.

At the same time, the rapid growth of Japan’s cellphone market in the late 1990s and early 2000s gave Japanese companies little incentive to market overseas. But now the market is shrinking significantly, hit by a recession and a graying population; makers shipped 19 percent fewer handsets in 2008 and expect to ship even fewer in 2009. The industry remains fragmented, with eight cellphone makers vying for part of a market that will be less than 30 million units this year.

Several Japanese companies are now considering a push into overseas markets, including NEC, which pulled the plug on its money-losing international cellphone efforts in 2006. Panasonic, Sharp, Toshiba and Fujitsu are said to be planning similar moves.

“Japanese cellphone makers need to either look overseas, or exit the business,” said Kenshi Tazaki, a managing vice president at the consulting firm Gartner Japan.

At a recent meeting of Mr. Natsuno’s group, 20 men and one woman crowded around a big conference table in a skyscraper in central Tokyo, examining market data, delivering diatribes and frequently shaking their heads.

The discussion then turned to the cellphones themselves. Despite their advanced hardware, handsets here often have primitive, clunky interfaces, some participants said. Most handsets have no way to easily synchronize data with PCs as the iPhone and other smartphones do.

Because each handset model is designed with a customized user interface, development is time-consuming and expensive, said Tetsuzo Matsumoto, senior executive vice president at Softbank Mobile, a leading carrier. “Japan’s phones are all ‘handmade’ from scratch,” he said. “That’s reaching the limit.”

Then there are the peculiarities of the Japanese market, like the almost universal clamshell design, which is not as popular overseas. Recent hardware innovations, like solar-powered batteries or waterproofing, have been incremental rather than groundbreaking.

The emphasis on hardware makes even the newest phones here surprisingly bulky. Some analysts say cellphone carriers stifle innovation by demanding so many peripheral hardware functions for phones.

The Sharp 912SH for Softbank, for example, comes with an LCD screen that swivels 90 degrees, GPS tracking, a bar-code reader, digital TV, credit card functions, video conferencing and a camera and is unlocked by face recognition.

Meanwhile, Japanese developers are jealous of the runaway global popularity of the Apple iPhone and App Store, which have pushed the American and European cellphone industry away from its obsession with hardware specifications and toward software. “This is the kind of phone I wanted to make,” Mr. Natsuno said, playing with his own iPhone 3G.

The conflict between Japan’s advanced hardware and its primitive software has contributed to some confusion over whether the Japanese find the iPhone cutting edge or boring. One analyst said they just aren’t used to handsets that connect to a computer.

The forum Mr. Natsuno convened to address Galápagos syndrome has come up with a series of recommendations: Japan’s handset makers must focus more on software and must be more aggressive in hiring foreign talent, and the country’s cellphone carriers must also set their sights overseas.

“It’s not too late for Japan’s cellphone industry to look overseas,” said Tetsuro Tsusaka, a telecom analyst at Barclays Capital Japan. “Besides, most phones outside the Galápagos are just so basic.”