12 Jul 2017 |
A Navigation System for Visually Impaired People


In recent decades, scientists have attempted to build automated navigation systems that allow visually impaired people to move around more easily. However, these systems have never been as easy to use or as convenient as the white cane. The cane itself has drawbacks that prevent users from fully sensing and acting on their environment: its tip can detect objects only through contact and cannot identify what they are, so it is of little help in situations like determining whether a chair is occupied.
Researchers from the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) have developed a new navigation system that communicates relevant information about their surroundings to visually impaired users. The system can be used with or without a white cane.
In a presentation given at the International Conference on Robotics and Automation (ICRA), the researchers described their technology and the experimental studies they conducted with visually impaired people.
Tests performed by the CSAIL team at MIT during the design of the new navigation system.
Sensory Comfort as a Design Constraint
Robert Katzschmann, a graduate student in Mechanical Engineering at MIT who took part in the study, explained that the main challenge of this technology is cognitive ergonomics, or more precisely sensory ergonomics. The team wanted a system whose perception modules would not interfere with the other senses. This constraint led them to rule out audio feedback, for example, as well as modules worn on the head or neck. After studying this design constraint thoroughly, they chose the abdominal area, the part of the body least solicited by the other senses.
The navigation system includes a 3D camera worn around the neck; a processing unit running a visual odometry algorithm created by the team; a sensor belt with five vibration motors spaced evenly across the section covering the abdomen; and a dynamic Braille interface attached to the belt at a position convenient for tactile reading.
The algorithm rapidly analyzes the visual data captured by the 3D camera to identify surfaces and their spatial orientations.
Accurate and Complementary Touch Signals
The algorithm first groups the pixels into clusters of three. Each pixel carries location data, so every cluster defines a small planar patch with an orientation. If the orientations of five nearby clusters are within 10 degrees of one another, the system concludes that it has found a surface. It does not need to determine the extent of the surface or what kind of object it belongs to: it simply registers an obstacle at that location and triggers the corresponding motor to vibrate when the wearer comes within 2 meters of it.
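To make the idea concrete, here is a minimal sketch, not the team's actual implementation: it estimates an orientation for each three-pixel cluster, checks whether five nearby clusters agree to within 10 degrees, and flags an obstacle inside 2 meters, as described above. The function names, data layout and example values are assumptions for illustration only.

```python
import numpy as np

def cluster_normal(p0, p1, p2):
    """Unit normal of the plane through the three 3D points of one pixel cluster."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def is_surface(normals, tolerance_deg=10.0):
    """True if every pair of cluster normals is within `tolerance_deg` of each other."""
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            cos_angle = np.clip(abs(np.dot(normals[i], normals[j])), -1.0, 1.0)
            if np.degrees(np.arccos(cos_angle)) > tolerance_deg:
                return False
    return True

def motor_should_vibrate(distance_m, trigger_m=2.0):
    """The motor covering this direction vibrates once the wearer is within 2 m."""
    return distance_m <= trigger_m

# Example: five clusters sampled from a wall 1.5 m in front of the camera.
rng = np.random.default_rng(0)
base_xy = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])   # well-spread points
clusters = []
for _ in range(5):
    xy = base_xy + rng.normal(0.0, 0.005, size=(3, 2))     # in-plane jitter
    z = 1.5 + rng.normal(0.0, 0.002, size=(3, 1))          # slight depth noise
    clusters.append(np.hstack([xy, z]))
normals = [cluster_normal(*c) for c in clusters]
if is_surface(normals):
    print("Surface detected; vibrate motor:", motor_should_vibrate(1.5))
```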
Identifying a chair is a bit more complex. The system must make three distinct surface identifications in the same general area; with this triple identification it can determine whether the chair is unoccupied. The surfaces must be more or less parallel to the ground and must fall within predefined height ranges.
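The rule described here, three roughly ground-parallel surfaces, each within a predefined height band, could be checked along the lines of the sketch below. The concrete height ranges, the 15-degree tolerance and the labels are assumptions, not values from the paper.

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])               # ground normal in the sensor frame
HEIGHT_RANGES_M = [(-0.05, 0.05),            # hypothetical: floor in front of the chair
                   (0.35, 0.55),             # hypothetical: seat
                   (0.70, 1.10)]             # hypothetical: top of the backrest

def is_ground_parallel(normal, tolerance_deg=15.0):
    """A surface counts as parallel to the ground if its normal is near vertical."""
    n = normal / np.linalg.norm(normal)
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n, UP)), -1.0, 1.0)))
    return angle <= tolerance_deg

def looks_like_empty_chair(surfaces):
    """`surfaces` is a list of (normal, height_m) found in one area.
    Every height band must be matched by a ground-parallel surface."""
    matched = [False] * len(HEIGHT_RANGES_M)
    for normal, height in surfaces:
        if not is_ground_parallel(normal):
            continue
        for i, (lo, hi) in enumerate(HEIGHT_RANGES_M):
            if lo <= height <= hi:
                matched[i] = True
    return all(matched)

detected = [(np.array([0.0, 0.02, 1.0]), 0.00),   # floor patch
            (np.array([0.01, 0.0, 1.0]), 0.45),   # seat patch
            (np.array([0.0, 0.05, 1.0]), 0.85)]   # backrest-top patch
print(looks_like_empty_chair(detected))           # True: all three bands matched
```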
The motors can vary the frequency, intensity and duration of their vibrations, as well as the intervals between them, to send different types of tactile signals to the user. For example, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction covered by that particular motor. When the system is in chair-finding mode, however, a double pulse indicates the direction in which an empty chair is located.
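A small sketch of how such signals could be encoded is shown below. The qualitative behaviour, stronger and faster vibration when approaching an obstacle and a double pulse in chair-finding mode, follows the description above; all numeric values and the MotorCommand layout are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class MotorCommand:
    motor_index: int          # which of the five belt motors (0 = far left)
    frequency_hz: float       # vibration frequency
    intensity: float          # 0.0 (off) to 1.0 (maximum)
    pulses: int               # number of pulses in the pattern
    pulse_duration_s: float   # length of each pulse

def obstacle_command(motor_index: int, distance_m: float) -> MotorCommand:
    """Closer obstacles produce faster, stronger vibration on the motor
    pointing toward them (hypothetical linear ramp inside 2 m)."""
    proximity = max(0.0, min(1.0, (2.0 - distance_m) / 2.0))
    return MotorCommand(motor_index,
                        frequency_hz=50.0 + 150.0 * proximity,
                        intensity=0.2 + 0.8 * proximity,
                        pulses=1,
                        pulse_duration_s=0.2)

def chair_found_command(motor_index: int) -> MotorCommand:
    """In chair-finding mode, a double pulse marks the direction of an empty chair."""
    return MotorCommand(motor_index, frequency_hz=120.0, intensity=0.6,
                        pulses=2, pulse_duration_s=0.15)

print(obstacle_command(motor_index=2, distance_m=0.5))
print(chair_found_command(motor_index=4))
```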
The Braille interface consists of two rows of five symbol pads. The symbols displayed describe the objects in the user's environment, for example the letter "t" for a table or "c" for a chair. The position of the symbol in the row indicates the direction in which the object can be found, and the column in which it appears indicates its distance. The information provided by the Braille interface complements and confirms the signals sent by the motors.
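The layout could be modelled as in the short sketch below. The article's wording is ambiguous about which axis encodes direction and which encodes distance, so this sketch assumes the five columns map to the five directions covered by the belt motors and the two rows to near versus far; the symbols "t" and "c" come from the text, while the distance threshold and names are assumptions.

```python
ROWS, COLS = 2, 5
NEAR_THRESHOLD_M = 1.0   # hypothetical boundary between the "near" and "far" rows

def render_braille(detections):
    """`detections` is a list of (symbol, direction_index 0-4, distance_m).
    Returns a 2 x 5 grid of single-character cells ('.' = empty pad)."""
    grid = [["." for _ in range(COLS)] for _ in range(ROWS)]
    for symbol, direction, distance in detections:
        row = 0 if distance <= NEAR_THRESHOLD_M else 1
        grid[row][direction] = symbol
    return grid

# Example: a chair slightly to the left and near, a table to the right and farther away.
for line in render_braille([("c", 1, 0.8), ("t", 3, 1.6)]):
    print(" ".join(line))
# "c" lands in the near row, second position; "t" in the far row, fourth position.
```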
In tests, the chair-finding system reduced recognition errors by 80%, and the navigation system reduced the number of collisions with people walking in hallways by 86%.
The study, entitled "Enabling independent navigation for visually impaired people through a wearable vision-based feedback system," was presented at the ICRA conference held in Singapore from May 29 to June 3, 2017. It was co-written by Katzschmann; his research adviser Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science; first author Hsueh-Cheng Wang, a former postdoctoral associate at MIT and currently assistant professor of Electrical and Computer Engineering at National Chiao Tung University in Taiwan; Santani Teng, a postdoctoral researcher at CSAIL; Brandon Araki, a graduate student in Mechanical Engineering; and Laura Giarré, professor of Electrical Engineering at the University of Modena and Reggio Emilia in Italy.

Hanen Hattab
Hanen Hattab is a PhD student in Semiology at UQAM. Her research focuses on subversive and countercultural arts and design practices such as artistic vandalism, sabotage and cultural diversions in illustration, graphic arts and sculpture.
