
A Man-Machine Interface that Can Understand Human Thought


By Hanen Hattab
Hanen Hattab is a PhD student in Semiology at UQAM. Her research focuses on subversive and countercultural arts and design practices such as artistic vandalism, sabotage and cultural diversions in illustration, graphic arts and sculpture.

The field of brain-machine interfaces has recently made a considerable breakthrough: it is now possible to correct a machine's errors by thought alone. A team at the Massachusetts Institute of Technology has developed an interface that processes and transfers the brain signals associated with error detection, in order to improve the performance of a robot that tracks its operator's decision-making in real time.

Detection of New Cognitive Activities

How can a robot become an extension of a human being, without resorting to a complex intermediate language? To make this ambitious idea a reality, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with Boston University and led by CSAIL Director Daniela Rus, examined the neurobiological activity that controls cognitive functions. The study, by Andres F. Salazar-Gomez, Joseph DelPreto, Stephanie Gil and Frank H. Guenther, has been accepted for presentation at the International Conference on Robotics and Automation (ICRA), to be held in Singapore next May.

Using data from an electroencephalogram (EEG), which records brain activity, the feedback interface can detect a person's brain activity while he or she monitors and controls tasks performed by a robot. Although the existing system handles only relatively simple binary choices, the interface could one day make it possible to control robots much more intuitively. In their experiments, the team used “Baxter”, a humanoid robot from Rethink Robotics.

[Photo: Baxter, the humanoid robot from Rethink Robotics]
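
To make the idea concrete, here is a minimal sketch of what a binary ErrP detector over a short EEG window could look like. This is an illustration only, not the team's published pipeline: the window shape, the baseline correction and the linear classifier are all assumptions.

import numpy as np

# Illustrative sketch only: decide whether a short multi-channel EEG
# window contains an error-related potential. The window shape, the
# baseline correction and the linear model are assumptions, not
# details from the MIT/Boston University study.

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (channels, samples) array. Subtract a pre-stimulus
    baseline and flatten; real pipelines would also apply band-pass
    and spatial filtering."""
    baseline = window[:, :32].mean(axis=1, keepdims=True)
    return (window - baseline).ravel()

class LinearErrPDetector:
    """Toy linear detector: score = w . features + b, where the
    weights would be learned from labelled EEG recordings."""

    def __init__(self, weights: np.ndarray, bias: float = 0.0):
        self.w, self.b = weights, bias

    def score(self, window: np.ndarray) -> float:
        return float(self.w @ extract_features(window) + self.b)

    def is_error(self, window: np.ndarray) -> bool:
        # Positive score means "the operator perceived an error".
        return self.score(window) > 0.0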

Error-Related Potentials

In previous work on EEG-controlled robotics, man-machine communication was unidirectional. For example, an operator had to look at LEDs, each of which corresponded to a different task to be executed by the robot. This method had one major drawback: the synchronization between the decision and the control act. The process could become cumbersome for operations requiring constant corrections, in particular when supervising navigation or construction tasks.

That is why Rus and her team want to make this interaction more natural. To do so, they focused on a type of brain signal called “error-related potentials” (ErrPs). These signals come from the brain's cognitive control areas and are activated when an operator detects an error. In 2010, a team from the Laboratory of Neurobiology of Cognition at the University of Provence located this human cognitive capacity in the anterior cingulate area of the brain. When the robot is about to act, the operator's ErrPs indicate whether the action is appropriate or not, and the interface communicates this decision instantly. This brain-machine relationship is spontaneous and requires no particular mental training, since ErrPs are sometimes almost unconscious. Furthermore, when the robot is unsure of the decision to make, it can ask the operator for a more specific answer.

[Photo: Researchers experimenting with object sorting]
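
The closed loop itself can then be pictured as follows. The robot's API below is invented for illustration; the article does not describe the actual software interface. The ambiguity threshold models the behaviour described above, where an unsure robot asks the operator for a more specific answer.

# Hypothetical closed-loop step built on the detector sketched earlier.
# Every robot method here (propose_action, begin, ask_operator,
# switch_alternative, complete) is an invented placeholder.

AMBIGUITY_MARGIN = 0.5  # assumed threshold on the detector score

def control_step(robot, detector, read_eeg_window):
    choice = robot.propose_action()            # e.g. "left bin" vs "right bin"
    robot.begin(choice)
    s = detector.score(read_eeg_window())      # EEG recorded as the robot acts
    if abs(s) < AMBIGUITY_MARGIN:
        choice = robot.ask_operator(choice)    # unsure: ask for confirmation
    elif s > 0:
        choice = robot.switch_alternative(choice)  # ErrP: reverse the choice
    robot.complete(choice)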

The team developed new machine learning algorithms that allow the interface to classify brain waves within 10 to 30 milliseconds. ErrP signals are extremely weak, which means the interface must be refined enough to classify the signal and incorporate it into the control loop. In addition to monitoring initial ErrPs, the team also worked on detecting secondary errors, which occur when the machine does not notice the operator's first correction. According to Gil, these signals can greatly improve accuracy, creating a continuous interaction between human and robot as they communicate their respective choices. While the system cannot yet recognize secondary errors in real time, Gil expects the model to exceed 90% accuracy once this problem is resolved.
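
Under the same assumptions, secondary-error handling could nest one more check inside the loop sketched above: if the robot's correction is itself followed by an ErrP, the robot flips back. Again, this is a sketch of the idea, not the published system.

def step_with_secondary_check(robot, detector, read_eeg_window):
    choice = robot.propose_action()
    robot.begin(choice)
    if detector.is_error(read_eeg_window()):        # primary ErrP
        choice = robot.switch_alternative(choice)
        robot.begin(choice)
        if detector.is_error(read_eeg_window()):    # secondary ErrP:
            choice = robot.switch_alternative(choice)  # correction was wrong
    robot.complete(choice)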

Potential Applications

The team believes that future systems could be extended to more complex and varied tasks. According to Rus, this technology could be used to supervise industrial robots, self-driving cars, and other technologies that have not yet been invented. And according to Wolfram Burgard, professor of computer science at the University of Freiburg (who did not participate in the study), this work makes it increasingly feasible to develop effective tools and brain-controlled prostheses. Given the difficulty of translating human language into signals that robots can understand, this study could have a profound impact on the future of human-robot collaboration.


