27 Feb 2023 | Research article | Health Technologies, Intelligent and Autonomous Systems
The Ear: A Portal to Physical and Mental Health
Neurodegenerative diseases like Parkinson’s or Alzheimer’s are proliferating due to our aging population. New treatments coupled with early detection are essential in maintaining the autonomy and quality of life of a growing portion of our society for as long as possible.
Rachel Bouserhal wearing an advanced in-ear device designed by EERS and the ÉTS NSERC-EERS Industrial Research Chair in In-Ear Technologies (CRITIAS).
Subtle Warning Signs
Many neurodegenerative diseases like Parkinson’s and Alzheimer’s produce symptoms so subtle in the early stages that they go unnoticed: changes in articulation, swallowing patterns, and vocabulary. A precise diagnosis at this stage would help slow the progression of the disease. Unfortunately, a diagnosis often comes much later, too late for some treatments.
Indeed, a diagnosis is usually made only when the changes observed in an individual are significant enough to make them stand out from the rest of the population. However, each of us is unique, and individual changes should be detected earlier so that health deterioration can be identified more quickly. Another factor in late diagnoses is that we usually look at only one modality, the heartbeat for example, when analyzing several factors together would likely give us a more comprehensive view of a person’s health status.
Knowing this, how can we detect the early signs of these diseases? Researchers in the RHAD Lab hope to do just that by listening to the sound signals generated in our bodies.
When our ears are occluded with earplugs, the sounds produced in the body can no longer dissipate in the air. These sounds are amplified in the low frequencies (occlusion effect) and can be captured with a microphone. Among the signals that can be detected are breathing, heartbeat, swallowing, speech … and even blinking!
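As a rough illustration of how these occlusion-amplified, low-frequency body sounds might be isolated from an in-ear recording, a simple low-pass filter can be applied to the internal microphone signal. The sketch below is a generic example under assumed parameters (the 100 Hz cutoff and the synthetic signal are illustrative choices, not the Chair’s actual processing chain):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def isolate_body_sounds(in_ear_signal, fs, cutoff_hz=100.0):
    """Keep the low-frequency band where occluded-ear body sounds
    (heartbeat, breathing, swallowing) are amplified.
    The 100 Hz cutoff is an illustrative assumption."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, in_ear_signal)  # zero-phase filtering

# Synthetic example: a 1.2 Hz heartbeat-like tone buried in broadband noise.
fs = 8000
t = np.arange(0, 5, 1 / fs)
heartbeat = np.sin(2 * np.pi * 1.2 * t)
noisy = heartbeat + 0.5 * np.random.default_rng(0).standard_normal(t.size)
filtered = isolate_body_sounds(noisy, fs)
```

After filtering, the slow heartbeat-like component dominates the output while most of the broadband noise, which lies above the cutoff, is removed.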
The Chair’s research team intends to capture these physiological signals using an advanced in-ear device designed by EERS and the ÉTS NSERC-EERS Industrial Research Chair in In-Ear Technologies (CRITIAS). The device consists of an earpiece with two miniature microphones and a loudspeaker. The internal microphone, placed inside the ear canal, picks up sounds generated by the body. The external microphone and the loudspeaker (located inside the ear) retransmit external sounds to reduce the discomfort created by the occlusion effect.
One of the main challenges that must be addressed is the separation of different physiological signals, which occur concomitantly. Several options are being considered to achieve this, such as source separation, machine learning, and sensor diversification.
Various source separation algorithms for sound signals already exist in the literature. However, because they rely on signals captured by several microphones placed at different distances from the source, their success here is far from assured: even with two earpieces, the space available in the ear canal is very limited. Machine learning will also be considered for separating signals, but “black box” models will be avoided in favour of an interpretable model.
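To give a concrete sense of what source separation means here, the sketch below uses independent component analysis (one classical technique from this literature, chosen as an example rather than as the Chair’s selected method) to unmix two simulated physiological signals from two simulated microphone mixtures:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
fs = 100
t = np.arange(0, 30, 1 / fs)

# Two simulated physiological sources (purely illustrative):
# a heartbeat-like pulse train and a slower breathing-like oscillation.
heartbeat = 0.5 * np.sign(np.sin(2 * np.pi * 1.1 * t))
breathing = np.sin(2 * np.pi * 0.25 * t)
sources = np.c_[heartbeat, breathing]

# Simulated mixtures, as two microphones might capture the combined sounds.
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])
mixed = sources @ mixing.T + 0.01 * rng.standard_normal(sources.shape)

# Recover the independent components from the mixtures alone.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(mixed)
```

Each recovered component closely matches one original source (up to sign and scale, which ICA cannot determine). With real in-ear data the mixtures are far less well conditioned, which is exactly why the limited microphone spacing mentioned above is a concern.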
Finally, the research team is also exploring other types of sensors to facilitate signal separation. A photoplethysmography (PPG) sensor, for example, could detect heartbeats and breathing. An inertial measurement unit (IMU), in addition to detecting breathing, could signal head movements and provide a reference for removing, from the captured signal, noise created by movements of the wire connected to the in-ear device.
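One standard way to exploit such a motion reference is adaptive noise cancellation. The least-mean-squares (LMS) sketch below is a generic illustration on simulated data, not the team’s implementation: it learns the part of the microphone signal that is predictable from an IMU channel and subtracts it.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=16, mu=0.005):
    """Adaptive (LMS) noise canceller: learns the component of `primary`
    that is predictable from `reference` (e.g. an IMU channel) and
    subtracts it, leaving the body sounds."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1 : n + 1][::-1]  # current + past reference
        cleaned[n] = primary[n] - w @ x              # subtract estimated noise
        w += 2 * mu * cleaned[n] * x                 # LMS weight update
    return cleaned

# Simulated setup: a slow body sound plus wire noise correlated with IMU motion.
rng = np.random.default_rng(1)
fs = 1000
t = np.arange(0, 4, 1 / fs)
body = np.sin(2 * np.pi * 1.5 * t)                      # body sound
imu = rng.standard_normal(t.size)                       # IMU motion reference
wire_noise = np.convolve(imu, [0.8, 0.3], mode="same")  # motion-induced noise
cleaned = lms_cancel(body + wire_noise, imu)
```

Because the body sound is uncorrelated with the IMU reference, the filter converges toward cancelling only the motion-induced noise, leaving the physiological signal largely intact.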
Although recognizing emotions through speech is part of a very active field of research, signals captured in the ear could support these interpretations. Indeed, in addition to intonation and word choice, heartbeats, breathing, and other physiological signals can provide a lot of information about emotional states. Preliminary studies, including a paper currently undergoing peer review, have already shown that stress can be determined through the heartbeat detected in the ear. The research team will attempt to combine speech and in-ear sound signals to refine emotion recognition.
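As a small illustration of the kind of feature such studies build on, the sketch below estimates heart rate from a cleaned heartbeat waveform by detecting beat peaks and averaging the inter-beat intervals. The synthetic waveform and the thresholds are illustrative assumptions, not data or parameters from the Chair’s studies:

```python
import numpy as np
from scipy.signal import find_peaks

def heart_rate_bpm(heartbeat_signal, fs):
    """Estimate mean heart rate from a cleaned heartbeat waveform by
    detecting beat peaks and averaging the inter-beat intervals.
    The detection thresholds are illustrative assumptions."""
    peaks, _ = find_peaks(
        heartbeat_signal,
        distance=int(0.4 * fs),                 # at most ~150 bpm
        height=0.5 * np.max(heartbeat_signal),  # ignore small ripples
    )
    ibi = np.diff(peaks) / fs                   # inter-beat intervals (s)
    return 60.0 / np.mean(ibi)

# Synthetic 72-bpm pulse train (illustrative only).
fs = 250
t = np.arange(0, 10, 1 / fs)
signal = np.exp(-((t % (60 / 72)) ** 2) / 0.001)
bpm = heart_rate_bpm(signal, fs)
```

Beat-to-beat interval statistics of this kind (heart rate and its variability) are among the physiological markers commonly associated with stress and emotional state.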
The ultimate goal of this research is the implementation of personalized models. Tracking the progression of each individual’s own signals is one approach that would enable the onset of many diseases to be detected, and interventions made, at the optimal time.
Since this research involves the collection of highly personal data, the team is committed to respecting the principles set out in the Montréal Declaration for a Responsible Development of Artificial Intelligence.
Rachel Bouserhal is a professor in the Department of Electrical Engineering at ÉTS. She specializes in signal processing and machine learning.
Program: Electrical Engineering
Research laboratories: LATIS – Biomedical Information Processing Laboratory; GRAM – Acoustics Research Group of Montréal