
'Smart' hearing aids filter out chatter in crowded room

People wearing hearing aids often struggle to tell voices apart in a crowded environment. Brain-controlled assistive hearing devices might help.

'It can be tiring and exhausting to focus' on hearing speech among crowd noise

Hearing aids typically amplify all sounds. (Stu Mills/CBC)

People wearing hearing aids often struggle to differentiate between speakers in a crowded environment, but a small experiment suggests that brain-controlled assistive hearing devices could detect which voice the user is paying attention to and enhance it.

To do this, the devices would need to separate the individual voices in the room, then decode the user's brainwaves to identify the one the user is paying the most attention to, the study authors write in the journal Science Advances.
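In code, that two-stage idea might look something like the Python sketch below. This is only a conceptual outline: `separate_speakers` and `decode_attended_index` are hypothetical placeholders for the speech-separation network and brainwave decoder the researchers describe, not functions from the study itself.

```python
import numpy as np

def smart_hearing_aid_step(mixture, eeg, separate_speakers,
                           decode_attended_index, gain_db=9.0):
    """One conceptual processing step of the proposed device.

    mixture -- mono audio frame containing several talkers
    eeg     -- concurrently recorded neural activity
    separate_speakers, decode_attended_index -- placeholders for
        the separation network and the brainwave decoder
    """
    # Stage 1: split the acoustic mixture into per-talker streams.
    streams = separate_speakers(mixture)

    # Stage 2: decode which stream the listener is attending to.
    attended = decode_attended_index(streams, eeg)

    # Re-mix the scene with the attended talker amplified.
    gain = 10.0 ** (gain_db / 20.0)
    out = sum(gain * s if i == attended else s
              for i, s in enumerate(streams))
    return out / max(1.0, np.max(np.abs(out)))  # simple peak normalization
```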

"It comes down to the problem of hearing speech among the noise, which is also a problem for people who have normal hearing. It can be tiring and exhausting to focus," said senior author Nima Mesgarani, a researcher at Columbia University in New York City.

Hearing aids typically amplify all sounds. In a noisy environment, the challenge is separating the different sound sources and identifying the speaker who should be amplified, Mesgarani said. Although some devices have found ways to suppress background noise, they can't yet effectively single out specific speakers in a conversation.

"When you're focusing on one person who's speaking, your brain filters out the other sources and only 'sees' that," he told Reuters Health in a phone interview. "If it's possible to use brainwaves for translational applications, it could change everything."

Mesgarani and colleagues write about the possibilities and challenges around this process, called auditory attention decoding. Importantly, smart hearing aids would need to be able to decode quickly in a nonintrusive way, even if speakers are seated close together.

Some research has focused on techniques that require the user to already be familiar with a known speaker, such as a family member or close friend, the authors note.

The study team proposes a new algorithm that could separate unfamiliar speakers in a multi-talker situation and then compare the spectrogram, or audio pattern, of every speaker with a "reconstructed" spectrogram of the voice to which the listener's brain is giving the most attention.
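The comparison step can be illustrated concretely. In attention-decoding research, similarity between time-frequency patterns is often measured with a Pearson correlation; the sketch below assumes that metric and pre-computed spectrogram arrays, and the function name `pick_attended` is illustrative rather than the authors' actual code.

```python
import numpy as np

def pick_attended(speaker_spectrograms, reconstructed):
    """Return the index of the speaker whose spectrogram best matches
    the one reconstructed from the listener's brain activity.

    speaker_spectrograms -- list of 2-D arrays (frequency x time),
                            one per separated talker
    reconstructed        -- 2-D array decoded from neural responses
    """
    target = reconstructed.ravel()
    # Correlate each talker's flattened time-frequency pattern
    # against the brain-decoded target.
    scores = [float(np.corrcoef(spec.ravel(), target)[0, 1])
              for spec in speaker_spectrograms]
    return int(np.argmax(scores)), scores
```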

Improving Siri and Alexa?

Researchers tested the algorithm with three epilepsy patients who were already planning to undergo surgery to implant brain electrodes for measuring neural activity related to their condition. All three volunteers had normal hearing.

During the tests, the volunteers listened to both single-talker and multi-talker sound samples that included four stories lasting about three minutes each. During the multi-talker experiment, they were instructed to focus on one speaker and ignore the other.

The authors found that the matches between a spectrogram of the voice telling the story and the reconstructed pattern from the user's brain responses were not perfect, but they say the differences shouldn't affect the decoding accuracy.
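A toy numerical check (entirely synthetic data, not the study's) shows why an imperfect reconstruction can still decode correctly: the decision only requires the attended speaker's spectrogram to correlate better with the reconstruction than the competing speakers' do, not to match it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
talkers = [rng.standard_normal((64, 300)) for _ in range(2)]  # two spectrograms

# Suppose the listener attends to talker 0; the brain-decoded
# reconstruction matches it only loosely (signal plus heavy noise).
reconstruction = talkers[0] + 2.0 * rng.standard_normal((64, 300))

scores = [np.corrcoef(t.ravel(), reconstruction.ravel())[0, 1]
          for t in talkers]
print(np.argmax(scores), scores)  # talker 0 still wins despite the noise
```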

In addition to helping hearing-impaired users, the technology might one day be useful to anyone trying to pick out and amplify a single speaker in a noisy environment, they note.

"The challenge now is being able to record these brainwaves without invasive devices, but researchers are exploring ways to put electrodes on the head, around the ear or even inside the ear," Mesgarani said.

Still, wearable devices tend to have limited computational power, the study team writes. New hardware developed to run deep neural network models may make it possible to decode a listener's focus, but often at lower speeds than a real-time device would need.

"As the technology develops, this could go beyond hearing aids and improve the performance of voice-controlled devices such as Siri or Alexa," said Sina Miran of the University of Maryland at College Park, who wasn't involved in the study.

"Challenges still exist, but thanks to recent advances in machine learning, I think we'll see smart hearing aids in the next five years," he said in a phone interview. "Just like we're seeing devices that can monitor sleep, stay tuned for exciting news about hearing."