AI expert Shyam Gollakota of the University of Washington and his team of researchers have found a way to use artificial intelligence for real-time, selective noise cancellation that removes specific sounds without altering the audio the headphones are playing. He presented the idea, along with a working prototype, on May 16 at a joint meeting of the Acoustical Society of America and the Canadian Acoustical Association.
Gollakota and his team trained a smartphone-driven neural network to identify and filter 20 categories of everyday environmental sound, such as sirens and alarm clocks. The user selects a category on the smartphone, and the system then filters that sound out of the incoming audio. This would make headphones far more useful in the many scenarios where environmental sound simply cannot be avoided.
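The team's actual system uses a trained neural network for this separation; as a much simpler stand-in that illustrates the "pick a category, filter it out" workflow, the sketch below removes one sound from a mixture with a fixed frequency-band mask. The `CATEGORY_BANDS` mapping and the function names are invented for illustration.

```python
import numpy as np

# Hypothetical mapping from sound category to a dominant frequency band (Hz).
# The real system uses a trained neural network, not fixed bands.
CATEGORY_BANDS = {
    "siren": (600.0, 1600.0),
    "alarm_clock": (2000.0, 4000.0),
}

def filter_category(signal, sample_rate, category):
    """Zero out the frequency band associated with the chosen category."""
    lo, hi = CATEGORY_BANDS[category]
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Demo: mix a 440 Hz "birdsong" tone with a 1000 Hz "siren" tone,
# then ask the filter to remove the siren category.
sr = 16000
t = np.arange(sr) / sr
birds = np.sin(2 * np.pi * 440 * t)
siren = np.sin(2 * np.pi * 1000 * t)
cleaned = filter_category(birds + siren, sr, "siren")

spec = np.abs(np.fft.rfft(cleaned))
freqs = np.fft.rfftfreq(sr, d=1.0 / sr)
print(spec[np.argmin(np.abs(freqs - 1000))] < 1.0)   # siren energy removed
print(spec[np.argmin(np.abs(freqs - 440))] > 1000)   # birdsong intact
```

A static band mask obviously cannot tell a siren from a violin in the same register, which is precisely why the team reached for a learned model instead.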
"Imagine you are in a park, admiring the sounds of chirping birds, but then you have the loud chatter of a nearby group of people who just can't stop talking," said Gollakota. "Now imagine if your headphones could grant you the ability to focus on the sounds of the birds while the rest of the noise just goes away. That is exactly what we set out to achieve with our system."
In the prototype, microphones attached to both sides of the headphone earcups connect via USB to an OrangePi board, which feeds processed audio back to the headphones through its audio jack. Judging by the board's layout, it is likely an OrangePi 5B, built around the Rockchip RK3588S SoC: an eight-core 64-bit processor with a built-in neural processor rated at 6 TOPS, which performs the real-time filtering. The smartphone likely connects to the Pi board wirelessly, letting the user make specific environmental choices. OrangePi has been actively producing boards with integrated neural chips and recently worked with Huawei to create a development board.
AI used effectively to enhance user experience
The words 'Artificial Intelligence' are becoming ever more associated with audio gear, but applying AI to noise cancellation is a feature that will appeal to many users. It requires a neural network trained to target only external sounds, ensuring it never dampens the audio actually being played through the headphones, while retaining the ability to learn and improve over time.
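A minimal sketch of that architectural constraint, assuming the filter runs only on the microphone path: the playback signal is mixed in untouched, so no amount of environmental filtering can dampen it. The function names and the trivial `mute_all` stand-in for the neural filter are invented for illustration.

```python
import numpy as np

def mix_output(playback_chunk, mic_chunk, suppress_fn):
    """Apply the (hypothetical) AI filter only to the microphone path.

    The playback signal passes through untouched, so filtering the
    environment can never dampen the music the user is hearing.
    """
    residual_environment = suppress_fn(mic_chunk)  # neural filter stand-in
    return playback_chunk + residual_environment

# Toy stand-in for the neural filter: suppress everything it hears.
mute_all = lambda mic: np.zeros_like(mic)

playback = np.linspace(-1, 1, 8)       # pretend music samples
mic = np.random.uniform(-1, 1, 8)      # pretend environmental noise

out = mix_output(playback, mic, mute_all)
print(np.allclose(out, playback))      # playback path untouched: True
```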
Since the filtering is performed in real time, in under a hundredth of a second, the processing has to happen on a locally connected device rather than on a cloud server, which makes NPU-equipped Pi-style boards ideal for the job. What remains is designing a PCB with an NPU small enough to integrate into the headphones themselves. It also shows that this processing can be done by any computing device with an AI accelerator, possibly even with existing headphones on a capable system, provided there is a microphone to pick up background sound.
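To get a feel for the 10 ms budget implied by "under a hundredth of a second," the sketch below times a cheap stand-in filter on 10 ms chunks of 44.1 kHz audio. The real system runs a neural network on the board's NPU, which this toy FFT round trip does not model; it only shows how a real-time budget is checked.

```python
import time
import numpy as np

SAMPLE_RATE = 44100
CHUNK = 441          # 10 ms of audio at 44.1 kHz: the real-time budget

def toy_filter(chunk):
    """Stand-in for the on-device neural filter: an FFT round trip."""
    return np.fft.irfft(np.fft.rfft(chunk), n=len(chunk))

chunk = np.random.uniform(-1, 1, CHUNK)
start = time.perf_counter()
for _ in range(100):
    toy_filter(chunk)
elapsed_per_chunk = (time.perf_counter() - start) / 100

# Real-time operation means each 10 ms chunk must be processed in < 10 ms,
# or the output audio falls behind the input and glitches.
print(elapsed_per_chunk < 0.010)
```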
The team believes this technology can be implemented on audio devices and is ready for commercialization. If the next generation of audio headgear integrates it, it could meaningfully enhance the listening experience and bring real innovation to noise cancellation. As predicted in an IBM blog post, AI will likely play an important role beyond active noise cancellation and equalization, so it is only a matter of time before we hear many more such innovations in the audio space.