Leap ears: AI is storming the headphones space
Earphones that tune out voices but not birdsong; traffic but not emergency sirens. Earplugs that are also hearing aids... A hushed revolution is underway.
Earphone-makers folded in a mic, took away the wires, added noise cancellation and some admittedly mystifying controls.
Where will earphones go next? The segment has been rather quietly busy.
There are plans for noise-cancellers to let parts of the audio world back in, but selectively. Plans for buds to act as hearing tests and part-time hearing aids. AI-led programs are aiming to sift through ambient sounds, “as a human brain would”, and take a call on what elements to highlight or suppress.
The big shift underway today involves selectively reversing noise-cancellation.
Experimental new hearables are using deep-learning models and neural networks (machine-learning algorithms loosely modelled on the brain) to let listeners shift seamlessly between the real world around them and the virtual worlds they immerse themselves in through their devices.
“As with virtual reality and augmented reality, there is a lot of augmenting going on with audio. The aim, for many companies, is to build augmented-audio products,” says technology analyst Kashyap Kompella.
This would mean, for instance, that a hiker could opt to hear birdsong, but nothing else, above their music; or register an ambulance siren sharply over all other sounds.
The idea is to be able to teach “noise cancelling systems who and what we want to hear,” says Bandhav Veluri, a PhD student in computer science at the University of Washington.
He and other researchers at the University of Washington and Carnegie Mellon University have been working in this field of “semantic hearing”.
“Neural networks are good at categorising data,” Veluri says. “Whether it’s cats mewing or humans speaking, the networks are able to recognise clusters of sound.” This capability can be used to teach them what clusters to suppress, and which kinds of sounds to allow in.
Listeners would potentially be able to customise the settings: someone with a pet may want all cat sounds allowed in; someone else living near a hospital may want all sirens tuned out.
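In outline, the logic looks something like this. The sketch below is a minimal one in Python, with a stand-in classifier in place of any real product's neural network; the frame size, labels and allow-list are purely illustrative.

```python
import numpy as np

FRAME = 1024  # samples per frame; an illustrative, not real-world, frame size

def classify_frame(frame: np.ndarray) -> str:
    """Placeholder for a neural-network sound classifier.

    A real system would run a trained model here; this stub keys off signal
    energy purely so the example executes."""
    return "birdsong" if frame.std() < 1.0 else "traffic"

def semantic_filter(audio: np.ndarray, allow: set) -> np.ndarray:
    """Mute every frame whose predicted sound cluster is not on the allow-list."""
    out = audio.copy()
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        frame = audio[start:start + FRAME]
        if classify_frame(frame) not in allow:
            out[start:start + FRAME] = 0.0  # suppress this cluster of sound
    return out

# A hiker's preference: birdsong and sirens come through, everything else is cut.
mic = np.random.randn(10 * FRAME).astype(np.float32)
filtered = semantic_filter(mic, allow={"birdsong", "siren"})
```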
We can expect such features to show up on our devices in the next one or two years, says Kompella.
Now look…
In an allied project, Veluri and his fellow researchers are looking to use movement as a cue too.
An experimental system called Look Once to Hear allows the listener to turn their head in the direction of a person, in order to hear their voice over other ambient sounds. (There are a number of restaurants we could name where this would be so helpful.)
“With existing techniques, if we want to hear a specific person, we need a voice sample,” says Veluri. “The program then maps voice characteristics in real time, against the fixed set of voice samples in its memory, to identify which one to tune into.”
With the experimental program, the aim is for direction of the wearer’s head to offer the cue instead, with the system able to single out completely unknown speakers too.
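One way to picture the enrolment step (a rough sketch in Python, not the Look Once to Hear system itself: the embedding function below is a placeholder for a trained speaker-encoder, and the “beam” is simply an average of the two ear signals):

```python
import numpy as np

FRAME = 2048  # one fixed-length analysis window, so embeddings are comparable

def embed_voice(frame: np.ndarray) -> np.ndarray:
    """Stand-in for a neural speaker-embedding model (hypothetical)."""
    spectrum = np.abs(np.fft.rfft(frame, n=FRAME))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def enrol_on_look(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Average the two ear signals: sound from straight ahead (where the wearer
    is looking) adds up coherently, off-axis sound partly cancels, so the
    average acts as a crude 'beam' toward the person being looked at."""
    return embed_voice(0.5 * (left[:FRAME] + right[:FRAME]))

def matches_target(frame: np.ndarray, target: np.ndarray, thresh: float = 0.5) -> bool:
    """Pass later frames only if they still resemble the enrolled voice."""
    return float(np.dot(embed_voice(frame), target)) > thresh

# Enrolment happens the moment the wearer turns towards a speaker...
left = np.random.randn(FRAME).astype(np.float32)
right = np.random.randn(FRAME).astype(np.float32)
target = enrol_on_look(left, right)
# ...after which incoming frames are kept or muted against that one-shot profile.
keep = matches_target(np.random.randn(FRAME).astype(np.float32), target)
```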
What happens in a crowded room? The next step will be to fine-tune the system for such scenarios, Veluri says.
Researchers at the University of Washington are working to teach the system how to, essentially, follow a conversation. It could use turn-taking patterns to recognise which voices are part of the chat, Veluri says.
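What that could look like, very loosely: a speculative sketch that scores turn-taking over made-up diarisation output. The segment timings, speaker labels and thresholds are all invented, and this is not the Washington team's method.

```python
from collections import Counter

# Made-up diarisation output: (speaker, start_sec, end_sec) segments.
segments = [
    ("A", 0.0, 2.1), ("B", 2.3, 4.0), ("A", 4.2, 6.0),
    ("C", 4.5, 9.0),   # C talks over everyone, from another table
    ("B", 6.2, 7.5), ("A", 7.8, 9.5), ("B", 9.7, 11.0),
]

def conversation_partners(segments, target="A", max_gap=1.0, min_turns=2):
    """Speakers whose turns repeatedly begin just after the target stops speaking."""
    ordered = sorted(segments, key=lambda s: s[1])
    follows = Counter()
    for prev, nxt in zip(ordered, ordered[1:]):
        gap = nxt[1] - prev[2]
        if prev[0] == target and nxt[0] != target and 0 <= gap <= max_gap:
            follows[nxt[0]] += 1
    return {spk for spk, n in follows.items() if n >= min_turns}

print(conversation_partners(segments))  # {'B'}: B trades turns with A, C does not
```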
Stop…
What’s the furthest sound you can hear? Why weren’t you conscious of it a second ago?
Volume is only one of the cues the brain uses when listening. It is prioritising sounds, and working out a hierarchy among them, all the time.
Could headphones potentially “listen”, in this way, as humans do?
In February, researchers at Ohio State University developed a machine-learning model that prioritises input sounds based on “judgement”.
The aim is for the model to make reasonable assumptions, in real time, about which sounds are likely important. Early findings were published this year in the journal IEEE/ACM Transactions on Audio, Speech and Language Processing.
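In spirit, the output stage of such a system might look like the sketch below. This is an assumed pipeline, not the published Ohio State model: it presumes an upstream separator has already split the scene into sources, and that a trained network has supplied the “judgement” scores.

```python
import numpy as np

def remix_by_priority(sources, priority):
    """Weight each separated sound by its 0-to-1 priority score, then sum them."""
    length = max(len(s) for s in sources.values())
    mix = np.zeros(length, dtype=np.float32)
    for name, signal in sources.items():
        gain = priority.get(name, 0.0)  # sounds the model cannot place default to silence
        mix[: len(signal)] += gain * signal
    return mix

# Hypothetical scene: a separator has split the audio into labelled sources,
# and a model has scored each one for likely importance to this listener.
scene = {name: np.random.randn(16000).astype(np.float32)
         for name in ("speech", "siren", "air_conditioner")}
scores = {"speech": 1.0, "siren": 0.9, "air_conditioner": 0.1}
out = remix_by_priority(scene, scores)
```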
What makes this so challenging, of course, is that the act of listening is hugely subjective. “It depends on your hearing capabilities and on your hearing experiences,” Donald Williamson, an associate professor in computer science and engineering, said in a statement.
It is possible, in fact, that no one hears your world exactly as you do.
Listen…
Elsewhere, AI-enabled chips are changing how hearing aids work. Deep neural networks are sifting through layers of noise, to represent individual sounds more distinctly.
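Stripped of proprietary detail, the core denoising step in such chips can be pictured roughly as below: a generic spectral-masking sketch, in which a crude energy threshold stands in for the on-chip neural network (whose internals are not public) that decides which time-frequency bins to keep.

```python
import numpy as np

FRAME, HOP = 512, 256  # analysis window and hop, chosen only for illustration

def stft(x):
    """Short-time Fourier transform with a Hann window."""
    frames = [x[i:i + FRAME] * np.hanning(FRAME)
              for i in range(0, len(x) - FRAME, HOP)]
    return np.array([np.fft.rfft(f) for f in frames])

def enhance(noisy: np.ndarray) -> np.ndarray:
    """Keep only the time-frequency bins a mask marks as wanted sound."""
    spec = stft(noisy)
    mag = np.abs(spec)
    # Stand-in for the network's output: bins well above the noise floor pass.
    mask = (mag > 2.0 * np.median(mag, axis=0, keepdims=True)).astype(float)
    clean = mask * spec
    # Overlap-add the masked frames back into a waveform.
    out = np.zeros(len(noisy), dtype=np.float32)
    for i, row in enumerate(clean):
        out[i * HOP: i * HOP + FRAME] += np.fft.irfft(row, n=FRAME).astype(np.float32)
    return out

denoised = enhance(np.random.randn(16000).astype(np.float32))
```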
The Danish company Oticon’s More range of hearing aids is being called the first to incorporate such a network (the company has branded it the BrainHearing system). Prices start at $1,530 (about ₹1.3 lakh) for a single earpiece.
Companies such as Orka, Widex, Starkey, Signia and Phonak, meanwhile, are using AI and machine learning to enable their hearing aids to also monitor heart rate, blood pressure, step count, and issue fall alerts. Prices for these range from ₹25,000 to about ₹3 lakh.
Some mainstream products are crossing over into rudimentary-hearing-aid territory too.
Apple’s AirPods now come with a Live Listen feature that turns the paired iPhone into a remote microphone: place the phone near the source of the sound one wishes to hear, and the buds amplify it over background noise. This can double as a hearing aid for people with mild hearing loss.
Some AirPods also come with a hearing-test feature that can loosely identify levels of hearing ability through an audio test. And some, of course, famously offer fall detection, with sensors sending an alert to an emergency contact in case of such an event.
Here too, the sensors play a crucial role. They determine, for instance, whether an AirPod has fallen along with its wearer, as opposed to having simply bounced out of the ear and onto the floor (as happens only too often).