
Delhi-born MIT scholar’s AI-headset is one of Time’s 100 Best Inventions of 2020

Time described Arnav Kapur’s AlterEgo as something which “doesn’t read your thoughts, but it can enable you to communicate with your computer without touching a keyboard or opening your mouth”.

Science | Updated: Nov 22, 2020, 12:01 IST
hindustantimes.com | Edited by Meenakshi Ray
Hindustan Times, New Delhi
Kapur, a 25-year-old post-doctoral scholar at Massachusetts Institute of Technology (MIT), invented the device called AlterEgo at the MIT Media Lab. (MIT website photo)

Delhi-born Arnav Kapur’s artificial intelligence-enabled headset, which “augments human cognition and gives voice to those who have lost their ability to speak”, has been named one of the 100 Best Inventions of 2020 by Time. Kapur, a 25-year-old post-doctoral scholar at the Massachusetts Institute of Technology (MIT), invented the device, called AlterEgo, at the MIT Media Lab. It made the list in the experimental category.

Time described AlterEgo as something that “doesn’t read your thoughts, but it can enable you to communicate with your computer without touching a keyboard or opening your mouth”. To use the headset, “a non-invasive, wearable, peripheral neural interface”, for a simple task such as Googling the weather on a laptop, the wearer first formulates the question in their mind. “The headset’s sensors read the signals that formulation sends from your brain to areas you’d trigger if you had said the query aloud, like the back of your tongue and palate,” Time said.

The headset then carries out the task on the laptop through a web connection and relays the result through a bone-conduction speaker that only the wearer can hear. “Researchers found that the device’s prototype was able to understand its wearer 92% of the time. The interface is currently being tested in limited hospital settings, where it helps patients with multiple sclerosis and ALS to communicate,” Time said.
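AlterEgo’s own software is not published in this article, so the pipeline Time describes can only be illustrated loosely. The short Python sketch below is a conceptual stand-in, with every class, function and value hypothetical: windows of facial-sensor readings are decoded into a text query, the query is answered over a web connection, and the answer is returned through a bone-conduction speaker that only the wearer hears.

# Conceptual sketch only: all names and data here are hypothetical placeholders
# mirroring the pipeline described in the article, not MIT's actual code.
from dataclasses import dataclass
from typing import List


@dataclass
class SignalWindow:
    """A short window of neuromuscular readings from the face and jaw sensors."""
    samples: List[float]


class SilentSpeechRecognizer:
    """Stand-in for the trained model that maps signals to words (reported ~92% accuracy)."""

    def decode(self, windows: List[SignalWindow]) -> str:
        # A real system would run a classifier over the signal windows;
        # this stub simply returns a fixed example query.
        return "weather in new delhi"


def run_web_query(query: str) -> str:
    # Placeholder for the laptop-side lookup done over the web connection.
    return f"Results for '{query}': (example response)"


def bone_conduction_speak(text: str) -> None:
    # Placeholder for the bone-conduction speaker; only the wearer would hear this.
    print(f"[audio to wearer] {text}")


if __name__ == "__main__":
    windows = [SignalWindow(samples=[0.0] * 64)]       # fake sensor data
    query = SilentSpeechRecognizer().decode(windows)   # silent speech -> text query
    bone_conduction_speak(run_web_query(query))        # answer returned privately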

The project’s overview says its primary focus is to support communication for people with speech disorders, including conditions such as amyotrophic lateral sclerosis (ALS) and multiple sclerosis. “Beyond that, the system has the potential to seamlessly integrate humans and computers—such that computing, the Internet, and AI would weave into our daily life as a ‘second self’ and augment our cognition and abilities,” it said.

The system has been demonstrated and tested with a small vocabulary of words and short sentences. “The current system is a research prototype only and will require significantly more research and development before it could be deployed in real-life settings,” the overview adds.
