The AlterEgo, a headset created by researchers at MIT’s Media Lab, lets you talk without speaking. Electrodes pick up the neuromuscular signals in your jaw and face that are triggered by your internal voice, the voice inside your head when you read something. Those signals are fed to a machine learning system that associates particular signals with particular words, and replies come back through an earpiece that transmits vibrations through the bones of the face to the inner ear. The researchers’ goal is to make interacting with artificial intelligence assistants, like the Amazon Alexa, Apple HomePod or Google Home, less embarrassing and more intuitive. But the idea of using an AI assistant without verbalisation is also intriguing from a medical perspective.
One of the symptoms of my neurological disorder, Paraneoplastic Cerebellar Degeneration, is dysarthria, or difficulty speaking. The muscles in my mouth are weak, which makes pronunciation difficult; I can think much faster than I can speak. So for someone like me, being able to simply think a text message or email, or even carry on a basic conversation, could be incredibly beneficial. I wouldn’t have to struggle to speak clearly enough for a smart device to understand me.
My condition also affects my balance and coordination, which makes it difficult for me to walk or even type on a computer (my husband is helping me write this). With the AlterEgo I could turn the lights on and off, or adjust the temperature, without risking a fall. Paired with an AI assistant, the possibilities are endless: safety and convenience all in one.
This technology is a potential game-changer and could open up a whole new world to people living with disabilities.