The AlterEgo, a headset device created by researchers at MIT’s Media Lab, lets you talk without speaking. It uses electrodes to pick up neuromuscular signals in your jaw and face that are triggered by your internal voice, the voice inside your head when you read something. The signals are sent to a machine learning system that associates certain signals with certain words, and replies are sent through an earpiece that transmits vibrations through the bones of the face to the inner ear. The researchers’ goal is to make interacting with artificial intelligence assistants, like Amazon’s Alexa, Apple’s HomePod or Google Home, less embarrassing and more intuitive. But the idea of being able to use an AI assistant without verbalisation is also intriguing from a medical perspective.
One of the symptoms of my neurological disorder, Paraneoplastic Cerebellar Degeneration, is dysarthria, or difficulty speaking. The muscles in my mouth are weak, which