The system consists of a wearable headset whose electrodes pick up neuromuscular signals in the jaw and face that are triggered when users say words silently "in their head". The signals are then sent to a machine-learning system designed to correlate particular signals with particular words.
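MIT has not detailed the model itself, but the pipeline described (windows of electrode readings in, words out) can be illustrated. The following is a minimal, hypothetical sketch in Python: the feature extraction, the scikit-learn support-vector classifier and the five-word vocabulary are all assumptions for demonstration, not the team's actual method.

```python
# Illustrative sketch of a signal-to-word pipeline, not MIT's actual model.
# The features, classifier and vocabulary are assumptions for demonstration.
import numpy as np
from sklearn.svm import SVC

VOCABULARY = ["up", "down", "left", "right", "select"]  # hypothetical word set


def extract_features(window: np.ndarray) -> np.ndarray:
    """Reduce one (n_samples, n_electrodes) window of electrode readings to a
    feature vector: per-electrode mean absolute value and standard deviation."""
    return np.concatenate([np.abs(window).mean(axis=0), window.std(axis=0)])


def train_word_classifier(windows: list, labels: list) -> SVC:
    """Fit a classifier correlating signal windows with word indices."""
    X = np.stack([extract_features(w) for w in windows])
    return SVC(kernel="rbf").fit(X, labels)


def predict_word(clf: SVC, window: np.ndarray) -> str:
    """Map a fresh signal window to the most likely vocabulary word."""
    index = clf.predict(extract_features(window)[None, :])[0]
    return VOCABULARY[index]
```

A model of this kind would need to be fitted to each wearer, which is consistent with the per-subject calibration step described in the usability study below.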
The device also includes a pair of ‘bone-conduction’ headphones, which transmit vibrations through the bones of the face to the inner ear, bypassing the ear canal. This enables the system to convey information to the user without interrupting conversation or “otherwise interfering with the user’s auditory experience.”
According to MIT, the device has a variety of potential use cases, including high-noise environments such as airport runways, as well as special or covert operations.
The graduate student who led the development of the system, Arnav Kapur, said: "The motivation for this was to build an intelligence-augmentation device. Our idea was, could we have a computing platform that's more internal, melding human and machine in some ways, and that feels like an internal extension of our own cognition?"
Researchers have conducted a usability study with a prototype wearable interface, in which ten subjects each spent about 15 minutes customising the application to their own neurophysiology, then another 90 minutes using it to execute computations. The system had an average transcription accuracy of about 92 percent.
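The article does not say how accuracy was scored; for a per-word transcription task, a straightforward metric is the fraction of prompted words the system transcribes correctly, averaged across subjects. The sketch below illustrates that metric with hypothetical per-subject figures (not the study's actual data) chosen to average to roughly 92 percent.

```python
# Illustrative only: per-word transcription accuracy as the fraction of
# correct predictions, averaged across subjects. How the study was actually
# scored is not specified in this article.
def transcription_accuracy(predicted: list[str], expected: list[str]) -> float:
    """Fraction of prompted words transcribed correctly."""
    assert len(predicted) == len(expected)
    correct = sum(p == e for p, e in zip(predicted, expected))
    return correct / len(expected)


# Hypothetical per-subject accuracies for a ten-subject study, averaged
# to a single headline figure (~92 percent here).
per_subject = [0.95, 0.91, 0.93, 0.90, 0.94, 0.92, 0.89, 0.93, 0.91, 0.92]
print(f"average accuracy: {sum(per_subject) / len(per_subject):.0%}")
```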