South Korean researchers have developed a silent speech recognition system that can identify words by tracking facial movements.
According to euronews.next, the device is intended to assist deaf people, who can sometimes find it difficult to interact with others using sign language.
However, it might also be helpful to the military or the police in situations where radio transmission is challenging due to background noise.
Silent speech interfaces use strain sensors to detect a person’s movements as they mouth words.
The new device tracks these facial motions and translates them into words using a deep learning algorithm, according to the report.
“The strain sensor attached to the face stretches and shrinks according to the skin’s stretchiness when a person speaks. And the electric properties of the strain sensors change accordingly,” Taemin Kim, of Yonsei University School of Electrical and Electronic Engineering, was quoted as saying.
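The effect Kim describes – resistance changing as the sensor stretches – is commonly modelled with a gauge factor. A minimal sketch of that relationship (the gauge factor and strain values here are illustrative assumptions, not figures from the study):

```python
# Resistance change of a strain gauge: dR/R = GF * strain,
# where GF (the gauge factor) is a property of the sensor material.

def resistance_change(base_resistance, gauge_factor, strain):
    """Return the sensor's resistance after applying a given strain."""
    return base_resistance * (1 + gauge_factor * strain)

# Illustrative values: a 100-ohm sensor with gauge factor 2,
# stretched by 1% as the skin moves while mouthing a word.
print(resistance_change(100.0, 2.0, 0.01))  # 102.0
```

Reading off these small resistance changes over time is what gives the deep learning model a signal to work with.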
The ultra-thin sensors are resistant to sweat and sebum, and the system recognised a set of 100 words with nearly 88 per cent accuracy – a performance the team describes as unprecedented in its findings, published in Nature Communications.
While silent speech recognition sensors aren’t new, the team says the ones they’ve designed are hundreds of times smaller than existing ones – under 8 micrometres thick – making their system highly scalable.
In other words, to track movements and recognise words more accurately, they would simply need to add more sensors, like pixels in an image, the report said.
“To classify and recognise more words, a higher resolution of information is needed. And that is why researchers today are trying to develop a high-resolution silent speech system that combines our wearable strain sensor with a highly integrated circuit that’s normally used in display or semiconductor production,” Kim added.
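The "pixels in an image" analogy can be illustrated with a toy classifier: each word produces a characteristic pattern across a grid of strain sensors, and a denser grid gives the model more information to separate similar words. The nearest-centroid matcher below is a deliberately simple stand-in for the team's deep learning model, and all sensor readings and word templates are made-up numbers:

```python
# Each word is represented by a template: the typical pattern of strain
# readings across a flattened grid of facial sensors. A new reading is
# assigned to the word whose template it most closely matches.

def classify(reading, templates):
    """Match a flattened grid of strain readings to the closest word template."""
    def dist(a, b):
        # Squared Euclidean distance between two readings.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda word: dist(reading, templates[word]))

# Hypothetical 2x2 sensor grid, flattened, with one template per word.
templates = {
    "hello":  [0.8, 0.1, 0.7, 0.2],
    "thanks": [0.1, 0.9, 0.2, 0.8],
}
print(classify([0.75, 0.15, 0.65, 0.25], templates))  # hello
```

A higher-resolution grid simply means longer vectors per reading – more "pixels" for the model to distinguish between words, which is the scaling the researchers are pursuing.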