Smart glasses that understand commands just from the shape of the mouth

EchoSpeech glasses that understand commands from the shape of the mouth alone, without sound, via ultrasonic sensing [Photo: Cornell University]

These remarkable glasses are called EchoSpeech, a high-tech device created by researchers at Cornell University. As a visual aid they are no different from ordinary glasses. What sets them apart is that they recognize what the wearer is saying by analyzing the shape of the mouth: the frame carries small speakers and microphones that emit and receive sound waves. The glasses understand their owner's commands even when whispered or mouthed silently.

The developers plan to present their findings this month at the ACM Conference on Human Factors in Computing Systems (CHI), held in Germany.

“For people who cannot vocalize, this silent-speech recognition technology could be an excellent input device when connected to a voice synthesizer,” said Ruidong Zhang, a researcher in information science at Cornell University.

In a performance test with 12 participants, EchoSpeech recognized 31 distinct spoken commands as well as strings of consecutively pronounced digits, with error rates below 10%.

EchoSpeech attaches a speaker and a microphone beside the left and right lenses. Sound waves projected from the speaker reflect off the lips and reach the microphone on the opposite side. The speaker emits roughly 20-kilohertz waves, close to ultrasonic; when a wave hits the lips, it is reflected and diffracted into a distinctive pattern. The microphone captures these echo patterns, and the system works out which command each one represents. In effect, it operates like a miniature sonar.
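The sonar-style principle described above can be sketched in a few lines: transmit a known near-ultrasonic probe signal, then cross-correlate the microphone input against it so that peaks mark reflections at different path lengths. The sample rate, chirp band, frame size, and delay below are illustrative assumptions, not the actual EchoSpeech parameters.

```python
import numpy as np

FS = 48_000              # assumed audio sample rate (Hz)
F0, F1 = 19_000, 21_000  # illustrative near-ultrasonic sweep band (Hz)
FRAME = 600              # samples per transmitted frame (~12.5 ms)

def chirp(n=FRAME, fs=FS, f0=F0, f1=F1):
    """Linear frequency sweep used as the transmitted probe signal."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * t[-1])))

def echo_profile(recorded, probe):
    """Cross-correlate the mic signal with the probe: peaks correspond to
    reflections at different round-trip delays, so the profile changes as
    the lips move between mouth shapes."""
    return np.abs(np.correlate(recorded, probe, mode="valid"))

# Simulated mic frame: an attenuated, delayed copy of the probe plus noise.
rng = np.random.default_rng(0)
probe = chirp()
delay = 40  # round-trip delay to the lips, in samples (illustrative)
mic = np.zeros(2 * FRAME)
mic[delay:delay + FRAME] += 0.3 * probe
mic += 0.01 * rng.standard_normal(mic.size)

profile = echo_profile(mic, probe)
print("strongest echo at sample offset:", int(np.argmax(profile)))
```

A classifier would then look at how this profile deforms over time rather than at any single peak.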

Machine learning is used to infer words from the sound waves: an AI model is trained to recognize specific commands. The model can also be fine-tuned to an individual user to improve performance, which takes only about six to seven minutes.

The acoustic sensors sit on a microcontroller with an audio amplifier, normally connected to a laptop via a USB cable. In a real-time demonstration, the team also unveiled a low-power version of EchoSpeech that communicates with a smartphone over Bluetooth. The glasses wirelessly sent commands to an Android phone, which could then perform actions such as playing music, controlling smart devices, and invoking a voice assistant.
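On the phone side, turning a recognized command string into an action can be as simple as a dispatch table. This is a generic illustration of that pattern, not the actual Android implementation; the command names and handlers are invented.

```python
# Hypothetical dispatch of recognized commands to phone actions.
def on_command(command: str, handlers: dict) -> str:
    """Look up the handler for a recognized command and run it,
    falling back to an error message for unknown commands."""
    action = handlers.get(command)
    return action() if action else f"unknown command: {command}"

handlers = {
    "play music": lambda: "music playing",
    "next track": lambda: "skipped to next track",
    "lights on":  lambda: "smart light turned on",
}

print(on_command("play music", handlers))
print(on_command("volume up", handlers))
```

Keeping recognition and dispatch separate like this is what lets the same glasses drive music playback, smart devices, and a voice assistant without changes to the model.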

“Because the data is processed on the user’s smartphone rather than sent to an external cloud, sensitive personal information never leaves the device,” said François Guimbretière, a professor of information science at Cornell University. Audio data can also generally be transmitted and processed with far less energy than video or image files.
