The wearable AI that restores the voice in case of vocal cord problems



Oncological diseases and many other disorders affecting the vocal cords can cause the loss of the voice, for more or less prolonged periods or even permanently. This is not a rare occurrence: an estimated 30% of the world’s population suffers from dysphonia (voice alteration) at some point in their lives. And the solutions currently available are extremely limited. A new study published in Nature Communications aims to change that, presenting a brand-new wearable device that can be applied to the skin of the throat without discomfort, recognizes the movements of the laryngeal muscles and, using machine learning, reproduces the words the wearer would have liked to pronounce.

Voice problems

Disorders that affect the voice, as we said, are relatively common. In many cases these are treatable problems, but even the most modern therapeutic approaches, whether surgery, physiotherapy or speech therapy, take time to produce results. On average, patients must wait between three and twelve months to fully regain their vocal abilities. And in some cases, especially when invasive surgical techniques are necessary, a recovery period is required during which use of the voice is forbidden.

“Existing solutions, such as manual laryngophones or tracheoesophageal punctures, are often uncomfortable and invasive,” explains Jun Chen, a bioengineer at the University of California, Los Angeles who led the development of the new device. “This new device offers a wearable, non-invasive option that can help patients with voice disorders communicate in the run-up to treatment and during the post-operative recovery period.”

The device

The device developed by Chen’s group is a two-part prototype. The first component is a self-powered sensor that detects the movements of the laryngeal muscles and converts them into high-fidelity electrical signals, which a machine learning algorithm translates into words. The second is an actuator that produces the sound corresponding to the words identified in the previous step.
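
To make the pipeline concrete, here is a minimal Python sketch of the two-stage idea described above: a feature vector extracted from the sensor’s electrical signal is classified into one of a small set of candidate sentences, and the recognized text is handed to the playback stage. The model (a generic scikit-learn SVM), the feature dimensions, the function names and the placeholder sentences are illustrative assumptions, not the authors’ actual implementation.

```python
# Sketch of the two-stage pipeline: sensor signal -> ML classifier -> speech output.
# Everything below (model choice, features, sentences) is illustrative, not the paper's method.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a feature vector extracted from the
# electrical signal produced by laryngeal muscle movement; labels are sentence IDs.
X_train = rng.normal(size=(200, 64))      # 200 recordings, 64 features each
y_train = rng.integers(0, 5, size=200)    # 5 candidate sentences, as in the tests

classifier = SVC(kernel="rbf")            # stand-in for the paper's ML model
classifier.fit(X_train, y_train)

# Placeholder phrases; the study's actual sentences are not listed in the article.
SENTENCES = [
    "Placeholder sentence one.",
    "Placeholder sentence two.",
    "Placeholder sentence three.",
    "Placeholder sentence four.",
    "Placeholder sentence five.",
]

def recognize_and_speak(signal_features: np.ndarray) -> str:
    """Map a muscle-movement signal to a sentence, then hand it to the playback stage."""
    sentence_id = int(classifier.predict(signal_features.reshape(1, -1))[0])
    text = SENTENCES[sentence_id]
    # In the real device, an actuator on the same patch plays the audio;
    # here we simply return the recognized text.
    return text

print(recognize_and_speak(rng.normal(size=64)))
```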

Everything is housed in a silicone patch a few millimeters thick and weighing just seven grams, which is attached to the throat with biocompatible adhesive tape that can be replaced daily for continued use. In Chen’s experiments, the device was tested on eight adults: each participant spoke five complex sentences, and the device recognized them with 95% accuracy. Going forward, the researchers plan to further expand the algorithm’s vocabulary and to begin the first clinical trials on patients with voice disorders.
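
For reference, the reported figure is an ordinary classification accuracy: the fraction of spoken sentences that the system mapped to the correct one. The numbers below are purely illustrative, since the article does not state how many utterances were recorded in total.

```python
def accuracy(predicted: list[int], actual: list[int]) -> float:
    """Fraction of utterances mapped to the correct sentence."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Illustrative only: if 38 out of 40 utterances were recognized correctly,
# the accuracy would be 38 / 40 = 95%.
print(f"{accuracy([1] * 38 + [0] * 2, [1] * 40):.0%}")
```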
