IIT, AIIMS Jodhpur develop talking gloves for people with speech disability

New Delhi, Updated on Nov 29, 2021 18:49 IST

The device, which costs less than INR 5,000, uses the principles of artificial intelligence (AI) and machine learning (ML) to automatically generate speech that is language independent.

Researchers at the Indian Institute of Technology (IIT) Jodhpur and the All India Institute of Medical Sciences (AIIMS) Jodhpur have developed low-cost ‘talking gloves’ for people with speech disabilities. The device, which costs less than INR 5,000, uses the principles of artificial intelligence (AI) and machine learning (ML) to automatically generate speech that is language independent, facilitating communication between people with and without speech disabilities.

Sumit Kalra from the Department of Computer Science and Engineering, IIT Jodhpur, said, “The language independent speech generation device will bring people back to the mainstream in today’s global era without any language barrier. Users of the device only need to learn once and they will be able to verbally communicate in any language with their knowledge.”

He further said, “Additionally, the device can be customised to produce a voice similar to the original voice of the patients, which makes it appear more natural while using the device.” 

How does it work? 

In the device, a first set of sensors, worn on a combination of the thumb, fingers and wrist of one hand of the user, generates electrical signals. These signals are produced by combinations of finger, thumb, hand and wrist movements. Similarly, a second set of sensors generates its own electrical signals.
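As an illustration only (the article does not describe the hardware interface), a wearable of this kind would typically sample the hand-worn sensors into a numeric feature vector before any AI or ML processing. The channel layout and the read_adc() helper below are hypothetical stand-ins, not the actual device driver:

    # Hypothetical sketch: sample one frame of glove sensor readings into a feature vector.
    # The channel layout and read_adc() are illustrative assumptions, not the real device API.

    SENSOR_CHANNELS = {
        "thumb": 0,
        "index": 1,
        "middle": 2,
        "ring": 3,
        "little": 4,
        "wrist": 5,
    }

    def read_adc(channel: int) -> float:
        """Placeholder for reading one analog-to-digital channel, normalised to 0.0-1.0."""
        raise NotImplementedError("Replace with the actual glove hardware driver.")

    def read_frame() -> list[float]:
        """Return one frame of readings, ordered thumb..wrist, for the signal processing unit."""
        return [read_adc(channel) for channel in SENSOR_CHANNELS.values()]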

These signals are received by a signal processing unit. Using AI and ML algorithms, the combinations of signals are translated into phonetics corresponding to at least one consonant and one vowel. An audio transmitter then generates an audio signal corresponding to the assigned phonetic, based on trained data about vocal characteristics stored in a machine learning unit.
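A minimal sketch of that processing stage is shown below, using a generic scikit-learn classifier and the pyttsx3 text-to-speech library purely as stand-ins; the article does not name the actual models, training data or speech synthesiser used by the researchers:

    # Hypothetical sketch: classify a sensor frame into a phonetic unit and speak it aloud.
    # The classifier, toy training data and pyttsx3 synthesiser are assumptions, not the
    # researchers' actual pipeline.

    from sklearn.ensemble import RandomForestClassifier
    import pyttsx3

    # Toy training data: one feature vector per gesture, labelled with a phonetic unit.
    X_train = [
        [0.9, 0.1, 0.1, 0.1, 0.1, 0.2],   # example gesture mapped to "ka"
        [0.1, 0.9, 0.1, 0.1, 0.1, 0.2],   # example gesture mapped to "ma"
    ]
    y_train = ["ka", "ma"]

    model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

    engine = pyttsx3.init()   # the voice could, in principle, be tuned to resemble the user's own

    def speak_gesture(frame: list[float]) -> str:
        """Map one sensor frame to a phonetic unit and render it as audio."""
        phonetic = model.predict([frame])[0]
        engine.say(phonetic)
        engine.runAndWait()
        return phonetic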

Following the rules of phonetics, generating audio signals that combine vowels and consonants produces speech, enabling people with speech disabilities to communicate audibly with others.

