Automatic Language Translation Software for Aiding Communication between Indian Sign Language and Spoken English Using LabVIEW
Yellapu Madhuri*, G.Anitha**
* 2nd year M.Tech, ** Assistant Professor
Department of Biomedical Engineering,
SRM University, Kattankulathur-603203, Tamilnadu, India
www.srmuniv.ac.in
Sign Language (SL) is the natural mode of communication for speech- and/or
hearing-impaired people. A sign is a movement of one or both hands,
accompanied by facial expression, that corresponds to a specific meaning.
This paper presents sign language translation software for automatic
translation of Indian Sign Language into spoken English and vice versa, to
assist communication between speech- and/or hearing-impaired people and
hearing people. It can be used by the deaf community as a translator when
interacting with people who do not understand sign language, removing the
need for an intermediary interpreter and allowing each party to communicate
in their natural way. The proposed software is a standalone executable
interactive application developed in LabVIEW that can run on any standard
Windows laptop or desktop, or an iOS mobile phone, using the device's camera,
processor and audio hardware. For sign-to-speech translation, the one-handed
SL gestures of the user are captured by the camera; vision analysis functions
are performed in the operating system and the corresponding speech output is
produced through the audio device. For speech-to-SL translation, the speech
input of the user is acquired by the microphone; speech analysis functions
are performed and the SL gesture picture corresponding to the speech input is
displayed. The lag time for translation is small because of parallel
processing, allowing near-instantaneous translation from finger and hand
movements to speech and from speech inputs to SL gestures. The system is
trained to translate one-handed SL representations of alphabets (A-Z) and
numbers (1-9) to speech, and 165 word phrases to SL gestures. The training
database of inputs can easily be extended to expand the system's
applications. The software does not require the user to wear any special hand
gloves. The results are found to be highly consistent and reproducible, with
fairly high precision and accuracy.
AIM:
To develop a mobile interactive application program for automatic
translation of Indian Sign Language into spoken English and vice versa, to
assist communication between Deaf people and hearing people. The SL
translator should be able to translate one-handed Indian Sign Language
finger-spelling input of alphabets (A-Z) and numbers (1-9) to spoken English
audio output, and 165 spoken English words to Indian Sign Language picture
display output.
OBJECTIVES:
•To acquire one-handed SL finger spelling of alphabets (A to Z) and numbers
(1 to 9) and produce spoken English audio output.
•To acquire spoken English word input and produce Indian Sign Language
picture display output.
•To create an executable file to make the software a standalone application.
•To implement the software and optimize the parameters to improve the
accuracy of translation.
•To minimize hardware requirements, and thus expense, while achieving high
precision of translation.
MATERIALS:
Software tools used: National Instruments LabVIEW and toolkits
•LabVIEW 2012
•Vision Development Module
•Vision Acquisition Module
Hardware tools used:
•Laptop inbuilt web camera: Acer Crystal Eye
•Laptop inbuilt speaker: Acer eAudio
METHOD:
The software is a standalone application. To install it, follow the
instructions that appear in the executable installer. After installation, a
graphical user interface (GUI) window opens, from which the full application
can be used. The GUI has been created to run the entire application from a
single window. It has four pages; each page corresponds to a specific
application.
PAGE 1 gives a detailed demo of the total software usage.
PAGE 2 is for speech to sign language translation.
When the “start” button is pressed, a command is sent to the Windows 7 inbuilt
Speech Recognizer, which opens a mini window at the top. The first time it is
started, a tutorial session begins, giving instructions to set up the
microphone, recognize the user’s voice input, and configure the speech
recognition software. After this initial training, the program starts speech
recognition automatically on subsequent executions. To train the system for a
different user or to change the microphone settings, right-click on the Speech
Recognizer window and select “Start Speech Tutorial”. To stop speech
recognition, say “Stop listening”; to start it again, say “Start listening”.
When the user utters any of the words listed under “Phrases”, it is displayed
in the “Command” indicator. An SL gesture picture corresponding to the speech
input is displayed in the “Sign” picture indicator. The correlation score of
the speech input with the trained word is displayed in the “Score” numeric
indicator. Use the exit button to exit the speech-to-SL translation
application.
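The lookup step on this page is implemented graphically in LabVIEW; as an illustrative sketch only, the same logic can be expressed in Python. The phrase list, the image file-naming scheme, and the score threshold below are hypothetical stand-ins, and the recognizer is treated as a black box that supplies a word and a confidence score.

```python
# Hypothetical sketch of the Page 2 lookup step: map a recognized word to
# the gesture picture shown in the "Sign" indicator. The phrase list, the
# image file names, and the 0.6 threshold are illustrative assumptions,
# not values from the LabVIEW application.

PHRASES = ["hello", "thank you", "water", "help"]  # subset of the 165 phrases

def lookup_sign(recognized_word, score, threshold=0.6):
    """Return the gesture image path for a recognized word, or None when
    the word is not in the phrase list or the score is too low."""
    word = recognized_word.strip().lower()
    if word not in PHRASES or score < threshold:
        return None                      # "Sign" indicator stays blank
    return "signs/" + word.replace(" ", "_") + ".png"

print(lookup_sign("Thank You", 0.92))    # signs/thank_you.png
print(lookup_sign("banana", 0.95))       # None (not a trained phrase)
```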
PAGE 3 is for template preparation for sign to speech translation.
To execute the template preparation module, press the “Start” button.
Choose the camera that will acquire the template images from the “Camera
Name” list. The acquired image is displayed on the “Image” picture indicator.
If the displayed image is suitable for use as a template, press “Snap frame”.
The snapped image is displayed on the “Snap Image” picture display. Draw a
region of interest on it and press “Learn”. The selected region of the
snapped frame is saved to the folder specified for templates, and the saved
template image is displayed on the “Template Image” picture display. Press
the “Stop” button to stop execution of the template preparation module.
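The cropping behind “Learn” can be sketched in plain Python. The ROI format and the list-of-rows frame representation are assumptions for illustration; the real application works with LabVIEW Vision image types.

```python
# Illustrative sketch of the "Learn" step on Page 3: crop the drawn region
# of interest out of the snapped frame. A frame is modeled as a list of
# pixel rows and the ROI as (top, left, height, width) -- both assumptions,
# since the real application uses LabVIEW Vision image types.

def learn_template(frame, roi):
    """Return the ROI of the snapped frame; the application would then
    write this template image to the templates folder."""
    top, left, h, w = roi
    if h <= 0 or w <= 0 or top + h > len(frame) or left + w > len(frame[0]):
        raise ValueError("ROI falls outside the snapped frame")
    return [row[left:left + w] for row in frame[top:top + h]]

# Toy 10x10 grayscale "snap": pixel value = 10*row + column
snap = [[10 * r + c for c in range(10)] for r in range(10)]
template = learn_template(snap, (2, 3, 4, 5))
print(len(template), len(template[0]))   # 4 5
print(template[0][0])                    # 23
```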
PAGE 4 is for Sign to speech translation.
Press the “Start” button to start the program. Choose the camera that will
acquire the images for pattern matching from the “Camera Name” list. The
captured images are displayed on the “Input Image” picture display. Press the
“Match” button to start comparing the acquired input image with the template
images in the database. In each iteration the input image is checked for a
pattern match against one template. When the input image matches a template
image, the loop halts, the “Match” LED glows, and the matched template is
displayed on the “Template Image” indicator. The loop iteration count
triggers a case structure: depending on the count value, a specific case is
selected and produces a string output. Otherwise the loop continues to the
next iteration, where the input image is checked against a new template. The
string output from the case structure is displayed on the “Matched Pattern”
alphanumeric indicator, and it also initiates the .NET speech synthesizer to
give audio output through the speaker.
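The per-iteration comparison and the case structure can be sketched in Python, with normalized cross-correlation standing in for the Vision Development Module's pattern matcher (whose exact algorithm is not specified here). The 0.8 threshold and the tiny 5x5 "images" are illustrative assumptions.

```python
# Illustrative sketch of the Page 4 matching loop: compare the input frame
# against one template per iteration and use the matching index to select
# the output string, as the case structure does. Normalized cross-correlation
# stands in for the Vision Development Module's pattern matcher, and the
# 0.8 threshold is an assumption.

def ncc(a, b):
    """Normalized cross-correlation of two equally sized grayscale images
    (nested lists). Returns a value in [-1, 1]."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    xs = [v - mx for v in xs]
    ys = [v - my for v in ys]
    num = sum(x * y for x, y in zip(xs, ys))
    den = (sum(x * x for x in xs) ** 0.5) * (sum(y * y for y in ys) ** 0.5)
    return num / den if den else 0.0

def match_sign(frame, templates, labels, threshold=0.8):
    """Return the label of the first template the frame matches, or None;
    the label would also be handed to the speech synthesizer."""
    for i, tpl in enumerate(templates):
        if ncc(frame, tpl) >= threshold:
            return labels[i]
    return None

# Toy templates: a diagonal stroke and an anti-diagonal stroke
diag = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
anti = [[1 if i + j == 4 else 0 for j in range(5)] for i in range(5)]
bright = [[2 * v for v in row] for row in diag]      # same shape, brighter
print(match_sign(bright, [anti, diag], ["V", "A"]))  # A
```

Because the score is normalized, a brighter version of the same hand shape still matches its template, which is why simple template matching can tolerate modest lighting changes.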
Figure 1.1 Events involved in hearing
Figure 1.2 Speech chain
Figure 1.3 Graphical Abstract
PUBLICATIONS:
[1]. Yellapu Madhuri, G. Anitha (2013) “Vision-Based Sign Language
Translation Device”, International Conference on Information Communication &
Embedded Systems (ICICES 2013), in association with IEEE, S.A. Engineering
College, Chennai. ISBN 978-1-4673-5787-6. Tracking Id: 13cse213.
[2]. Yellapu Madhuri, G. Anitha (2013) “Automatic Language Translation
Software for Interpreting Sign Language and Speech in English”, awarded a
Silver medal in paper presentation, Research Day 2013, SRM University,
Chennai.
[3]. Yellapu Madhuri, G. Anitha (2013) submission entitled “Sign Language
Translator”, assigned manuscript number IMAVIS-D-13-00011 by Elsevier
Editorial Systems, Image and Vision Computing journal (imavis@elsevier.com).
[4]. Yellapu Madhuri, G. Anitha (2013) submission entitled “Vision-Based
Sign Language Translator”, accepted for publication in the International
Journal of Engineering and Science Invention (IJESI), www.ijesi.org. Review
report of manuscript id: A11023.
[5]. Yellapu Madhuri, G. Anitha (2013) submission entitled “Sign Language
Translation Device”, accepted for publication in The International Journal
of Engineering and Science (THE IJES), www.theijes.com. Review report of
manuscript id: 13026.
[6]. Yellapu Madhuri, G. Anitha (2013) submission entitled “Automatic
Language Translation Software for Interpreting Sign Language and Speech in
English”, assigned tracking number NCOMMS-13-02048 by Nature Communications
(naturecommunications@nature.com).
Name: YELLAPU MADHURI
Reg. No: 1651110002
M.Tech (Biomedical Engineering)
Mobile no: 09441571241
E-mail: ymadhury@rediffmail.com
In this work, a vision-based sign language recognition system using LabVIEW for
automatic sign language translation has been presented. This approach uses
feature vectors comprising whole image frames that contain all aspects of the
sign. The project has investigated the different issues of this new approach to
SL recognition, recognizing one-handed sign language alphabets and numbers
using appearance-based features extracted directly from a video stream recorded
with a conventional camera, which makes the recognition system more practical.
Although sign language contains many different aspects from manual and
non-manual cues, the position, orientation, and configuration or shape of the
signer's dominant hand convey a large portion of the information of the signs.
Therefore, the geometric features extracted from the signer's dominant hand
improve the accuracy of the system to a great degree. This project did not
focus on facial expressions, although it is well known that facial expressions
convey an important part of sign languages. Facial expressions could, for
example, be extracted by tracking the signer's face; the most discriminative
features could then be selected by employing a dimensionality reduction
method, and this cue could also be fused into the recognition system.
The sign language translator is able to translate alphabets (A-Z) and
numbers (1-9). All the signs can be translated in real time, but signs that
are similar in posture and gesture to another sign can be misinterpreted,
resulting in a decrease in the accuracy of the system. The current system
has only been trained on a very small database. Since there will always be
variation in either the signer's hand posture or motion trajectory, a larger
database accommodating a wider variety of hand postures for each sign is
required. The speech recognition program requires the user to take a
tutorial of about 10 minutes, during which the program learns the user's
accent for speech recognition. It is observed that the longer the user uses
the program, the higher the accuracy of speech recognition.
This paper presents a novel approach for gesture detection, with two main
steps: i) template preparation, and ii) gesture detection. The template
preparation technique presented here has some important features for gesture
recognition, including robustness against slight rotation, a small number of
required features, and device independence. For gesture detection, a pattern
matching technique is used. The gesture recognition technique presented here
can be used with a variety of front-end input systems such as vision-based
input, hand and eye tracking, digital tablets, mice, and digital gloves.
Much previous work has focused on isolated sign language recognition with
clear pauses after each sign. These pauses make the problem much easier than
continuous recognition without pauses between the individual signs, because
explicit segmentation of a continuous input stream into individual signs is
very difficult. For this reason, and because of co-articulation effects,
work on isolated recognition often does not generalize easily to continuous
recognition. The proposed software, however, captures the input images as an
AVI sequence of continuous images. This allows continuous input image
acquisition without pauses, while each image frame is still processed and
checked for a pattern match individually. This technique keeps the input
stream pause-free while processing one image at a time.
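The frame-by-frame handling of the continuous stream described above can be sketched as a small loop. The rule of emitting a sign only when it differs from the previous frame's result is an illustrative assumption; the poster does not state how repeated matches across consecutive frames are suppressed.

```python
# Illustrative sketch of continuous, pause-free translation: every frame is
# classified independently, and a sign is emitted only when it differs from
# the previous frame's result. This change-based suppression of repeats is
# an assumption, not a documented detail of the LabVIEW application.

def translate_stream(frames, classify):
    """Run the per-frame classifier over a continuous frame sequence and
    collect the signs to be spoken."""
    last = None
    spoken = []
    for frame in frames:
        sign = classify(frame)           # per-frame pattern match, may be None
        if sign is not None and sign != last:
            spoken.append(sign)          # hand the sign to the synthesizer
        last = sign
    return spoken

# Toy run: frames already reduced to their matched labels (or None)
stream = ["A", "A", None, "A", "B", "B"]
print(translate_stream(stream, lambda label: label))  # ['A', 'A', 'B']
```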
For speech-to-SL translation, words of similar pronunciation are sometimes
misinterpreted. This problem can be reduced by pronouncing the words clearly
and through extended training and increased usage. The speech recognition
technique introduced in this article can be used with a variety of front-end
systems such as computer and video games, precision surgery, domestic
applications, and wearable computers.
Figure 1.4 Block diagram of SL to speech translation
Figure 1.5 Block diagram of speech to sign language translation
Figure 1.6 PAGE 3 - GUI of template preparation
Figure 1.7 PAGE 4 - GUI of SL to speech translation
Figure 1.8 PAGE 2 - GUI of speech to SL translation
Figure 1.9 GUI of Windows speech recognition tutorial
Figure 1.10 Database of SL finger spelling Alphabets and Numbers
More Related Content

What's hot

IRJET - Eyeblink Controlled Virtual Keyboard using Raspberry Pi
IRJET -  	  Eyeblink Controlled Virtual Keyboard using Raspberry PiIRJET -  	  Eyeblink Controlled Virtual Keyboard using Raspberry Pi
IRJET - Eyeblink Controlled Virtual Keyboard using Raspberry PiIRJET Journal
 
mobile camera based text detection
mobile camera based text detectionmobile camera based text detection
mobile camera based text detectionvallabh potadar
 
Braille Technology
Braille TechnologyBraille Technology
Braille TechnologyKhuloodSaeed
 
Mobile camera based text detection and translation
Mobile camera based text detection and translationMobile camera based text detection and translation
Mobile camera based text detection and translationVivek Bharadwaj
 
IRJET - Digital Notice Board using Raspberry Pi
IRJET - Digital Notice Board using Raspberry PiIRJET - Digital Notice Board using Raspberry Pi
IRJET - Digital Notice Board using Raspberry PiIRJET Journal
 
Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...IRJET Journal
 
IRJET- Wireless Notice Board using Raspberry Pi
IRJET- Wireless Notice Board using Raspberry PiIRJET- Wireless Notice Board using Raspberry Pi
IRJET- Wireless Notice Board using Raspberry PiIRJET Journal
 
Iirdem design and implementation of finger writing in air by using open cv (c...
Iirdem design and implementation of finger writing in air by using open cv (c...Iirdem design and implementation of finger writing in air by using open cv (c...
Iirdem design and implementation of finger writing in air by using open cv (c...Iaetsd Iaetsd
 
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and DumbIRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and DumbIRJET Journal
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Languageijsrd.com
 
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...YogeshIJTSRD
 
Voice recognition security systems
Voice recognition security systemsVoice recognition security systems
Voice recognition security systemsSandeep Kumar
 
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...IRJET Journal
 
Smart Pen: Future of Handheld Gadget
Smart Pen: Future of Handheld GadgetSmart Pen: Future of Handheld Gadget
Smart Pen: Future of Handheld GadgetMuhammed Anaz PK
 
VOICE BASED SECURITY SYSTEM
VOICE BASED SECURITY SYSTEMVOICE BASED SECURITY SYSTEM
VOICE BASED SECURITY SYSTEMNikhil Ravi
 
Real Time Character Recognition on FPGA for Braille Devices
Real Time Character Recognition on FPGA for Braille DevicesReal Time Character Recognition on FPGA for Braille Devices
Real Time Character Recognition on FPGA for Braille DevicesIRJET Journal
 
IRJET-Raspberry Pi based Reader for Blind People
IRJET-Raspberry Pi based Reader for Blind PeopleIRJET-Raspberry Pi based Reader for Blind People
IRJET-Raspberry Pi based Reader for Blind PeopleIRJET Journal
 
Deep learning seminar report
Deep learning seminar reportDeep learning seminar report
Deep learning seminar reportSKS
 
IRJET- A Smart Voice Controlled Robot Assistant
IRJET- A Smart Voice Controlled Robot AssistantIRJET- A Smart Voice Controlled Robot Assistant
IRJET- A Smart Voice Controlled Robot AssistantIRJET Journal
 

What's hot (20)

IRJET - Eyeblink Controlled Virtual Keyboard using Raspberry Pi
IRJET -  	  Eyeblink Controlled Virtual Keyboard using Raspberry PiIRJET -  	  Eyeblink Controlled Virtual Keyboard using Raspberry Pi
IRJET - Eyeblink Controlled Virtual Keyboard using Raspberry Pi
 
mobile camera based text detection
mobile camera based text detectionmobile camera based text detection
mobile camera based text detection
 
Braille Technology
Braille TechnologyBraille Technology
Braille Technology
 
Mobile camera based text detection and translation
Mobile camera based text detection and translationMobile camera based text detection and translation
Mobile camera based text detection and translation
 
IRJET - Digital Notice Board using Raspberry Pi
IRJET - Digital Notice Board using Raspberry PiIRJET - Digital Notice Board using Raspberry Pi
IRJET - Digital Notice Board using Raspberry Pi
 
Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...Design of a Communication System using Sign Language aid for Differently Able...
Design of a Communication System using Sign Language aid for Differently Able...
 
IRJET- Wireless Notice Board using Raspberry Pi
IRJET- Wireless Notice Board using Raspberry PiIRJET- Wireless Notice Board using Raspberry Pi
IRJET- Wireless Notice Board using Raspberry Pi
 
Iirdem design and implementation of finger writing in air by using open cv (c...
Iirdem design and implementation of finger writing in air by using open cv (c...Iirdem design and implementation of finger writing in air by using open cv (c...
Iirdem design and implementation of finger writing in air by using open cv (c...
 
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and DumbIRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb
IRJET- Hand Gesture Recognition and Voice Conversion for Deaf and Dumb
 
A Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign LanguageA Translation Device for the Vision Based Sign Language
A Translation Device for the Vision Based Sign Language
 
Mob ocr
Mob ocrMob ocr
Mob ocr
 
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...
Recognizing of Text and Product Label from Hand Held Entity Intended for Visi...
 
Voice recognition security systems
Voice recognition security systemsVoice recognition security systems
Voice recognition security systems
 
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
IRJET- Review on Raspberry Pi based Assistive Communication System for Blind,...
 
Smart Pen: Future of Handheld Gadget
Smart Pen: Future of Handheld GadgetSmart Pen: Future of Handheld Gadget
Smart Pen: Future of Handheld Gadget
 
VOICE BASED SECURITY SYSTEM
VOICE BASED SECURITY SYSTEMVOICE BASED SECURITY SYSTEM
VOICE BASED SECURITY SYSTEM
 
Real Time Character Recognition on FPGA for Braille Devices
Real Time Character Recognition on FPGA for Braille DevicesReal Time Character Recognition on FPGA for Braille Devices
Real Time Character Recognition on FPGA for Braille Devices
 
IRJET-Raspberry Pi based Reader for Blind People
IRJET-Raspberry Pi based Reader for Blind PeopleIRJET-Raspberry Pi based Reader for Blind People
IRJET-Raspberry Pi based Reader for Blind People
 
Deep learning seminar report
Deep learning seminar reportDeep learning seminar report
Deep learning seminar report
 
IRJET- A Smart Voice Controlled Robot Assistant
IRJET- A Smart Voice Controlled Robot AssistantIRJET- A Smart Voice Controlled Robot Assistant
IRJET- A Smart Voice Controlled Robot Assistant
 

Viewers also liked

Collaboration Acknowledging Contributions
Collaboration Acknowledging ContributionsCollaboration Acknowledging Contributions
Collaboration Acknowledging ContributionsSunil Betageri
 
Рубцовская осень. г. Артём
Рубцовская осень. г. АртёмРубцовская осень. г. Артём
Рубцовская осень. г. АртёмТатьяна Новых
 
Redes sociales e influencia en jovenes
Redes sociales e influencia en jovenesRedes sociales e influencia en jovenes
Redes sociales e influencia en joveneslivy madrigal
 
San román cristina junio 2016 martín garcía valle
San román cristina junio 2016 martín garcía valleSan román cristina junio 2016 martín garcía valle
San román cristina junio 2016 martín garcía valleMARTINGVALLE
 
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...Carmela Garcia
 
Lenses and the human eye
Lenses and the human eyeLenses and the human eye
Lenses and the human eyeOhMiss
 

Viewers also liked (15)

Convex lenses problems
Convex lenses  problemsConvex lenses  problems
Convex lenses problems
 
Lenses
LensesLenses
Lenses
 
Collaboration Acknowledging Contributions
Collaboration Acknowledging ContributionsCollaboration Acknowledging Contributions
Collaboration Acknowledging Contributions
 
Presentation_NEW.PPTX
Presentation_NEW.PPTXPresentation_NEW.PPTX
Presentation_NEW.PPTX
 
закладка Рубцов
закладка Рубцовзакладка Рубцов
закладка Рубцов
 
My final resume
My final resumeMy final resume
My final resume
 
CV DAMIEN DESLANDES
CV  DAMIEN DESLANDESCV  DAMIEN DESLANDES
CV DAMIEN DESLANDES
 
Рубцовская осень. г. Артём
Рубцовская осень. г. АртёмРубцовская осень. г. Артём
Рубцовская осень. г. Артём
 
Redes sociales e influencia en jovenes
Redes sociales e influencia en jovenesRedes sociales e influencia en jovenes
Redes sociales e influencia en jovenes
 
конкурс чтецов
конкурс чтецовконкурс чтецов
конкурс чтецов
 
Lenses 1
Lenses 1Lenses 1
Lenses 1
 
San román cristina junio 2016 martín garcía valle
San román cristina junio 2016 martín garcía valleSan román cristina junio 2016 martín garcía valle
San román cristina junio 2016 martín garcía valle
 
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...
Biografía de Ludwig Van Beethoven de Iago Trincado Pinto e Victor Fernández S...
 
Lenses and the human eye
Lenses and the human eyeLenses and the human eye
Lenses and the human eye
 
LENSES POWER POINT
LENSES POWER POINTLENSES POWER POINT
LENSES POWER POINT
 

Similar to Template abstract_book

INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...
INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...
INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...IRJET Journal
 
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)IRJET Journal
 
IRJET - Sign Language Recognition System
IRJET -  	  Sign Language Recognition SystemIRJET -  	  Sign Language Recognition System
IRJET - Sign Language Recognition SystemIRJET Journal
 
IRJET - Mutecom using Tensorflow-Keras Model
IRJET - Mutecom using Tensorflow-Keras ModelIRJET - Mutecom using Tensorflow-Keras Model
IRJET - Mutecom using Tensorflow-Keras ModelIRJET Journal
 
IRJET - Storytelling App for Children with Hearing Impairment using Natur...
IRJET -  	  Storytelling App for Children with Hearing Impairment using Natur...IRJET -  	  Storytelling App for Children with Hearing Impairment using Natur...
IRJET - Storytelling App for Children with Hearing Impairment using Natur...IRJET Journal
 
Instant speech translation 10BM60080 - VGSOM
Instant speech translation   10BM60080 - VGSOMInstant speech translation   10BM60080 - VGSOM
Instant speech translation 10BM60080 - VGSOMsathiyaseelanm
 
Sign language translator ieee power point
Sign language translator ieee power pointSign language translator ieee power point
Sign language translator ieee power pointMadhuri Yellapu
 
Paper on Speech Recognition
Paper on Speech RecognitionPaper on Speech Recognition
Paper on Speech RecognitionThejus Joby
 
IRJET- Vision Based Sign Language by using Matlab
IRJET- Vision Based Sign Language by using MatlabIRJET- Vision Based Sign Language by using Matlab
IRJET- Vision Based Sign Language by using MatlabIRJET Journal
 
Procedia Computer Science 94 ( 2016 ) 295 – 301 Avail.docx
 Procedia Computer Science   94  ( 2016 )  295 – 301 Avail.docx Procedia Computer Science   94  ( 2016 )  295 – 301 Avail.docx
Procedia Computer Science 94 ( 2016 ) 295 – 301 Avail.docxaryan532920
 
5.smart multilingual sign boards
5.smart multilingual sign boards5.smart multilingual sign boards
5.smart multilingual sign boardsEditorJST
 
IRJET- ASL Language Translation using ML
IRJET- ASL Language Translation using MLIRJET- ASL Language Translation using ML
IRJET- ASL Language Translation using MLIRJET Journal
 
IRJET - Sign Language Text to Speech Converter using Image Processing and...
IRJET -  	  Sign Language Text to Speech Converter using Image Processing and...IRJET -  	  Sign Language Text to Speech Converter using Image Processing and...
IRJET - Sign Language Text to Speech Converter using Image Processing and...IRJET Journal
 
Top 10 Best Speech Recognition Software
Top 10 Best Speech Recognition Software Top 10 Best Speech Recognition Software
Top 10 Best Speech Recognition Software Jame Williamson
 

Similar to Template abstract_book (20)

INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...
INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...
INDIAN SIGN LANGUAGE TRANSLATION FOR HARD-OF-HEARING AND HARD-OF-SPEAKING COM...
 
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)
IDE Code Compiler for the physically challenged (Deaf, Blind & Mute)
 
Phase ii frontpage
Phase ii frontpagePhase ii frontpage
Phase ii frontpage
 
IRJET - Sign Language Recognition System
IRJET -  	  Sign Language Recognition SystemIRJET -  	  Sign Language Recognition System
IRJET - Sign Language Recognition System
 
IRJET - Mutecom using Tensorflow-Keras Model
IRJET - Mutecom using Tensorflow-Keras ModelIRJET - Mutecom using Tensorflow-Keras Model
IRJET - Mutecom using Tensorflow-Keras Model
 
IRJET - Storytelling App for Children with Hearing Impairment using Natur...
IRJET -  	  Storytelling App for Children with Hearing Impairment using Natur...IRJET -  	  Storytelling App for Children with Hearing Impairment using Natur...
IRJET - Storytelling App for Children with Hearing Impairment using Natur...
 
Instant speech translation 10BM60080 - VGSOM
Instant speech translation   10BM60080 - VGSOMInstant speech translation   10BM60080 - VGSOM
Instant speech translation 10BM60080 - VGSOM
 
Sign language translator ieee power point
Sign language translator ieee power pointSign language translator ieee power point
Sign language translator ieee power point
 
Paper on Speech Recognition
Paper on Speech RecognitionPaper on Speech Recognition
Paper on Speech Recognition
 
IRJET- Vocal Code
IRJET- Vocal CodeIRJET- Vocal Code
IRJET- Vocal Code
 
30
3030
30
 
IRJET- Vision Based Sign Language by using Matlab
IRJET- Vision Based Sign Language by using MatlabIRJET- Vision Based Sign Language by using Matlab
IRJET- Vision Based Sign Language by using Matlab
 
Procedia Computer Science 94 ( 2016 ) 295 – 301 Avail.docx
 Procedia Computer Science   94  ( 2016 )  295 – 301 Avail.docx Procedia Computer Science   94  ( 2016 )  295 – 301 Avail.docx
Procedia Computer Science 94 ( 2016 ) 295 – 301 Avail.docx
 
5.smart multilingual sign boards
5.smart multilingual sign boards5.smart multilingual sign boards
5.smart multilingual sign boards
 
Google Voice-to-text
Google Voice-to-textGoogle Voice-to-text
Google Voice-to-text
 
Hand gesture recognition
Hand gesture recognitionHand gesture recognition
Hand gesture recognition
 
IRJET- ASL Language Translation using ML
IRJET- ASL Language Translation using MLIRJET- ASL Language Translation using ML
IRJET- ASL Language Translation using ML
 
IRJET - Sign Language Text to Speech Converter using Image Processing and...
IRJET -  	  Sign Language Text to Speech Converter using Image Processing and...IRJET -  	  Sign Language Text to Speech Converter using Image Processing and...
IRJET - Sign Language Text to Speech Converter using Image Processing and...
 
Top 10 Best Speech Recognition Software
Top 10 Best Speech Recognition Software Top 10 Best Speech Recognition Software
Top 10 Best Speech Recognition Software
 
An Application for Performing Real Time Speech Translation in Mobile Environment
An Application for Performing Real Time Speech Translation in Mobile EnvironmentAn Application for Performing Real Time Speech Translation in Mobile Environment
An Application for Performing Real Time Speech Translation in Mobile Environment
 

Template abstract_book

  • 1. Automatic Language Translation Software For Aiding Communication Between Indian Sign Language And Spoken English Using Labview Yellapu Madhuri*, G.Anitha** * 2nd year M.Tech, ** Assistant Professor Department of Biomedical Engineering, SRM University, Kattankulathur-603203, Tamilnadu, India www.srmuniv.ac.in Sign Language (SL) is the natural way of communication of speech and/or hearing-impaired people. A sign is a movement of one or both hands, accompanied with facial expression, which corresponds to a specific meaning. This paper presents SIGN LANGUAGE TRANSLATION software for automatic translation of Indian sign language into spoken English and vice versa to assist the communication between speech and/or hearing impaired people and hearing people. It could be used by deaf community as a translator to people that do not understand sign language, avoiding by this way the intervention of an intermediate person for interpretation and allow communication using their natural way of speaking. The proposed software is standalone executable interactive application program developed using LABVIEW software that can be implemented in any standard windows operating laptop, desktop or an IOS mobile phone to operate with the camera, processor and audio device. For sign to speech translation, the one handed SL gestures of the user are captured using camera; vision analysis functions are performed in the operating system and provide corresponding speech output through audio device. For speech to SL translation the speech input of the user is acquired by microphone; speech analysis functions are performed and provide SL gesture picture display of corresponding speech input. The experienced lag time for translation is little because of parallel processing and allows for instantaneous translation from finger and hand movements to speech and speech inputs to SL gestures. 
This system is trained to translate one-handed SL representations of the alphabet (A-Z) and numbers (1-9) to speech, and 165 word phrases to SL gestures. The training database of inputs can be easily extended to expand the system's applications. The software does not require the user to wear any special gloves. The results are found to be highly consistent and reproducible, with fairly high precision and accuracy.

AIM: To develop a mobile interactive application program for the automatic translation of Indian Sign Language into spoken English, and vice versa, to assist communication between deaf people and hearing people. The SL translator should be able to translate one-handed Indian Sign Language finger-spelling input of the alphabet (A-Z) and numbers (1-9) to spoken English audio output, and 165 spoken English word inputs to an Indian Sign Language picture display output.

OBJECTIVES:
•To acquire one-handed SL finger spelling of the alphabet (A to Z) and numbers (1 to 9) and produce spoken English audio output.
•To acquire spoken English word input and produce an Indian Sign Language picture display output.
•To create an executable file to make the software a standalone application.
•To implement the software and optimize the parameters to improve the accuracy of translation.
•To minimize hardware requirements, and thus expense, while achieving high precision of translation.

MATERIALS:
Software tools used: National Instruments LabVIEW and toolkits
•LabVIEW 2012 version
•Vision Development Module
•Vision Acquisition Module
Hardware tools used:
•Laptop inbuilt web camera: Acer Crystal Eye
•Laptop inbuilt speaker: Acer eAudio

METHOD: The software is a standalone application. To install it, follow the instructions that appear in the executable installer file. After installing the application, a graphical user interface (GUI) window opens, from which the full application can be used.
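The speech-to-sign objective above reduces to a table lookup: a recognized phrase, together with the recognizer's confidence score, selects a stored gesture picture. A minimal Python sketch of that lookup (the real application is built in LabVIEW with the Windows Speech Recognizer; the phrase list, file names and score threshold here are illustrative assumptions, not the actual database):

```python
# Sketch of the speech-to-SL lookup: a recognized phrase plus a recognizer
# confidence score is mapped to the file name of the stored sign picture.
# Phrase list, paths and threshold are illustrative placeholders.
SIGN_IMAGES = {
    "hello": "signs/hello.png",
    "thank you": "signs/thank_you.png",
    "water": "signs/water.png",
}

def lookup_sign(command, score, threshold=0.6):
    """Return the sign picture path for a recognized phrase,
    or None if the phrase is unknown or the score is too low."""
    if score < threshold:
        return None                       # reject low-confidence input
    return SIGN_IMAGES.get(command.lower())

print(lookup_sign("Hello", 0.9))   # signs/hello.png
print(lookup_sign("hello", 0.3))   # None: below threshold
```

In the actual application this role is played by the "Command", "Score" and "Sign" indicators described below: the recognized word, its correlation score, and the displayed gesture picture.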
The GUI has been created to run the entire application from a single window. It has four pages, each corresponding to a specific application.

PAGE 1 gives a detailed demo of the overall software usage.

PAGE 2 is for speech to sign language translation. When the "Start" button is pressed, a command is sent to the Windows 7 inbuilt Speech Recognizer, which opens a mini window at the top. The first time it is started, a tutorial session begins that gives instructions to set up the microphone, recognize the user's voice input and configure the speech recognition software. After this initial training, the program starts speech recognition automatically on subsequent runs. To train the system for a different user, or to change the microphone settings, right-click on the Speech Recognizer window and select "Start Speech Tutorial". To stop speech recognition, say "Stop listening"; to start it again, say "Start listening". When the user utters any of the words listed in "Phrases", it is displayed in the "Command" indicator, a SL gesture picture corresponding to the speech input is displayed in the "Sign" picture indicator, and the correlation score of the speech input with the trained word is displayed in the "Score" numeric indicator. Use the "Exit" button to leave the speech to SL translation application.

PAGE 3 is for template preparation for sign to speech translation. To execute the template preparation module, press the "Start" button and choose the camera that will acquire the template images from the "Camera Name" list. The acquired image is displayed in the "Image" picture indicator. If the displayed image is suitable for preparing a template, press "Snap frame"; the snapped image is displayed in the "Snap Image" picture display. Draw a region of interest around the gesture and press "Learn". The selected region of the snapped frame is saved to the folder specified for templates.
The saved template image is displayed in the "Template Image" picture display. Press the "Stop" button to stop execution of the template preparation module.

PAGE 4 is for sign to speech translation. Press the "Start" button to start the program and choose the camera used for pattern matching from the "Camera Name" list. The captured images are displayed in the "Input Image" picture display. Press the "Match" button to start comparing the acquired input image with the template images in the database. In each iteration, the input image is checked for a pattern match against one template. When the input image matches a template image, the loop halts, the "Match" LED glows and the matched template is displayed in the "Template Image" indicator. The loop iteration count triggers a case structure: depending on the count value, a specific case is selected and gives a string output. Otherwise, the loop continues to the next iteration, where the input image is checked against a new template. The string output from the case structure is displayed in the "Matched Pattern" alphanumeric indicator and also drives the .NET speech synthesizer to give an audio output through the speaker.

Figure 1.1 Events involved in hearing
Figure 1.2 Speech chain
Figure 1.3 Graphical abstract
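The PAGE 4 loop is, in effect, a one-template-per-iteration pattern match whose matching iteration index selects an output string. A numpy sketch of the same idea (the actual system uses LabVIEW's vision pattern matching and the .NET speech synthesizer; the normalized cross-correlation score and the 0.8 threshold below are illustrative assumptions):

```python
import numpy as np

def match_score(image, template):
    """Normalized cross-correlation of two equal-sized grayscale arrays;
    1.0 indicates a perfect match, values near 0 indicate no match."""
    a = image.astype(float) - image.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def classify(image, templates, labels, threshold=0.8):
    """Check the input against one template per loop iteration, as in
    the LabVIEW loop; the matching iteration's index selects the output
    string (the case-structure analogue). The returned string is what
    would drive the speech synthesizer."""
    for i, template in enumerate(templates):
        if match_score(image, template) >= threshold:
            return labels[i]
    return None  # no template matched: keep acquiring frames

# Illustrative random templates standing in for the gesture database.
rng = np.random.default_rng(0)
tpl_a = rng.integers(0, 256, (8, 8))
tpl_b = rng.integers(0, 256, (8, 8))
print(classify(tpl_b.copy(), [tpl_a, tpl_b], ["A", "B"]))  # B
```

The loop-halts-on-match behaviour corresponds to the early `return` here; a frame that matches no template simply falls through to the next acquisition.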
  • 2. PUBLICATIONS:
[1]. Yellapu Madhuri, G. Anitha (2013) "Vision-Based Sign Language Translation Device", International Conference on Information Communication & Embedded Systems (ICICES 2013), in association with IEEE, S.A. Engineering College, Chennai. ISBN 978-1-4673-5787-6G. Tracking id: 13cse213.
[2]. Yellapu Madhuri, G. Anitha (2013) "Automatic Language Translation Software for Interpreting Sign Language and Speech in English", awarded the Silver medal in paper presentation, Research Day 2013, SRM University, Chennai.
[3]. Yellapu Madhuri, G. Anitha (2013) "Sign Language Translator", assigned manuscript number IMAVIS-D-13-00011 by Elsevier Editorial Systems, Image and Vision Computing journal (imavis@elsevier.com).
[4]. Yellapu Madhuri, G. Anitha (2013) "Vision-Based Sign Language Translator", accepted for publication in the International Journal of Engineering and Science Invention (IJESI), www.ijesi.org. Review report of manuscript id: A11023.
[5]. Yellapu Madhuri, G. Anitha (2013) "Sign Language Translation Device", accepted for publication in The International Journal of Engineering and Science (THE IJES), www.theijes.com. Review report of manuscript id: 13026.
[6]. Yellapu Madhuri, G. Anitha (2013) "Automatic Language Translation Software for Interpreting Sign Language and Speech in English", assigned tracking number NCOMMS-13-02048 by Nature Communications (naturecommunications@nature.com).

REFERENCES:
[1]. Jose L. Hernandez-Rebollar, Nicholas Kyriakopoulos, Robert W. Lindeman, "A New Instrumented Approach for Translating American Sign Language into Sound and Text", Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition (FGR'04), 0-7695-2122-3/04, © 2004 IEEE.
[2]. K. Abe, H. Saito, S. Ozawa, "Virtual 3D Interface System via Hand Motion Recognition from Two Cameras", IEEE Trans. Systems, Man, and Cybernetics, Vol. 32, No. 4, pp. 536-540, July 2002.
[3]. Paschaloudi N. Vassilia, Margaritis G. Konstantinos, "'Listening to Deaf': A Greek Sign Language Translator", 0-7803-9521-2/06, © 2006 IEEE.

Name: YELLAPU MADHURI; Reg. No: 1651110002; M.Tech (Biomedical Engineering); Mobile no: 09441571241; E-mail: ymadhury@rediffmail.com

CONCLUSION: In this work, a vision-based sign language recognition system using LabVIEW for automatic sign language translation has been presented. The approach uses feature vectors consisting of whole image frames, containing all aspects of the sign. This project has investigated the issues of this new approach to SL recognition, recognizing one-handed sign language alphabets and numbers using appearance-based features extracted directly from a video stream recorded with a conventional camera, which makes the recognition system more practical. Although sign language involves many manual and non-manual cues, the position, orientation and configuration or shape of the signer's dominant hand conveys a large portion of the information in a sign; geometric features extracted from the signer's dominant hand therefore improve the accuracy of the system considerably. This project did not address facial expressions, although it is well known that they convey an important part of sign languages. Facial expressions can, for example, be extracted by tracking the signer's face; the most discriminative features could then be selected using a dimensionality reduction method, and this cue could also be fused into the recognition system. The sign language translator is able to translate the alphabet (A-Z) and numbers (1-9), and all the signs can be translated in real time. However, signs that are similar in posture and gesture to another sign can be misinterpreted, decreasing the accuracy of the system. The current system has only been trained on a very small database.
Since there will always be variation in the signer's hand posture or motion trajectory, a larger database accommodating a wider variety of hand postures for each sign is required. The speech recognition program requires the user to take a tutorial of about 10 minutes; during this training, the program learns the accent of the user. It is observed that the longer the user uses the program, the higher the accuracy of speech recognition.

This paper presents a novel approach to gesture detection with two main steps: i) template preparation, and ii) gesture detection. The template preparation technique presented here has some important features for gesture recognition, including robustness against slight rotation, a small number of required features, and device independence. For gesture detection, a pattern matching technique is used. The gesture recognition technique presented here can be used with a variety of front-end input systems, such as vision-based input, hand and eye tracking, digital tablets, mice and digital gloves.

Much previous work has focused on isolated sign language recognition with clear pauses after each sign. These pauses make the problem much easier than continuous recognition without pauses between the individual signs, because explicit segmentation of a continuous input stream into individual signs is very difficult. For this reason, and because of co-articulation effects, work on isolated recognition often does not generalize easily to continuous recognition. The proposed software, however, captures the input as an AVI sequence of continuous images, allowing continuous image acquisition without pauses, while each image frame is processed and checked for a pattern match individually. This technique makes it possible to process continuous images while keeping the input stream free of pauses.
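Labelling every frame independently raises one practical question: how to avoid re-emitting the same output on every frame while a sign is held. One simple policy, sketched below under the assumption of per-frame labels (with None meaning no confident match), is to emit a label only when it differs from the previous confident one. This is an illustrative Python sketch, not the LabVIEW implementation:

```python
def transcribe_stream(frame_labels):
    """Per-frame recognition over a continuous stream: each frame is
    labelled independently (None = no confident match), and a label is
    emitted only when it differs from the last confident label, so a
    held sign is not spoken repeatedly."""
    output, last = [], None
    for label in frame_labels:
        if label is not None and label != last:
            output.append(label)
            last = label
    return output

# A held 'A', a pause, then a held 'B' across six frames:
print(transcribe_stream(["A", "A", None, None, "B", "B"]))  # ['A', 'B']
```

This mirrors how frame-by-frame template matching can serve continuous input without explicit sign segmentation: segmentation falls out of the change-detection on the per-frame labels.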
For speech to SL translation, words of similar pronunciation are sometimes misinterpreted. This problem can be reduced by pronouncing the words clearly, and it diminishes with extended training and increased usage. The speech recognition technique introduced in this article can be used with a variety of front-end input systems, such as computer and video games, precision surgery, domestic applications and wearable computers.

Figure 1.4 Block diagram of SL to speech translation
Figure 1.5 Block diagram of speech to sign language translation
Figure 1.6 PAGE 3: GUI of template preparation
Figure 1.7 PAGE 4: GUI of SL to speech translation
Figure 1.8 PAGE 2: GUI of speech to SL translation
Figure 1.9 GUI of the Windows speech recognition tutorial
Figure 1.10 Database of SL finger-spelling alphabets and numbers