INTERNATIONAL JOURNAL OF ELECTRONICS AND COMMUNICATION ENGINEERING & TECHNOLOGY (IJECET)
ISSN 0976 – 6464 (Print), ISSN 0976 – 6472 (Online)
Volume 4, Issue 6, November - December 2013, pp. 134-139
© IAEME: www.iaeme.com/ijecet.asp
Journal Impact Factor (2013): 5.8896 (Calculated by GISI), www.jifactor.com

HOME BUTLER - A VOICE CONTROLLED ROBOT TO ASSIST THE HANDICAP

Gaurav Jha (1), Addala Sai Subha Charan (2), Chalasani Rama Krishna Prasad (3), Akash Mantry (4)
(1, 2, 3) ECE, SRM University, India
(4) ICE, SRM University, India

ABSTRACT

Our system works under voice control. The robot takes the voice input from the user and decodes it using LabVIEW. The robot first tells the user whether it can locate the object or not. If it can, it finds the object's registered location and then, using another database, retrieves a stored image of the object. This reference image is processed using MATLAB and its histogram is computed. After the location has been found and the reference image processed, the robot uses SLAM (Simultaneous Localization and Mapping) to travel to that location. Using camera vision and image processing in MATLAB, it searches for a match with the reference image. Once a match is found, it grips the object with the help of limit switches and photoelectric sensors and brings it back to the user by retracing the path it took.

Keywords: Voice Control and Decoding, Image Processing, SLAM.

OBJECTIVES

The objectives of our study are the following:
• To take the voice command from the user and decode it effectively.
• To search for the object in our own database, locate it, and retrieve the information (histogram) related to it.
• Using Simultaneous Localization and Mapping (SLAM) and Light Detection and Ranging (LIDAR) sensors, to reach the location of the object.
• Using video camera vision and image processing and comparison in MATLAB, to find the closest match of the object.
• To estimate the distance of the object using the Area Threshold method.
• To move to the object and grab it using a limit switch and photoelectric sensors.
• To bring the object back to the user.
The user gives the voice input to the robot, which is captured through a microphone or an EasyVR module. LabVIEW is used to decode the voice into text and to fetch the location of the decoded object from our own database. One method of voice decoding uses the module, which converts voice directly to text; the other uses LabVIEW, matching the input against a preloaded voice signal. Both the input and the preloaded signal are converted to their numerical equivalents and then matched. The robot then has the name of the object. Using the database, it finds the object's location and the object's histogram data in different positions.

Using the SLAM technique and LIDAR sensors, the robot finds its path to the destination. The position of the object is obtained through Localization, and Mapping then builds the map to the destination accordingly. Each LIDAR sensor consists of a laser transmitter and a receiver. These sensors help the robot detect its surroundings by emitting a laser beam from the transmitter and measuring the time the receiver takes to receive the reflected light, giving the robot a picture of its surrounding environment and thereby enabling mapping; a minimal time-of-flight sketch of this ranging step is given below.

Once it reaches its destination, the robot starts capturing images of the objects in front of it and processing them. It finds the closest match by comparing the reference image data with the data of the image it is viewing. The next step is to find the distance of the object from the robot's current position, for which the Area Threshold method is used; this method gives the distance of the object. The robot then moves to the object and grabs it with its grabber, which consists of a limit switch and photoelectric sensors that assist in grabbing. After grabbing, it brings the object back to the user by retracing the path it took to reach the destination. This path, or map, was stored in the robot's memory, so that when retracing it can follow the same path backwards.
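The ranging step just described reduces to a time-of-flight calculation: the distance to an obstacle is half the round-trip time multiplied by the speed of light. The MATLAB fragment below is only a minimal sketch of that relation; the measured time used in it is a made-up example value, not a figure from the implementation.

% Time-of-flight ranging as performed by each LIDAR transmitter-receiver pair.
% Illustrative sketch only: the round-trip time below is a made-up example value.
c = 3e8;                    % approximate speed of light, m/s
tRoundTrip = 20e-9;         % example time between emitting the beam and receiving the echo, s
range = c * tRoundTrip / 2; % halve it, since the beam travels to the obstacle and back
fprintf('Obstacle detected at roughly %.2f m\n', range);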
STRUCTURE

The bot is built on a hard box-shaped base with two tyres driven by Maxon motors; the speciality of these motors is their built-in shaft encoders. The shaft encoder gives the robot a sense of the distance covered and the distance still to be covered. They are high-torque motors, so they can easily move the bot and the structure above it. A third, smaller tyre at the back prevents the bot from toppling. The lower box-like base houses the LIDAR unit, which consists of four transmitter-receiver LIDAR sensor pairs, one on each of the four sides of the base. Above the box sits most of the interfacing equipment: the camera, microphone and speakers. The design is simple: a shelf-like structure is erected on three rods rising from the base. One of the three rods protrudes above the shelf to a greater height; this rod carries the camera at the top along with the microphone. The camera takes in images for processing and comparison, while the microphone takes in the user's voice commands.

The shelf-like structure houses a pair of speakers for the robot's voice output. It also carries flashlights, which are switched on automatically as soon as the ambient light drops below a threshold. Another important part of the shelf structure is the grabber. It has been designed anthropomorphically, with a shoulder, elbow and wrist; the hand consists of three finger-like structures. The shoulder is mounted on a servo motor and can move both clockwise and anticlockwise. The shoulder also carries another motor connected to cables attached to the elbow: when this motor rotates, it winds the cables and pulls the arm up, and when it rotates anticlockwise it releases the cables and lowers the arm again. The wrist is likewise mounted on a servo motor for clockwise and anticlockwise rotation. The fingers are moved by the same motor-and-cable arrangement, with three motors for the three fingers: when a motor winds, its cables are pulled and the finger is raised; when it is rotated anticlockwise, the cables are released and the finger drops back down.

GRABBER

After the system finds the matching image at the specified location, the robotic hand moves towards the object. When the hand touches the object, the limit switch is triggered and sends a signal to the microcontroller. The robotic arm also carries photoelectric sensors, a type of proximity sensor, which are calibrated for a particular distance according to our requirement. When the robotic arm moves backward, the limit switch opens and the photoelectric sensors begin sensing the object. The arm moves backward in a straight line and stops at the calibrated distance; the linear motion stops and the grabber grabs the object.

DATABASE

After the voice signal has been decoded, the decoded text is looked up in the database we created in LabVIEW, and the location of the requested object is found. For example, if the user asks for the coffee mug, its location, the kitchen, is returned. For now we have created an array of five elements: coffee mug, pen, charger, bottle and brush. If the user asks for any of these objects, the location of that particular object is found; if the user asks for an object that is not in the array, the LabVIEW program automatically gives the voice output 'Not found'. A MATLAB sketch of this lookup follows.
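To make the lookup concrete, the following MATLAB fragment mirrors the behaviour of the LabVIEW database described above: five known objects, a stored location for each, and the 'Not found' response otherwise. The actual database was built in LabVIEW, and all object locations other than the coffee mug's kitchen are assumed here purely for illustration.

% Illustrative MATLAB equivalent of the object database created in LabVIEW.
% Only the coffee mug -> kitchen pairing comes from the text; the other
% locations are assumed for the example.
objects   = {'coffee mug', 'pen', 'charger', 'bottle', 'brush'};
locations = {'kitchen', 'study table', 'bedroom', 'dining table', 'bathroom'};
db = containers.Map(objects, locations);

query = 'coffee mug';                        % decoded voice command
if isKey(db, query)
    fprintf('%s is in the %s\n', query, db(query));
else
    disp('Not found');                       % spoken back to the user in the real system
end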
IMAGE PROCESSING

To compare the two images we use the histogram technique. A histogram is essentially a process in which the pixels of an image are divided into classes. The main idea is to compare small boxes of the two images and create an array of values; these values are then compared with the array of values of the reference image. There are various ways in which errors can occur here. Suppose, for example, that 75% of the values in the array are comparable between the two images: the result will be reported as a match, yet the remaining 25% may differ, so the final result may not be correct.

To avoid this, we divide the pixels of an image into classes, which are further divided into sub-classes, and these sub-classes are then compared. This gives a more accurate result, since it produces a larger number of values in the array, all of which are compared, and the closest result is returned. If the resolution of an image is low (say 16×16 pixels), we get only a limited number of values, so the comparison may not be reliable. To give better results in that case we convert the image to grey scale and reduce it to much smaller dimensions; the advantage is that this gives better and faster results. A sketch of such a comparison is given below.
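One straightforward way to realise the comparison described above, grey-scaling and shrinking both images, binning their pixels into classes and comparing the resulting arrays, is sketched below in MATLAB. The file names, the 100×100 resize, the 64 classes and the agreement tolerance are illustrative assumptions; only the 75% agreement criterion echoes the example given in the text.

% Histogram-based comparison of a camera frame against the stored reference image.
% File names, resize dimensions, bin count and tolerance are assumptions.
ref  = imread('reference.jpg');              % reference image from the database (assumed RGB)
test = imread('camera_frame.jpg');           % frame captured by the robot's camera

prep  = @(im) imresize(rgb2gray(im), [100 100]);  % grey-scale and shrink both images
hRef  = imhist(prep(ref),  64);              % 64-bin grey-level histogram ("classes")
hTest = imhist(prep(test), 64);

tol      = 0.05 * max(hRef);                 % how close two class counts must be to "agree"
agreeing = abs(hRef - hTest) <= tol;
if mean(agreeing) >= 0.75                    % e.g. 75% of the values comparable
    disp('Images matched');
else
    disp('No match');
end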
Fig. 01: LabVIEW simulated result

Fig. 02: MATLAB program

THRESHOLD

In some cases, when two images have a comparable background and the differences are not visually apparent, the computed values will nevertheless differ. To resolve this anomaly we apply a threshold: the minimum value that has to be exceeded for a difference to be counted. Consider, for example, the differences between a 200-pixel-wide version of an image and a 100-pixel-wide version of the same image. After the threshold is applied, the values in the array change: values below '4' are now treated as no difference. This technique is helpful in such cases. Based on this approximation, a threshold value of '3' or '4' works in most cases, and depending on the task the value can be tuned. The snippet below illustrates the idea.
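A minimal MATLAB illustration of this thresholding step follows; the per-class difference values are invented for the example, and only the threshold of roughly 3 to 4 units is taken from the discussion above.

% Applying a threshold to the per-class differences between two images.
% The difference values are made up; only the threshold of about 3-4 units
% comes from the discussion above.
diffs = [0 2 5 1 9 3 4 7 2 0];      % example absolute differences between corresponding classes
thr   = 4;                          % differences below this are treated as "no difference"
significant = diffs(diffs >= thr);  % only these classes count as genuinely different
fprintf('%d of %d classes still differ after thresholding\n', numel(significant), numel(diffs));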
Area Threshold

This method uses a formula that relates the pixel area of the object in the image to the distance between the object and the robot. The formula is derived beforehand for various objects. Depending on the object, the respective formula is used to determine its distance from the robot once the object has been detected. A minimal sketch of one such calibration follows.
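The formula itself is not stated in the text. Under a simple pinhole-camera assumption, the apparent pixel area of an object falls off roughly with the square of its distance, so one plausible per-object calibration has the form d = k / sqrt(area). The MATLAB sketch below illustrates that idea with a made-up calibration constant; it is not the formula used in the actual implementation.

% Area Threshold distance estimate. The per-object formula is not given in the
% paper; a pinhole-camera relation d = k / sqrt(area) with an assumed
% calibration constant is used here purely for illustration.
kCoffeeMug = 1200;                                       % assumed calibration constant for the coffee mug
frame  = imread('camera_frame.jpg');                     % frame in which the object was detected
bw     = imbinarize(rgb2gray(frame));                    % crude segmentation of the scene
stats  = regionprops(bw, 'Area');
areaPx = max([stats.Area]);                              % pixel area of the largest blob (the object)
distanceCm = kCoffeeMug / sqrt(areaPx);                  % estimated distance to the object
fprintf('Estimated distance to the object: %.1f cm\n', distanceCm);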
Raspberry Pi

The Raspberry Pi is a small single-board computer built around a 700 MHz ARM processor and running a Linux-based operating system. We are going to develop software on this platform that contains the database for object locations and image data, and that integrates LabVIEW and MATLAB.

CONCLUSION

The purpose of making the Home Butler robot was to help a disabled person in day-to-day activities: it can fetch things and bring them to the user. There are no such robots made especially for this application. Our robot is compact and its feasibility is high. However, further advancements can be made: 3D sensors can be used, and better algorithms can be implemented for path finding. We have described only one technique for object detection, but various other techniques can be used. For SLAM we have used LIDAR, but other sensors can be used along with GPS to improve it.

REFERENCES

1. Image comparison, from the abstract submitted by Cornell University.
2. 'HERB', developed at CMU, www.cmu.edu/herb-robot/.
3. Kabeer Mohammed and Dr. Bhaskara Reddy, "Optimized Solution for Image Processing Through Mobile Robots Working as a Team with Designated Team Members and Team Leader", International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 3, 2013, pp. 140-148, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.