International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 09 Issue: 12 | Dec 2022 www.irjet.net p-ISSN: 2395-0072
© 2022, IRJET | Impact Factor value: 7.529 | ISO 9001:2008 Certified Journal | Page 268
Sanjaya: A Blind Assistance System
Hitanshu Parekh1, Akshat Shah2, Grinal Tuscano3
1Student, Dept. of Information Technology, St. Francis Institute of Technology, Mumbai, Maharashtra, India
2Student, Dept. of Computer Engineering, Thadomal Shahani Engineering College, Mumbai, Maharashtra, India
3 Professor, Dept. of Information Technology, St. Francis Institute of Technology, Mumbai, Maharashtra, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - According to the World Health Organization
(WHO), there are approximately 284 million people worldwide
who have limited vision, and approximately 39 million who are
completely blind [10]. Visually impaired people confront
multiple challenges in their daily lives, particularly when
navigating from one place to another on their own. They
frequently rely on others to meet their daily needs. The
proposed system is intended to help the visually impaired by
identifying and classifying common everyday objects in real-
time. This system provides voice feedback for improved
comprehension and navigation. In addition, we have depth
estimation, which calculates the safe distance between the
object and the person, allowing them to be more self-sufficient
and less reliant on others. We were able to create this model
using TensorFlow and pre-trained models. The recommended
strategy is dependable, affordable, realistic, and feasible.
Key Words: Object Detection, SSD Mobilenet,
TensorFlow Object Detection API, COCO Dataset, Depth
Estimation, Voice Alerts, Visually Impaired People,
Machine Learning.
1. INTRODUCTION
With the recent rapid growth of information technology (IT),
a great deal of research has been conducted to solve everyday
difficulties, and as a result, many conveniences have been
supplied to people. Nonetheless, there are still many
inconveniences for the visually impaired.
The most significant inconveniences that a blind person
faces on a daily basis include finding information about
objects and having difficulties with indoor mobility.
Currently, the majority of vision-impaired people in India
rely on a stick to identify potential obstacles in their path.
Previous research included object analysis using ultrasonic
sensors. However, it is difficult to tell exactly where an object
is located using these approaches, especially in the presence
of obstacles.
As a result, our contribution to solving this problem
statement is a blind assistance system capable of object
detection and real-time speech conversion, so that users can
hear the results as and when they need them, together with
depth estimation for safe navigation. The SSD algorithm was
combined with machine learning using the TensorFlow API
packages to produce this system. Everything is encapsulated
in a single system, an app, for a simple and better user
experience.
2. Related Work
Balakrishnan et al. [1] described an image processing system
for the visually impaired that uses wearable stereo cameras
and stereo headphones. These devices are attached to
helmets. The stereo cameras capture the scene in front of the
visually impaired user. The images are then analyzed, and
significant features are retrieved to assist navigation by
describing the scene. The distance parameters are captured
using stereo cameras. The system is both complex and
cost-ineffective due to the usage of stereo cameras.
Mahmud et al. [2] described a method based on a cane that
includes an ultrasonic sensor and a CMPS11 compass sensor
mounted on the cane, which gives information about various
impediments and potholes. There are a handful of drawbacks
to this strategy. Because the solution relies on a cane as its
principal answer, only impediments up to knee height can be
recognized. Furthermore, any iron item in the surrounding
area might affect the compass sensor. This strategy assists
the blind in the outdoor world but has limitations in indoor
environments. The solution just detects the obstruction and
does not attempt to recognize it, which might be a significant
disadvantage.
Jiang et al. [3] described a real-time visual recognition system
with results converted to 3D audio; it is a multi-module
system. A portable camera device, such as a GoPro, captures
video, and the collected stream is processed on a server for
real-time image identification using an existing object
detection module. This system has a processing delay,
making it difficult to use in real time.
Jifeng Dai et al. [4] describe a region-based fully
convolutional network (R-FCN) used to recognize objects
precisely and efficiently. As a result, this work can simply use
ResNets as fully convolutional image classifier backbones to
recognize objects. The article presents a straightforward yet
effective R-FCN framework for object detection that provides
accuracy comparable to the Faster R-CNN approach. As a
result, adopting the state-of-the-art image classification
framework was facilitated.
Choi et al. [5] describe current methods for detection models
and also provide standard datasets. The paper discusses
different detectors, such as one-stage and two-stage
detectors, which assists in the analysis of various object
detection methods, and gathers some traditional as well as
new applications. It also mentions various object detection
branches.
A. A. Diaz Toro et al. [6] described the methods for developing
a vision-based wearable device that assists visually impaired
people in navigating a new indoor environment. The
suggested system assists the user in "purposeful navigation."
The system detects obstacles, walking areas, and other items
such as computers, doors, staircases, chairs, and so on.
D. P. Khairnar et al. [7] suggested a system that integrates
smart gloves with a smartphone app. The smart glove
identifies and avoids obstacles while also enabling visually
impaired people to understand their environment. Various
objects and obstacles in the surrounding area are recognized
using smartphone-based obstacle and object detection.
V. Kunta et al. [8] presented a system that uses the Internet of
Things (IoT) to connect the environment and the blind.
Among other things, sensors are used to detect obstacles,
damp floors, and staircases. The described prototype is a
simple and low-cost smart blind stick. It is outfitted with
several IoT modules and sensors.
Rajeshvaree Ravindra Karmarkar [9] proposes a deep
learning-based object recognition system for the blind. Voice
assistance can also help visually impaired people determine
the location of objects. The deep learning model for object
recognition employs the “You Only Look Once” (YOLO)
approach. To assist the blind in learning information about
items, a voice announcement is synthesized using text-to-
speech (TTS). As a result, it utilizes an efficient object-
detection technology that assists the visually impaired in
finding things inside a particular environment.
Table 1 compares multiple TensorFlow mobile object
detection models in terms of inference time and
performance. The models' mAP performance varies
substantially, with the more complex Faster RCNN Inception
model achieving near-optimal performance but taking much
longer to infer than the less complex Tiny YOLO model.
Existing solutions include limitations such as limited scope
and functionality, inefficiency in cost, systems that are not
portable, multiple sensor needs, and the inability to manage
visually impaired persons in both indoor and outdoor
situations in real time. We have made sincere efforts to
combine all of the excellent aspects of the preceding
solutions in order to create a full, portable, cost-effective
system capable of efficiently solving real-world challenges.
Table -1: Inference time and mAP performance of trained
models [8]
3. WORKING
The proposed design of our system (Fig. 1) is based on the
detection of objects in the environment of a visually
impaired individual. The suggested object detection
technology involves a number of phases, from frame
extraction to output recognition. To detect objects in each
frame, a comparison between the query frame and database
objects is performed. We present a real-time object
recognition and positioning system. An audio file containing
information about each identified object is activated. As a
result, object detection and identification are addressed
concurrently.
Fig -1: System Flowchart
The user must initially activate the program, after which a
camera captures real-time, live images to feed into the model.
Once an image has been captured, the model must store and
evaluate it to determine how far the detected object is from
the subject. This information is then transformed into an
audio signal.
For feature extraction, this system uses a Residual Network
(ResNet) architecture, a type of artificial neural network that
allows the model to skip layers without affecting
performance [11].
3.1 Methodology
This system mainly comprises the following:
1. An Android application captures real-time frames and
sends them to a networked server, where all calculations
are performed.
2. The server uses a pre-trained SSD detection model
trained on the COCO dataset. It then tests and detects the
output class using an accuracy metric.
3. After testing with the voice modules, the object's class is
translated into default voice notes, which are then
relayed to the visually impaired user for assistance.
4. Along with object detection, an alert system calculates
the approximate distance of the object. Whether the blind
person is close to the object or farther away at a safer
distance, it produces voice-based outputs in addition to
distance units.
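The four steps above can be sketched as a minimal per-frame loop. Everything here is illustrative: the detector is stubbed with a fixed result (a real deployment would run the pre-trained SSD MobileNet model on the server), and the function names, frame size, and 0.3 closeness threshold are assumptions, not the deployed system.

```python
def detect_objects(frame):
    """Stub for the server-side SSD detection step; returns
    (class_name, confidence, (x, y, w, h)) tuples in pixels."""
    return [("person", 0.87, (120, 40, 300, 420))]

def box_frame_ratio(box, frame_area):
    """Share of the frame occupied by the detected box - the crude
    closeness proxy used by the alert step."""
    x, y, w, h = box
    return (w * h) / frame_area

def frame_to_alerts(frame, frame_area=640 * 480, close_ratio=0.3):
    """Steps 2-4 above: detect, judge closeness, build the voice text."""
    alerts = []
    for name, conf, box in detect_objects(frame):
        if box_frame_ratio(box, frame_area) >= close_ratio:
            alerts.append(f"{name} is very close")
        else:
            alerts.append(f"{name} is at a safer distance")
    return alerts
```

In the full system, each returned string would be handed to the voice generation module instead of being collected in a list.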
3.5 SSD Architecture
Fig -2: SSD Model Architecture [13]
3.2 COCO Dataset
The Common Objects in Context (COCO) dataset isoneofthe
most widely used open-source object identification
databases for training deep learning algorithms. This
database contains hundreds of thousands of images along
with millions of pre-labeled objects for training [12].
3.3 TensorFlow Object Detection API
We used TensorFlow APIs to put this into action. The
advantage of using APIs is that they provide access to a
variety of standard operations. The TensorFlow Object
Detection API is essentially a framework for developing a
deep learning network that handles the object detection
problem. The framework ships with trained models, which it
terms the "Model Zoo," covering the Open Images dataset,
the KITTI dataset, and the COCO dataset. The COCO dataset
is our primary interest here.
3.4 Models
Many pre-trained models are now available for TensorFlow,
and several of them address this task. We use SSD MobileNet
detection in this project since it performs well across most of
the system, but Mask R-CNN can be used for higher accuracy,
or plain SSD for faster inference. So let's take a closer look at
the SSD algorithm.
SSD is made up of two parts: a backbone model and an SSD
head.
The backbone model is essentially a trained image
classification network that is used to extract features. This is
frequently a network such as ResNet, trained on ImageNet,
that has had its last fully connected classification layer
removed.
The SSD head is one or more convolutional layers added to
this backbone; its outputs are interpreted as the bounding
boxes and classes of objects at the spatial positions of the
final layer activations. As a result, we have a deep neural
network that can extract semantic meaning from an input
image while keeping its spatial structure, although at a lower
resolution.
For an input image, the ResNet-34 backbone produces 256
7x7 feature maps. SSD splits the picture into grid cells, with
each grid cell in charge of identifying objects in that portion
of the image. Detecting objects involves predicting the class
and location of an object inside a given region.
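As rough arithmetic for the head's output size, assume each grid cell owns k anchor boxes and each anchor predicts one score per class plus 4 box offsets. The helper below is an illustration of that count, not code from the system; the function name and the choice of 4 anchors are assumptions.

```python
def ssd_predictions_per_map(grid, anchors_per_cell, num_classes):
    """Number of output values one SSD head produces for one
    grid x grid feature map: each cell owns `anchors_per_cell`
    anchor boxes, and each anchor predicts `num_classes` class
    scores plus 4 bounding-box offsets."""
    cells = grid * grid
    per_anchor = num_classes + 4
    return cells * anchors_per_cell * per_anchor

# For the 7x7 map mentioned above, 4 anchors per cell and the
# 80 annotated COCO classes: 49 * 4 * 84 = 16464 values.
```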
3.6 Anchor Box
In SSD, each grid cell can have multiple anchor (prior) boxes
associated with it. Within a grid cell, each of these
pre-defined anchor boxes is responsible for a particular size
and shape. During training, SSD uses a matching phase to
pair anchor boxes with the bounding boxes of each ground
truth object inside an image. The anchor box with the most
overlap with an object is responsible for predicting that
object's class and location. Once the network has been
trained, this property is used to predict the detected objects
and their locations. In practice, each anchor box is assigned
an aspect ratio and a zoom level. Not all objects are square in
shape; some are somewhat shorter, slightly longer, or slightly
broader. To cater for this, the SSD architecture provides
pre-defined aspect ratios for the anchor boxes. The different
aspect ratios can be configured using the ratios parameter of
the anchor boxes associated with each grid cell at each zoom
or scale level.
3.7 Zoom Level
The anchor boxes do not have to be the same size as the grid
cells. The user may be looking for both smaller and bigger
objects within a grid cell. The zooms parameter determines
how much the anchor boxes should be scaled up or down
relative to each grid cell.
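A minimal sketch of how the ratios and zooms parameters could generate the per-cell anchor shapes, assuming the common area-preserving convention w = s·sqrt(r), h = s/sqrt(r); the function name and this convention are assumptions, not the exact SSD implementation.

```python
import math

def anchor_sizes(cell_size, ratios, zooms):
    """Widths and heights of the anchor boxes attached to one grid
    cell. Each zoom scales the cell; each aspect ratio r stretches
    the box while preserving its area: w = s*sqrt(r), h = s/sqrt(r)."""
    sizes = []
    for z in zooms:
        s = cell_size * z          # zoomed base size for this cell
        for r in ratios:
            sizes.append((s * math.sqrt(r), s / math.sqrt(r)))
    return sizes
```

With 2 zooms and 3 ratios, each grid cell would carry 6 anchor boxes of differing shapes.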
3.8 MobileNet
The MobileNet model, as its name implies, is designed for
use in mobile applications and is TensorFlow's first mobile
computer vision model [14]. It is built on the idea of
depthwise separable convolutions, which factorize a
standard convolution into a depthwise convolution followed
by a 1 x 1 convolution, known as a pointwise convolution.
The depthwise convolution applies a single filter to each
input channel; the pointwise convolution then combines the
outputs of the depthwise convolutions with a 1 x 1
convolution. A standard convolution both filters and
combines inputs into a new set of outputs in a single step;
the depthwise separable convolution splits this into two
layers, one for filtering and one for combining. This
factorization has the effect of significantly reducing
computation and model size.
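The reduction can be made concrete by counting weights. Assuming a k x k kernel, c_in input channels, and c_out output channels (biases ignored), a standard convolution holds k·k·c_in·c_out weights, while the factorized version holds k·k·c_in + c_in·c_out; the helper names are illustrative.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored):
    one k x k x c_in filter per output channel."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in the factorized version: one k x k depthwise
    filter per input channel, then a 1x1 pointwise convolution
    mixing c_in channels into c_out."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise

# For a 3x3 layer mapping 32 channels to 64:
# standard = 18432 weights, separable = 2336 weights.
```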
3.9 Depth Estimation
The depth estimation or extraction feature refers to the
techniques and algorithms used to produce a representation
of a scene's spatial structure. To put it another way, it is used
to compute the distance between two objects. Our prototype
is used to assist the blind, and it seeks to alert the user about
impending obstacles. To do so, we must first determine the
distance between the obstacle and the individual in any
real-time circumstance. After detecting the object, a
rectangular box is drawn around it. If the object occupies the
majority of the frame, the estimated distance of the object
from the individual is calculated, subject to some constraints.
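The paper does not give the exact distance formula, but one common single-camera approach consistent with this description is the pinhole model: if a detected class has a roughly known real-world height H and its bounding box is h pixels tall under a focal length of f pixels, then d is approximately f·H/h. The sketch below is this assumed method, not the system's own code, and the numbers in the comment are illustrative.

```python
def estimate_distance(box_height_px, real_height_m, focal_px):
    """Pinhole-camera distance estimate d = f * H / h. Only
    meaningful once the box dominates the frame and the real
    height of the detected class is roughly known."""
    return focal_px * real_height_m / box_height_px

# e.g. a 1.7 m person whose box is 350 px tall, with f = 700 px,
# is estimated to be 700 * 1.7 / 350 = 3.4 m away.
```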
3.10 Voice Generation
Following the detection of an object, it is critical to notify the
individual of the presence of that object in his or her path.
The voice generation module relies on pyttsx3, a Python
module that converts text to speech. An application obtains a
reference to a pyttsx3 engine instance by calling the factory
function pyttsx3.init(). Pyttsx3 is thus a simple tool for
converting text to voice.
The way this algorithm operates is that every time an object
is detected, an approximation of its distance is generated.
Text is then displayed on the screen using the cv2 library's
cv2.putText() function. We use Python-tesseract for
character recognition, to find text embedded in a picture.
OCR recognizes text content in images and encodes it in a
format that a computer can understand; this text detection is
accomplished by scanning and analyzing the image. Thus,
text contained in images is identified and "read" using
Python-tesseract.
Audio commands are created as output. When the object is
too close, the system displays and speaks the message
"Warning: The object (class of object) is very close to you.
Watch out!" Otherwise, if the object is at a safer distance, a
voice is generated that states, "The object is at a safer
distance." This is achieved through the use of libraries such
as PyTorch, pyttsx3, pytesseract, and engine.io.
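The message selection itself reduces to a small function. The sketch below reproduces the two messages quoted above; the function name is hypothetical, and the pyttsx3 calls shown in the comment follow its documented init()/say()/runAndWait() API.

```python
def alert_message(object_class, is_close):
    """Builds the spoken warning described above. In the full
    system the string would also be drawn on screen with
    cv2.putText() and spoken through a pyttsx3 engine:
        engine = pyttsx3.init()
        engine.say(msg)
        engine.runAndWait()"""
    if is_close:
        return (f"Warning: The object ({object_class}) is very "
                "close to you. Watch out!")
    return "The object is at a safer distance."
```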
PyTorch's primary function here is as a machine learning
library used in the audio domain. It assists in loading the
voice file in mp3 format and controls the audio sampling
rate. As a result, it is used to manipulate sound qualities such
as frequency, wavelength, and waveform.
4. Output
Fig -3: Object Detection of a person
Fig -4: Object Detection of a person and bottle
Fig -5: Object Detection of a chair
Fig -6: Object Detection of a sofa
Fig -7: Object Detection of a television(TV)
Fig -8: Object Detection of a plant
The suggested system successfully detects and labels 90
object classes while also demonstrating its accuracy. When
the person with the camera approaches an object, the model
estimates the distance between the object and the camera
and provides voice feedback. The dataset was tested on an
SSD MobileNet V1 model. The model showed low latency and
was fast at detecting objects.
5. FUTURE SCOPE
We can improve the system's accuracy. Furthermore, the
current system is based on the Android operating system
and may be modified to work with any convenient device.
6. CONCLUSION
Future applications of the technology might include language
translation, currency checking, reading literary works, a
chatbot that allows the user to communicate and interact,
smart shopping, email reading, real-time location sharing,
and sign board identification, such as the "Washroom" sign.
7. ACKNOWLEDGEMENT
I would like to express my deep gratitude to Ms. Grinal
Tuscano, Associate Professor at the St. Francis Institute of
Technology and my research supervisor, for their patient
guidance, enthusiastic encouragement, and useful critiques
of this research work.
8. REFERENCES
[1] Balakrishnan, G. and Sainarayanan, G. (2008). Stereo
Image Processing Procedure for Vision Rehabilitation.
Applied Artificial Intelligence.
[2] N. Mahmud, R. K. Saha, R. B. Zafar, M. B. H. Bhuian and S.
S. Sarwar, "Vibration and voice operated navigation system
for visuallyimpaired person,"2014International Conference
on Informatics, Electronics & Vision (ICIEV), 2014, pp. 1-5.
[3] Jiang, Rui & Lin, Qian & Qu, Shuhui. (2016). Let Blind
People See: Real-Time Visual Recognition with Results
Converted to 3D Audio.
[4] Jifeng Dai, Yi Li, Kaiming He and Jian Sun, "R-FCN: Object
Detection via Region-based Fully Convolutional Networks,"
Conf. Neural Inform. Process. Syst., Barcelona, Spain, Dec.
4-6, 2016, pp. 379-387.
[5] Choi D. and Kim M., "Trends on Object Detection
Techniques Based on Deep Learning," Electronics and
Telecommunications Trends, Vol. 33, No. 4, pp. 23-32, Aug.
2018.
[6] A. A. Diaz Toro, S. E. Campaña Bastidas and E. F. Caicedo
Bravo, "Methodology to Build a Wearable System for
Assisting Blind People in Purposeful Navigation," 2020 3rd
International Conference on Information and Computer
Technologies (ICICT), 2020, pp. 205-212.
[7] D. P. Khairnar, R. B. Karad, A. Kapse, G. Kale and P.
Jadhav, "PARTHA: A Visually Impaired Assistance System,"
2020 3rd International Conference on Communication
System, Computing and IT Applications (CSCITA), 2020, pp.
32-37.
[8] V. Kunta, C. Tuniki and U. Sairam, "Multi-Functional Blind
Stick for Visually Impaired People," 2020 5th International
Conference on Communication and Electronics Systems
(ICCES), 2020, pp. 895-899.
[9] Rajeshvaree Ravindra Karmarkar, "Object Detection
System for the Blind with Voice Guidance," International
Journal of Engineering Applied Sciences and Technology
(IJEAST), 2021, pp. 67-70.
[10] https://www.indiatoday.in/education-today/gk-
current-affairs/story/world-sight-day-2017-facts-and-
figures-1063009-2017-10-12
[11] https://builtin.com/artificial-intelligence/resnet-
architecture
[12] https://deepai.org/machine-learning-glossary-and-
terms/CoCo%20Dataset
[13] https://iq.opengenus.org/ssd-model-architecture/
[14] https://medium.com/analytics-vidhya/image-
classification-with-mobilenet-cc6fbb2cd470

 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesIRJET Journal
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web applicationIRJET Journal
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
 

Mais de IRJET Journal (20)

TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
 
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURESTUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
 
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
 
Effect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsEffect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil Characteristics
 
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
 
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
 
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
 
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
 
A REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASA REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADAS
 
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare System
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridges
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web application
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
 

Último

(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSSIVASHANKAR N
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).pptssuser5c9d4b1
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...ranjana rawat
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Serviceranjana rawat
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...roncy bisnoi
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 

Último (20)

(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLSMANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
MANUFACTURING PROCESS-II UNIT-5 NC MACHINE TOOLS
 
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
247267395-1-Symmetric-and-distributed-shared-memory-architectures-ppt (1).ppt
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
Call Girls in Nagpur Suman Call 7001035870 Meet With Nagpur Escorts
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service NashikCall Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
Call Girls Service Nashik Vaishnavi 7001305949 Independent Escort Service Nashik
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduits
 

Sanjaya: A Blind Assistance System

a great deal of research has been conducted to solve everyday difficulties, and as a result, many conveniences have been made available. Nonetheless, many inconveniences remain for the visually impaired. The most significant inconveniences that a blind person faces on a daily basis include finding information about objects and difficulty with indoor mobility. Currently, the majority of vision-impaired people in India rely on a stick to identify potential obstacles in their path. Previous research included object analysis using ultrasonic sensors; however, it is difficult to tell exactly where an object is located using these approaches, especially in the presence of obstacles. Our contribution to solving this problem statement is a blind assistance system capable of object detection with real-time speech conversion, so that users may hear the output as and when they need it, together with depth estimation for safe navigation. The SSD algorithm was combined with machine learning using the TensorFlow API packages to produce this system. Everything is encapsulated in a single system, an app, for a simple and better user experience.

2. Related Work

Balakrishnan et al. [1] described an image processing system for the visually impaired that uses wearable stereo cameras and stereo headphones attached to a helmet. The stereo cameras capture the scene in front of the visually impaired user. The images are then analyzed, and significant features are extracted to assist navigation by describing the scene. The distance parameters are captured using the stereo cameras. The system is both complex and cost-ineffective due to the usage of stereo cameras.

Mahmud et al. [2] described a method that uses a cane fitted with an ultrasonic sensor and a CMP compass sensor (511), which provides information about various obstacles and potholes.
There are a handful of drawbacks to this strategy. Because the solution relies on a cane as its principal component, only obstacles up to knee height can be recognized. Furthermore, any iron item in the surrounding area might affect the CMP compass sensor. This strategy assists the blind outdoors but has limitations in indoor environments. The solution merely detects the obstruction and does not attempt to recognize it, which can be a significant disadvantage.

Jiang et al. [3] described a multi-module, real-time visual recognition system whose results are converted to 3D audio. A portable camera device, such as a GoPro, captures video, and the collected stream is processed on a server for real-time image identification using an existing object detection module. This system has a processing delay, making it difficult to use in real time.

Jifeng Dai et al. [4] describe R-FCN, a region-based fully convolutional network used to recognize objects precisely and efficiently. As a result, the work can simply use ResNets as fully convolutional image-classifier backbones to recognize objects. The article presents a straightforward yet effective R-FCN framework for object detection that achieves accuracy comparable to the Faster R-CNN approach, which facilitated the adoption of a state-of-the-art image classification framework.
Choi D. et al. [5] survey current detection models and also provide standard datasets. The paper discussed different detectors, such as one-stage and two-stage detectors, which assisted in the analysis of various object detection methods, and gathered some traditional as well as new applications. It also covered various object detection branches.

A. A. Diaz Toro et al. [6] described methods for developing a vision-based wearable device that assists visually impaired people in navigating a new indoor environment. The suggested system assists the user in "purposeful navigation": it detects obstacles, walking areas, and other items such as computers, doors, staircases, and chairs.

D. P. Khairnar et al. [7] suggested a system that integrates smart gloves with a smartphone app. The smart glove identifies and avoids obstacles while also enabling visually impaired people to understand their environment. Various objects and obstacles in the surrounding area are recognized using smartphone-based obstacle and object detection.

V. Kunta et al. [8] presented a system that uses the Internet of Things (IoT) to connect the environment and the blind. Among other things, sensors are used to detect obstacles, damp floors, and staircases. The described prototype is a simple, low-cost smart blind stick outfitted with several IoT modules and sensors.

Rajeshvaree Ravindra Karmarkar [9] proposes a deep learning-based item recognition system for the blind, in which voice assistance also helps visually impaired people determine the location of objects. The deep learning model for object recognition employs the "You Only Look Once" (YOLO) approach.
To assist the blind in learning information about items, a voice announcement is synthesized using text-to-speech (TTS). The result is an efficient object-detection technology that assists the visually impaired in finding things within a particular environment.

Table 1 compares multiple TensorFlow mobile object detection models in terms of inference time and mAP performance. The models' mAP varies substantially: the more complex Faster R-CNN Inception model achieves near-optimal performance but takes much longer to infer than the less complex Tiny YOLO model.

Existing solutions have limitations such as limited scope and functionality, cost inefficiency, lack of portability, the need for multiple sensors, and the inability to assist visually impaired persons in both indoor and outdoor situations in real time. We have made sincere efforts to combine the best aspects of the preceding solutions in order to create a complete, portable, cost-effective system capable of efficiently solving real-world challenges.

Table -1: Inference time and mAP performance of trained models [8]

3. WORKING

The proposed design of our system (Fig. 1) is based on the detection of objects in the environment of a visually impaired individual. The suggested object detection technology requires a number of phases, from frame extraction to output recognition. To detect objects in each frame, a comparison between the query frame and database objects is performed. We present a real-time object recognition and positioning system: an audio file containing information about each identified object is activated, so object detection and identification are addressed concurrently.

Fig -1: System Flowchart

The user must first activate the program, which then uses a camera to capture live images in real time and feed them into the model. Once an image has been captured, the model stores and evaluates it to determine how far the detected object is from the subject.
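The distance evaluation above can be approximated with a simple pinhole-camera model when the typical real-world height of a detected class is roughly known. This is only a sketch of one plausible approach: the focal length, the per-class heights, and the 1-metre safety threshold below are illustrative assumptions, not values taken from the paper.

```python
# Rough monocular depth estimate from a detection's bounding-box height,
# using the pinhole-camera model: distance = real_height * focal_px / box_px.
# All constants here are illustrative assumptions for this sketch.

FOCAL_LENGTH_PX = 800.0                                  # assumed focal length in pixels
KNOWN_HEIGHT_M = {"person": 1.7, "chair": 0.9, "bottle": 0.25}
SAFE_DISTANCE_M = 1.0                                    # assumed safety threshold

def estimate_distance(label: str, box_height_px: float) -> float:
    """Approximate distance (metres) of a detected object from the camera."""
    return KNOWN_HEIGHT_M[label] * FOCAL_LENGTH_PX / box_height_px

def is_safe(distance_m: float) -> bool:
    """True when the object lies beyond the assumed safety threshold."""
    return distance_m >= SAFE_DISTANCE_M
```

For example, a person whose bounding box is 680 px tall would be estimated at 1.7 × 800 / 680 = 2.0 m, which is beyond the assumed 1 m threshold.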
The output of this step is then transformed into an audio signal.
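Before the audio stage, the detected label and distance have to be turned into a sentence for the speech engine. A minimal sketch of such a formatting helper is shown below; the wording and the 1 m default threshold are hypothetical, and the paper does not specify which TTS library is used (a common choice in Python would be pyttsx3).

```python
def format_alert(label: str, distance_m: float, threshold_m: float = 1.0) -> str:
    """Compose the sentence handed to the text-to-speech engine.

    The phrasing and the default threshold are illustrative assumptions.
    """
    if distance_m < threshold_m:
        return f"Warning: {label} very close, about {distance_m:.1f} meters ahead."
    return f"{label} detected, about {distance_m:.1f} meters away."

# A TTS library such as pyttsx3 could then speak the string, e.g.:
#   engine = pyttsx3.init()
#   engine.say(format_alert("person", 0.6))
#   engine.runAndWait()
```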
For feature extraction, this system uses a Residual Network (ResNet) architecture, a type of artificial neural network that allows the model to skip layers without affecting performance [11].

3.1 Methodology

The system mainly comprises the following steps:

1. An Android application captures real-time frames and sends them to a networked server, where all calculations are performed.
2. The server uses a pre-trained SSD detection model trained on the COCO dataset. It then tests and detects the output class using an accuracy metric.
3. After testing with the voice modules, the object's class is translated into default voice notes, which are then sent to the blind user for assistance.
4. Along with object detection, an alert system calculates the approximate distance of the object. Whether the blind person is close to the object or far away in a safer location, it produces voice-based outputs along with distance units.

3.5 SSD Architecture

Fig -2: SSD Model Architecture [13]

3.2 COCO Dataset

The Common Objects in Context (COCO) dataset is one of the most widely used open-source object identification databases for training deep learning algorithms. It contains hundreds of thousands of images along with millions of pre-labeled objects for training [12].

3.3 TensorFlow Object Detection API

We used TensorFlow APIs to put the system into action. The advantage of using these APIs is that they provide access to a variety of standard operations. The TensorFlow Object Detection API is essentially a framework for building a deep learning network that handles the object detection problem. The framework ships with trained models in what it terms the "Model Zoo".
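Step 2 of the methodology above returns class IDs, confidence scores, and bounding boxes, which must be filtered and mapped to COCO label names before the voice stage. A minimal post-processing sketch follows; the four-entry label subset and the 0.5 threshold are illustrative (the Object Detection API ships the full COCO label map).

```python
# Filter raw detector output by confidence and map COCO class IDs to names.
# The label subset and the 0.5 threshold are illustrative assumptions.
COCO_LABELS = {1: "person", 44: "bottle", 62: "chair", 63: "couch"}

def postprocess(class_ids, scores, boxes, threshold=0.5):
    """Keep (label, score, box) triples for detections above the threshold."""
    kept = []
    for cid, score, box in zip(class_ids, scores, boxes):
        if score >= threshold and cid in COCO_LABELS:
            kept.append((COCO_LABELS[cid], score, box))
    return kept
```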
This collection includes the Open Images dataset, the KITTI dataset, and the COCO dataset; the COCO dataset is our primary interest here.

3.4 Models

Many pre-trained models are now available for TensorFlow, and several of them can address this task. We use SSD MobileNet in this project since it performs well for most of the system; Mask R-CNN can be used instead for higher accuracy, or plain SSD for faster inference. Let us take a closer look at the SSD algorithm.
SSD is made up of two parts: a backbone model and an SSD head. The backbone model is essentially a trained image classification network used to extract features. This is frequently a network such as ResNet trained on ImageNet, with the final fully connected classification layer removed. The SSD head is simply one or more convolutional layers added to this backbone; its outputs are interpreted as the bounding boxes and classes of objects at the spatial positions of the final layer's activations. As a result, we have a deep neural network that can extract semantic meaning from an input image while preserving its spatial structure, although at a lower resolution. With a ResNet-34 backbone, the backbone produces 256 7x7 feature maps for an input image. SSD splits the image into grid cells, with each grid cell in charge of identifying objects in that portion of the image: detecting an object means predicting the class and location of an object within that region.

3.6 Anchor Box

In SSD, each grid cell can have multiple anchor (prior) boxes associated with it. Within a grid cell, each of these pre-defined anchor boxes is responsible for a particular size and shape. During training, SSD uses a matching phase to pair anchor boxes with the bounding boxes of each ground-truth object in an image. The anchor box with the greatest overlap with an object is responsible for predicting that object's class and location. Once the network has been trained, this property is used to predict the detected objects and their locations. In practice, each anchor box is assigned an aspect ratio and a zoom level. Not all objects are square: some are slightly shorter, slightly longer, or slightly wider.
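The matching phase just described comes down to computing the intersection-over-union (IoU) between each anchor box and a ground-truth box, and assigning the ground truth to the anchor with the highest overlap. A minimal sketch with toy pixel boxes (not the real anchor grid) looks like this:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_anchor(anchors, ground_truth):
    """Return the index of the anchor with the greatest overlap."""
    return max(range(len(anchors)), key=lambda i: iou(anchors[i], ground_truth))

anchors = [(0, 0, 50, 50), (25, 25, 75, 75), (0, 25, 50, 100)]
gt = (30, 30, 70, 70)
print(match_anchor(anchors, gt))  # anchor 1 overlaps the ground truth most
```

Here anchor 1 wins with an IoU of 0.64, so during training it alone would be responsible for predicting this object's class and box offsets.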
To account for these varied shapes, the SSD architecture provides pre-defined aspect ratios for the anchor boxes. The different aspect ratios can be configured using the ratios parameter of the anchor boxes associated with each grid cell at each zoom or scale level.

3.7 Zoom Level

The anchor boxes do not have to be the same size as the grid cells; the user may be looking for both smaller and larger objects within a grid cell. The zooms option determines how much the anchor boxes should be scaled up or down relative to each grid cell.

3.8 MobileNet

The MobileNet model, as its name implies, is intended for use in mobile applications and is TensorFlow's first mobile computer vision model [14]. It is built on the idea of depthwise separable convolutions, which factorize a standard convolution into a depthwise convolution followed by a 1 x 1 convolution, also called a pointwise convolution. The depthwise convolution applies a single filter to each input channel; the pointwise convolution then combines the outputs of the depthwise convolutions with a 1 x 1 convolution. Whereas a standard convolution both filters and combines inputs into a new set of outputs in a single step, the depthwise separable convolution splits this into two layers: one for filtering and another for combining. This factorization substantially reduces computation and model size.

3.9 Depth Estimation

Depth estimation or extraction refers to the techniques and algorithms used to produce a representation of the spatial structure of a scene; in other words, it is used to compute the distance between two objects. Our prototype is used to assist the blind, and it seeks to alert the user about impending obstacles. To do so, we must first determine the distance between the obstacle and the individual in any real-time situation.
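Returning briefly to the MobileNet factorization of Section 3.8, the reduction in model size is easy to verify with a little arithmetic: a standard k x k convolution with C_in input and C_out output channels needs k*k*C_in*C_out weights, while the depthwise-plus-pointwise pair needs only k*k*C_in + C_in*C_out. The layer sizes below are illustrative, not taken from the paper's model:

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (ignoring biases)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise (one k x k filter per input channel) plus 1x1 pointwise."""
    return k * k * c_in + c_in * c_out

# An illustrative MobileNet-sized layer: 3x3 kernel, 256 -> 256 channels.
std = standard_conv_params(3, 256, 256)        # 589,824 weights
sep = depthwise_separable_params(3, 256, 256)  # 67,840 weights
print(std, sep, round(std / sep, 1))
```

For this layer the factorized form uses roughly 8.7x fewer weights, which is why MobileNet fits comfortably on a phone.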
After detecting the object, a rectangular box is drawn around it. If the object occupies the majority of the frame, the estimated distance of the object from the individual is calculated, subject to some constraints.

3.10 Voice Generation

After an object is detected, it is critical to notify the individual of the presence of that object on his or her path. The voice generation module relies on pyttsx3, a Python module that converts text to speech. To obtain a reference to a pyttsx3.Engine instance, an application invokes the factory function pyttsx3.init(). Pyttsx3 is a simple tool for converting text to voice. Every time an object is detected, an approximation of its distance is generated, and the corresponding text is displayed on screen using the cv2 library's cv2.putText() function. We use Python-tesseract (pytesseract) for character recognition, to find text embedded in an image. OCR recognizes text content in images and encodes it in a format a computer can understand; this detection is accomplished by scanning and analyzing the image. Thus, text contained in images is identified and "read" using Python-tesseract. Audio commands are produced as output. When the object is too close, the system announces, "Warning: The object (class of object) is very close to you. Watch out!" Otherwise, if the object is at a safer distance, a voice is generated that states, "The object is at a safer distance." This is achieved through the use of libraries such as PyTorch, pyttsx3, pytesseract, and engine.io.
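The alert rule described in Sections 3.9 and 3.10 — warn when the detected box fills most of the frame, otherwise report a safe distance — can be sketched as follows. The 0.5 area threshold is an assumed value (the paper does not state its exact constraint), and in the real system the returned string would be spoken with pyttsx3 rather than printed:

```python
def proximity_message(class_name, box, frame_w, frame_h, close_fraction=0.5):
    """Pick the voice alert text from how much of the frame the box fills.
    box is (x1, y1, x2, y2) in pixels; close_fraction is an assumed threshold."""
    x1, y1, x2, y2 = box
    area_fraction = ((x2 - x1) * (y2 - y1)) / (frame_w * frame_h)
    if area_fraction > close_fraction:
        return f"Warning: The {class_name} is very close to you. Watch out!"
    return f"The {class_name} is at a safer distance."

# In the real system the chosen string would be passed to pyttsx3, e.g.:
#   engine = pyttsx3.init(); engine.say(text); engine.runAndWait()
print(proximity_message("person", (100, 50, 500, 460), 640, 480))
print(proximity_message("bottle", (10, 10, 60, 90), 640, 480))
```

A person filling over half of a 640x480 frame triggers the warning, while the small bottle box yields the safer-distance message.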
PyTorch's primary function is that of a machine learning library; in this system it is used on the audio side. PyTorch assists in loading the voice file in mp3 format and controls the audio sampling rate, and it is used to manipulate sound qualities such as frequency, wavelength, and waveform.

4. Output

Fig -3: Object Detection of a person

Fig -4: Object Detection of a person and bottle

Fig -5: Object Detection of a chair

Fig -6: Object Detection of a sofa

Fig -7: Object Detection of a television (TV)

Fig -8: Object Detection of a plant

5. CONCLUSION

The proposed system successfully detects and labels 90 object classes while also demonstrating its accuracy. When the person carrying the camera approaches an object, the model estimates the distance between the object and the camera and provides voice feedback. The dataset was tested on an SSD MobileNet V1 model; the model showed low latency and was fast at detecting objects.

6. FUTURE SCOPE

We can improve the system's accuracy. Furthermore, the current system is based on the Android operating system and may be modified to work with any convenient device.
Future applications of the technology might include language translation, currency checking, reading literary works, a chatbot that allows the user to communicate and interact, smart shopping, email reading, real-time location sharing, and sign board identification, such as the "Washroom" sign.

7. ACKNOWLEDGEMENT

I would like to express my deep gratitude to Ms. Grinal Tuscano, Associate Professor at the St. Francis Institute of Technology and my research supervisor, for her patient guidance, enthusiastic encouragement, and useful critiques of this research work.

8. REFERENCES

[1] G. Balakrishnan and G. Sainarayanan, "Stereo Image Processing Procedure for Vision Rehabilitation," Applied Artificial Intelligence, 2008.

[2] N. Mahmud, R. K. Saha, R. B. Zafar, M. B. H. Bhuian and S. S. Sarwar, "Vibration and voice operated navigation system for visually impaired person," 2014 International Conference on Informatics, Electronics & Vision (ICIEV), 2014, pp. 1-5.

[3] R. Jiang, Q. Lin and S. Qu, "Let Blind People See: Real-Time Visual Recognition with Results Converted to 3D Audio," 2016.

[4] J. Dai, Y. Li, K. He and J. Sun, "R-FCN: Object Detection via Region-based Fully Convolutional Networks," Conf. Neural Inform. Process. Syst., Barcelona, Spain, Dec. 4-6, 2016, pp. 379-387.

[5] D. Choi and M. Kim, "Trends on Object Detection Techniques Based on Deep Learning," Electronics and Telecommunications Trends, Vol. 33, No. 4, pp. 23-32, Aug. 2018.

[6] A. A. Diaz Toro, S. E. Campaña Bastidas and E. F. Caicedo Bravo, "Methodology to Build a Wearable System for Assisting Blind People in Purposeful Navigation," 2020 3rd International Conference on Information and Computer Technologies (ICICT), 2020, pp. 205-212.

[7] D. P. Khairnar, R. B. Karad, A. Kapse, G. Kale and P. Jadhav, "PARTHA: A Visually Impaired Assistance System," 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), 2020, pp. 32-37.

[8] V. Kunta, C. Tuniki and U. Sairam, "Multi-Functional Blind Stick for Visually Impaired People," 2020 5th International Conference on Communication and Electronics Systems (ICCES), 2020, pp. 895-899.

[9] R. R. Karmarkar, "Object Detection System for the Blind with Voice Guidance," International Journal of Engineering Applied Sciences and Technology (IJEAST), 2021, pp. 67-70.

[10] https://www.indiatoday.in/education-today/gk-current-affairs/story/world-sight-day-2017-facts-and-figures-1063009-2017-10-12

[11] https://builtin.com/artificial-intelligence/resnet-architecture

[12] https://deepai.org/machine-learning-glossary-and-terms/CoCo%20Dataset

[13] https://iq.opengenus.org/ssd-model-architecture/

[14] https://medium.com/analytics-vidhya/image-classification-with-mobilenet-cc6fbb2cd470