International Journal of Graphics and Multimedia (IJGM), ISSN 0976 – 6448(Print),
ISSN 0976 – 6456(Online) Volume 4, Issue 1, January - April 2013, © IAEME
CHARACTER RECOGNITION OF KANNADA TEXT IN SCENE
IMAGES USING NEURAL NETWORK
M. M. Kodabagi 1, S. A. Angadi 2, Chetana R. Shivanagi 3
1 Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India
2 Department of Computer Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India
3 Department of Information Science and Engineering, Basaveshwar Engineering College, Bagalkot-587102, Karnataka, India
ABSTRACT
Character recognition in scene images is one of the most fascinating and challenging
areas of pattern recognition with various practical application potentials. It can contribute
immensely to the advancement of an automation process and can improve the interface
between man and machine in many applications. Some practical application potentials of
character recognition system are: reading aid for the blind, traffic guidance systems, tour
guide systems, location aware systems and many more. In this work, a novel method for
recognizing basic Kannada characters in natural scene images is proposed. The proposed
method uses zone wise horizontal and vertical profile based features of character images. The
method works in two phases. During training, zone wise vertical and horizontal profile based
features are extracted from training samples and a neural network is trained. During testing, the
test image is processed to obtain features, which are recognized using the neural network
classifier. The method has been evaluated on 490 Kannada character images captured with
2-megapixel mobile phone cameras at resolutions of 240x320, 600x800 and 900x1200; the
dataset contains samples of different sizes and styles with various degradations, and the method
achieves an average recognition accuracy of 92%. The system is efficient and insensitive to
variations in size and font, noise, blur and other degradations.
Keywords: Character Recognition, Display Boards, Low Resolution Images, Neural
Network Classifier, Zone Wise Profile Features.
INTERNATIONAL JOURNAL OF GRAPHICS AND MULTIMEDIA (IJGM)
ISSN 0976 - 6448 (Print), ISSN 0976 - 6456 (Online)
Volume 4, Issue 1, January - April 2013, pp. 09-19
© IAEME: www.iaeme.com/ijgm.asp
Journal Impact Factor (2013): 4.1089 (Calculated by GISI), www.jifactor.com
1. INTRODUCTION
In recent years, hand held devices with increased computing and communication
capabilities have become widespread and are being used for various purposes such as
information access, mobile commerce, mobile learning, multimedia streaming, and many
more. One new application that can be integrated into such devices is a text understanding
and translation system for low resolution natural scene images of display boards.
Every day, many people travel to places across the world for business and other
activities, and they often face problems with the local language. This is especially true
in multilingual countries like India. For these reasons, there is a demand for an
automated system that understands text in low resolution natural scene images and provides
translated information in a localized language.
Natural scene display board images contain text information that often needs to be
automatically recognized and processed. Scene text may be any textual part of the scene,
such as street names, institute names, shop names, building names, company names, road
signs, traffic information, warning signs, etc. Researchers have focused their
attention on development of techniques for understanding text on such display boards. There is
a spurt of activity in the development of web based intelligent hand held systems for such
applications.
In the reported works [1-10] on intelligent systems for hand held devices, not many
pertain to understanding of written text on display boards, so scope exists for exploring
such possibilities. Text understanding involves several processing steps: text detection
and extraction; preprocessing for line, word and character separation; script
identification; text recognition; and language translation. Text recognition at the
word/character level is therefore a key processing step and the premise for the later
stages of a text understanding system. The recognition of text in low resolution images
of display boards is a
difficult and challenging problem due to various issues such as variability in font size, style
and spacing between characters, skew, perspective distortions, viewing angle, uneven
illuminations, script specific characters and other degradations [11]. The current work aims at
investigating the use of zone wise statistical features for recognition of Kannada characters in
scene images. The proposed method uses zone wise horizontal and vertical profile based
features of character images. The method works in two phases. During training, zone wise
horizontal and vertical profile based features are extracted from training samples and neural
network is trained. During testing, the test image is processed to obtain features and
recognized using the neural network classifier. The method has been evaluated on 490 Kannada
character images captured with 2-megapixel mobile phone cameras at resolutions of 240x320,
600x800 and 900x1200, which contain samples of different sizes and styles with various
degradations, and it achieves an average recognition accuracy of 92%. The system is
efficient and insensitive to variations in size and font, noise, blur and other degradations.
The rest of the paper is organized as follows: the detailed survey related to character
recognition of text in scene images is described in Section 2. The proposed method is
presented in Section 3. The experimental results and discussions are given in Section 4.
Section 5 concludes the work and lists future directions of the work.
2. RELATED WORKS
The character recognition of text in low resolution natural scene images is a necessary
step for development of various tasks of text understanding system. A substantial amount of
work has gone into the research related to character recognition of text in natural scene
images. Some of the related works are summarized in the following.
A robust approach for recognition of text embedded in natural scenes is given in [11].
The proposed method extracts features from intensity of an image directly and utilizes a local
intensity normalization to effectively handle lighting variations. Then, Gabor transform is
employed to obtain local features and linear discriminant analysis (LDA) is used for selection
and classification of features. The proposed method has been applied to a Chinese sign
recognition task. This work is further extended integrating sign detection component with
recognition [12]. The extended method embeds multi-resolution and multi-scale edge
detection, adaptive searching, color analysis, and affine rectification in a hierarchical
framework for sign detection. The affine rectification recovers deformation of the text regions
caused by an inappropriate camera viewing angle and significantly improves the text detection
rate and optical character recognition.
A framework that exploits both bottom-up and top-down cues for scene text
recognition at word level is presented in [13]. The method derives bottom-up cues from
individual character detections from the image. Then, a Conditional Random Field model is
built on these detections to jointly model the strength of the detections and the interactions
between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e.
language statistics. The optimal word represented by the text image is obtained by minimizing
the energy function corresponding to the random field model. The method reports significant
improvements in accuracies on two challenging public datasets, namely Street View Text and
ICDAR 2003 compared to other methods. The test results showed that the reported accuracy is
only 73% and requires further improvement.
The hierarchical multilayered neural network recognition method described in [14]
extracts oriented edges, corners, and end points for color text characters in scene image. A
method called selective metric clustering which mainly deals with color is employed in [15].
A fast lexicon based and discriminative semi-Markov models for recognizing scene text are
presented in [16, 17]. An object categorization framework based on a bag-of-visual-words
representation for recognition of character in natural scene images is described in [18]. The
effectiveness of raw grayscale pixel intensities, shape context descriptors, and wavelet features
to recognize the characters is evaluated in [19]. A method for unconstrained handwritten
Kannada vowels recognition based upon invariant moments is described in [20].
The technique presented in [21] extracts stroke density, length, and number of strokes
for handwritten Kannada and English characters recognition. The method found in [22] uses
modified invariant moments for recognition of multi-font/size Kannada vowels and numerals
recognition. A model employed in [23] calculates features from connected components and
obtains 3k dimensional feature vectors for memory based recognition of camera-captured
characters. A character recognition method described in [24] uses local features for
recognition of multiple characters in a scene image.
After a thorough study of the literature, it is noticed that some of the reported
methods [12, 14, 18, 23] work with limited datasets, other cited works [16-18] report low
recognition rates in the presence of noise and other degradations, and very few works [18-22]
pertain to recognition of Kannada characters from scene images. Hence, more research is
desirable to obtain a new set of discriminating features suitable for Kannada text in scene
images. In the current work, zone wise statistical features are employed for recognition of
Kannada characters in low resolution images. The detailed description of the proposed
methodology is given in the next section.
3. PROPOSED METHODOLOGY FOR CHARACTER RECOGNITION
The proposed method uses zone wise horizontal and vertical profile based features for
recognition of Kannada characters in mobile camera based images. The proposed method
contains various phases such as Preprocessing, Feature Extraction, Construction of
Knowledge Base for Training Neural Network, Training and Character Recognition with
Neural Network Classifier. The block diagram of the proposed model is given in Fig 1. The
detailed description of each phase is given in the following subsections.
3.1 Preprocessing
The input character image is binarized, cleaned of noise, cropped to its bounding box and
resized to a constant resolution of 30x30 pixels. Finally, the image is thinned.
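The preprocessing steps above can be sketched as follows. This is a minimal numpy illustration under stated assumptions: the paper does not specify its binarization threshold or thinning algorithm, so a simple global mean threshold is used here and thinning is left as a final step.

```python
import numpy as np

def preprocess(gray, out_size=30):
    """Binarize, crop to the bounding box and resize a character image."""
    # Global mean threshold (assumption: the paper does not name its method);
    # text is assumed darker than the background.
    binary = (gray < gray.mean()).astype(np.uint8)

    # Bounding box of the on-pixels (assumes at least one on-pixel exists).
    rows = np.any(binary, axis=1)
    cols = np.any(binary, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    cropped = binary[r0:r1 + 1, c0:c1 + 1]

    # Nearest-neighbour resize to a constant 30x30 resolution.
    h, w = cropped.shape
    ri = np.arange(out_size) * h // out_size
    ci = np.arange(out_size) * w // out_size
    return cropped[np.ix_(ri, ci)]  # thinning (skeletonization) would follow
```

A library routine such as a morphological skeletonization would typically supply the final thinning step.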
Fig. 1. Block Diagram of Proposed Model
3.2 Feature extraction
In this phase, each image is divided into 15 vertical zones and 15 horizontal zones,
where size of each horizontal zone is 2*30 pixels and the size of each vertical zone is 30*2
pixels. Then the sum of all on-pixels in every zone is determined as the feature value for
that zone. Finally, 30 features are computed from all the zones and stored into a feature
vector X as described in equations (1) to (5):
X = [ VFeatures  HFeatures ]                                        (1)
VFeatures = [Vf_i],  1 ≤ i ≤ 15                                     (2)

HFeatures = [Hf_i],  1 ≤ i ≤ 15                                     (3)

Where, Hf_i is the feature value of the ith horizontal zone and is computed as shown in (4),
and Vf_i is the feature value of the ith vertical zone and is computed as shown in (5).

Hf_i = Σ (x = 1 to 2) Σ (y = 1 to 30) g_i(x, y)                     (4)

Vf_i = Σ (x = 1 to 30) Σ (y = 1 to 2) g_i(x, y)                     (5)

Where, g_i is the ith zone that encompasses the chosen region of the character image. The
dataset of such feature vectors obtained from the training samples is further used for
construction of the knowledge base.
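With a 30x30 binary image, the 30 zone wise profile features described above reduce to a few array reductions; a minimal numpy sketch (the 15 horizontal 2x30 zones and 15 vertical 30x2 zones follow the text):

```python
import numpy as np

def zone_features(img):
    """Compute X = [VFeatures HFeatures] for a 30x30 binary character image."""
    assert img.shape == (30, 30)
    # Vf_i: sum of on-pixels in the ith 30x2 vertical strip (15 strips).
    vfeatures = img.reshape(30, 15, 2).sum(axis=(0, 2))
    # Hf_i: sum of on-pixels in the ith 2x30 horizontal strip (15 strips).
    hfeatures = img.reshape(15, 2, 30).sum(axis=(1, 2))
    return np.concatenate([vfeatures, hfeatures])  # feature vector X, length 30
```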
3.3 Construction of Knowledge Base for Training Neural Network
For the purpose of knowledge base construction, images of display boards of Karnataka
Government offices, street names, institute names, shop names, building names, company
names, road signs, traffic direction and warning signs were captured with 2-megapixel
cameras on mobile phones. The images are captured at resolutions of 240x320, 600x800 and
900x1200 at distances of 1 to 6 meters. All these images are used for evaluating the
performance of the proposed model. The images captured at a size of 240x320 from a distance
of 1 to 3 meters are found to be clear when the viewing angle is parallel to the text plane;
beyond 3 meters, or at other viewing angles, perspective distortions and other degradations
occur. The images captured at distances of 1 to 6 meters at the other stated resolutions are
clear, although perspective distortions still occur when the viewing angle is not parallel.
The images in the database are characterized by variable font size and style, uneven thickness,
minimal information context, small skew, noise, perspective distortion and other degradations.
The image database consists of 490 Kannada basic character images of varying resolutions.
Then from the database, 50% of samples are used for training. During training, the features are
extracted from all training samples and knowledge base is organized as a dataset of feature
vectors as depicted in (6). The stored information in the knowledge base sufficiently
characterizes all variations in the input. Testing is carried out for all samples containing 50%
trained and 50% untrained samples. Some sample images captured using 2 Mega Pixels
cameras on mobile phones from display boards are shown in Fig 2.
KB = [X_j],  1 ≤ j ≤ N                                              (6)

Where, KB is the knowledge base comprising the feature vectors of the training samples, X_j is
the feature vector of the jth image in the KB, and N is the number of training sample images.
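The knowledge base of equation (6) is then simply the stacked feature vectors of the training half of the dataset. A hedged sketch follows; the `extract` callback is a hypothetical stand-in for the zone wise feature extraction, and the 50% training split follows the text:

```python
import numpy as np

def build_knowledge_base(samples, labels, extract, train_fraction=0.5):
    """KB = [X_j], 1 <= j <= N: feature vectors of the training samples.

    `extract` maps one preprocessed character image to its feature vector;
    it stands in for the zone wise profile extraction described in the text.
    """
    n_train = int(len(samples) * train_fraction)  # 50% of samples for training
    kb = np.stack([extract(s) for s in samples[:n_train]])
    kb_labels = np.asarray(labels[:n_train])
    return kb, kb_labels
```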
Fig. 2. Sample Images Captured from 2 Mega Pixels Cameras on Mobile Phones
3.4 Training and Recognition with Feed Forward Neural Network
After the data set is obtained and organized into knowledge base of basic Kannada
character images, training and recognition tasks are carried out using feed forward neural
networks. The details of training and recognition are described in the following.
Before network design, the data in the knowledge base is prepared to cover the
range of inputs for which the network will be used. The feed forward neural network does not
have the ability to accurately extrapolate beyond the range of inputs, so the training data is
chosen to span the full range of the input space. Later, the normalization step is applied to
both the input vectors and the target vectors in the data set. In this way, the network output
always falls into a normalized range. Once the data is ready, the feed forward neural network
object is created with 30 neurons in the input layer, 15 neurons in the hidden layer, and
configured with default weights and biases for the prepared data set in the knowledgebase.
The network is configured with tan sigmoid functions in the input and hidden neurons, linear
transfer functions for output neurons and Levenberg-Marquardt and Gradient Descent with
Momentum learning algorithms. The default performance function for feed forward network
used is mean square error. The learning rate and the minimum performance goal are both
initialized to 0.01. The magnitude of the gradient and the number of validation checks are
used to terminate training; the validation-checks parameter is set to 10 and represents the
number of successive iterations for which the validation performance fails to decrease.
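The configuration described above (30 inputs, 15 tan-sigmoid hidden neurons, linear output neurons, mean square error, learning rate 0.01, gradient descent with momentum) can be sketched in plain numpy as below. This is an illustrative reimplementation under assumptions, not the authors' code; the Levenberg-Marquardt option and validation-based stopping are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, T, hidden=15, lr=0.01, momentum=0.9, epochs=200):
    """Feed forward net: tanh hidden layer, linear output, MSE loss,
    trained by gradient descent with momentum."""
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, n_out)); b2 = np.zeros(n_out)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)          # tan-sigmoid hidden activations
        Y = H @ W2 + b2                   # linear output layer
        E = Y - T                         # error; MSE = (E**2).mean()
        # Backpropagation of the MSE gradient.
        gY = 2 * E / len(X)
        gW2 = H.T @ gY; gb2 = gY.sum(0)
        gH = gY @ W2.T * (1 - H**2)       # tanh derivative
        gW1 = X.T @ gH; gb1 = gH.sum(0)
        # Gradient descent with momentum.
        vW1 = momentum * vW1 - lr * gW1; W1 += vW1
        vb1 = momentum * vb1 - lr * gb1; b1 += vb1
        vW2 = momentum * vW2 - lr * gW2; W2 += vW2
        vb2 = momentum * vb2 - lr * gb2; b2 += vb2
    return (W1, b1, W2, b2)

def predict(params, X):
    W1, b1, W2, b2 = params
    return np.tanh(X @ W1 + b1) @ W2 + b2
```

At recognition time the predicted character would be taken as the output neuron with the largest activation.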
After the network weights and biases are initialized and configured with other training
parameters, the network is ready for training. The multilayer feed forward network is trained
for function approximation (nonlinear regression) or pattern recognition with network inputs
and target outputs. The training process tunes the values of the weights and biases of the
network to optimize network performance, as defined by the network performance function.
After the network is trained, its performance is verified using several trained and test
character images. The neural network classifier gives an average recognition accuracy of
92%.
4. EXPERIMENTAL RESULTS AND ANALYSIS
The proposed methodology has been evaluated on 490 low resolution basic Kannada character
images of varying font size and style, uneven thickness and other degradations. The experimental
results of processing a sample character image are described in section 4.1, and the results of
processing several other character images dealing with various issues, the overall performance of
the system and comparison results with other methods are reported in section 4.2.
4.1. An Experimental Analysis for a Sample Kannada Character Image
The character image with uneven thickness, uneven lighting conditions, and other
degradations given in Fig. 3a is initially preprocessed for binarization, resized to a constant
size of 30x30 pixels and thinned, as shown in Fig. 3b.
Fig. 3. a) A Sample Character Test Image b) Preprocessed Image
Further, the image is divided into 15 vertical zones and 15 horizontal zones. Then, the zone
wise statistical features are computed from all zones and are organized into a feature vector T as in (1)
to (5). The experimental values of all zones are shown in Table 1.
TABLE 1. Zone Wise Vertical and Horizontal Features of the Sample Input Image in Fig. 3b

VFeatures: 4 3 13 5 6 6 6 8 6 7 6 9 13 13 4
HFeatures: 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15

T = [4 3 13 5 6 6 6 8 6 7 6 9 13 13 4 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15]
The experimental values in Table 1 clearly depict the distribution of pixels in the various
segments/primitives of the character image. These distributions differ from character to
character because of the varying positions and shapes of the segments/primitives of basic
Kannada characters. This is demonstrated for two sample images in Table 2.
TABLE 2. Vertical and Horizontal Features of Two Sample Images Demonstrating Pixel
Distribution Patterns

Character 1 (zone wise features): 9 5 6 2 3 2 4 3 11 7 8 11 21 10 2 13 1 5 11 4 4 4 13 9 4 8 5 2 3 5 4
Character 2 (zone wise features): 12 8 6 6 6 6 14 18 8 6 6 6 9 14 10 3 2 2 6 8 22 2 2 17 17 9 7 12 10 16
The values in Table 2 clearly show that the feature values in most of the corresponding zones
of the two characters are distinct. For example, the feature values 9, 5, 6, 2 of vertical zones
1 to 4 of the character in the first row of Table 2 are distinct from the values 12, 8, 6, 6 in
the corresponding zones of the character in the second row. A similar characteristic holds for
the feature values in the other zones. The arrangement of these features into a feature vector
creates a pixel distribution pattern that makes
samples distinguishable. It is also observed that the proposed zone wise features take care of
uncertainty in the appearance of the primitives of a character image. After extracting features
from the test input image in Fig. 3a, the neural network classifier is used to recognize the
character.
4.2. An Experimental Analysis dealing with various issues
The proposed methodology has produced good results for low resolution images containing
Kannada characters of different sizes, fonts and alignments with varying backgrounds. Its
advantage lies in the low computation involved in the feature extraction and recognition phases
of the method. During the experiments it is noticed that the zone wise features made the samples
separable in the feature space. Hence, the proposed work is robust and achieves an average
recognition accuracy of 92%. The overall
performance of the system after conducting the experimentation on the dataset is reported in Table 3.
The comparison of the proposed method with other scene text recognition methods is described in
Table 4.
TABLE 3. Overall system performance
(two character classes per row; columns per class: Number of Samples Tested, Number of Samples
Correctly Recognized, Number of Samples Misclassified, % Recognition Accuracy)

10  9   1  90    |  10  10  0  100
10  9   1  90    |  10  9   1  90
10  9   1  90    |  10  9   1  90
10  9   1  90    |  10  10  0  100
10  10  0  100   |  10  9   1  90
10  9   1  90    |  10  10  0  100
10  9   1  90    |  10  9   1  90
10  10  0  100   |  10  9   1  90
10  10  0  100   |  10  8   2  80
10  9   1  90    |  10  10  0  100
10  10  0  100   |  10  9   1  90
10  9   1  90    |  10  9   1  90
10  9   1  90    |  10  9   1  90
10  9   1  90    |  10  9   1  90
10  8   2  80    |  10  10  0  100
10  10  0  100   |  10  8   2  80
10  10  0  100   |  10  10  0  100
10  10  0  100   |  10  9   1  90
10  9   1  90    |  10  8   2  80
10  8   2  80    |  10  10  0  100
10  10  0  100   |  10  9   1  90
10  9   1  90    |  10  9   1  90
10  9   1  90    |  10  8   2  80
10  10  0  100   |  10  9   1  90
10  9   1  90    |
A closer examination of the results revealed that misclassifications arise due to noise,
strong similarity between character structures/primitives, and other degradations. It is also
noticed that the zonal features take care of variations in the appearance of character
primitives. It is further found that, if the knowledge base is trained for all variations and
degradations, better performance can be obtained.
TABLE 4. Comparison of Proposed Method with Other Scene Text Recognition Methods

Author | Approach | Features | Recognition Accuracy
Jerod J. Weinman et al. (2008) | A Discriminative Semi-Markov Model for Robust Scene Text Recognition | Wavelet features | 82.08%
Onur Tekdas et al. (2009) | Recognizing Characters in Natural Scenes: A Feature Study | Raw intensities, shape contexts, and wavelet features | 85.328
Masakazu Iwamura et al. (2011) | Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features | Scale invariant feature transform and voting method | 76.5%
Anand Mishra et al. (2012) | Top-down and bottom-up cues for scene text recognition | Bottom-up cues, language statistics and conditional random field model | 73%
Proposed Method | Character Recognition of Kannada Text in Scene Images Using Neural Network | Zone wise vertical and horizontal profile based features | 92%
5. CONCLUSION
In this work, a novel method for recognition of basic Kannada characters from camera
based images is proposed. The proposed method uses zone wise horizontal and vertical
profile based features and neural network classifier for basic Kannada character recognition.
The system works in two phases, a training phase and a testing phase. Exhaustive
experimentation was done to analyze the zone wise horizontal and vertical profile based
features using the neural network classifier. The results obtained using these features with
the neural network classifier are encouraging, and it has been observed that the system is
robust and insensitive to several challenges such as unusual fonts, variable lighting
conditions, noise, blur, etc. The method is tested on 490 samples and gives an average
recognition accuracy of 92%. The proposed method can be extended for character recognition
considering a new set of features and classification algorithms.
REFERENCES
[1] Abowd Gregory D. Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper,
and Mike Pinkerton, 1997, “CyberGuide: A mobile context-aware tour guide”,
Wireless Networks, 3(5): pp.421-433.
[2] Natalia Marmasse and Chris Schamandt, 2000, “Location aware information
delivery with comMotion”, In Proceedings of Conference on Human Factors in
Computing Systems, pp.157-171.
[3] Tollmar K. Yeh T. and Darrell T., 2004, “IDeixis - Image-Based Deixis for Finding
Location-Based Information”, In Proceedings of Conference on Human Factors in
Computing Systems (CHI’04), pp.781-782.
[4] Gillian Leetch, Dr. Eleni Mangina, 2005, “A Multi-Agent System to Stream
Multimedia to Handheld Devices”, Proceedings of the Sixth International
Conference on Computational Intelligence and Multimedia Applications
(ICCIMA’05).
[5] Wichian Premchaiswadi, 2009, “A mobile Image search for Tourist Information
System”, Proceedings of 9th international conference on SIGNAL PROCESSING,
COMPUTATIONAL GEOMETRY and ARTIFICIAL VISION, pp.62-67.
[6] Ma Chang-jie, Fang Jin-yun, 2008, “Location Based Mobile Tour Guide Services
Towards Digital Dunhuang”, International Archives of Photogrammetry, Remote
Sensing and Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-che Yanga, Tsun Kub, 2010, “Ubiquitous
Wikipedia on Handheld Device for Mobile Learning”, 6th IEEE International
Conference on Wireless, Mobile, and Ubiquitous Technologies in Education, pp.
228-230.
[8] Tom yeh, Kristen Grauman, and K. Tollmar., 2005, “A picture is worth a thousand
keywords: image-based object search on a mobile platform”, In Proceedings of
Conference on Human Factors in Computing Systems, pp.2025-2028.
[9] Fan X. Xie X. Li Z. Li M. and Ma. 2005, “Photo-to-search: using multimodal
queries to search web from mobile phones”, In proceedings of 7th ACM SIGMM
international workshop on multimedia information retrieval.
[10] Lim Joo Hwee, Jean Pierre Chevallet and Sihem Nouarah Merah, 2005,
“SnapToTell: Ubiquitous information access from camera”, Mobile human
computer interaction with mobile devices and services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang, and Alex Waibel, 2002, “A
Robust Approach for Recognition of Text Embedded in Natural Scenes”, Proc.
16th International Conference on Pattern Recognition, Volume 3, pp. 204-207 (2002).
[12] Xilin Chen, Jie Yang, Jing Zhang, and Alex Waibel, 2004, “Automatic
Detection and Recognition of Signs From Natural Scenes”, IEEE Transactions on
Image Processing, Vol. 13, No. 1, pp. 87-99 (January 2004).
[13] Anand Mishra, Karteek Alahari, C. V. Jawahar, 2012, “Top-Down and Bottom-Up
Cues for Scene Text Recognition” , Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition (CVPR), 2012.
[14] Zohra Saidane and Christophe Garcia, 2007, “Automatic Scene Text Recognition
using a Convolutional Neural Network”, CBDAR, p6, pp. 100-106 (2007)
International Journal of Graphics and Multimedia (IJGM), ISSN 0976 – 6448(Print),
ISSN 0976 – 6456(Online) Volume 4, Issue 1, January - April 2013, © IAEME
19
[15] Céline Mancas-Thillou, June 2007, “Natural Scene Text Understanding”,
Segmentation and Pattern Recognition, I-Tech, Vienna, Austria, pp.123-142 (June
2007)
[16] Jerod J. Weinman, Erik Learned-Miller, and Allen Hanson, September 2007, “Fast
Lexicon-Based Scene Text Recognition with Sparse Belief Propagation”, Proc. Intl.
Conf. on Document Analysis and Recognition, Curitiba, Brazil (September 2007)
[17] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, December 2008, “A
Discriminative Semi-Markov Model for Robust Scene Text Recognition”, IEEE,
Proc. Intl. Conf. on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5
(December 2008)
[18] Teófilo E. de Campos and Bodla Rakesh Bab, 2009, “Character Recognition In
Natural Images”, Computer Vision Theory and Applications, Proc. International
Conf., pp. 273-280 (2009)
[19] Onur Tekdas and Nikhil Karnad, 2009, “Recognizing Characters in Natural Scenes:
A Feature Study”, CSCI 5521 Pattern Recognition, pp. 1-14 (2009)
[20] Sangame S.K., Ramteke R.J., and Rajkumar Benne, 2009, “Recognition of isolated
handwritten Kannada vowels”, Advances in Computational Research, ISSN: 0975–
3273, Volume 1, Issue 2, pp 52-55 (2009)
[21] B.V.Dhandra, Mallikarjun Hangarge, and Gururaj Mukarambi, 2010, ”Spatial
Features for Handwritten Kannada and English Character Recognition”, IJCA
Special Issue on Recent Trends in Image Processing and Pattern Recognition
(RTIPPR), pp 146-151 (2010)
[22] Mallikarjun Hangarge, Shashikala Patil, and B.V.Dhandra, 2010, “Multi-font/size
Kannada Vowels and Numerals Recognition Based on Modified Invariant
Moments”, IJCA Special Issue on Recent Trends in Image Processing and Pattern
Recognition (RTIPPR), pp 126-130 (2010)
[23] Masakazu Iwamura, Tomohiko Tsuji, and Koichi Kise, 2010, “Memory-Based
Recognition of Camera-Captured Characters”, 9th IAPR International Workshop on
Document Analysis Systems, pp. 89-96 (2010)
[24] Masakazu Iwamura, Takuya Kobayashi, and Koichi Kise, 2011, “Recognition of
Multiple Characters in a Scene Image Using Arrangement of Local Features”,
IEEE, International Conference on Document Analysis and Recognition, pp. 1409-
1413(2011)
[25] Primekumar K.P and Sumam Mary Idicula, “Performance of on-Line Malayalam
Handwritten character Recognition using Hmm And Sfam”, International Journal of
Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012,
pp. 115 - 125, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
[26] Mr.Lokesh S. Khedekar and Dr.A.S.Alvi, “Advanced Smart Credential Cum
Unique Identification and Recognition System. (Ascuirs)”, International Journal of
Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013,
pp. 97 - 104, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
OPTICAL CHARACTER RECOGNITION USING RBFNNOPTICAL CHARACTER RECOGNITION USING RBFNN
OPTICAL CHARACTER RECOGNITION USING RBFNNAM Publications
 
Artificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari CharacterArtificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari CharacterIOSR Journals
 
IRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET Journal
 
Hand Written Character Recognition Using Neural Networks
Hand Written Character Recognition Using Neural Networks Hand Written Character Recognition Using Neural Networks
Hand Written Character Recognition Using Neural Networks Chiranjeevi Adi
 

Mais procurados (19)

Handwritten character recognition in
Handwritten character recognition inHandwritten character recognition in
Handwritten character recognition in
 
Text Detection and Recognition
Text Detection and RecognitionText Detection and Recognition
Text Detection and Recognition
 
CHARACTER RECOGNITION USING NEURAL NETWORK WITHOUT FEATURE EXTRACTION FOR KAN...
CHARACTER RECOGNITION USING NEURAL NETWORK WITHOUT FEATURE EXTRACTION FOR KAN...CHARACTER RECOGNITION USING NEURAL NETWORK WITHOUT FEATURE EXTRACTION FOR KAN...
CHARACTER RECOGNITION USING NEURAL NETWORK WITHOUT FEATURE EXTRACTION FOR KAN...
 
Mixed Language Based Offline Handwritten Character Recognition Using First St...
Mixed Language Based Offline Handwritten Character Recognition Using First St...Mixed Language Based Offline Handwritten Character Recognition Using First St...
Mixed Language Based Offline Handwritten Character Recognition Using First St...
 
Texture features based text extraction from images using DWT and K-means clus...
Texture features based text extraction from images using DWT and K-means clus...Texture features based text extraction from images using DWT and K-means clus...
Texture features based text extraction from images using DWT and K-means clus...
 
Devnagari handwritten numeral recognition using geometric features and statis...
Devnagari handwritten numeral recognition using geometric features and statis...Devnagari handwritten numeral recognition using geometric features and statis...
Devnagari handwritten numeral recognition using geometric features and statis...
 
­­­­Cursive Handwriting Recognition System using Feature Extraction and Artif...
­­­­Cursive Handwriting Recognition System using Feature Extraction and Artif...­­­­Cursive Handwriting Recognition System using Feature Extraction and Artif...
­­­­Cursive Handwriting Recognition System using Feature Extraction and Artif...
 
Handwriting Recognition Using Deep Learning and Computer Version
Handwriting Recognition Using Deep Learning and Computer VersionHandwriting Recognition Using Deep Learning and Computer Version
Handwriting Recognition Using Deep Learning and Computer Version
 
Handwritten character recognition using artificial neural network
Handwritten character recognition using artificial neural networkHandwritten character recognition using artificial neural network
Handwritten character recognition using artificial neural network
 
IRJET- Image to Text Conversion using Tesseract
IRJET-  	  Image to Text Conversion using TesseractIRJET-  	  Image to Text Conversion using Tesseract
IRJET- Image to Text Conversion using Tesseract
 
Handwritten Character Recognition
Handwritten Character RecognitionHandwritten Character Recognition
Handwritten Character Recognition
 
An offline signature verification using pixels intensity levels
An offline signature verification using pixels intensity levelsAn offline signature verification using pixels intensity levels
An offline signature verification using pixels intensity levels
 
A Comprehensive Study On Handwritten Character Recognition System
A Comprehensive Study On Handwritten Character Recognition SystemA Comprehensive Study On Handwritten Character Recognition System
A Comprehensive Study On Handwritten Character Recognition System
 
OPTICAL CHARACTER RECOGNITION USING RBFNN
OPTICAL CHARACTER RECOGNITION USING RBFNNOPTICAL CHARACTER RECOGNITION USING RBFNN
OPTICAL CHARACTER RECOGNITION USING RBFNN
 
Offline Signature Verification and Recognition using Neural Network
Offline Signature Verification and Recognition using Neural NetworkOffline Signature Verification and Recognition using Neural Network
Offline Signature Verification and Recognition using Neural Network
 
Artificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari CharacterArtificial Neural Network For Recognition Of Handwritten Devanagari Character
Artificial Neural Network For Recognition Of Handwritten Devanagari Character
 
Ijetcas14 619
Ijetcas14 619Ijetcas14 619
Ijetcas14 619
 
IRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for BlindIRJET- Wearable AI Device for Blind
IRJET- Wearable AI Device for Blind
 
Hand Written Character Recognition Using Neural Networks
Hand Written Character Recognition Using Neural Networks Hand Written Character Recognition Using Neural Networks
Hand Written Character Recognition Using Neural Networks
 

Semelhante a Character recognition of kannada text in scene images using neural

Recognition of basic kannada characters in scene images using euclidean dis
Recognition of basic kannada characters in scene images using euclidean disRecognition of basic kannada characters in scene images using euclidean dis
Recognition of basic kannada characters in scene images using euclidean disIAEME Publication
 
Automated Identification of Road Identifications using CNN and Keras
Automated Identification of Road Identifications using CNN and KerasAutomated Identification of Road Identifications using CNN and Keras
Automated Identification of Road Identifications using CNN and KerasIRJET Journal
 
IRJET- Detection and Recognition of Text for Dusty Image using Long Short...
IRJET-  	  Detection and Recognition of Text for Dusty Image using Long Short...IRJET-  	  Detection and Recognition of Text for Dusty Image using Long Short...
IRJET- Detection and Recognition of Text for Dusty Image using Long Short...IRJET Journal
 
Script identification using dct coefficients 2
Script identification using dct coefficients 2Script identification using dct coefficients 2
Script identification using dct coefficients 2IAEME Publication
 
Investigating the Effect of BD-CRAFT to Text Detection Algorithms
Investigating the Effect of BD-CRAFT to Text Detection AlgorithmsInvestigating the Effect of BD-CRAFT to Text Detection Algorithms
Investigating the Effect of BD-CRAFT to Text Detection Algorithmsgerogepatton
 
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMS
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMSINVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMS
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMSijaia
 
A novel character segmentation reconstruction approach for license plate reco...
A novel character segmentation reconstruction approach for license plate reco...A novel character segmentation reconstruction approach for license plate reco...
A novel character segmentation reconstruction approach for license plate reco...Journal Papers
 
Text Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR TransformText Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR TransformIOSR Journals
 
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITION
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITIONPROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITION
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITIONcscpconf
 
Projection Profile Based Number Plate Localization and Recognition
Projection Profile Based Number Plate Localization and Recognition Projection Profile Based Number Plate Localization and Recognition
Projection Profile Based Number Plate Localization and Recognition csandit
 
Traffic sign recognition and detection using SVM and CNN
Traffic sign recognition and detection using SVM and CNNTraffic sign recognition and detection using SVM and CNN
Traffic sign recognition and detection using SVM and CNNIRJET Journal
 
Enhancement and Segmentation of Historical Records
Enhancement and Segmentation of Historical RecordsEnhancement and Segmentation of Historical Records
Enhancement and Segmentation of Historical Recordscsandit
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language DetectionIRJET Journal
 
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...IRJET Journal
 
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...IRJET Journal
 
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...IRJET Journal
 

Semelhante a Character recognition of kannada text in scene images using neural (20)

Recognition of basic kannada characters in scene images using euclidean dis
Recognition of basic kannada characters in scene images using euclidean disRecognition of basic kannada characters in scene images using euclidean dis
Recognition of basic kannada characters in scene images using euclidean dis
 
Automated Identification of Road Identifications using CNN and Keras
Automated Identification of Road Identifications using CNN and KerasAutomated Identification of Road Identifications using CNN and Keras
Automated Identification of Road Identifications using CNN and Keras
 
IRJET- Detection and Recognition of Text for Dusty Image using Long Short...
IRJET-  	  Detection and Recognition of Text for Dusty Image using Long Short...IRJET-  	  Detection and Recognition of Text for Dusty Image using Long Short...
IRJET- Detection and Recognition of Text for Dusty Image using Long Short...
 
Script identification using dct coefficients 2
Script identification using dct coefficients 2Script identification using dct coefficients 2
Script identification using dct coefficients 2
 
Investigating the Effect of BD-CRAFT to Text Detection Algorithms
Investigating the Effect of BD-CRAFT to Text Detection AlgorithmsInvestigating the Effect of BD-CRAFT to Text Detection Algorithms
Investigating the Effect of BD-CRAFT to Text Detection Algorithms
 
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMS
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMSINVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMS
INVESTIGATING THE EFFECT OF BD-CRAFT TO TEXT DETECTION ALGORITHMS
 
A novel character segmentation reconstruction approach for license plate reco...
A novel character segmentation reconstruction approach for license plate reco...A novel character segmentation reconstruction approach for license plate reco...
A novel character segmentation reconstruction approach for license plate reco...
 
50320140502001 2
50320140502001 250320140502001 2
50320140502001 2
 
50320140502001
5032014050200150320140502001
50320140502001
 
Telugu letters dataset and parallel deep convolutional neural network with a...
Telugu letters dataset and parallel deep convolutional neural  network with a...Telugu letters dataset and parallel deep convolutional neural  network with a...
Telugu letters dataset and parallel deep convolutional neural network with a...
 
Text Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR TransformText Extraction of Colour Images using Mathematical Morphology & HAAR Transform
Text Extraction of Colour Images using Mathematical Morphology & HAAR Transform
 
40120140501009
4012014050100940120140501009
40120140501009
 
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITION
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITIONPROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITION
PROJECTION PROFILE BASED NUMBER PLATE LOCALIZATION AND RECOGNITION
 
Projection Profile Based Number Plate Localization and Recognition
Projection Profile Based Number Plate Localization and Recognition Projection Profile Based Number Plate Localization and Recognition
Projection Profile Based Number Plate Localization and Recognition
 
Traffic sign recognition and detection using SVM and CNN
Traffic sign recognition and detection using SVM and CNNTraffic sign recognition and detection using SVM and CNN
Traffic sign recognition and detection using SVM and CNN
 
Enhancement and Segmentation of Historical Records
Enhancement and Segmentation of Historical RecordsEnhancement and Segmentation of Historical Records
Enhancement and Segmentation of Historical Records
 
Real Time Sign Language Detection
Real Time Sign Language DetectionReal Time Sign Language Detection
Real Time Sign Language Detection
 
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...
IRJET- Scandroid: A Machine Learning Approach for Understanding Handwritten N...
 
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
A Survey on Portable Camera-Based Assistive Text and Product Label Reading Fr...
 
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...
Implementation of Various Machine Learning Algorithms for Traffic Sign Detect...
 

Mais de IAEME Publication

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME Publication
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...IAEME Publication
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSIAEME Publication
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSIAEME Publication
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSIAEME Publication
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSIAEME Publication
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOIAEME Publication
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IAEME Publication
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYIAEME Publication
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...IAEME Publication
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEIAEME Publication
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...IAEME Publication
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...IAEME Publication
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...IAEME Publication
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...IAEME Publication
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTIAEME Publication
 

Mais de IAEME Publication (20)

IAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdfIAEME_Publication_Call_for_Paper_September_2022.pdf
IAEME_Publication_Call_for_Paper_September_2022.pdf
 
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LATER THICKNESS IN WIRE-...
 
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURSA STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS
 
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURSBROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS
 
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONSDETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS
 
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONSANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS
 
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINOVOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO
 
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG...
 
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMYVISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY
 
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO...
 
GANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICEGANDHI ON NON-VIOLENT POLICE
GANDHI ON NON-VIOLENT POLICE
 
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED...
 
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE...
 
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD...
 
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS...
 
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...
 
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...
 
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...
 
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...
 
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENTA MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
A MULTIPLE – CHANNEL QUEUING MODELS ON FUZZY ENVIRONMENT
 

Último

[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Character Recognition of Kannada Text in Scene Images Using Neural Network

The system is efficient and insensitive to variations in size and font, noise, blur and other degradations.

Keywords: Character Recognition, Display Boards, Low Resolution Images, Neural Network Classifier, Zone Wise Profile Features.
1. INTRODUCTION

In recent years, hand-held devices with increased computing and communication capabilities have become widespread and are used for various purposes such as information access, mobile commerce, mobile learning, multimedia streaming, and many more. One new application that can be integrated into such devices is a text understanding and translation system for low resolution natural scene images of display boards. Every day, many people visit places across the world for business and other activities, and they often face problems with the local language. This is especially true in countries like India, which are multilingual. For these reasons, there is a demand for an automated system that understands text in low resolution natural scene images and provides translated information in the localized language.

Natural scene display board images contain text information that often needs to be automatically recognized and processed. Scene text may be any textual part of a scene image, such as street names, institute names, shop names, building names, company names, road signs, traffic information, warning signs, etc. Researchers have focused their attention on the development of techniques for understanding text on such display boards, and there is a spurt of activity in the development of web based intelligent hand-held systems for such applications. Among the reported works [1-10] on intelligent systems for hand-held devices, few pertain to understanding of written text on display boards; therefore, scope exists for exploring such possibilities. Text understanding involves several processing steps: text detection and extraction, preprocessing for line, word and character separation, script identification, text recognition and language translation.
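The processing steps listed above can be sketched as a simple staged pipeline. This is an illustrative decomposition only; every function and class name below is a hypothetical placeholder, not an implementation from the paper.

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TextUnderstandingPipeline:
    """Chains the stages named above: text detection/extraction,
    line/word/character separation, script identification, character
    recognition (this paper's focus) and language translation."""
    stages: List[Callable] = field(default_factory=list)

    def add(self, stage: Callable) -> "TextUnderstandingPipeline":
        self.stages.append(stage)
        return self

    def run(self, image):
        # Each stage consumes the previous stage's output.
        data = image
        for stage in self.stages:
            data = stage(data)
        return data


# Example wiring with stub stages; each stub just tags the data so the
# ordering of the pipeline is visible.
pipeline = (TextUnderstandingPipeline()
            .add(lambda d: d + ["detect_text"])
            .add(lambda d: d + ["separate_characters"])
            .add(lambda d: d + ["identify_script"])
            .add(lambda d: d + ["recognize_characters"])
            .add(lambda d: d + ["translate"]))
```

In a real system each stub would be replaced by an image-processing step; the character recognition stage is the one developed in the remainder of this paper.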
Text recognition at the character level is thus one of the most important processing steps in the development of such systems, and recognition at the word/character level is the premise for the later stages of a text understanding system. The recognition of text in low resolution images of display boards is a difficult and challenging problem due to various issues such as variability in font size, style and spacing between characters, skew, perspective distortion, viewing angle, uneven illumination, script specific characters and other degradations [11].

The current work investigates the use of zone wise statistical features for recognition of Kannada characters in scene images. The proposed method uses zone wise horizontal and vertical profile based features of character images and works in two phases. During training, zone wise horizontal and vertical profile based features are extracted from training samples and a neural network is trained. During testing, the test image is processed to obtain features and recognized using the neural network classifier. The method has been evaluated on 490 Kannada character images captured with 2 megapixel mobile phone cameras at sizes 240x320, 600x800 and 900x1200, containing samples of different sizes, styles and degradations, and achieves an average recognition accuracy of 92%. The system is efficient and insensitive to variations in size and font, noise, blur and other degradations.

The rest of the paper is organized as follows: a detailed survey related to character recognition of text in scene images is given in Section 2; the proposed method is presented in Section 3; experimental results and discussion are given in Section 4; Section 5 concludes the work and lists future directions.
2. RELATED WORKS

Character recognition of text in low resolution natural scene images is a necessary step in the development of the various tasks of a text understanding system. A substantial amount of research has addressed character recognition of text in natural scene images; some related works are summarized below.

A robust approach for recognition of text embedded in natural scenes is given in [11]. The method extracts features directly from image intensity and uses local intensity normalization to effectively handle lighting variations. A Gabor transform is then employed to obtain local features, and linear discriminant analysis (LDA) is used for feature selection and classification. The method was applied to a Chinese sign recognition task. This work was further extended by integrating a sign detection component with recognition [12]. The extended method embeds multi-resolution and multi-scale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection. The affine rectification recovers deformation of text regions caused by an inappropriate camera view angle and significantly improves the text detection rate and optical character recognition.

A framework that exploits both bottom-up and top-down cues for scene text recognition at the word level is presented in [13]. The method derives bottom-up cues from individual character detections in the image. A Conditional Random Field model is then built on these detections to jointly model their strength and the interactions between them. It also imposes top-down cues obtained from a lexicon-based prior, i.e. language statistics.
The optimal word represented by the text image is obtained by minimizing the energy function corresponding to the random field model. The method reports significant accuracy improvements over other methods on two challenging public datasets, Street View Text and ICDAR 2003; however, the reported accuracy is only 73% and requires further improvement.

The hierarchical multilayered neural network recognition method described in [14] extracts oriented edges, corners, and end points of color text characters in scene images. A method called selective metric clustering, which mainly deals with color, is employed in [15]. Fast lexicon-based and discriminative semi-Markov models for recognizing scene text are presented in [16, 17]. An object categorization framework based on a bag-of-visual-words representation for recognition of characters in natural scene images is described in [18]. The effectiveness of raw grayscale pixel intensities, shape context descriptors, and wavelet features for character recognition is evaluated in [19]. A method for unconstrained handwritten Kannada vowel recognition based on invariant moments is described in [20]. The technique presented in [21] extracts stroke density, stroke length, and number of strokes for handwritten Kannada and English character recognition. The method in [22] uses modified invariant moments for recognition of multi-font/size Kannada vowels and numerals. A model employed in [23] calculates features from connected components and obtains 3k-dimensional feature vectors for memory based recognition of camera-captured characters. A character recognition method described in [24] uses local features for recognition of multiple characters in a scene image.
A thorough study of the literature shows that some of the reported methods [12, 14, 18, 23] work with limited datasets, other cited works [16, 17, 18] report low recognition rates in the presence of noise and other degradations, and very few works [18-22] pertain to recognition of Kannada characters from scene images. Hence, more research is
desirable to obtain a new set of discriminating features suitable for Kannada text in scene images. In the current work, zone wise statistical features are employed for recognition of Kannada characters in low resolution images. A detailed description of the proposed methodology is given in the next section.

3. PROPOSED METHODOLOGY FOR CHARACTER RECOGNITION

The proposed method uses zone wise horizontal and vertical profile based features for recognition of Kannada characters in mobile camera based images. It comprises several phases: preprocessing, feature extraction, construction of a knowledge base for training the neural network, and training and character recognition with the neural network classifier. The block diagram of the proposed model is given in Fig 1. Each phase is described in the following subsections.

3.1 Preprocessing

The input character image is binarized, cleaned of noise, cropped to its bounding box and resized to a constant resolution of 30x30 pixels. The image is then thinned.

Fig. 1. Block Diagram of Proposed Model

3.2 Feature extraction

In this phase, each image is divided into 15 vertical zones and 15 horizontal zones, where the size of each horizontal zone is 2x30 pixels and the size of each vertical zone is 30x2 pixels. The sum of all on-pixels in each zone is then taken as the feature value for that zone.
Finally, the 30 features computed from all zones are stored into a feature vector X as described in equations (1) to (5):

X = [ VFeatures  HFeatures ]    (1)

[Fig. 1: block diagram of the proposed model. Training samples are preprocessed, zone wise horizontal and vertical profile features are extracted, and a knowledge base is constructed to train the neural network; a test sample is preprocessed, its features are extracted, and the character is recognized using the neural network classifier.]
VFeatures = [ Vf_i ],  1 ≤ i ≤ 15    (2)

HFeatures = [ Hf_i ],  1 ≤ i ≤ 15    (3)

where Hf_i is the feature value of the i-th horizontal zone, computed as in (4), and Vf_i is the feature value of the i-th vertical zone, computed as in (5):

Hf_i = Σ_{x=1}^{2} Σ_{y=1}^{30} g_i(x, y)    (4)

Vf_i = Σ_{x=1}^{30} Σ_{y=1}^{2} g_i(x, y)    (5)

where g_i is the i-th zone encompassing the chosen region of the character image. The dataset of such feature vectors obtained from the training samples is used to construct the knowledge base.

3.3 Construction of Knowledge Base for Training Neural Network

For knowledge base construction, images were captured from display boards of Karnataka Government offices, street names, institute names, shop names, building names, company names, road signs, and traffic direction and warning signs, using 2 megapixel mobile phone cameras. The images were captured at sizes 240x320, 600x800 and 900x1200 at distances of 1 to 6 meters, and all are used to evaluate the performance of the proposed model. Images captured at 240x320 from 1 to 3 meters are clear when the viewing angle is parallel to the text plane; perspective distortion and other degradations occur beyond 3 meters and at other viewing angles. Images captured from 1 to 6 meters at the other stated resolutions are clear, though perspective distortion still occurs when the viewing angle is not parallel. The images in the database are characterized by variable font size and style, uneven thickness, minimal information context, small skew, noise, perspective distortion and other degradations. The image database consists of 490 Kannada basic character images of varying resolutions.
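The zone wise feature computation of equations (1) to (5) can be sketched as follows. This is a minimal sketch assuming the input is already a preprocessed 30x30 binary (0/1) NumPy array; the function name is illustrative, not from the paper's implementation.

```python
import numpy as np


def zone_features(img):
    """Compute the 30 zone wise profile features of equations (1)-(5)
    from a preprocessed 30x30 binary character image."""
    assert img.shape == (30, 30)
    # 15 horizontal zones of 2x30 pixels: Hf_i = sum of on-pixels (eq. 4)
    h_features = [int(img[2 * i:2 * i + 2, :].sum()) for i in range(15)]
    # 15 vertical zones of 30x2 pixels: Vf_i = sum of on-pixels (eq. 5)
    v_features = [int(img[:, 2 * i:2 * i + 2].sum()) for i in range(15)]
    # X = [ VFeatures HFeatures ] (eq. 1)
    return np.array(v_features + h_features)
```

Each zone covers 60 pixels, so every feature value lies between 0 and 60; in practice the thinned characters give the small values seen in Tables 1 and 2.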
From the database, 50% of the samples are used for training. During training, features are extracted from all training samples and the knowledge base is organized as a dataset of feature vectors as depicted in (6); the stored information sufficiently characterizes all variations in the input. Testing is carried out on all samples, comprising the 50% trained and 50% untrained samples. Some sample images captured from display boards using 2 megapixel mobile phone cameras are shown in Fig 2.

KB = [ X_j ],  1 ≤ j ≤ N    (6)

where KB is the knowledge base comprising the feature vectors of the training samples, X_j is the feature vector of the j-th image in KB, and N is the number of training sample images.
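The knowledge base organization of equation (6) and the 50% training split can be sketched as below. The function name, dictionary layout and shuffling policy are illustrative assumptions; the paper does not specify how the training half is selected.

```python
import numpy as np


def build_knowledge_base(samples, labels, rng=None):
    """Organize feature vectors into KB = [X_j], 1 <= j <= N (eq. 6),
    holding out 50% of the samples for training as described above.

    `samples` is an (M, 30) array of zone wise feature vectors and
    `labels` the corresponding character classes."""
    rng = rng or np.random.default_rng(0)
    order = rng.permutation(len(samples))
    half = len(samples) // 2
    train_idx, test_idx = order[:half], order[half:]
    kb = {"X": samples[train_idx], "y": labels[train_idx]}    # knowledge base
    held_out = {"X": samples[test_idx], "y": labels[test_idx]}  # untrained half
    return kb, held_out
```

Testing then runs the classifier over both halves, i.e. over all M samples, as the paper does.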
Fig. 2. Sample Images Captured from 2 Megapixel Cameras on Mobile Phones

3.4 Training and Recognition with Feed Forward Neural Network

After the dataset is obtained and organized into a knowledge base of basic Kannada character images, training and recognition are carried out using a feed forward neural network, as described in the following.

Before network design, the data in the knowledge base is prepared to cover the range of inputs for which the network will be used. A feed forward neural network cannot accurately extrapolate beyond the range of its inputs, so the training data is chosen to span the full input space. A normalization step is then applied to both the input vectors and the target vectors so that the network output always falls within a normalized range. Once the data is ready, the feed forward network is created with 30 neurons in the input layer and 15 neurons in the hidden layer, and configured with default weights and biases for the prepared dataset in the knowledge base. The network uses tan-sigmoid transfer functions in the input and hidden neurons, linear transfer functions for the output neurons, and the Levenberg-Marquardt and Gradient Descent with Momentum learning algorithms. The performance function is the default mean squared error. The learning rate and minimum performance parameters are initialized to 0.01. Training is terminated based on the magnitude of the gradient and the number of validation checks; the validation checks parameter is set to 10 and represents the number of successive iterations for which the validation performance fails to decrease.
After the network weights and biases are initialized and the other training parameters are configured, the network is ready for training. The multilayer feed forward network is trained for function approximation (nonlinear regression) or pattern recognition with network inputs and target outputs. The training process tunes the weights and biases of the network to optimize performance as defined by the network performance function. After training, performance is verified using several trained and untrained test character images. The neural network classifier gives an average recognition accuracy of 92%.
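The network configuration described above can be approximated with a minimal NumPy sketch: a 30-15-K feed forward network with tan-sigmoid hidden units, linear outputs, mean squared error loss and gradient descent with momentum (learning rate 0.01, as in the paper). This is an illustrative stand-in, not the authors' MATLAB implementation; Levenberg-Marquardt training and the validation-check stopping rule are omitted for brevity, and inputs/targets are assumed already normalized.

```python
import numpy as np


class FeedForwardNet:
    """Minimal 30-15-K feed forward network: tan-sigmoid hidden layer,
    linear output layer, MSE loss, gradient descent with momentum."""

    def __init__(self, n_in=30, n_hidden=15, n_out=49, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.vel = [np.zeros_like(p)
                    for p in (self.W1, self.b1, self.W2, self.b2)]

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # tan-sigmoid hidden layer
        return self.h @ self.W2 + self.b2          # linear output layer

    def train_step(self, X, T, lr=0.01, momentum=0.9):
        """One batch update; returns the mean squared error."""
        Y = self.forward(X)
        err = Y - T                                # dMSE/dY (up to a constant)
        n = len(X)
        dW2 = self.h.T @ err / n
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (1 - self.h ** 2)  # tanh derivative
        dW1 = X.T @ dh / n
        db1 = dh.mean(axis=0)
        params = (self.W1, self.b1, self.W2, self.b2)
        grads = (dW1, db1, dW2, db2)
        for i, (p, g) in enumerate(zip(params, grads)):
            self.vel[i] = momentum * self.vel[i] - lr * g  # momentum update
            p += self.vel[i]
        return float((err ** 2).mean())
```

At recognition time, the predicted class is the output neuron with the largest activation for the test feature vector.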
4. EXPERIMENTAL RESULTS AND ANALYSIS

The proposed methodology has been evaluated on 490 low resolution basic Kannada character images of varying font size and style, uneven thickness and other degradations. The results of processing a sample character image are described in Section 4.1. The results of processing several other character images dealing with various issues, the overall performance of the system, and comparison with other methods are reported in Section 4.2.

4.1. An Experimental Analysis for a Sample Kannada Character Image

The character image with uneven thickness, uneven lighting conditions and other degradations given in Fig. 3a is first preprocessed: binarized, resized to a constant size of 30x30 pixels, and thinned, as shown in Fig. 3b.

Fig. 3. a) A Sample Character Test Image b) Preprocessed Image

The image is then divided into 15 vertical zones and 15 horizontal zones, and the zone wise statistical features computed from all zones are organized into a feature vector T as in (1) to (5). The experimental values for all zones are shown in Table 1.

TABLE 1. Zone Wise Vertical and Horizontal Features of the Sample Input Image in Fig. 3b

VFeatures: 4 3 13 5 6 6 6 8 6 7 6 9 13 13 4
HFeatures: 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15
T = [ 4 3 13 5 6 6 6 8 6 7 6 9 13 13 4 2 2 3 6 3 4 9 5 5 6 4 4 5 9 15 ]

The values in Table 1 clearly depict the distribution of pixels over the various segments/primitives of the character image. These distributions differ from character to character because of the varying positions and shapes of the segments/primitives of basic Kannada characters. This is demonstrated for two sample images in Table 2.

TABLE 2.
Vertical and Horizontal Features of Two Sample Images Demonstrating Pixel Distribution Patterns

Character 1: 9 5 6 2 3 2 4 3 11 7 8 11 21 10 2 13 1 5 11 4 4 4 13 9 4 8 5 2 3 5 4
Character 2: 12 8 6 6 6 6 14 18 8 6 6 6 9 14 10 3 2 2 6 8 22 2 2 17 17 9 7 12 10 16

(The character glyph images of the original table are not reproducible here.)

The values in Table 2 show that the feature values in most corresponding zones of the two characters are distinct. For example, the feature values 9, 5, 6, 2 of vertical zones 1 to 4 of the character in the first row are distinct from the feature values 12, 8, 6, 6 in the corresponding zones of the character in the second row, and similar differences exist in the other zones. Arranging these features into a feature vector creates a pixel distribution pattern that makes
samples distinguishable. It is also observed that the proposed zone wise features accommodate uncertainty in the appearance of the primitives of a character image. After extracting features from the test input image in Fig. 3a, the neural network classifier is used to recognize the character.

4.2. An Experimental Analysis Dealing with Various Issues

The proposed methodology has produced good results for low resolution images containing Kannada characters of different size, font and alignment with varying backgrounds. Its advantage lies in the low computation involved in the feature extraction and recognition phases. During experiments it was noticed that the zone wise features made the samples separable in feature space; the proposed method is robust and achieves an average recognition accuracy of 92%. The overall performance of the system on the dataset is reported in Table 3, and a comparison of the proposed method with other scene text recognition methods is given in Table 4.

TABLE 3.
Overall System Performance

Each row corresponds to one basic character class. (The character glyph images of the original table are not reproducible here.)

Class | Samples Tested | Correctly Recognized | Misclassified | Accuracy (%)
1 | 10 | 9 | 1 | 90
2 | 10 | 10 | 0 | 100
3 | 10 | 9 | 1 | 90
4 | 10 | 9 | 1 | 90
5 | 10 | 9 | 1 | 90
6 | 10 | 9 | 1 | 90
7 | 10 | 9 | 1 | 90
8 | 10 | 10 | 0 | 100
9 | 10 | 10 | 0 | 100
10 | 10 | 9 | 1 | 90
11 | 10 | 9 | 1 | 90
12 | 10 | 10 | 0 | 100
13 | 10 | 9 | 1 | 90
14 | 10 | 9 | 1 | 90
15 | 10 | 10 | 0 | 100
16 | 10 | 9 | 1 | 90
17 | 10 | 10 | 0 | 100
18 | 10 | 8 | 2 | 80
19 | 10 | 9 | 1 | 90
20 | 10 | 10 | 0 | 100
21 | 10 | 10 | 0 | 100
22 | 10 | 9 | 1 | 90
23 | 10 | 9 | 1 | 90
24 | 10 | 9 | 1 | 90
25 | 10 | 9 | 1 | 90
26 | 10 | 9 | 1 | 90
27 | 10 | 9 | 1 | 90
28 | 10 | 9 | 1 | 90
29 | 10 | 8 | 2 | 80
30 | 10 | 10 | 0 | 100
31 | 10 | 10 | 0 | 100
32 | 10 | 8 | 2 | 80
33 | 10 | 10 | 0 | 100
34 | 10 | 10 | 0 | 100
35 | 10 | 10 | 0 | 100
36 | 10 | 9 | 1 | 90
37 | 10 | 9 | 1 | 90
38 | 10 | 8 | 2 | 80
39 | 10 | 8 | 2 | 80
40 | 10 | 10 | 0 | 100
41 | 10 | 10 | 0 | 100
42 | 10 | 9 | 1 | 90
43 | 10 | 9 | 1 | 90
44 | 10 | 9 | 1 | 90
45 | 10 | 9 | 1 | 90
46 | 10 | 8 | 2 | 80
47 | 10 | 10 | 0 | 100
48 | 10 | 9 | 1 | 90
49 | 10 | 9 | 1 | 90
Total | 490 | 451 | 39 | 92.04
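The per-character counts in Table 3 can be aggregated to check the reported average accuracy; the list below is transcribed from the table (number correctly recognized out of 10 for each of the 49 classes, in table order).

```python
# Correctly recognized samples per basic-character class (Table 3),
# 10 samples tested per class.
correct = [9, 10, 9, 9, 9, 9, 9, 10, 10, 9, 9, 10, 9, 9, 10, 9, 10, 8,
           9, 10, 10, 9, 9, 9, 9, 9, 9, 9, 8, 10, 10, 8, 10, 10, 10, 9,
           9, 8, 8, 10, 10, 9, 9, 9, 9, 8, 10, 9, 9]

tested = 10 * len(correct)               # 490 samples in total
accuracy = 100.0 * sum(correct) / tested
print(f"{sum(correct)}/{tested} correct -> {accuracy:.2f}%")
# prints: 451/490 correct -> 92.04%
```

This agrees with the 92% average recognition accuracy quoted throughout the paper.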
A closer examination of the results revealed that misclassifications arise due to noise, strong similarity between character structures/primitives, and other degradations. It was also noticed that the zonal features accommodate variations in the appearance of character primitives, and that better performance can be obtained if the knowledge base is trained on all variations and degradations.

TABLE 4. Comparison of the Proposed Method with Other Scene Text Recognition Methods

Author | Approach | Features | Recognition Accuracy
Jerod J. Weinman et al. (2008) | A discriminative semi-Markov model for robust scene text recognition | Wavelet features | 82.08%
Onur Tekdas et al. (2009) | Recognizing characters in natural scenes: a feature study | Raw intensities, shape contexts and wavelet features | 85.328%
Masakazu Iwamura et al. (2011) | Recognition of multiple characters in a scene image using arrangement of local features | Scale invariant feature transform and voting method | 76.5%
Anand Mishra et al. (2012) | Top-down and bottom-up cues for scene text recognition | Bottom-up cues, language statistics and a conditional random field model | 73%
Proposed method | Character recognition of Kannada text in scene images using a neural network | Zone wise vertical and horizontal profile based features | 92%

5. CONCLUSION

In this work, a novel method for recognition of basic Kannada characters from camera based images is proposed. The method uses zone wise horizontal and vertical profile based features and a neural network classifier, and works in two phases, training and testing. Exhaustive experimentation was carried out to analyze the zone wise horizontal and vertical profile based features using the neural network classifier.
The results obtained with zone wise horizontal and vertical profile features and the neural network classifier are encouraging, and the system is observed to be robust and insensitive to several challenges such as unusual fonts, variable lighting conditions, noise and blur. The method was tested on 490 samples and gives an average recognition accuracy of 92%. The proposed method can be extended to character recognition with new sets of features and classification algorithms.
REFERENCES

[1] Abowd Gregory D., Christopher G. Atkeson, Jason Hong, Sue Long, Rob Kooper, and Mike Pinkerton, 1997, "Cyberguide: A mobile context-aware tour guide", Wireless Networks, 3(5), pp. 421-433.
[2] Natalia Marmasse and Chris Schmandt, 2000, "Location-aware information delivery with comMotion", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 157-171.
[3] Tollmar K., Yeh T. and Darrell T., 2004, "IDeixis - Image-Based Deixis for Finding Location-Based Information", In Proceedings of the Conference on Human Factors in Computing Systems (CHI'04), pp. 781-782.
[4] Gillian Leetch and Eleni Mangina, 2005, "A Multi-Agent System to Stream Multimedia to Handheld Devices", In Proceedings of the Sixth International Conference on Computational Intelligence and Multimedia Applications (ICCIMA'05).
[5] Wichian Premchaiswadi, 2009, "A Mobile Image Search for Tourist Information Systems", In Proceedings of the 9th International Conference on Signal Processing, Computational Geometry and Artificial Vision, pp. 62-67.
[6] Ma Chang-jie and Fang Jin-yun, 2008, "Location Based Mobile Tour Guide Services Towards Digital Dunhuang", International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B4, Beijing.
[7] Shih-Hung Wu, Min-Xiang Li, Ping-che Yang and Tsun Ku, 2010, "Ubiquitous Wikipedia on Handheld Devices for Mobile Learning", In Proceedings of the 6th IEEE International Conference on Wireless, Mobile, and Ubiquitous Technologies in Education, pp. 228-230.
[8] Tom Yeh, Kristen Grauman and K. Tollmar, 2005, "A picture is worth a thousand keywords: image-based object search on a mobile platform", In Proceedings of the Conference on Human Factors in Computing Systems, pp. 2025-2028.
[9] Fan X., Xie X., Li Z., Li M. and Ma, 2005, "Photo-to-search: using multimodal queries to search the web from mobile phones", In Proceedings of the 7th ACM SIGMM International Workshop on Multimedia Information Retrieval.
[10] Lim Joo Hwee, Jean-Pierre Chevallet and Sihem Nouarah Merah, 2005, "SnapToTell: Ubiquitous information access from camera", Mobile Human Computer Interaction with Mobile Devices and Services, Glasgow, Scotland.
[11] Jing Zhang, Xilin Chen, Andreas Hanneman, Jie Yang and Alex Waibel, 2002, "A Robust Approach for Recognition of Text Embedded in Natural Scenes", In Proceedings of the 16th International Conference on Pattern Recognition, Vol. 3, pp. 204-207.
[12] Xilin Chen, Jie Yang, Jing Zhang and Alex Waibel, January 2004, "Automatic Detection and Recognition of Signs From Natural Scenes", IEEE Transactions on Image Processing, Vol. 13, No. 1, pp. 87-99.
[13] Anand Mishra, Karteek Alahari and C. V. Jawahar, 2012, "Top-Down and Bottom-Up Cues for Scene Text Recognition", In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
[14] Zohra Saidane and Christophe Garcia, 2007, "Automatic Scene Text Recognition using a Convolutional Neural Network", In Proceedings of CBDAR, pp. 100-106.
[15] Céline Mancas-Thillou, June 2007, "Natural Scene Text Understanding", Segmentation and Pattern Recognition, I-Tech, Vienna, Austria, pp. 123-142.
[16] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, September 2007, "Fast Lexicon-Based Scene Text Recognition with Sparse Belief Propagation", In Proceedings of the International Conference on Document Analysis and Recognition, Curitiba, Brazil.
[17] Jerod J. Weinman, Erik Learned-Miller and Allen Hanson, December 2008, "A Discriminative Semi-Markov Model for Robust Scene Text Recognition", In Proceedings of the International Conference on Pattern Recognition (ICPR), Tampa, FL, USA, pp. 1-5.
[18] Teófilo E. de Campos and Bodla Rakesh Babu, 2009, "Character Recognition in Natural Images", In Proceedings of the International Conference on Computer Vision Theory and Applications, pp. 273-280.
[19] Onur Tekdas and Nikhil Karnad, 2009, "Recognizing Characters in Natural Scenes: A Feature Study", CSCI 5521 Pattern Recognition, pp.
1-14 (2009) [20] Sangame S.K., Ramteke R.J., and Rajkumar Benne, 2009, “Recognition of isolated handwritten Kannada vowels”, Advances in Computational Research, ISSN: 0975– 3273, Volume 1, Issue 2, pp 52-55 (2009) [21] B.V.Dhandra, Mallikarjun Hangarge, and Gururaj Mukarambi, 2010, ”Spatial Features for Handwritten Kannada and English Character Recognition”, IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp 146-151 (2010) [22] Mallikarjun Hangarge, Shashikala Patil, and B.V.Dhandra, 2010, “Multi-font/size Kannada Vowels and Numerals Recognition Based on Modified Invariant Moments”, IJCA Special Issue on Recent Trends in Image Processing and Pattern Recognition (RTIPPR), pp 126-130 (2010) [23] Masakazu Iwamura, Tomohiko Tsuji, and Koichi Kise, 2010, “Memory-Based Recognition of Camera-Captured Characters”, 9th IAPR international workshop on document analysis systems, pp. 89-96 (2010) [24] Masakazu Iwamura, Takuya Kobayashi, and Koichi Kise, 2011, “Recognition of Multiple Characters in a Scene Image Using Arrangement of Local Features”, IEEE, International Conference on Document Analysis and Recognition, pp. 1409- 1413(2011) [25] Primekumar K.P and Sumam Mary Idicula, “Performance of on-Line Malayalam Handwritten character Recognition using Hmm And Sfam”, International Journal of Computer Engineering & Technology (IJCET), Volume 3, Issue 1, 2012, pp. 115 - 125, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375. [26] Mr.Lokesh S. Khedekar and Dr.A.S.Alvi, “Advanced Smart Credential Cum Unique Identification and Recognition System. (Ascuirs)”, International Journal of Computer Engineering & Technology (IJCET), Volume 4, Issue 1, 2013, pp. 97 - 104, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.