ISSN: 2278 – 1323
International Journal of Advanced Research in Computer Engineering & Technology
Volume 1, Issue 5, July 2012


  CLASSIFICATION OF TEXT USING FUZZY BASED INCREMENTAL
             FEATURE CLUSTERING ALGORITHM

ANILKUMARREDDY TETALI, M.Tech Scholar, Department of CSE, B V C Engineering College, Odalarevu, akr.tetali@gmail.com

B P N MADHUKUMAR, Associate Professor, Department of CSE, B V C Engineering College, Odalarevu, bpnmadhukumar@hotmail.com

K. CHANDRAKUMAR, Associate Professor, Department of CSE, VSL Engineering College, Kakinada, chandhu_kynm@yahoo.com



Abstract:

The dimensionality of the feature vector plays a major role in text classification. This dimensionality can be reduced by feature clustering based on fuzzy logic. We propose a fuzzy based incremental feature clustering algorithm. Based on a similarity test, the word patterns of a document set are grouped into clusters satisfying clustering properties, and each cluster is characterized by a membership function with a statistical mean and deviation. A desired number of clusters is thereby formed automatically. We then take one extracted feature from each cluster, which is a weighted combination of the words contained in that cluster. The membership functions derived by our algorithm match closely the real distribution of the training data, and the user is relieved of the burden of specifying the number of features in advance.

Keywords:

Incremental feature clustering, fuzzy similarity, dimensionality reduction, weighting matrix, text classifier.

Introduction:

A feature vector contains the set of features used for classifying text, and its dimensionality plays a major role in the classification. For example, if a document set contains 100,000 words, classification becomes a difficult task. To solve this problem, feature reduction approaches are applied before the classification of the text takes place. Feature selection [1] and feature extraction [2][3] approaches have been proposed for feature reduction.

Classical feature extraction methods use algebraic transformations to convert the representation of the original high-dimensional data set into a lower-dimensional one by a projection process. Even though different algebraic transformations are available, the complexity of these approaches is still high. Feature clustering is the most effective technique for feature reduction in text classification. The idea of feature clustering is to group the original features into clusters with a high degree of pairwise semantic relatedness. Each cluster is treated as a single new feature, and thus feature dimensionality can be drastically reduced. McCallum proposed a first feature extraction algorithm, derived from the "distributional clustering" [4] idea of Pereira et al., to generate an efficient representation of documents, and applied a learning logic approach for training text classifiers. In these feature clustering methods, each new feature is generated by combining a subset of the original words; they follow hard clustering, and the mean and variance of a cluster are not considered. These methods also impose a burden on the user of specifying the number of clusters in advance.

We propose a fuzzy based incremental feature clustering algorithm, an incremental feature clustering [5][6] approach to reduce the number of features for the text classification task.




The word patterns of the document set are grouped into clusters satisfying clustering properties, and each cluster is characterized by a membership function with a statistical mean and deviation. This forms the desired number of clusters automatically. We then take one extracted feature from each cluster, which is a weighted combination of the words contained in that cluster. With our algorithm, the derived membership functions match closely the real distribution of the training data, and the user need not specify the number of features in advance.

The main advantages of the proposed work are:

     •    A fuzzy incremental feature clustering (FIFC) algorithm, an incremental clustering approach that reduces the dimensionality of the features in text classification.

     •    The number of features is determined automatically.

     •    The membership functions match closely the real distribution of the training data.

     •    It runs faster than other methods.

     •    It gives better extracted features than other methods.

Background and Related work:

Let D={d1,d2…dn} be a document set of n documents, where d1,d2…dn are individual documents and each document belongs to one of the classes in the set {c1,c2…cp}. If a document belongs to two or more classes, then two or more copies of the document with different classes are included in D. Let the word set W={w1,w2…wm} be the feature vector of the document set. The feature reduction task is to find a new word set W0, with |W0| smaller than |W|, such that W0 works as well as W for all the desired properties with D. Based on the new feature vector, the documents are then classified.

Dimensionality Reduction of the Feature Vector:

In general, there are two ways of doing feature reduction: feature selection and feature extraction. In feature selection approaches, a new feature set W0 is obtained which is a subset of the original feature set W, and W0 is then used as input for the classification task. Information Gain (IG) is frequently employed in feature selection. Feature clustering is an efficient approach for feature reduction which groups all features into some clusters, where the features in a cluster are similar to each other. The feature clustering methods proposed before are "hard" clustering methods, where each word of the original feature set belongs to exactly one word cluster; therefore each word contributes to the synthesis of only one new feature, and each new feature is obtained by summing up the words belonging to one cluster.

2(a).Proposed Method:

There are some drawbacks to the existing methods. First of all, the user needs to specify the number of clusters in advance. Second, when calculating the similarities, the variance of the underlying cluster is not considered. Third, all words in a cluster have the same degree of contribution to the resulting extracted feature. Our fuzzy incremental feature clustering algorithm is proposed to deal with these issues.

Suppose we are given a document set D of n documents d1,d2…dn, together with a feature vector W of m words w1,w2…wm, and p classes c1,c2…cp. We then construct one word pattern for each word in W. For word wi, its word pattern is xi=<xi1,xi2…xip>, defined componentwise for 1≤j≤p, where dqi indicates the number of occurrences of wi in document dq.
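As an illustration of how the word patterns can be built from the counts dqi, the sketch below assumes that xij is the fraction of the occurrences of wi falling in documents of class cj, i.e., an empirical estimate of P(cj|wi); this componentwise definition and all names in the code are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def build_word_patterns(counts, doc_classes, p):
    """Build one word pattern per word (illustrative sketch).

    counts      : (n_docs, m_words) array with counts[q][i] = d_qi, the
                  number of occurrences of word w_i in document d_q.
    doc_classes : length-n_docs sequence, doc_classes[q] = class index of d_q.
    p           : number of classes.

    Assumed definition: x_ij = (occurrences of w_i in class c_j) /
    (total occurrences of w_i), an empirical estimate of P(c_j | w_i).
    """
    counts = np.asarray(counts, dtype=float)
    n_docs, m_words = counts.shape
    patterns = np.zeros((m_words, p))
    for q in range(n_docs):
        patterns[:, doc_classes[q]] += counts[q, :]
    totals = patterns.sum(axis=1, keepdims=True)
    totals[totals == 0] = 1.0            # words that never occur stay all-zero
    return patterns / totals              # row i is x_i = <x_i1, ..., x_ip>
```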






Therefore we have m word patterns in total, and it is on these word patterns that our clustering algorithm works. Our goal is to group the words in W into clusters based on these word patterns. A cluster contains a certain number of word patterns and is characterized by the product of p one-dimensional Gaussian functions; Gaussian functions [7][8] are adopted because of their superiority over other functions in performance. Let G be a cluster containing q word patterns xj=<xj1,xj2…xjp>, 1≤j≤q. The mean m=<m1,m2…mp> and the deviation σ=<σ1,σ2…σp> of G are defined componentwise as the sample mean and sample deviation of these q word patterns, for 1≤j≤p, where |G| denotes the size of G. The fuzzy similarity of a word pattern x=<x1,x2…xp> to the cluster G is defined by the following membership function:

     µG(x) = exp(-((x1-m1)/σ1)^2) × exp(-((x2-m2)/σ2)^2) × … × exp(-((xp-mp)/σp)^2),

where 0≤µG(x)≤1.

2(b).Fuzzy based Incremental Feature Clustering:

Our clustering algorithm is an incremental, self-constructing algorithm. Word patterns are considered one by one. No clusters exist at the beginning, and clusters are created if necessary. For each word pattern, the similarity of this word pattern to each existing cluster is calculated to decide whether it is combined into an existing cluster or a new cluster is created. Once a new cluster is created, the corresponding membership function should be initialized; on the contrary, when the word pattern is combined into an existing cluster, the membership function of that cluster should be updated accordingly.

Let k be the number of currently existing clusters G1,G2…Gk. Each cluster Gj has mean mj=<mj1,mj2…mjp> and deviation σj=<σj1,σj2…σjp>, and Sj denotes the size of cluster Gj. Initially k=0, so no clusters exist at the beginning. For each word pattern xi=<xi1,xi2…xip>, 1≤i≤m, we calculate the similarity of xi to each existing cluster as µGj(xi), and we say that xi passes the similarity test on cluster Gj, for 1≤j≤k, if

     µGj(xi) ≥ ρ,

where ρ, 0≤ρ≤1, is a predefined threshold. If the user intends to have larger clusters, then he or she can give a smaller threshold; otherwise, a bigger threshold can be given. As the threshold increases, the number of clusters also increases. Two cases may occur. First, there is no existing fuzzy cluster on which xi has passed the similarity test. In this case, we assume that xi is not similar enough to any existing cluster, and a new cluster Gh, h=k+1, is created with mean mh=xi and deviation σh=σ0, where σ0 is a user-defined constant vector. The new cluster Gh contains only one member, the word pattern xi, at this point. Since a cluster with a single member has deviation zero, and a zero deviation cannot be used in calculating fuzzy similarities, the deviation of a newly created cluster is initialized to σ0. The number of clusters is then increased by one, and the size of cluster Gh is initialized to Sh=1.
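As a concrete reading of the membership function and similarity test above, the following sketch evaluates the product-of-Gaussians similarity of a word pattern to a cluster; it is illustrative code, not the authors' implementation.

```python
import numpy as np

def membership(x, mean, dev):
    """Fuzzy similarity of word pattern x to a cluster with the given mean
    and deviation vectors: the product over the p components of the
    one-dimensional Gaussians exp(-((x_j - m_j) / sigma_j)^2)."""
    x, mean, dev = (np.asarray(v, dtype=float) for v in (x, mean, dev))
    return float(np.exp(-np.sum(((x - mean) / dev) ** 2)))

def passes_similarity_test(x, mean, dev, rho):
    """True if x passes the similarity test on this cluster, i.e. its
    membership degree is at least the predefined threshold rho."""
    return membership(x, mean, dev) >= rho
```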


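Building on the previous sketch, the loop below outlines the incremental, self-constructing procedure: each word pattern either creates a new cluster initialized with the constant deviation vector σ0, or joins the passing cluster of largest membership. The statistics update on a merge is written as a plain recomputation of the members' mean and deviation, which is an assumption made for illustration; the paper's own modification of the joined cluster is described in the next section.

```python
import numpy as np

def incremental_feature_clustering(patterns, rho, sigma0):
    """Incremental, self-constructing clustering of word patterns (sketch).

    patterns : (m_words, p) array of word patterns x_i.
    rho      : similarity threshold, 0 <= rho <= 1.
    sigma0   : user-defined constant deviation vector for new clusters.

    Uses membership() from the previous sketch. The merge update below is a
    stand-in (sample mean/deviation of the members), not the paper's formulas.
    """
    P = np.asarray(patterns, dtype=float)
    sigma0 = np.asarray(sigma0, dtype=float)
    clusters = []                      # each: {"members", "mean", "dev", "size"}
    for i, x in enumerate(P):
        sims = [membership(x, c["mean"], c["dev"]) for c in clusters]
        passed = [j for j, s in enumerate(sims) if s >= rho]
        if not passed:
            # Case 1: no cluster passes the similarity test -> new cluster G_h
            clusters.append({"members": [i], "mean": x.copy(),
                             "dev": sigma0.copy(), "size": 1})
        else:
            # Case 2: join the passing cluster with the largest membership
            t = max(passed, key=lambda j: sims[j])
            c = clusters[t]
            c["members"].append(i)
            c["size"] += 1
            members = P[c["members"]]
            c["mean"] = members.mean(axis=0)
            # keep deviations away from zero so memberships stay well defined
            c["dev"] = np.maximum(members.std(axis=0), sigma0)
    return clusters
```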



Second, if there are existing clusters on which xi has passed the similarity test, let Gt be the cluster with the largest membership degree among them, i.e.,

     Gt = arg max µGj(xi), taken over the clusters Gj on which xi has passed the similarity test.

In this case, cluster Gt is modified to incorporate xi: its size St is increased by one, and its mean mt and deviation σt are updated to account for the new member.

We briefly discuss here the computational cost of our method and compare it with DC [9], IOC [10], and IG [11]. For an input pattern, we have to calculate the similarity between the input pattern and every existing cluster. Each pattern consists of p components, where p is the number of classes in the document set. Therefore, in the worst case, the time complexity of our method is O(mkp), where m is the number of original features and k is the number of clusters finally obtained. For DC the complexity is O(mkpt), where t is the number of iterations to be done; the complexity of IG is O(mp+mlogm); and the complexity of IOC is O(mkpn), where n is the number of documents involved. Apparently, IG is the quickest one, and our method is better than DC and IOC.

3. Feature extraction using Weighting Matrix:

Feature extraction can be expressed in the following form:

     D' = D T,

where D is the n×m matrix whose rows d1,d2…dn are the original document vectors, D' is the n×k matrix whose rows d1',d2'…dn' are the reduced document vectors, with di' = di T for 1≤i≤n, and T is an m×k weighting matrix. The goal of feature reduction is achieved by finding an appropriate T such that k is smaller than m. In the divisive information-theoretic feature clustering algorithm [9], the elements of T are binary.

By applying our feature clustering algorithm, the word patterns are grouped into clusters, and the words in the feature vector W are clustered accordingly. For each cluster we have one extracted feature; since we have k clusters, we have k extracted features. The elements of T are derived from the obtained clusters, and feature extraction is then performed. We propose three weighting approaches: hard, soft, and mixed. In the hard-weighting approach, each word is allowed to belong to only one cluster, and so it contributes to only one new extracted feature: the element tij of T is 1 if word wi belongs to cluster Gj, and 0 otherwise.
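The sketch below illustrates the hard-weighting construction of T and the reduction D' = DT described above; assigning each word to the cluster giving its pattern the largest membership degree is how the sketch realizes "belongs to a cluster", and the names are illustrative. Soft weighting would instead fill row i of T with the membership degrees themselves.

```python
import numpy as np

def hard_weighting_matrix(patterns, clusters):
    """Hard-weighting matrix T (m x k): t_ij = 1 if word w_i belongs to
    cluster G_j (here, the cluster giving its word pattern the largest
    membership degree), and 0 otherwise. Uses membership() and the cluster
    dictionaries from the earlier sketches."""
    m, k = len(patterns), len(clusters)
    T = np.zeros((m, k))
    for i, x in enumerate(patterns):
        sims = [membership(x, c["mean"], c["dev"]) for c in clusters]
        T[i, int(np.argmax(sims))] = 1.0
    return T

def reduce_documents(D, T):
    """Feature extraction D' = D T; each reduced document row is d_i' = d_i T."""
    return np.asarray(D, dtype=float) @ T
```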








In the soft-weighting approach, each word is allowed to contribute to all new extracted features, with the degrees of contribution depending on the values of the membership functions: the element tij of T is the membership degree µGj(xi) of the word pattern of wi in cluster Gj.

The mixed-weighting approach is a combination of the hard-weighting approach and the soft-weighting approach, blended through a parameter γ. By selecting the value of γ, we provide flexibility to the user. When the similarity threshold is small, the number of clusters is small and each cluster covers more training patterns; in this case, a smaller γ, favoring soft-weighting, gets a higher accuracy. When the similarity threshold is large, the number of clusters is large and each cluster covers fewer training patterns; in this case, a larger γ, favoring hard-weighting, gets a higher accuracy.

4. Classification of Text Data:

Given a set D of training documents, text classification is done as follows. We specify the similarity threshold ρ and apply our clustering algorithm. Assume that k clusters are obtained for the words in the feature vector W. We then find the weighting matrix T and convert D to D'. Using D' as training data, a text classifier based on support vector machines (SVM) is built. SVM is a kernel method which finds the maximum-margin hyperplane in feature space separating the images of the training patterns into two groups [12][13]. Slack variables ξi are introduced to account for misclassifications. The objective function and constraints of the classification problem can be formulated as

     minimize (1/2)||w||^2 + C Σ ξi (i=1…l), subject to yi(w·Φ(xi)+b) ≥ 1-ξi and ξi ≥ 0,

where l is the number of training patterns, C is a parameter which gives a tradeoff between maximum margin and classification error, and yi, being +1 or -1, is the target label of pattern xi. Φ: X → F is a mapping from the input space to the feature space F, where patterns are more easily separated, and w·Φ(x)+b = 0 is the hyperplane to be derived, with w and b being the weight vector and the offset, respectively.

We follow this idea to construct an SVM-based classifier. Suppose d is an unknown document. We first convert d to d' by d' = d T, and then feed d' to the classifier. We get p values, one from each SVM, and d belongs to those classes for which 1 appears at the output of the corresponding SVM.

5. Conclusions:

We have presented a fuzzy based incremental feature clustering (FIFC) algorithm, an incremental clustering approach to reduce the dimensionality of the features in text classification. Features that are similar to each other are placed in the same cluster, and new clusters are formed automatically if a word is not similar to any existing cluster. Each cluster so formed is characterized by a membership function with a statistical mean and deviation. The derived membership functions match closely the real distribution of the training data, and we reduce the burden on the user of specifying the number of extracted features in advance. Experimental results show that our method can run faster and obtain better extracted features than other methods.





6. References:

[1] Y. Yang and J.O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proc. 14th Int'l Conf. Machine Learning, pp. 412-420, 1997.

[2] D.D. Lewis, "Feature Selection and Feature Extraction for Text Categorization," Proc. Workshop Speech and Natural Language, pp. 212-217, 1992.

[3] H. Li, T. Jiang, and K. Zhang, "Efficient and Robust Feature Extraction by Maximum Margin Criterion," T. Sebastian, S. Lawrence, and S. Bernhard, eds., Advances in Neural Information Processing Systems, pp. 97-104, Springer, 2004.

[4] L.D. Baker and A. McCallum, "Distributional Clustering of Words for Text Classification," Proc. ACM SIGIR, pp. 96-103, 1998.

[5] L.D. Baker and A. McCallum, "Distributional Clustering of Words for Text Classification," Proc. ACM SIGIR, pp. 96-103, 1998.

[6] R. Bekkerman, R. El-Yaniv, N. Tishby, and Y. Winter, "Distributional Word Clusters versus Words for Text Categorization," J. Machine Learning Research, vol. 3, pp. 1183-1208, 2003.

[7] J. Yen and R. Langari, Fuzzy Logic: Intelligence, Control, and Information. Prentice-Hall, 1999.

[8] J.S. Wang and C.S.G. Lee, "Self-Adaptive Neurofuzzy Inference Systems for Classification Applications," IEEE Trans. Fuzzy Systems, vol. 10, no. 6, pp. 790-802, Dec. 2002.

[9] I.S. Dhillon, S. Mallela, and R. Kumar, "A Divisive Information-Theoretic Feature Clustering Algorithm for Text Classification," J. Machine Learning Research, vol. 3, pp. 1265-1287, 2003.

[10] J. Yan, B. Zhang, N. Liu, S. Yan, Q. Cheng, W. Fan, Q. Yang, W. Xi, and Z. Chen, "Effective and Efficient Dimensionality Reduction for Large-Scale and Streaming Data Preprocessing," IEEE Trans. Knowledge and Data Eng., vol. 18, no. 3, pp. 320-333, Mar. 2006.

[11] Y. Yang and J.O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proc. 14th Int'l Conf. Machine Learning, pp. 412-420, 1997.

[12] B. Schölkopf and A.J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.

[13] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.

7. About the Authors:

Anil Kumar Reddy Tetali is currently pursuing his M.Tech in Computer Science and Engineering at BVC Engineering College, Odalarevu.

B P N Madhu Kumar is currently working as an Associate Professor in the Computer Science and Engineering Department, BVC Engineering College, Odalarevu. His research interests include data mining and web mining.

K. Chandra Kumar is currently working as an Associate Professor in the Computer Science and Engineering Department, VSL Engineering College, Kakinada. His research interests include data mining and text mining.




                                                                                                                318
                                      All Rights Reserved Ā© 2012 IJARCET

More Related Content

What's hot

Convolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoConvolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoSeongwon Hwang
Ā 
IRJET- Wavelet Transform based Steganography
IRJET- Wavelet Transform based SteganographyIRJET- Wavelet Transform based Steganography
IRJET- Wavelet Transform based SteganographyIRJET Journal
Ā 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlppartha pratim deb
Ā 
Neural Learning to Rank
Neural Learning to RankNeural Learning to Rank
Neural Learning to RankBhaskar Mitra
Ā 
Introduction to Deep Learning and Tensorflow
Introduction to Deep Learning and TensorflowIntroduction to Deep Learning and Tensorflow
Introduction to Deep Learning and TensorflowOswald Campesato
Ā 
Processing vietnamese news titles to answer relative questions in vnewsqa ict...
Processing vietnamese news titles to answer relative questions in vnewsqa ict...Processing vietnamese news titles to answer relative questions in vnewsqa ict...
Processing vietnamese news titles to answer relative questions in vnewsqa ict...ijnlc
Ā 
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...ijsrd.com
Ā 
Evaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyEvaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyijnlc
Ā 
Enhancing Privacy of Confidential Data using K Anonymization
Enhancing Privacy of Confidential Data using K AnonymizationEnhancing Privacy of Confidential Data using K Anonymization
Enhancing Privacy of Confidential Data using K AnonymizationIDES Editor
Ā 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)EdutechLearners
Ā 
Review_Cibe Sridharan
Review_Cibe SridharanReview_Cibe Sridharan
Review_Cibe SridharanCibe Sridharan
Ā 
Leveraging collaborativetaggingforwebitemdesign ajithajjarani
Leveraging collaborativetaggingforwebitemdesign ajithajjaraniLeveraging collaborativetaggingforwebitemdesign ajithajjarani
Leveraging collaborativetaggingforwebitemdesign ajithajjaraniAjith Ajjarani
Ā 
Ijartes v1-i2-006
Ijartes v1-i2-006Ijartes v1-i2-006
Ijartes v1-i2-006IJARTES
Ā 
Improving Performance of Back propagation Learning Algorithm
Improving Performance of Back propagation Learning AlgorithmImproving Performance of Back propagation Learning Algorithm
Improving Performance of Back propagation Learning Algorithmijsrd.com
Ā 
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLINGLOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLINGijaia
Ā 
Basic Learning Algorithms of ANN
Basic Learning Algorithms of ANNBasic Learning Algorithms of ANN
Basic Learning Algorithms of ANNwaseem khan
Ā 

What's hot (19)

Zizka aimsa 2012
Zizka aimsa 2012Zizka aimsa 2012
Zizka aimsa 2012
Ā 
T24144148
T24144148T24144148
T24144148
Ā 
Convolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in TheanoConvolutional Neural Network (CNN) presentation from theory to code in Theano
Convolutional Neural Network (CNN) presentation from theory to code in Theano
Ā 
IRJET- Wavelet Transform based Steganography
IRJET- Wavelet Transform based SteganographyIRJET- Wavelet Transform based Steganography
IRJET- Wavelet Transform based Steganography
Ā 
Neural network and mlp
Neural network and mlpNeural network and mlp
Neural network and mlp
Ā 
Neural Learning to Rank
Neural Learning to RankNeural Learning to Rank
Neural Learning to Rank
Ā 
Introduction to Deep Learning and Tensorflow
Introduction to Deep Learning and TensorflowIntroduction to Deep Learning and Tensorflow
Introduction to Deep Learning and Tensorflow
Ā 
Processing vietnamese news titles to answer relative questions in vnewsqa ict...
Processing vietnamese news titles to answer relative questions in vnewsqa ict...Processing vietnamese news titles to answer relative questions in vnewsqa ict...
Processing vietnamese news titles to answer relative questions in vnewsqa ict...
Ā 
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...
Sensitivity Analysis of GRA Method for Interval Valued Intuitionistic Fuzzy M...
Ā 
Evaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyEvaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymy
Ā 
Enhancing Privacy of Confidential Data using K Anonymization
Enhancing Privacy of Confidential Data using K AnonymizationEnhancing Privacy of Confidential Data using K Anonymization
Enhancing Privacy of Confidential Data using K Anonymization
Ā 
Bb25322324
Bb25322324Bb25322324
Bb25322324
Ā 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
Ā 
Review_Cibe Sridharan
Review_Cibe SridharanReview_Cibe Sridharan
Review_Cibe Sridharan
Ā 
Leveraging collaborativetaggingforwebitemdesign ajithajjarani
Leveraging collaborativetaggingforwebitemdesign ajithajjaraniLeveraging collaborativetaggingforwebitemdesign ajithajjarani
Leveraging collaborativetaggingforwebitemdesign ajithajjarani
Ā 
Ijartes v1-i2-006
Ijartes v1-i2-006Ijartes v1-i2-006
Ijartes v1-i2-006
Ā 
Improving Performance of Back propagation Learning Algorithm
Improving Performance of Back propagation Learning AlgorithmImproving Performance of Back propagation Learning Algorithm
Improving Performance of Back propagation Learning Algorithm
Ā 
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLINGLOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING
LOG MESSAGE ANOMALY DETECTION WITH OVERSAMPLING
Ā 
Basic Learning Algorithms of ANN
Basic Learning Algorithms of ANNBasic Learning Algorithms of ANN
Basic Learning Algorithms of ANN
Ā 

Viewers also liked

Volume 2-issue-6-2061-2063
Volume 2-issue-6-2061-2063Volume 2-issue-6-2061-2063
Volume 2-issue-6-2061-2063Editor IJARCET
Ā 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Editor IJARCET
Ā 
Ijarcet vol-2-issue-4-1659-1662
Ijarcet vol-2-issue-4-1659-1662Ijarcet vol-2-issue-4-1659-1662
Ijarcet vol-2-issue-4-1659-1662Editor IJARCET
Ā 
Ijarcet vol-2-issue-3-916-919
Ijarcet vol-2-issue-3-916-919Ijarcet vol-2-issue-3-916-919
Ijarcet vol-2-issue-3-916-919Editor IJARCET
Ā 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Editor IJARCET
Ā 
Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Editor IJARCET
Ā 
Ijarcet vol-2-issue-3-884-890
Ijarcet vol-2-issue-3-884-890Ijarcet vol-2-issue-3-884-890
Ijarcet vol-2-issue-3-884-890Editor IJARCET
Ā 
Regression lineaire Multiple (Autosaved) (Autosaved)
Regression lineaire Multiple (Autosaved) (Autosaved)Regression lineaire Multiple (Autosaved) (Autosaved)
Regression lineaire Multiple (Autosaved) (Autosaved)Pierre Robentz Cassion
Ā 
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGAS
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGASSAƚDE MENTAL, ƁLCOOL E OUTRAS DROGAS
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGASflaviocampos
Ā 
GuĆ­a para elaboraciĆ³n de guĆ­as
GuĆ­a para elaboraciĆ³n de guĆ­asGuĆ­a para elaboraciĆ³n de guĆ­as
GuĆ­a para elaboraciĆ³n de guĆ­asMariaC Bernal
Ā 
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...Jacinto Adriano Massambu
Ā 
REVISTA GC BRASIL NĀ°. 06
REVISTA GC BRASIL NĀ°. 06REVISTA GC BRASIL NĀ°. 06
REVISTA GC BRASIL NĀ°. 06Lourdes Martins
Ā 

Viewers also liked (20)

1850 1854
1850 18541850 1854
1850 1854
Ā 
Volume 2-issue-6-2061-2063
Volume 2-issue-6-2061-2063Volume 2-issue-6-2061-2063
Volume 2-issue-6-2061-2063
Ā 
92 97
92 9792 97
92 97
Ā 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154
Ā 
1904 1908
1904 19081904 1908
1904 1908
Ā 
8 17
8 178 17
8 17
Ā 
Ijarcet vol-2-issue-4-1659-1662
Ijarcet vol-2-issue-4-1659-1662Ijarcet vol-2-issue-4-1659-1662
Ijarcet vol-2-issue-4-1659-1662
Ā 
Ijarcet vol-2-issue-3-916-919
Ijarcet vol-2-issue-3-916-919Ijarcet vol-2-issue-3-916-919
Ijarcet vol-2-issue-3-916-919
Ā 
1909 1913
1909 19131909 1913
1909 1913
Ā 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204
Ā 
Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189Volume 2-issue-6-2186-2189
Volume 2-issue-6-2186-2189
Ā 
210 214
210 214210 214
210 214
Ā 
Ijarcet vol-2-issue-3-884-890
Ijarcet vol-2-issue-3-884-890Ijarcet vol-2-issue-3-884-890
Ijarcet vol-2-issue-3-884-890
Ā 
671 679
671 679671 679
671 679
Ā 
ApresentaĆ§Ć£o de Paulo Barreto
ApresentaĆ§Ć£o de Paulo BarretoApresentaĆ§Ć£o de Paulo Barreto
ApresentaĆ§Ć£o de Paulo Barreto
Ā 
Regression lineaire Multiple (Autosaved) (Autosaved)
Regression lineaire Multiple (Autosaved) (Autosaved)Regression lineaire Multiple (Autosaved) (Autosaved)
Regression lineaire Multiple (Autosaved) (Autosaved)
Ā 
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGAS
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGASSAƚDE MENTAL, ƁLCOOL E OUTRAS DROGAS
SAƚDE MENTAL, ƁLCOOL E OUTRAS DROGAS
Ā 
GuĆ­a para elaboraciĆ³n de guĆ­as
GuĆ­a para elaboraciĆ³n de guĆ­asGuĆ­a para elaboraciĆ³n de guĆ­as
GuĆ­a para elaboraciĆ³n de guĆ­as
Ā 
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...
Highscoreluisvidigalinovacaoemservicospublicos nov2014-141124194103-conversio...
Ā 
REVISTA GC BRASIL NĀ°. 06
REVISTA GC BRASIL NĀ°. 06REVISTA GC BRASIL NĀ°. 06
REVISTA GC BRASIL NĀ°. 06
Ā 

Similar to 313 318

Classification of text data using feature clustering algorithm
Classification of text data using feature clustering algorithmClassification of text data using feature clustering algorithm
Classification of text data using feature clustering algorithmeSAT Publishing House
Ā 
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION cscpconf
Ā 
Density Based Clustering Approach for Solving the Software Component Restruct...
Density Based Clustering Approach for Solving the Software Component Restruct...Density Based Clustering Approach for Solving the Software Component Restruct...
Density Based Clustering Approach for Solving the Software Component Restruct...IRJET Journal
Ā 
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...IRJET- Diverse Approaches for Document Clustering in Product Development Anal...
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...IRJET Journal
Ā 
03 cs3024 pankaj_jajoo
03 cs3024 pankaj_jajoo03 cs3024 pankaj_jajoo
03 cs3024 pankaj_jajooMeetika Gupta
Ā 
Improved Performance of Unsupervised Method by Renovated K-Means
Improved Performance of Unsupervised Method by Renovated K-MeansImproved Performance of Unsupervised Method by Renovated K-Means
Improved Performance of Unsupervised Method by Renovated K-MeansIJASCSE
Ā 
0021.system partitioning
0021.system partitioning0021.system partitioning
0021.system partitioningsean chen
Ā 
Clustering Algorithm with a Novel Similarity Measure
Clustering Algorithm with a Novel Similarity MeasureClustering Algorithm with a Novel Similarity Measure
Clustering Algorithm with a Novel Similarity MeasureIOSR Journals
Ā 
An Iterative Improved k-means Clustering
An Iterative Improved k-means ClusteringAn Iterative Improved k-means Clustering
An Iterative Improved k-means ClusteringIDES Editor
Ā 
Text Categorization Using Improved K Nearest Neighbor Algorithm
Text Categorization Using Improved K Nearest Neighbor AlgorithmText Categorization Using Improved K Nearest Neighbor Algorithm
Text Categorization Using Improved K Nearest Neighbor AlgorithmIJTET Journal
Ā 
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANSCONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANSijseajournal
Ā 
Observations
ObservationsObservations
Observationsbutest
Ā 
An Empirical Study for Defect Prediction using Clustering
An Empirical Study for Defect Prediction using ClusteringAn Empirical Study for Defect Prediction using Clustering
An Empirical Study for Defect Prediction using Clusteringidescitation
Ā 
Object oriented basics
Object oriented basicsObject oriented basics
Object oriented basicsvamshimahi
Ā 
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...Scaling Down Dimensions and Feature Extraction in Document Repository Classif...
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...ijdmtaiir
Ā 
Estimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachEstimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachcsandit
Ā 

Similar to 313 318 (20)

Classification of text data using feature clustering algorithm
Classification of text data using feature clustering algorithmClassification of text data using feature clustering algorithm
Classification of text data using feature clustering algorithm
Ā 
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION
FAST FUZZY FEATURE CLUSTERING FOR TEXT CLASSIFICATION
Ā 
Density Based Clustering Approach for Solving the Software Component Restruct...
Density Based Clustering Approach for Solving the Software Component Restruct...Density Based Clustering Approach for Solving the Software Component Restruct...
Density Based Clustering Approach for Solving the Software Component Restruct...
Ā 
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...IRJET- Diverse Approaches for Document Clustering in Product Development Anal...
IRJET- Diverse Approaches for Document Clustering in Product Development Anal...
Ā 
03 cs3024 pankaj_jajoo
03 cs3024 pankaj_jajoo03 cs3024 pankaj_jajoo
03 cs3024 pankaj_jajoo
Ā 
Improved Performance of Unsupervised Method by Renovated K-Means
Improved Performance of Unsupervised Method by Renovated K-MeansImproved Performance of Unsupervised Method by Renovated K-Means
Improved Performance of Unsupervised Method by Renovated K-Means
Ā 
0021.system partitioning
0021.system partitioning0021.system partitioning
0021.system partitioning
Ā 
Clustering Algorithm with a Novel Similarity Measure
Clustering Algorithm with a Novel Similarity MeasureClustering Algorithm with a Novel Similarity Measure
Clustering Algorithm with a Novel Similarity Measure
Ā 
An Iterative Improved k-means Clustering
An Iterative Improved k-means ClusteringAn Iterative Improved k-means Clustering
An Iterative Improved k-means Clustering
Ā 
653 656
653 656653 656
653 656
Ā 
Text Categorization Using Improved K Nearest Neighbor Algorithm
Text Categorization Using Improved K Nearest Neighbor AlgorithmText Categorization Using Improved K Nearest Neighbor Algorithm
Text Categorization Using Improved K Nearest Neighbor Algorithm
Ā 
Bj24390398
Bj24390398Bj24390398
Bj24390398
Ā 
600 608
600 608600 608
600 608
Ā 
Advance oops concepts
Advance oops conceptsAdvance oops concepts
Advance oops concepts
Ā 
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANSCONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS
CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS
Ā 
Observations
ObservationsObservations
Observations
Ā 
An Empirical Study for Defect Prediction using Clustering
An Empirical Study for Defect Prediction using ClusteringAn Empirical Study for Defect Prediction using Clustering
An Empirical Study for Defect Prediction using Clustering
Ā 
Object oriented basics
Object oriented basicsObject oriented basics
Object oriented basics
Ā 
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...Scaling Down Dimensions and Feature Extraction in Document Repository Classif...
Scaling Down Dimensions and Feature Extraction in Document Repository Classif...
Ā 
Estimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approachEstimating project development effort using clustered regression approach
Estimating project development effort using clustered regression approach
Ā 

More from Editor IJARCET

Electrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationEditor IJARCET
Ā 
Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Editor IJARCET
Ā 
Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Editor IJARCET
Ā 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Editor IJARCET
Ā 
Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Editor IJARCET
Ā 
Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Editor IJARCET
Ā 
Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Editor IJARCET
Ā 
Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Editor IJARCET
Ā 
Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Editor IJARCET
Ā 
Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Editor IJARCET
Ā 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Editor IJARCET
Ā 
Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Editor IJARCET
Ā 
Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Editor IJARCET
Ā 
Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Editor IJARCET
Ā 
Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Editor IJARCET
Ā 
Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Editor IJARCET
Ā 
Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Editor IJARCET
Ā 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Editor IJARCET
Ā 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Editor IJARCET
Ā 
Volume 2-issue-6-2098-2101
Volume 2-issue-6-2098-2101Volume 2-issue-6-2098-2101
Volume 2-issue-6-2098-2101Editor IJARCET
Ā 

More from Editor IJARCET (20)

Electrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturizationElectrically small antennas: The art of miniaturization
Electrically small antennas: The art of miniaturization
Ā 
Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207Volume 2-issue-6-2205-2207
Volume 2-issue-6-2205-2207
Ā 
Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199Volume 2-issue-6-2195-2199
Volume 2-issue-6-2195-2199
Ā 
Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204Volume 2-issue-6-2200-2204
Volume 2-issue-6-2200-2204
Ā 
Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194Volume 2-issue-6-2190-2194
Volume 2-issue-6-2190-2194
Ā 
Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185Volume 2-issue-6-2177-2185
Volume 2-issue-6-2177-2185
Ā 
Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176Volume 2-issue-6-2173-2176
Volume 2-issue-6-2173-2176
Ā 
Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172Volume 2-issue-6-2165-2172
Volume 2-issue-6-2165-2172
Ā 
Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164Volume 2-issue-6-2159-2164
Volume 2-issue-6-2159-2164
Ā 
Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158Volume 2-issue-6-2155-2158
Volume 2-issue-6-2155-2158
Ā 
Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154Volume 2-issue-6-2148-2154
Volume 2-issue-6-2148-2154
Ā 
Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147Volume 2-issue-6-2143-2147
Volume 2-issue-6-2143-2147
Ā 
Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124Volume 2-issue-6-2119-2124
Volume 2-issue-6-2119-2124
Ā 
Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142Volume 2-issue-6-2139-2142
Volume 2-issue-6-2139-2142
Ā 
Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138Volume 2-issue-6-2130-2138
Volume 2-issue-6-2130-2138
Ā 
Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129Volume 2-issue-6-2125-2129
Volume 2-issue-6-2125-2129
Ā 
Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118Volume 2-issue-6-2114-2118
Volume 2-issue-6-2114-2118
Ā 
Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113Volume 2-issue-6-2108-2113
Volume 2-issue-6-2108-2113
Ā 
Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107Volume 2-issue-6-2102-2107
Volume 2-issue-6-2102-2107
Ā 
Volume 2-issue-6-2098-2101
Volume 2-issue-6-2098-2101Volume 2-issue-6-2098-2101
Volume 2-issue-6-2098-2101
Ā 

Recently uploaded

Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
Ā 
A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusZilliz
Ā 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...Zilliz
Ā 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
Ā 
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu SubbuApidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbu
Apidays Singapore 2024 - Modernizing Securities Finance by Madhu Subbuapidays
Ā 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...apidays
Ā 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDropbox
Ā 
Navi Mumbai Call Girls šŸ„° 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls šŸ„° 8617370543 Service Offer VIP Hot ModelNavi Mumbai Call Girls šŸ„° 8617370543 Service Offer VIP Hot Model
Navi Mumbai Call Girls šŸ„° 8617370543 Service Offer VIP Hot ModelDeepika Singh
Ā 
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot TakeoffStrategize a Smooth Tenant-to-tenant Migration and Copilot Takeoff
Strategize a Smooth Tenant-to-tenant Migration and Copilot Takeoffsammart93
Ā 
Artificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyArtificial Intelligence Chap.5 : Uncertainty
Artificial Intelligence Chap.5 : UncertaintyKhushali Kathiriya
Ā 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
Ā 
Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...Apidays New York 2024 - The value of a flexible API Management solution for O...
Apidays New York 2024 - The value of a flexible API Management solution for O...apidays
Ā 
A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?A Year of the Servo Reboot: Where Are We Now?
A Year of the Servo Reboot: Where Are We Now?Igalia
Ā 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel AraĆŗjo
Ā 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...apidays
Ā 
weighting matrix, text classifier.

Introduction:
A feature vector contains a set of features that are used to classify a text. The dimensionality of this feature vector plays a major role in text classification: if a document set contains, for example, 100,000 distinct words, classifying the text becomes a difficult task. To solve this problem, feature reduction approaches are applied before the text is classified.

The feature clustering methods proposed so far follow hard clustering, and the mean and variance of a cluster are not considered; they also impose a burden on the user by requiring the number of clusters to be specified in advance.

We propose a fuzzy based incremental feature clustering algorithm, an incremental feature clustering [5][6] approach that reduces the number of features for the text classification task. The feature vectors of a document set are grouped into clusters following clustering properties, and each cluster is
characterized by a membership function with a statistical mean and deviation. The desired number of clusters is thus formed automatically. We then take one extracted feature from each cluster, which is a weighted combination of the words contained in that cluster. With our algorithm the derived membership functions match closely with the real distribution of the training data, and the user does not need to specify the number of features in advance.

The main advantages of the proposed work are:
• A fuzzy incremental feature clustering (FIFC) algorithm, an incremental clustering approach that reduces the dimensionality of the features in text classification.
• The number of features is determined automatically.
• The membership functions match closely with the real distribution of the training data.
• It runs faster than other methods.
• It produces better extracted features than other methods.

Background and Related Work:
Let D = <d1, d2, …, dn> be a document set of n documents, where d1, d2, …, dn are individual documents and each document belongs to one of the classes in the set {c1, c2, …, cp}. If a document belongs to two or more classes, then two or more copies of the document with different class labels are included in D. Let the word set W = {w1, w2, …, wm} be the feature vector of the document set. The feature reduction task is to find a new word set W0, smaller than W, that works equally well for all the desired properties with D. Based on the new feature vector, the documents are classified into the corresponding classes.

Dimensionality Reduction of the Feature Vector:
In general, there are two ways of doing feature reduction: feature selection and feature extraction. In feature selection approaches, a new feature set W0 is obtained which is a subset of the original feature set W; W0 is then used as the input for classification tasks. Information Gain (IG) is frequently employed in the feature selection approach. Feature clustering is an efficient approach for feature reduction which groups all features into clusters, where the features in a cluster are similar to each other. The feature clustering methods proposed before are "hard" clustering methods, where each word of the original features belongs to exactly one word cluster; therefore each word contributes to the synthesis of only one new feature, and each new feature is obtained by summing up the words belonging to one cluster.

2(a). Proposed Method:
There are some drawbacks to the existing methods. First of all, the user needs to specify the number of clusters in advance. Second, when calculating the similarities, the variance of the underlying cluster is not considered. Third, all words in a cluster have the same degree of contribution to the resulting extracted feature. Our fuzzy incremental feature clustering algorithm is proposed to deal with these issues.

Suppose we are given a document set D of n documents d1, d2, …, dn together with a feature vector W of m words w1, w2, …, wm, and p classes c1, c2, …, cp. We then construct one word pattern for each word in W. For word wi, its word pattern xi = <xi1, xi2, …, xip> describes how wi is distributed over the p classes; here dqi denotes the number of occurrences of wi in document dq, for 1 ≤ j ≤ p (a reconstruction of the defining formula is sketched below).
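A plausible form of the word pattern definition, assuming (as the surrounding description suggests) that each component estimates the probability of class cj given word wi from the occurrence counts dqi:

    x_i = \langle x_{i1}, x_{i2}, \ldots, x_{ip} \rangle,
    \qquad
    x_{ij} = P(c_j \mid w_i) \approx
    \frac{\sum_{q=1}^{n} d_{qi}\,\delta_{qj}}{\sum_{q=1}^{n} d_{qi}},
    \quad 1 \le j \le p,

where \delta_{qj} = 1 if document d_q belongs to class c_j and \delta_{qj} = 0 otherwise.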
We therefore have m word patterns in total, and it is these word patterns that our clustering algorithm works on. Our goal is to group the words in W into clusters based on these word patterns. A cluster contains a certain number of word patterns and is characterized by the product of p one-dimensional Gaussian functions; Gaussian functions [7][8] are adopted because of their superiority over other functions in performance. Let G be a cluster containing q word patterns xj = <xj1, xj2, …, xjp>, 1 ≤ j ≤ q. The mean m = <m1, m2, …, mp> and the deviation σ = <σ1, σ2, …, σp> of G are the componentwise mean and standard deviation of these word patterns, for 1 ≤ j ≤ p, where |G| denotes the size of G. The fuzzy similarity of a word pattern x = <x1, x2, …, xp> to cluster G is defined by a membership function µG(x), with 0 ≤ µG(x) ≤ 1.

2(b). Fuzzy Based Incremental Feature Clustering:
Our clustering algorithm is an incremental, self-constructing algorithm. Word patterns are considered one by one. No clusters exist at the beginning, and clusters are created if necessary. For each word pattern, the similarity of this word pattern to each existing cluster is calculated to decide whether it is combined into an existing cluster or a new cluster is created. Once a new cluster is created, the corresponding membership function should be initialized; on the contrary, when the word pattern is combined into an existing cluster, the membership function of that cluster should be updated accordingly.

Let k be the number of currently existing clusters G1, G2, …, Gk. Each cluster Gj has mean mj = <mj1, mj2, …, mjp> and deviation σj = <σj1, σj2, …, σjp>, and Sj denotes the size of cluster Gj. Initially k = 0, so no clusters exist at the beginning. For each word pattern xi = <xi1, xi2, …, xip>, 1 ≤ i ≤ m, we calculate the similarity of xi to each existing cluster. For 1 ≤ j ≤ k, we say that xi passes the similarity test on cluster Gj if µGj(xi) ≥ ρ, where ρ, 0 ≤ ρ ≤ 1, is a predefined threshold. If the user intends to have larger clusters, a smaller threshold can be given; otherwise, a bigger threshold can be given. As the threshold increases, the number of clusters also increases.

Two cases may occur. First, there is no existing cluster on which xi passes the similarity test. In this case, we assume that xi is not similar enough to any existing cluster, and a new cluster Gh, h = k + 1, is created with mean mh = xi and deviation σh = σ0, where σ0 is a user-defined constant vector. The new cluster Gh contains only one member, the word pattern xi, so its deviation would be zero, and a deviation of zero cannot be used in calculating fuzzy similarities; hence the deviation of a newly created cluster is initialized to σ0. The number of clusters k is then increased by 1, and the size of cluster Gh is initialized to Sh = 1. A short sketch of this incremental step is given below.
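The following is a minimal Python sketch of the incremental step described above: a Gaussian-product membership function, the similarity test against the threshold ρ, creation of a new cluster when no cluster passes the test, and a merge into the best-matching cluster otherwise (the merge case is described in the next paragraph). The threshold value, σ0, and the simple sum-based mean/deviation update are illustrative assumptions, not the paper's exact formulas.

    import math

    class Cluster:
        """One fuzzy cluster, characterized by a per-dimension mean and deviation."""

        def __init__(self, pattern, sigma0):
            self.size = 1
            self.sum = list(pattern)                  # per-dimension sums
            self.sumsq = [v * v for v in pattern]     # per-dimension sums of squares
            self.sigma0 = sigma0

        def mean(self):
            return [s / self.size for s in self.sum]

        def dev(self):
            # A single-member cluster would have deviation zero, which cannot be
            # used in the fuzzy similarity, so sigma0 is used instead.
            if self.size == 1:
                return [self.sigma0] * len(self.sum)
            out = []
            for s, sq in zip(self.sum, self.sumsq):
                var = max(sq / self.size - (s / self.size) ** 2, 0.0)
                out.append(math.sqrt(var) or self.sigma0)
            return out

        def membership(self, x):
            # Product of p one-dimensional Gaussian functions.
            mu = 1.0
            for xj, mj, sj in zip(x, self.mean(), self.dev()):
                mu *= math.exp(-((xj - mj) / sj) ** 2)
            return mu

        def add(self, x):
            # Merge word pattern x into this cluster (incremental update).
            self.size += 1
            for j, xj in enumerate(x):
                self.sum[j] += xj
                self.sumsq[j] += xj * xj

    def cluster_word_patterns(word_patterns, rho=0.64, sigma0=0.25):
        """Incrementally group word patterns into clusters (sketch)."""
        clusters = []
        for x in word_patterns:
            sims = [c.membership(x) for c in clusters]
            if not sims or max(sims) < rho:
                clusters.append(Cluster(x, sigma0))        # case 1: new cluster
            else:
                clusters[sims.index(max(sims))].add(x)     # case 2: merge into best cluster
        return clusters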
Second, if there are existing clusters on which xi has passed the similarity test, let cluster Gt be the cluster with the largest membership degree. In this case, cluster Gt is modified as follows: its size St is increased by one, and its mean mt and deviation σt are updated incrementally to take the new word pattern xi into account.

We briefly discuss the computational cost of our method and compare it with DC [9], IOC [10], and IG [11]. For an input pattern, we have to calculate the similarity between the input pattern and every existing cluster. Each pattern consists of p components, where p is the number of classes in the document set. Therefore, in the worst case, the time complexity of our method is O(mkp), where m is the number of original features and k is the number of clusters finally obtained. For DC, the complexity is O(mkpt), where t is the number of iterations to be done. The complexity of IG is O(mp + m log m), and the complexity of IOC is O(mkpn), where n is the number of documents involved. Apparently, IG is the quickest one, and our method is better than DC and IOC.

3. Feature Extraction Using the Weighting Matrix:
Feature extraction can be expressed in the form D' = DT, where D = [d1, d2, …, dn] is the original document representation, D' = [d1', d2', …, dn'] is the reduced representation with di' = diT for 1 ≤ i ≤ n, and T is a weighting matrix. The goal of feature reduction is achieved by finding an appropriate T such that k is smaller than m. In the divisive information-theoretic feature clustering algorithm, the elements of T are binary: an element is 1 if the word belongs to the corresponding cluster and 0 otherwise.

By applying our feature clustering algorithm, the word patterns have been grouped into clusters, and the words in the feature vector W are clustered accordingly. For one cluster, we have one extracted feature; since we have k clusters, we have k extracted features. The elements of T are derived from the obtained clusters, and feature extraction is then performed. We propose three weighting approaches: hard, soft, and mixed. In the hard-weighting approach, each word is only allowed to belong to one cluster, so it contributes to only one new extracted feature: tij is 1 for the single cluster to which wi belongs and 0 for all other clusters (a short sketch follows below).
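As a rough illustration of the extraction step, the sketch below builds a hard-weighting matrix T and applies D' = DT. It reuses the Cluster objects from the previous sketch; assigning each word to the cluster with the largest membership degree for its word pattern is an assumption of this sketch, and the soft and mixed weightings described next would fill T with membership-based values instead of 0/1 entries.

    import numpy as np

    def hard_weighting_matrix(clusters, word_patterns):
        # Hard weighting: each word contributes to exactly one extracted feature,
        # here the cluster with the largest membership degree for its pattern.
        m, k = len(word_patterns), len(clusters)
        T = np.zeros((m, k))
        for i, x in enumerate(word_patterns):
            memberships = [c.membership(x) for c in clusters]
            T[i, int(np.argmax(memberships))] = 1.0
        return T

    def extract_features(D, T):
        # D: n x m document-word matrix; returns D' = D T, the n x k reduced data.
        return np.asarray(D) @ T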
In the soft-weighting approach, each word is allowed to contribute to all new extracted features, with the degrees of contribution depending on the values of the membership functions; in this case tij is the membership value of the word pattern of wi in cluster Gj. The mixed-weighting approach is a combination of the hard-weighting and soft-weighting approaches: the elements of T blend the hard and soft values through a user-selected parameter γ.

By selecting the value of γ, we provide flexibility to the user. When the similarity threshold is small, the number of clusters is small and each cluster covers more training patterns; in this case a smaller γ, favoring soft-weighting, gives a higher accuracy. When the similarity threshold is large, the number of clusters is large and each cluster covers fewer training patterns; in this case a larger γ, favoring hard-weighting, gives a higher accuracy.

4. Classification of Text Data:
Given a set D of training documents, text classification can be done as follows. We specify the similarity threshold ρ and apply our clustering algorithm. Assume that k clusters are obtained for the words in the feature vector W. We then find the weighting matrix T and convert D to D'. Using D' as training data, a text classifier based on support vector machines (SVM) is built. SVM is a kernel method which finds the maximum-margin hyperplane in feature space separating the images of the training patterns into two groups [12][13]. Slack variables ξi are introduced to account for misclassifications. The objective function and constraints of the classification problem can be formulated as minimizing (1/2)‖w‖² + C Σi ξi subject to yi(w·Ø(xi) + b) ≥ 1 − ξi and ξi ≥ 0, where l is the number of training patterns, C is a parameter which gives a tradeoff between maximum margin and classification error, and yi, being +1 or −1, is the target label of pattern xi. Ø : X → F is a mapping from the input space to the feature space F, where patterns are more easily separated, and w·Ø(x) + b = 0 is the hyperplane to be derived, with w and b being the weight vector and offset, respectively. We follow this idea to construct an SVM-based classifier. Suppose d is an unknown document. We first convert d to d' by d' = dT and then feed d' to the classifier. We get p values, one from each SVM, and d belongs to those classes with 1 appearing at the outputs of their corresponding SVMs (a sketch of this pipeline follows below).
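A minimal end-to-end sketch of the classification stage, assuming scikit-learn is available and using a linear SVM in place of the kernel machine described above; D, T, and the binary class-indicator matrix Y stand for the quantities defined in the text, and the function names are illustrative.

    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import LinearSVC

    def train_text_classifier(D, T, Y):
        """D: n x m document-word matrix, T: m x k weighting matrix,
        Y: n x p binary class-indicator matrix (a document may belong
        to several classes). Returns one SVM per class."""
        D_reduced = np.asarray(D) @ T            # D' = DT, n x k extracted features
        clf = OneVsRestClassifier(LinearSVC())   # one binary SVM per class
        clf.fit(D_reduced, Y)
        return clf

    def classify_document(clf, d, T):
        """Convert an unknown document d (length-m count vector) by d' = dT,
        then report the classes whose SVM outputs 1."""
        d_reduced = np.asarray(d) @ T
        return clf.predict(d_reduced.reshape(1, -1))[0]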
5. Conclusions:
We have presented a fuzzy based incremental feature clustering (FIFC) algorithm, an incremental clustering approach that reduces the dimensionality of the features in text classification. Features that are similar to each other are placed in the same cluster, and new clusters are formed automatically if a word is not similar to any existing cluster. Each cluster so formed is characterized by a membership function with a statistical mean and deviation. With our work the derived membership functions match closely with the real distribution of the training data, and the burden on the user of specifying the number of extracted features in advance is removed. Experimental results show that our method runs faster and obtains better extracted features than existing methods.

6. References:
[1] Y. Yang and J.O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proc. 14th Int'l Conf. Machine Learning, pp. 412-420, 1997.
[2] D.D. Lewis, "Feature Selection and Feature Extraction for Text Categorization," Proc. Workshop Speech and Natural Language, pp. 212-217, 1992.
[3] H. Li, T. Jiang, and K. Zang, "Efficient and Robust Feature Extraction by Maximum Margin Criterion," in T. Sebastian, S. Lawrence, and S. Bernhard, eds., Advances in Neural Information Processing Systems, pp. 97-104, Springer, 2004.
[4] L.D. Baker and A. McCallum, "Distributional Clustering of Words for Text Classification," Proc. ACM SIGIR, pp. 96-103, 1998.
[5] L.D. Baker and A. McCallum, "Distributional Clustering of Words for Text Classification," Proc. ACM SIGIR, pp. 96-103, 1998.
[6] R. Bekkerman, R. El-Yaniv, N. Tishby, and Y. Winter, "Distributional Word Clusters versus Words for Text Categorization," J. Machine Learning Research, vol. 3, pp. 1183-1208, 2003.
[7] J. Yen and R. Langari, Fuzzy Logic: Intelligence, Control, and Information. Prentice-Hall, 1999.
[8] J.S. Wang and C.S.G. Lee, "Self-Adaptive Neurofuzzy Inference Systems for Classification Applications," IEEE Trans. Fuzzy Systems, vol. 10, no. 6, pp. 790-802, Dec. 2002.
[9] I.S. Dhillon, S. Mallela, and R. Kumar, "A Divisive Information-Theoretic Feature Clustering Algorithm for Text Classification," J. Machine Learning Research, vol. 3, pp. 1265-1287, 2003.
[10] J. Yan, B. Zhang, N. Liu, S. Yan, Q. Cheng, W. Fan, Q. Yang, W. Xi, and Z. Chen, "Effective and Efficient Dimensionality Reduction for Large-Scale and Streaming Data Preprocessing," IEEE Trans. Knowledge and Data Eng., vol. 18, no. 3, pp. 320-333, Mar. 2006.
[11] Y. Yang and J.O. Pedersen, "A Comparative Study on Feature Selection in Text Categorization," Proc. 14th Int'l Conf. Machine Learning, pp. 412-420, 1997.
[12] B. Schölkopf and A.J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[13] J. Shawe-Taylor and N. Cristianini, Kernel Methods for Pattern Analysis. Cambridge Univ. Press, 2004.

7. About the Authors:
Anil Kumar Reddy Tetali is currently pursuing his M.Tech in Computer Science and Engineering at BVC Engineering College, Odalarevu.
B P N Madhu Kumar is currently working as an Associate Professor in the Computer Science and Engineering Department, BVC Engineering College, Odalarevu. His research interests include data mining and web mining.
K. Chandra Kumar is currently working as an Associate Professor in the Computer Science and Engineering Department, VSL Engineering College, Kakinada. His research interests include data mining and text mining.