



An Algorithm for Incremental Unsupervised Learning and Topology Representation

Shen Furao
Hasegawa Lab
Department of Computational Intelligence and Systems Science




Contents

- Chapter 1: Introduction
- Chapter 2: Vector Quantization
- Chapter 3: Adaptive Incremental LBG
- Chapter 4: Experiments with Adaptive Incremental LBG
- Chapter 5: Self-Organizing Incremental Neural Network
- Chapter 6: Experiments with Artificial Data
- Chapter 7: Applications
- Chapter 8: Conclusion and Discussion




Introduction

- Clustering: construct decision boundaries based on unlabeled data.
- Topology learning: find a topological structure that closely reflects the topology of the data distribution.
- Online incremental learning: adapt to new information without corrupting previously learned information.




Vector Quantization

- Target
  - Minimize the average distortion through a suitable choice of codewords
- Applications
  - Data compression, speech recognition
  - Partition the data set into Voronoi regions and find the centroid of each region
- LBG method (Linde, Buzo & Gray, 1980)
  - Depends on the initial starting conditions
  - Tends to get stuck in local minima
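The LBG iteration just described (partition the data into Voronoi regions, then move each codeword to its region's centroid) can be sketched as below; the data and codebook size are illustrative, not from the slides.

```python
import numpy as np

def lbg(data, k, iters=50, seed=0):
    """Plain LBG codebook training: alternate Voronoi partition
    and centroid update for a fixed number of iterations."""
    rng = np.random.default_rng(seed)
    codebook = data[rng.choice(len(data), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance from every vector to every codeword
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        nearest = d.argmin(1)                 # Voronoi region of each vector
        for j in range(k):
            members = data[nearest == j]
            if len(members):                  # an empty region keeps its codeword
                codebook[j] = members.mean(0)
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, d.min(1).mean()          # codebook and mean distortion (MQE)

pts = np.random.default_rng(1).normal(size=(500, 2))
cb, mqe = lbg(pts, k=4)
```

Because the initial codewords are drawn at random, different seeds can converge to different local minima, which is exactly the weakness the next slide's method targets.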


Adaptive Incremental LBG (Shen & Hasegawa, 2005)

- Solves the problems caused by poorly chosen initial conditions
  - Independent of the initial conditions
- With a fixed number of codewords, finds a codebook that minimizes the distortion error (MQE)
  - Performs as well as or better than ELBG (Patane & Russo, 2001)
- With a fixed distortion error, minimizes the number of codewords while finding a suitable codebook
  - Meaning: to reach the same reconstruction quality, different vector sets get codebooks of different sizes, which can save a great deal of storage.




Test Image

- Lena (512*512*8) is divided into 4*4 blocks; these blocks are the input vectors, 16384 vectors in total.
- Peak Signal-to-Noise Ratio (PSNR) is used to evaluate the images reconstructed after quantization:

  PSNR = 10 * log10( 255^2 / ( (1/N) * sum_{i=1..N} (f(i) - g(i))^2 ) )

[Figure: Lena (512*512*8)]
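As a quick sketch, the PSNR above can be computed directly from the pixel-wise mean squared error (the tiny test image here is illustrative):

```python
import numpy as np

def psnr(f, g):
    """PSNR in dB for 8-bit images: 10 * log10(255^2 / MSE)."""
    mse = np.mean((f.astype(float) - g.astype(float)) ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = a.copy()
b[0, 0] = 16                      # a single pixel off by 16
print(round(psnr(a, b), 2))       # 36.09
```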


Improvement I: Incrementally Inserting Codewords

- The optimal solution of the k-clustering problem can be reached from the optimal solution of the (k-1)-clustering problem.
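One way to act on this observation is to grow the codebook one codeword at a time, refining after each insertion. The farthest-point insertion heuristic below is an illustrative stand-in, not the paper's exact insertion rule:

```python
import numpy as np

def lloyd_pass(data, codebook):
    """One partition-and-centroid refinement step."""
    d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    nearest = d.argmin(1)
    for j in range(len(codebook)):
        members = data[nearest == j]
        if len(members):
            codebook[j] = members.mean(0)
    return codebook

def grow_codebook(data, k_target, refine_steps=10):
    """Grow from the optimal 1-clustering (the global centroid)
    toward k_target codewords, refining after every insertion."""
    codebook = data.mean(0, keepdims=True)
    while len(codebook) < k_target:
        d = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1).min(1)
        codebook = np.vstack([codebook, data[d.argmax()]])  # worst-served point
        for _ in range(refine_steps):
            codebook = lloyd_pass(data, codebook)
    return codebook

pts = np.random.default_rng(2).normal(size=(300, 2))
cb = grow_codebook(pts, k_target=5)
```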


Improvement II: Distance Measure Function

- The within-cluster distance must be significantly less than the between-cluster distance.

  d(x, c) = ( sum_{i=1..l} (x_i - c_i)^2 )^p,   p = log10(q) + 1
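Reading the lost glyphs on the slide as "=" and "+", the measure is d(x, c) = (sum of squared differences)^p with p = log10(q) + 1. The slide does not say what q denotes, so it is left as a plain parameter in this sketch:

```python
import numpy as np

def adaptive_distance(x, c, q):
    """d(x, c) = (sum_i (x_i - c_i)^2) ** p with p = log10(q) + 1.

    q is not defined on the slide; it is treated here as a free parameter.
    """
    p = np.log10(q) + 1.0
    return float(((np.asarray(x, float) - np.asarray(c, float)) ** 2).sum() ** p)

# with q = 1 the exponent is 1, i.e. plain squared Euclidean distance
print(adaptive_distance([0, 3], [4, 0], q=1))   # 25.0
```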


Improvement III: Delete and Insert Codewords

- Delete the codeword with the lowest local distortion error.
- Insert a new codeword near the codeword with the highest local distortion error.




Experiment 1

PSNR (dB) by codebook size:

Number of    LBG (Linde      Mk (Lee         ELBG             AILBG
codewords    et al., 1980)   et al., 1997)   (Patane, 2001)
 256         31.60           31.92           31.94            32.01
 512         32.49           33.09           33.14            33.22
1024         33.37           34.42           34.59            34.71

Meaning: with the same number of codewords, the proposed method achieves the highest PSNR; that is, at the same compression ratio it gives the best reconstruction quality.




Experiment 2

Number of codewords needed to reach a given PSNR:

PSNR     ELBG (Patane, 2001)   AILBG
31.94     256                   244
33.14     512                   488
34.59    1024                   988

Meaning: for a predefined reconstruction quality, the proposed method finds a good codebook with a reasonable number of codewords.




Experiment 3: Original Images

[Figures: Boat; Gray21]




Results of Experiment 3

Number of codewords at a given PSNR:

PSNR (dB)   Gray21   Lena   Boat
28.0             9     22     54
30.0            12     76    199
33.0            15    454   1018

Meaning:
1. For different images, the same PSNR requires different numbers of codewords.
2. The proposed method can therefore build an image database with uniform reconstruction quality (PSNR).




Unsupervised Learning

- Clustering
  - K-means (King, 1967), ELBG (Patane, 2001), Global k-means (Likas, 2003), AILBG (Shen, 2005)
    - Require the number of clusters k in advance
    - Suit only data sets consisting of isotropic clusters
  - Single-link (Sneath, 1973), complete-link (King, 1967), CURE (Guha, 1998)
    - High computational cost and large memory requirements
    - Unsuitable for large data sets or online data
- Topology learning: reflect the topology of a high-dimensional data distribution
  - SOM (Kohonen, 1982): predetermined structure and size
  - CHL+NG (Martinetz, 1994): a priori decision about the network size
  - GNG (Fritzke, 1995): permanent increase in the number of nodes
- Online learning
  - GNG-U (Fritzke, 1998): destroys learned knowledge
  - LLCS (Hamker, 2001): supervised learning



Self-Organizing Incremental Neural Network (Shen & Hasegawa, 2005)

1. Processes online, non-stationary data.
2. Performs unsupervised learning without any a priori conditions such as:
   - a suitable number of nodes
   - a good initial codebook
   - the number of classes
3. Reports a suitable number of classes.
4. Represents the topological structure of the input probability density.
5. Separates classes that have some low-density overlaps.
6. Detects the main structure of clusters corrupted by noise.




The Proposed Algorithm

[Diagram: input pattern → first-layer growing network → first output → second-layer growing network → second output; each layer inserts nodes, deletes nodes, and classifies]




Algorithms

- Insert new nodes
  - Criterion: nodes with high accumulated error serve as the criterion for inserting a new node
  - An error radius is used to judge whether the insertion is successful
- Delete nodes
  - Criterion: remove nodes in low-probability-density regions
  - Realization: delete nodes with no or only one direct topological neighbor
- Classify
  - Criterion: all nodes linked by edges form one cluster
[Flowchart: first-layer / second-layer procedure]

First layer (the second layer runs the same procedure on the first layer's output):
1. Initialize, then read the next input signal.
2. Find the winner and the second winner among the existing nodes.
3. If the input calls for between-class insertion, insert it as a new node; otherwise connect the winner and second winner with an edge and update the weights of the winner and its neighbors.
4. Whenever the number of input signals reaches a multiple of λ, perform within-class insertion, judge whether the insertion is successful, and delete overlapping and noise nodes.
5. Whenever the number of input signals reaches a multiple of LT, pass the results on (to the second layer, or as final output).
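A minimal sketch of the first-layer loop in the flowchart, assuming a single fixed similarity threshold and fixed learning rates; the real algorithm adapts these per node and also runs the periodic within-class insertion and noise-deletion steps, which are omitted here.

```python
import numpy as np

def first_layer_step(x, nodes, edges, threshold, eps_w=0.1, eps_n=0.01):
    """One update: find winner and second winner; insert x as a new node
    if it is too far from either (between-class insertion); otherwise
    connect the two winners and pull them toward x."""
    x = np.asarray(x, float)
    if len(nodes) < 2:
        nodes.append(x.copy())
        return
    d = np.array([((x - w) ** 2).sum() for w in nodes])
    s1, s2 = (int(i) for i in np.argsort(d)[:2])
    if d[s1] > threshold or d[s2] > threshold:
        nodes.append(x.copy())            # between-class insertion
        return
    edges.add(frozenset((s1, s2)))        # connect winner and second winner
    nodes[s1] += eps_w * (x - nodes[s1])  # move winner toward x
    for e in edges:                       # move the winner's direct neighbors
        if s1 in e:
            (n,) = e - {s1}
            nodes[n] += eps_n * (x - nodes[n])

nodes, edges = [], set()
for p in [[0, 0], [1, 0], [0.5, 0.1], [5, 5]]:
    first_layer_step(p, nodes, edges, threshold=2.0)
# the first three inputs form one connected cluster; [5, 5] becomes a new node
```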




Experiment

Class membership in each environment (1 = class present):

      I   II  III  IV   V   VI  VII
A     1    0    1   0    0    0    0
B     0    1    0   1    0    0    0
C     0    0    1   0    0    1    0
D     0    0    0   1    1    0    0
E1    0    0    0   0    1    0    0
E2    0    0    0   0    0    1    0
E3    0    0    0   0    0    0    1

[Figure: original data set]

    Experiment:
    Stationary environment




[Figures: original data set; GNG (Fritzke, 1995)]
        Experiment:
        Stationary environment




[Figures: proposed method, first layer; proposed method, final results]



  Experiment:
  Non-stationary environment




[Figures: GNG (Fritzke, 1995); GNG-U (Fritzke, 1998)]


Experiment:
Non-stationary environment




[Figure: proposed method, first layer]


Experiment:
Non-stationary environment




[Figure: proposed method, first layer]



Experiment:
Non-stationary environment




[Figure: proposed method, first layer]


       Experiment:
       Non-stationary environment




[Figures: proposed method, first layer; proposed method, final output]
Application: Face Recognition (ATT_FACE)

Facial images:
[Figure (a): 10 classes]
[Figure (b): 10 samples of class 1]

Face Recognition: Feature Vectors

[Figures: feature vector of (a); feature vector of (b)]

Face Recognition: Results

10 clusters found.

Stationary correct recognition ratio: 90%
Non-stationary correct recognition ratio: 86%


Application: Vector Quantization

[Figures: original Lena (512*512*8); stationary-environment decoded image, 130 nodes, 0.45 bpp, PSNR = 30.79 dB]


Vector Quantization: Comparison with GNG

Stationary environment:

                      Number of nodes   bpp    PSNR (dB)
First layer           130               0.45   30.79
GNG (Fritzke, 1995)   130               0.45   29.98
Second layer           52               0.34   29.29
GNG                    52               0.34   28.61

Vector Quantization: Non-stationary Environment

[Figures: first layer, 499 nodes, 0.56 bpp, PSNR = 32.91 dB; second layer, 64 nodes, 0.375 bpp, PSNR = 29.66 dB]


Application: Handwritten Character Recognition

- Optical Recognition of Handwritten Digits database (optdigits) (UCI repository, 1996)
  - 10 classes (handwritten digits) from a total of 43 people
  - 30 people contributed the training set: 3823 samples
  - The remaining 13 contributed the test set: 1797 samples
  - Sample dimension: 64
- Method
  - Training: a separate SOINN describes each class of data
  - Testing: classify an unknown data point according to whichever model gives the best match (nearest neighbor)
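The test rule above is nearest-prototype classification: each class keeps the node set its model learned, and a query takes the label of the class whose closest prototype is nearest overall. A sketch with hand-picked toy prototypes (any per-class quantizer could produce them):

```python
import numpy as np

def classify(x, prototypes_by_class):
    """Return the label whose nearest prototype is closest to x."""
    x = np.asarray(x, float)
    best_label, best_d = None, float("inf")
    for label, protos in prototypes_by_class.items():
        d = ((np.asarray(protos, float) - x) ** 2).sum(axis=1).min()
        if d < best_d:
            best_label, best_d = label, d
    return best_label

# two toy classes, each represented by a few prototypes
models = {"0": [[0.0, 0.0], [0.2, 0.1]], "1": [[1.0, 1.0], [0.9, 1.1]]}
print(classify([0.9, 0.8], models))   # 1
```

Keeping only a few hundred prototypes per class instead of all training samples is what produces the speed-up and memory savings reported on the next slide.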



Optdigits: Comparison with 1-NN

                     1-NN    Proposed method
                             (1)     (2)     (3)     (4)
Recognition ratio    98%     98.5%   97.1%   96.5%   96.0%
No. of prototypes    3823    845     544     415     334
Speed-up (times)     1       4.53    7.02    9.21    11.45
Memory               100%    22.1%   14.2%   10.8%   8.7%




Optdigits: Comparison with SVM

                     Traditional SVM         Improved SVM (Passerini, 2002)   Proposed
                     One-vs-All  All-pairs   One-vs-All  All-pairs            method
Recognition ratio    97.2        97.4        98.2        98.1                 98.5

Gaussian kernel used in all cases.




Application: Others

- Humanoid robot
- Scene recognition
- Texture recognition
- Semi-supervised learning




Journal Papers (2003-2005)

1. Shen Furao & Osamu Hasegawa, "An adaptive incremental LBG for vector quantization," Neural Networks, accepted.
2. Shen Furao & Osamu Hasegawa, "An incremental network for on-line unsupervised classification and topology learning," Neural Networks, accepted.
3. Shen Furao & Osamu Hasegawa, "Fractal image coding with simulated annealing search," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 9, No. 1, pp. 80-88, 2005.
4. Shen Furao & Osamu Hasegawa, "A fast no search fractal image coding method," Signal Processing: Image Communication, Vol. 19, pp. 393-404, 2004.
5. Shen Furao & Osamu Hasegawa, "A growing neural network for online unsupervised learning," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol. 8, No. 2, pp. 121-129, 2004.


Refereed International Conferences (2003-2005)
1.   Shen Furao, Youki Kamiya & Osamu Hasegawa, “An incremental neural network for online
     supervised learning and topology representation,” 12th International Conference on Neural
     Information Processing (ICONIP 2005), Taipei, Taiwan, October 30 - November 2, 2005, accepted.
2.   Shen Furao & Osamu Hasegawa, “An incremental k-means clustering algorithm with adaptive
     distance measure,” 12th International Conference on Neural Information Processing (ICONIP
     2005), Taipei, Taiwan, October 30 - November 2, 2005, accepted.
3.   Shen Furao & Osamu Hasegawa, “An on-line learning mechanism for unsupervised classification
     and topology representation,” IEEE Computer Society International Conference on Computer
     Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, June 21-26, 2005.
4.   Shen Furao & Osamu Hasegawa, “An incremental neural network for non-stationary unsupervised
     learning,” 11th International Conference on Neural Information Processing (ICONIP 2004), Calcutta,
     India, November 22-25, 2004.
5.   Shen Furao & Osamu Hasegawa, “An effective fractal image coding method without search,” IEEE
     International Conference on Image Processing (ICIP 2004), Singapore, October 24-27, 2004.
6.   Youki Kamiya, Shen Furao & Osamu Hasegawa, “Non-stop learning : a new scheme for continuous
     learning and recognition,” Joint 2nd SCIS and 5th ISIS, Keio University, Yokohama, Japan,
     September 21-24, 2004.
7.   Osamu Hasegawa & Shen Furao, “A self-structurizing neural network for online incremental
     learning,” CD-ROM SICE Annual Conference in Sapporo, FAII-5-2, August 4-6, 2004.
8.   Shen Furao & Osamu Hasegawa, “A self-organized growing network for on-line unsupervised
     learning,” 2004 International Joint Conference on Neural Networks (IJCNN 2004), Budapest,
     Hungary, CD-ROM ISBN 0-7803-8360-5, Vol.1, pp.11-16, 2004.
9.   Shen Furao & Osamu Hasegawa, “A fast and less loss fractal image coding method using
     simulated annealing,” 7th Joint Conference on Information Science (JCIS 2003), Cary, North
     Carolina, USA, September 26-30, 2003.

Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking MenDelhi Call girls
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationRadu Cotescu
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...HostedbyConfluent
 

Último (20)

08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
08448380779 Call Girls In Diplomatic Enclave Women Seeking Men
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
Kalyanpur ) Call Girls in Lucknow Finest Escorts Service 🍸 8923113531 🎰 Avail...
 
Unblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen FramesUnblocking The Main Thread Solving ANRs and Frozen Frames
Unblocking The Main Thread Solving ANRs and Frozen Frames
 
SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024SQL Database Design For Developers at php[tek] 2024
SQL Database Design For Developers at php[tek] 2024
 
Google AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAGGoogle AI Hackathon: LLM based Evaluator for RAG
Google AI Hackathon: LLM based Evaluator for RAG
 
Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
Raspberry Pi 5: Challenges and Solutions in Bringing up an OpenGL/Vulkan Driv...
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men08448380779 Call Girls In Friends Colony Women Seeking Men
08448380779 Call Girls In Friends Colony Women Seeking Men
 
Scaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organizationScaling API-first – The story of a global engineering organization
Scaling API-first – The story of a global engineering organization
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
Transforming Data Streams with Kafka Connect: An Introduction to Single Messa...
 

PhDThesis, Dr Shen Furao

6/39

Test Image
• Lena (512*512*8) is separated into 4*4 blocks; these blocks are the input vectors (16384 vectors in total).
• Peak Signal-to-Noise Ratio (PSNR) is used to evaluate the resulting images after the quantization process:

      PSNR = 10 log10( 255^2 / ( (1/N) * Σ_{i=1}^{N} (f(i) − g(i))^2 ) )

(Figure: Lena, 512*512*8)
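The PSNR formula above can be sketched in pure Python (a minimal illustration assuming 8-bit images with peak value 255; `psnr` is a hypothetical helper, not code from the thesis):

```python
import math

def psnr(f, g, peak=255.0):
    """Peak Signal-to-Noise Ratio (dB) between original f and reconstruction g."""
    mse = sum((fi - gi) ** 2 for fi, gi in zip(f, g)) / len(f)  # mean squared error
    return 10.0 * math.log10(peak ** 2 / mse)
```

A higher PSNR means the reconstructed image is closer to the original.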
7/39

Improvement I: Incrementally inserting codewords
• The optimal solution of the k-clustering problem is reachable from the solution of the (k-1)-clustering problem.
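As a rough sketch of this growth strategy (hypothetical helper names; plain Lloyd refinement stands in here for the full AILBG details), the codebook is grown one codeword at a time from the 1-codeword solution:

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n, dim = len(vectors), len(vectors[0])
    return [sum(v[i] for v in vectors) / n for i in range(dim)]

def nearest(codewords, x):
    """Index of the codeword closest to x (squared Euclidean)."""
    return min(range(len(codewords)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codewords[i], x)))

def incremental_codebook(data, k, iters=10):
    """Grow from the 1-codeword solution (the global centroid): after each
    insertion near the highest-distortion codeword, refine with Lloyd steps,
    so the k-codeword solution is reached from the (k-1)-codeword one."""
    codewords = [centroid(data)]
    while len(codewords) < k:
        err = [0.0] * len(codewords)           # per-codeword local distortion
        for x in data:
            i = nearest(codewords, x)
            err[i] += sum((a - b) ** 2 for a, b in zip(codewords[i], x))
        worst = max(range(len(err)), key=lambda i: err[i])
        codewords.append([v + 1e-3 for v in codewords[worst]])  # insert nearby
        for _ in range(iters):                 # Lloyd refinement after insertion
            buckets = [[] for _ in codewords]
            for x in data:
                buckets[nearest(codewords, x)].append(x)
            codewords = [centroid(b) if b else c
                         for b, c in zip(buckets, codewords)]
    return codewords
```

Because each insertion starts from an already-refined smaller codebook, the result does not depend on a randomly chosen initial codebook.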
8/39

Improvement II: Distance measure function
• The within-cluster distance must be significantly less than the between-cluster distance.

      d(x, c) = ( Σ_{i=1}^{l} (x_i − c_i)^2 )^p,   p = log10(q) + 1
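A direct transcription of the distance measure above (assuming my reading of the slide formula, with `q` a data-size parameter and `l` the vector dimension; the helper names are illustrative):

```python
import math

def adaptive_distance(x, c, p):
    """d(x, c) = (sum_i (x_i - c_i)^2)^p  -- raising the squared Euclidean
    distance to the power p stretches large distances relative to small ones."""
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c)) ** p

def exponent(q):
    """p = log10(q) + 1, so p grows slowly with the data size q."""
    return math.log10(q) + 1
```

With p > 1, between-cluster distances are amplified more than within-cluster distances, which is the property the slide asks for.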
9/39

Improvement III: Delete and insert codewords
• Delete the codeword with the lowest local distortion error.
• Insert a new codeword near the codeword with the highest local distortion error.
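This reallocation step can be sketched as follows (an illustrative helper under the stated criterion, not the thesis implementation; `local_error[i]` is the distortion accumulated in codeword i's Voronoi region):

```python
def reallocate_codeword(codewords, local_error, offset=1e-3):
    """Delete the codeword with the lowest local distortion and insert a
    slightly perturbed copy of the codeword with the highest, moving
    representational capacity to where the error is largest."""
    lo = min(range(len(codewords)), key=lambda i: local_error[i])
    hi = max(range(len(codewords)), key=lambda i: local_error[i])
    new_cw = [v + offset for v in codewords[hi]]   # perturbed copy near the worst
    result = [c for i, c in enumerate(codewords) if i != lo]
    result.append(new_cw)
    return result
```

The codebook size stays fixed while codewords migrate from low-distortion to high-distortion regions.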
10/39

Experiment 1

  PSNR (dB) vs. number of codewords:

  Number of    LBG (Linde et   Mk (Lee et    ELBG (Patane    AILBG
  codewords    al., 1980)      al., 1997)    & Russo, 2001)
  256          31.60           31.92         31.94           32.01
  512          32.49           33.09         33.14           33.22
  1024         33.37           34.42         34.59           34.71

• Meaning: with the same number of codewords, the proposed method achieves the highest PSNR, i.e., at the same compression ratio it gives the best reconstruction quality.
11/39

Experiment 2

  PSNR     Number of codewords
  (dB)     ELBG (Patane, 2001)   AILBG
  31.94    256                   244
  33.14    512                   488
  34.59    1024                  988

• Meaning: with a predefined reconstruction quality, the proposed method finds a good codebook with a smaller, reasonable number of codewords.
12/39

Experiment 3: Original Images
(Figures: Boat; Gray21)
13/39

Results of experiment 3

  PSNR (dB)   Number of codewords
              Gray21   Lena   Boat
  28.0        9        22     54
  30.0        12       76     199
  33.0        15       454    1018

• Meaning:
  1. For different images, the same PSNR requires different numbers of codewords.
  2. The proposed method can be used to build an image database with uniform reconstruction quality (PSNR).
14/39

Unsupervised learning
• Clustering
  – K-means (King, 1967), ELBG (Patane, 2001), Global k-means (Likas, 2003), AILBG (Shen, 2005)
    · Must determine the number of clusters k in advance
    · Suited only to data sets consisting of isotropic clusters
  – Single-link (Sneath, 1973), complete-link (King, 1967), CURE (Guha, 1998)
    · Heavy computation, large memory requirements
    · Unsuitable for large data sets or online data
• Topology learning: reflect the topology of a high-dimensional data distribution
  – SOM (Kohonen, 1982): predetermined structure and size
  – CHL+NG (Martinetz, 1994): a priori decision about the network size
  – GNG (Fritzke, 1995): permanent increase in the number of nodes
• Online learning
  – GNG-U (Fritzke, 1998): destroys learned knowledge
  – LLCS (Hamker, 2001): supervised learning
15/39

Self-organizing incremental neural network (Shen & Hasegawa, 2005)
1. Process online non-stationary data.
2. Do unsupervised learning without any a priori conditions such as:
   • a suitable number of nodes
   • a good initial codebook
   • how many classes there are
3. Report a suitable number of classes.
4. Represent the topological structure of the input probability density.
5. Separate classes that have low-density overlaps.
6. Detect the main structure of clusters polluted by noise.
16/39

The proposed algorithm
(Diagram: two-layer pipeline — the input pattern feeds a growing network in the first layer; its output feeds a growing network in the second layer, which produces the final output. Each layer inserts nodes, deletes nodes, and classifies.)
17/39

Algorithms
• Insert new nodes
  – Criterion: nodes with high errors serve as the criterion for inserting a new node
  – The error-radius is used to judge whether the insertion is successful
• Delete nodes
  – Criterion: remove nodes in low-probability-density regions
  – Realization: delete nodes with no or only one direct topological neighbor
• Classify
  – Criterion: all nodes linked by edges form one cluster
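The delete and classify criteria above can be sketched on the node-edge graph (a minimal illustration with hypothetical helper names; nodes are hashable ids, edges are pairs):

```python
def prune_noise(nodes, edges):
    """SOINN delete step (sketch): remove nodes with no or only one direct
    topological neighbor, since they lie in low-density regions."""
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    kept = [n for n in nodes if deg[n] >= 2]
    kept_set = set(kept)
    kept_edges = [(a, b) for a, b in edges if a in kept_set and b in kept_set]
    return kept, kept_edges

def clusters_from_edges(nodes, edges):
    """SOINN classify step (sketch): all nodes linked by edges form one
    cluster, i.e. clusters are the connected components of the graph."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, clusters = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = set(), [n]
        while stack:                      # iterative flood fill
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

Pruning first, then taking connected components, yields the reported number of classes without it being specified in advance.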
18/39

Flowchart: first layer, then second layer
• Initialize; for each input signal, find the winner and the second winner.
• Connect the winner and second winner; update the weights of the winner and its neighbors.
• Each time the number of input signals reaches a multiple of λ: perform within-class and between-class insertion (judging whether each insertion is successful), then delete overlap and noise nodes.
• When the number of input signals reaches a multiple of LT, the first layer finishes; the second layer repeats the procedure on its output, then the results are output.
19/39

Experiment Environment

  Classes of the original data set are presented across seven sequential
  environments I–VII (1 = present, 0 = absent):

        I   II  III  IV  V   VI  VII
  A     1   0   1    0   0   0   0
  B     0   1   0    1   0   0   0
  C     0   0   1    0   0   1   0
  D     0   0   0    1   1   0   0
  E1    0   0   0    0   1   0   0
  E2    0   0   0    0   0   1   0
  E3    0   0   0    0   0   0   1

(Figure: Original Data Set)
20/39

Experiment: Stationary environment
(Figures: Original Data Set; GNG (Fritzke, 1995))
21/39

Experiment: Stationary environment
(Figures: Proposed method, first layer; proposed method, final results)
22/39

Experiment: Non-stationary environment
(Figures: GNG (Fritzke, 1995); GNG-U (Fritzke, 1998))
23/39

Experiment: Non-stationary environment
(Figure: Proposed method, first layer)
24/39

Experiment: Non-stationary environment
(Figure: Proposed method, first layer)
25/39

Experiment: Non-stationary environment
(Figure: Proposed method, first layer)
26/39

Experiment: Non-stationary environment
(Figures: Proposed method, first layer; proposed method, final output)
27/39

Application: Face recognition (ATT_FACE)
• Facial images: (a) 10 classes; (b) 10 samples of class 1
28/39

Face recognition: Feature Vector
(Figures: vector of (a); vector of (b))
29/39

Face recognition: results
• 10 clusters
• Stationary environment: correct recognition ratio 90%
• Non-stationary environment: correct recognition ratio 86%
30/39

Application: Vector Quantization
• Stationary environment, decoding: original Lena (512*512*8) vs. reconstructed image with 130 nodes, 0.45 bpp, PSNR = 30.79 dB
31/39

Vector Quantization: comparison with GNG (stationary environment)

                        Number of nodes   bpp    PSNR (dB)
  First layer           130               0.45   30.79
  GNG (Fritzke, 1995)   130               0.45   29.98
  Second layer          52                0.34   29.29
  GNG                   52                0.34   28.61
32/39

Vector Quantization: non-stationary environment
• First layer: 499 nodes, 0.56 bpp, PSNR = 32.91 dB
• Second layer: 64 nodes, 0.375 bpp, PSNR = 29.66 dB
33/39

Application: Handwritten character recognition
• Optical Recognition of Handwritten Digits database (optdigits) (UCI repository, 1996)
  – 10 classes (handwritten digits) from a total of 43 people
  – 30 people contributed to the training set: 3823 samples
  – A different 13 people contributed to the test set: 1797 samples
  – Dimension of the samples is 64
• Method:
  – Train: a separate SOINN describes each class of data
  – Test: classify an unknown data point according to whichever model gives the best match (nearest neighbor)
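The test step above can be sketched as nearest-neighbor matching over per-class prototype sets (an illustrative helper, assuming each trained SOINN is reduced to a list of prototype vectors per class label):

```python
def classify(x, prototypes_by_class):
    """Return the label of whichever class model contains the prototype
    closest to x (squared Euclidean distance)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best_label, best = None, float("inf")
    for label, protos in prototypes_by_class.items():
        d = min(sq_dist(x, p) for p in protos)   # best match within this class
        if d < best:
            best_label, best = label, d
    return best_label
```

Because SOINN compresses each class to far fewer prototypes than raw training samples, this matching is much faster and lighter than 1-NN over the full training set, as the comparison on the next slide shows.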
34/39

Optdigits: comparison with 1-NN

                        1-NN    Proposed method
                                (1)     (2)     (3)     (4)
  Recognition ratio     98%     98.5%   97.1%   96.5%   96.0%
  No. of prototypes     3823    845     544     415     334
  Speed-up (times)      1       4.53    7.02    9.21    11.45
  Memory                100%    22.1%   14.2%   10.8%   8.7%
35/39

Optdigits: comparison with SVM (Gaussian kernel)

                      Traditional SVM          Improved SVM
                                               (Passerini, 2002)        Proposed
                      One-vs-All  All-pairs    One-vs-All  All-pairs    method
  Recognition ratio   97.2        97.4         98.2        98.1         98.5
36/39

Application: others
• Humanoid robot
• Scene recognition
• Texture recognition
• Semi-supervised learning
37/39

Journal papers (2003–2005)
1. Shen Furao & Osamu Hasegawa, "An adaptive incremental LBG for vector quantization," Neural Networks, accepted.
2. Shen Furao & Osamu Hasegawa, "An incremental network for on-line unsupervised classification and topology learning," Neural Networks, accepted.
3. Shen Furao & Osamu Hasegawa, "Fractal image coding with simulated annealing search," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol.9, No.1, pp.80-88, 2005.
4. Shen Furao & Osamu Hasegawa, "A fast no search fractal image coding method," Signal Processing: Image Communication, Vol.19, pp.393-404, 2004.
5. Shen Furao & Osamu Hasegawa, "A growing neural network for online unsupervised learning," Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol.8, No.2, pp.121-129, 2004.
38/39

Refereed international conferences (2003–2005)
1. Shen Furao, Youki Kamiya & Osamu Hasegawa, "An incremental neural network for online supervised learning and topology representation," 12th International Conference on Neural Information Processing (ICONIP 2005), Taipei, Taiwan, October 30 - November 2, 2005, accepted.
2. Shen Furao & Osamu Hasegawa, "An incremental k-means clustering algorithm with adaptive distance measure," 12th International Conference on Neural Information Processing (ICONIP 2005), Taipei, Taiwan, October 30 - November 2, 2005, accepted.
3. Shen Furao & Osamu Hasegawa, "An on-line learning mechanism for unsupervised classification and topology representation," IEEE Computer Society International Conference on Computer Vision and Pattern Recognition (CVPR 2005), San Diego, CA, USA, June 21-26, 2005.
4. Shen Furao & Osamu Hasegawa, "An incremental neural network for non-stationary unsupervised learning," 11th International Conference on Neural Information Processing (ICONIP 2004), Calcutta, India, November 22-25, 2004.
5. Shen Furao & Osamu Hasegawa, "An effective fractal image coding method without search," IEEE International Conference on Image Processing (ICIP 2004), Singapore, October 24-27, 2004.
6. Youki Kamiya, Shen Furao & Osamu Hasegawa, "Non-stop learning: a new scheme for continuous learning and recognition," Joint 2nd SCIS and 5th ISIS, Keio University, Yokohama, Japan, September 21-24, 2004.
7. Osamu Hasegawa & Shen Furao, "A self-structurizing neural network for online incremental learning," CD-ROM SICE Annual Conference in Sapporo, FAII-5-2, August 4-6, 2004.
8. Shen Furao & Osamu Hasegawa, "A self-organized growing network for on-line unsupervised learning," 2004 International Joint Conference on Neural Networks (IJCNN 2004), Budapest, Hungary, CD-ROM ISBN 0-7803-8360-5, Vol.1, pp.11-16, 2004.
9. Shen Furao & Osamu Hasegawa, "A fast and less loss fractal image coding method using simulated annealing," 7th Joint Conference on Information Science (JCIS 2003), Cary, North Carolina, USA, September 26-30, 2003.