Semi-Supervised Autoencoders for
Predicting Sentiment Distributions
Socher, Pennington, Huang, Ng, Manning (Stanford)
Presented by Danushka Bollegala
Task
•   Predict sentence level sentiment
    •   white blood cells destroying an infection: +
    •   an infection destroying white blood cells: -
•   Predict the distribution of sentiment
    1. Sorry, Hugs: User offers condolences to author.
    2. You Rock: Indicating approval, congratulations.
    3. Teehee: User found the anecdote amusing.
    4. I Understand: Show of empathy.
    5. Wow, Just Wow: Expression of surprise, shock.
Approach
•   Learn a representation for the entire sentence using autoencoders
    (unsupervised)
    •   A tree structure is learnt (nodes and edges)
•   Learn a sentiment classifier using the representation learnt in the
    previous step (supervised)
•   Together, the approach becomes a semi-supervised learning task.
Neural Word Representations
•   Randomly initialize the word representations
    •   For a vector x representing a word, sample it from a zero-mean
        Gaussian: x ∈ Rⁿ, x ~ N(0, σ²)
    •   Works well when the task is supervised because, given training data, we
        can later tune the weights in the representation.
•   Pre-trained word representations
    •   Bengio et al. 2003
    •   Given a context vector c, remove a word x that co-occurs in c, and use
        the remaining features in c to guess the occurrence of x.
    •   Similar to the Ando et al. (2003) suggestion for transfer learning via
        Alternating Structure Optimization (ASO).
    •   Can take into account co-occurrence information.
    •   Rich syntactic and semantic representations for words!
Representing the input
•   L: word-representation matrix
    •   Each word in the vocabulary is represented by an
        n-dimensional vector
    •   Those vectors are then stacked as columns to create
        the matrix L
•   Given a word k, it is represented as a one-hot binary vector
    bk (1 at index k, 0 elsewhere). The vector representing word k
    is then given by x = L bk.
•   This continuous representation is better than the original
    binary representation because the internal sigmoid activations
    are continuous.
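As a concrete illustration of both the random initialization from the previous slide and the one-hot lookup above, here is a minimal numpy sketch (the dimensions, vocabulary size, and σ are illustrative choices, not values from the paper):

```python
import numpy as np

n, V = 100, 5000            # embedding size and vocabulary size (illustrative)
sigma = 0.01                # std-dev of the zero-mean Gaussian (illustrative)

# L: word-representation matrix with one n-dimensional column per word.
L = sigma * np.random.randn(n, V)

k = 42                      # index of some word in the vocabulary
b_k = np.zeros(V)           # one-hot binary vector for word k
b_k[k] = 1.0

x = L @ b_k                 # continuous representation of word k
assert np.allclose(x, L[:, k])   # the product is just a column lookup
```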
Autoencoders (Tree Given)
•   Binary trees are assumed (each parent has two children).
•   Given the child node representations, iteratively compute the
    representation of their parent node.
•   Concatenate the two vectors for the child nodes, apply W and
    bias b, followed by the sigmoid function f.
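A sketch of a single autoencoder step under these assumptions (the names W_e, W_d and the use of tanh as the squashing function f are my choices for illustration; the L2 normalization anticipates the greedy RAE slide below):

```python
import numpy as np

n = 100                                       # node vector size (illustrative)
rng = np.random.default_rng(0)
W_e = 0.01 * rng.standard_normal((n, 2 * n))  # encoder weights W
b_e = np.zeros(n)                             # encoder bias b
W_d = 0.01 * rng.standard_normal((2 * n, n))  # decoder (reconstruction) weights
b_d = np.zeros(2 * n)

def f(z):
    return np.tanh(z)                         # elementwise squashing nonlinearity

def encode(c1, c2):
    """Parent vector from two children: f(W [c1; c2] + b), L2-normalized."""
    p = f(W_e @ np.concatenate([c1, c2]) + b_e)
    return p / np.linalg.norm(p)

def reconstruct(p):
    """Decoder's attempt to recover [c1; c2] from the parent vector p."""
    return f(W_d @ p + b_d)
```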
Structure Prediction
•   Build the tree as well as node representations
•   Concept
    •   Given a sentence (a sequence of words), generate all possible trees with those
        words.
    •   For each tree, learn autoencoders at each non-leaf node and compute the
        reconstruction error.
    •   The total reconstruction error of a tree is the sum of the reconstruction
        errors at each node in the tree.
    •   Select the tree with the minimum total reconstruction error.
        Too expensive (slow) in practice! The number of binary trees over a
        sentence of n words is the Catalan number C(n−1), which grows
        exponentially with n.
Greedy Unsupervised RAE
•   For each pair of consecutive words, compute their parent.
•   Repeat this process until we are left with one parent. Given n child
    nodes at some step, we can form n−1 candidate parents, thus reducing
    the number of nodes to process as we go up the tree.
•   At each step, select the candidate parent with the minimum
    reconstruction error.
•   Weighted reconstruction
    •   The number of descendants of a parent must be considered
        when computing its reconstruction error.
•   Parent vectors are L2-normalized (p/||p||) to prevent the hidden
    layer (W) from simply shrinking the vectors (which would trivially
    reduce the reconstruction error).
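A sketch of the greedy construction loop, reusing the hypothetical encode/reconstruct helpers from the previous sketch; the descendant-count weighting follows the slide's description:

```python
import numpy as np

def weighted_error(c1, c2, m1, m2):
    """Reconstruction error of a candidate parent, with each child's error
    weighted by its number of descendant leaves (m1, m2)."""
    p = encode(c1, c2)
    c_rec = reconstruct(p)
    e1 = np.sum((c1 - c_rec[:len(c1)]) ** 2)
    e2 = np.sum((c2 - c_rec[len(c1):]) ** 2)
    return (m1 * e1 + m2 * e2) / (m1 + m2), p

def greedy_rae_tree(word_vectors):
    """Repeatedly merge the adjacent pair with minimum reconstruction error
    until a single root vector represents the whole sentence."""
    nodes = [(v, 1) for v in word_vectors]   # (vector, #leaves underneath)
    while len(nodes) > 1:
        candidates = []
        for i in range(len(nodes) - 1):      # n nodes -> n-1 candidate parents
            err, p = weighted_error(nodes[i][0], nodes[i + 1][0],
                                    nodes[i][1], nodes[i + 1][1])
            candidates.append((err, i, p))
        err, i, p = min(candidates, key=lambda t: t[0])
        nodes[i:i + 2] = [(p, nodes[i][1] + nodes[i + 1][1])]
    return nodes[0][0]                       # root vector for the sentence
```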
Semi-Supervised RAE
•   Each parent node is assigned a 2n-dimensional feature vector
•   We can learn a weight vector (2n-dim) on top of those parent vectors
•   A logistic sigmoid function is used at the output
•   The cross-entropy loss, summed over all parent nodes in the tree, is minimized
•   Similar to logistic regression
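A sketch of the per-node classifier; W_label, the dimensions, and the softmax generalization of the logistic sigmoid to the 5-way EP labels are illustrative assumptions:

```python
import numpy as np

K, dim = 5, 100                  # 5 EP categories; node vector size (illustrative)
rng = np.random.default_rng(1)
W_label = 0.01 * rng.standard_normal((K, dim))

def predict(p):
    """Predicted label distribution d(p) at a parent node p."""
    z = W_label @ p
    z -= z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, t):
    """Cross-entropy between the target distribution t and d(p)."""
    return -np.sum(t * np.log(predict(p) + 1e-12))

def tree_label_loss(parent_vectors, t):
    """Supervised loss of a tree: sum of cross-entropies at all parent nodes."""
    return sum(cross_entropy(p, t) for p in parent_vectors)
```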
Equations
•   Sigmoid output at a parent node p
•   Cross-entropy loss
•   Aggregate the loss over all (sentence x, label t) pairs in the corpus
•   Total loss of a tree is the sum of losses at all parent nodes
•   Loss at a parent node comes from two sources:
    reconstruction error + cross-entropy loss
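The equation images did not survive the transcript; the LaTeX below reconstructs them from the captions above, following the paper's RAE formulation (notation such as α, T(x), and W^label is filled in from the paper where the slide leaves it implicit):

```latex
% Classifier output at a parent node p
d(p;\theta) = \operatorname{softmax}(W^{\mathrm{label}} p)

% Cross-entropy loss at a node with target label distribution t
E_{cE}(p,t;\theta) = -\sum_{k=1}^{K} t_k \log d_k(p;\theta)

% Aggregate loss over all (sentence x, label t) pairs in the corpus
J = \frac{1}{N} \sum_{(x,t)} E(x,t;\theta) + \frac{\lambda}{2}\,\lVert\theta\rVert^2

% Total loss of a tree: sum of losses at all parent (non-terminal) nodes s
E(x,t;\theta) = \sum_{s \in T(x)} E\bigl([c_1;c_2]_s,\, p_s,\, t;\ \theta\bigr)

% Loss at a parent node: reconstruction error + cross-entropy loss
E\bigl([c_1;c_2],p,t;\theta\bigr)
  = \alpha\, E_{\mathrm{rec}}\bigl([c_1;c_2];\theta\bigr)
  + (1-\alpha)\, E_{cE}(p,t;\theta)
```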
Learning
•   Compute the gradient w.r.t. the parameters (Ws, b) and use L-BFGS
•   The objective function is not convex.
•   Only a local optimum can be reached with this procedure.
•   Works well in practice.
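A minimal, self-contained sketch of the optimization call with SciPy's L-BFGS; the quadratic toy objective stands in for the real corpus loss J (which would unpack the parameters and rebuild every tree), so it is an assumption for illustration only:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the corpus objective J(theta); the real model would unpack
# theta into (W, b, W_label, ...), rebuild all trees, and sum the node losses.
def total_loss(theta):
    return float(np.sum((theta - 1.0) ** 2) + 0.1 * np.sum(theta ** 2))

theta0 = np.zeros(10)                        # flattened parameters (toy size)
result = minimize(total_loss, theta0, method="L-BFGS-B")
print(result.x)   # a local optimum; the true RAE objective is non-convex
```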
EP Dataset
•   Experience Project (EP) dataset
    •   People write confessions and others tag them.
    •   Five categories, as listed on the Task slide above.
Baselines
•   Random (20%)
•   Most frequent (38.1%)
    •   I understand
•   Binary bag of words (46.4%)
    •   Represent each sentence as a binary bag of words and do not use pre-trained
        representations.
•   Features (47.0%)
    •   Use sentiment lexicons and training data (supervised sentiment classification with SVMs).
•   Word vectors (45.5%)
    •   Ignore the tree structure learnt by the RAE. Aggregate the pre-trained representations for
        each word in the sentence and train an SVM.
•   Proposed method (50.1%)
    •   Learn the tree structure with the RAE using unlabeled data (sentences only).
    •   A softmax layer is trained on each parent node using labeled data.
Predicting the distribution
(figure from the original slide not preserved in this transcript)
Random vectors!!!
•   Randomly initializing the word vectors does very well!
•   Because supervision occurs later, it is OK to initialize randomly
•   Randomly initialized RAEs have shown similarly good performance in
    other tasks
    •   "On Random Weights and Unsupervised Feature Learning" (Saxe et al., ICML 2011)
Summary
•   Learn a sentiment classifier using recursive
    autoencoders
•   Structure is also learnt using unlabeled data
    •   greedy algorithm for tree construction
•   Semi-supervision at the parent level
•   Model is general enough for other sentence level
    classification tasks
•   Random word representations do very well!