Automated Parkinson’s Disease Detection and Affective Analysis from Emotional EEG Signals
By Nisha Mohan Devadiga
MSSE in Data Science
Introduction
• Parkinson’s disease (PD) is a neurodegenerative disorder of the central nervous system that affects movement, often causing tremors.
• PD is characterized by the progressive loss of dopaminergic neurons in the substantia nigra, which causes motor dysfunction as well as cognitive, behavioral and emotional deficits.
• Our study examines the utility of affective Electroencephalography (EEG) signals to understand emotional differences between PD patients and Healthy Controls (HC), and for automated PD detection.
Introduction
• The study performs a comparative analysis in which both low-level EEG descriptors, namely Spectral Power Vectors (SPV) and Common Spatial Patterns (CSP), are applied for emotion detection, and Convolutional Neural Networks (CNNs) are applied to automatically learn cognitive and emotional correlates.
• Our results reveal that PD patients comprehend arousal better than valence; among emotion categories, fear, disgust and surprise are recognized least accurately, and sadness most accurately.
Human Emotions
Ekman Emotions (Sadness, Happiness, Fear,
Disgust, Surprise, Anger)
Pipeline
Dataset
EEG signals from 20 non-demented PD patients and 20 HC subjects.
Audio-visual stimuli were used to induce the six Ekman emotions (sadness, happiness, fear, disgust, surprise, anger), resulting in a total of 1440 samples (2 classes × 20 subjects × 6 emotions × 6 trials/emotion).
Pipeline
Pre-processing
Discarding outliers (samples limited to ±85 μV).
Followed by an IIR Butterworth bandpass filter to retain the 8-49 Hz range.
Segmentation into 5-second epochs to preserve temporal information (13,193 epochs for feature extraction).
The features are then z-normalized, followed by Principal Component Analysis (PCA) to retain 95% of the data variance.
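A minimal sketch of this preprocessing chain, assuming NumPy arrays and a placeholder sampling rate (the slides do not specify the sampling rate or the toolbox used; clipping stands in for the outlier-rejection step):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

FS = 128          # assumed sampling rate (Hz), not stated on the slides
EPOCH_SEC = 5     # 5-second epochs, as described above

def preprocess(eeg):
    """eeg: (n_channels, n_samples) raw recording for one trial."""
    # Limit sample amplitudes to +/- 85 microvolts (stand-in for outlier rejection).
    eeg = np.clip(eeg, -85.0, 85.0)

    # IIR Butterworth bandpass filter retaining the 8-49 Hz range.
    sos = butter(4, [8, 49], btype="bandpass", fs=FS, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)

    # Segment into non-overlapping 5-second epochs.
    win = FS * EPOCH_SEC
    n_epochs = eeg.shape[-1] // win
    epochs = eeg[:, : n_epochs * win].reshape(eeg.shape[0], n_epochs, win)
    return np.transpose(epochs, (1, 0, 2))   # (n_epochs, n_channels, win)

def reduce_features(feature_matrix):
    """z-normalize features, then keep components explaining 95% of the variance."""
    z = StandardScaler().fit_transform(feature_matrix)
    return PCA(n_components=0.95).fit_transform(z)
```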
FEATURE EXTRACTION: SPECTRAL POWER VECTOR (SPV)
Power spectral analysis was performed to estimate the EEG spectral density upon spectral transformation.
On each epoch, a Butterworth bandpass filter was applied to extract the α (8-13 Hz), β (13-30 Hz) and γ (30-49 Hz) spectral bands.
A Fast Fourier Transform (FFT) was performed, followed by summation of the squared FFT values.
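A sketch of the SPV computation for a single epoch under the same assumptions (the sampling rate and filter order are placeholders, and "squared FFT values" is read here as squared FFT magnitudes):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 128  # assumed sampling rate (Hz)
BANDS = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 49)}

def spectral_power_vector(epoch):
    """epoch: (n_channels, n_samples). Returns per-channel band powers (the SPV)."""
    features = []
    for lo, hi in BANDS.values():
        # Band-limit the epoch with a Butterworth bandpass filter.
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = sosfiltfilt(sos, epoch, axis=-1)
        # FFT of the band-limited signal, then sum of squared magnitudes.
        spectrum = np.fft.rfft(band, axis=-1)
        power = np.sum(np.abs(spectrum) ** 2, axis=-1)
        features.append(power)
    return np.concatenate(features)   # length = 3 * n_channels
```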
COMMON SPATIAL PATTERNS (CSP)
Common spatial patterns were extracted by learning a linear combination of the original features.
Filters (transformations) are learned so that the transformed signal variance is maximal for one class and minimal for the other.
This enables recovery of the original signal by gathering relevant information over popular EEG features.
The spatial transform w is learned to maximize the following function:
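The objective itself appears only as an image in the deck; as a point of reference (and an assumption about the exact notation used in the original paper), the standard CSP criterion is the Rayleigh quotient of the two class covariance matrices:

```latex
w^{*} = \arg\max_{w} \frac{w^{\top} C_{1}\, w}{w^{\top} C_{2}\, w},
\qquad
C_{k} = \frac{1}{N_{k}} \sum_{i \in \text{class } k} X_{i} X_{i}^{\top}
```

In practice, w is obtained by solving the generalized eigenvalue problem C1 w = λ C2 w.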
CLASSICAL MACHINE LEARNING ALGORITHMS
K-nearest neighbour (KNN): a test sample is assigned the label corresponding to the mode of its k closest neighbors, based on a suitable distance metric.
Support vector machine (SVM): the input data are transformed to a high-dimensional space where the two classes are linearly separable and the inter-class distance is maximal.
Gaussian naive Bayes (GNB): a classifier assuming class-conditional feature independence.
Decision tree (DT): uses a tree-like graph structure where each leaf node represents a category label.
Linear discriminant analysis (LDA): linearly transforms the data to achieve maximal inter-class distance.
Logistic regression (LR): maps the input to class labels via the sigmoid function.
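A hedged sketch of how these six classifiers could be evaluated with scikit-learn, in line with the grid-search plus ten-fold cross-validation protocol described later; the hyper-parameter grids below are illustrative, not the ones reported in the study:

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Classifier and illustrative hyper-parameter grid for each method.
models = {
    "KNN": (KNeighborsClassifier(), {"n_neighbors": [3, 5, 7]}),
    "SVM": (SVC(), {"C": [0.1, 1, 10], "kernel": ["rbf", "linear"]}),
    "GNB": (GaussianNB(), {}),
    "DT":  (DecisionTreeClassifier(), {"max_depth": [3, 5, None]}),
    "LDA": (LinearDiscriminantAnalysis(), {}),
    "LR":  (LogisticRegression(max_iter=1000), {"C": [0.1, 1, 10]}),
}

def evaluate(X, y):
    """X: feature matrix (e.g., SPV or CSP features), y: class labels."""
    for name, (clf, grid) in models.items():
        # Exhaustive grid search with 10-fold CV, mirroring the evaluation protocol.
        search = GridSearchCV(clf, grid, cv=10, scoring="f1_weighted")
        search.fit(X, y)
        print(f"{name}: best F1 = {search.best_score_:.3f}, params = {search.best_params_}")
```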
Basic architecture of the 1D/2D/3D CNN: Input → Convolution 1 → Average Pooling 1 → Convolution 2 → Average Pooling 2 → Convolution 3 → Average Pooling 3 → Fully Connected → SoftMax.
• The 3D-CNN input dimensionality is 5 × 32 × 32 × 3.
• Three convolutional layers convolve the input signal with a stride of 3 and comprise 16, 32 and 32 filters of size 3, 3×3 and 3×3×3, respectively.
• Each convolutional layer is followed by average pooling over 2-unit regions.
• Batch normalization is applied to normalize prior activations, and a dropout of 0.1-0.5 is employed for regularization.
• The dense layer comprises 128 neurons, followed by a SoftMax layer composed of two neurons (for dimensional emotion and PD vs HC classification) or six neurons (for categorical emotion recognition) conveying class probabilities.
• CNN hyper-parameters (learning rate ∈ [10^-5, …, 10^-1], optimizer ∈ {SGD, Adam, RMSProp} and dropout rate) were tuned via 10-fold cross-validation, identical to the ML models.
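A minimal Keras sketch of the 3D-CNN variant described above (input 5 × 32 × 32 × 3; three convolutional blocks with 16/32/32 filters of size 3×3×3, average pooling, batch normalization, dropout, a 128-neuron dense layer and a SoftMax output). The stride, padding, activation and batch-normalization placement are assumptions where the slides leave them unspecified:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_3d_cnn(n_classes=2, dropout_rate=0.25):
    """n_classes = 2 (dimensional emotion, PD vs HC) or 6 (categorical emotion)."""
    model = keras.Sequential([
        keras.Input(shape=(5, 32, 32, 3)),        # 5 x 32 x 32 x 3 EEG "movie"
        layers.Conv3D(16, 3, strides=3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.AveragePooling3D(pool_size=2, padding="same"),
        layers.Conv3D(32, 3, strides=3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.AveragePooling3D(pool_size=2, padding="same"),
        layers.Conv3D(32, 3, strides=3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.AveragePooling3D(pool_size=2, padding="same"),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(dropout_rate),             # within the reported 0.1-0.5 range
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```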
Performance Evaluation
• All models were fine-tuned via exhaustive grid search, and performance was evaluated via ten-fold cross-validation.
• Valence Classification:
– Happiness and surprise were grouped in the high valence (HV) category.
– Sadness, fear, disgust and anger were grouped in the low valence (LV) category. The HV:LV ratio within the PD, HC and full data is 1:2.
Classification with PD Data:
Higher F1-scores were observed with the 1D-CNN for all
features, revealing the superior learning ability of CNNs. 2D
and 3D-CNN achieved even higher F1-scores, conveying that
the EEG-image and movie representations are most efficient
for valence prediction.
Classification using HC Data:
Higher F1-scores were noted with the 1D-CNN, with all features performing similarly. The 2D and 3D-CNN performed better than the 1D-CNN, with the 3D-CNN achieving the best mean F1-score of 0.93.
PD vs HC F1 Comparison:
Lower sensitivity scores were again noted on PD data with the 1D, 2D and 3D-CNN, even though the differences were insignificant.
Cumulatively, these results reveal lower sensitivity for PD EEG
data.
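For concreteness, the HV/LV grouping used above can be expressed as a simple label mapping; a minimal sketch (the lower-case label strings are assumptions, not the dataset's actual encoding):

```python
# Illustrative mapping from the six Ekman labels to the binary valence classes.
HIGH_VALENCE = {"happiness", "surprise"}
LOW_VALENCE = {"sadness", "fear", "disgust", "anger"}

def valence_label(emotion: str) -> str:
    """Return 'HV' or 'LV' for a given Ekman emotion label."""
    return "HV" if emotion in HIGH_VALENCE else "LV"
```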
Performance Evaluation
• Arousal Classification:
– Anger, disgust, fear, happiness and surprise data were placed in the high arousal (HA) category, with the sadness samples constituting the low arousal (LA) category, as in the circumplex model.
– The HA:LA class ratio within the PD, HC and full data is 5:1.
Classification with PD Data:
Higher F1-scores were obtained for the 1D-CNN with spectral
features performing best, even if the differences among
descriptors were not significant. The 2D and 3D-CNN models
achieved an identical, near-ceiling F1 of 0.98. Higher
sensitivity than specificity was achieved on PD data with the
different models.
Classification using HC Data:
The 1D-CNN achieved a higher F1 of 0.95 with CSP features. F1-scores of 0.94 and 0.97 were achieved with the
2D and 3D-CNN, respectively, revealing that the spectral
EEG image and movie descriptors effectively encode
emotion information.
PD vs HC F1 Comparison:
Very similar F1-scores were found on PD and HC data for the ML algorithms and the 1D/3D CNNs, while the trend reversed for the 2D-CNN.
Performance Evaluation
• Categorical Emotion Classification:
– An equal number of samples were available for the sadness, happiness, fear, disgust, surprise and anger emotion classes for both PD and HC subjects. F1-scores were averaged over all emotion classes.
Classification with PD Data:
Sadness was easiest to recognize with all three models (F1=
0.87, 0.93 and 0.94 for 1D, 2D and 3D-CNN), while disgust
(F1= 0.78), surprise (F1= 0.80) and fear (F1= 0.85) were
recognized worst by the 1D, 2D and 3D-CNN, respectively.
Across models, sadness was easiest to recognize, while
disgust, fear and surprise were commonly confused with other
emotions.
Classification using HC Data:
Trends similar to PD were observed with HC data.
Misclassification Analyses:
The high-valence emotions happiness and surprise are mislabeled as low-valence emotions, namely sadness, fear and anger, for both PD and HC data.
Among low-valence emotions, sadness is consistently predicted as happiness with PD data.
Our findings convey that valence-related differences are not
effectively encoded in PD EEG responses.
Performance Evaluation
• PD vs HC Classification:
– Classification was attempted from emotion-agnostic (Full) and emotion-specific data. The data included equal proportions of the PD and HC classes (1:1 class ratio).
Classification using Full Data:
• ML algorithms on full data achieved a marginally higher
sensitivity (0.97) than specificity (0.96); with the 1D CNN,
the specificity (0.99) was marginally higher than sensitivity
(0.98).
• Identical sensitivity and specificity rates of 0.99 were noted
on full data with the 2D CNN.
Classification with emotion-specific Data:
• Facile PD vs HC classification was achieved with all
emotion classes.
• With the 2D-CNN, near-identical sensitivity and specificity were noted for all emotion categories.
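The sensitivity and specificity figures quoted throughout these results can be derived from the confusion matrix of PD vs HC predictions; a minimal sketch, assuming a binary label encoding of 1 = PD and 0 = HC (an assumption for illustration):

```python
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = recall of the PD class; specificity = recall of the HC class."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)   # proportion of PD subjects correctly detected
    specificity = tn / (tn + fp)   # proportion of HC subjects correctly detected
    return sensitivity, specificity
```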
Conclusion
• Categorical emotion recognition results revealed that disgust, fear and surprise were
associated with low recognition rates on PD data, while sadness was well recognized.
• Empirical results for PD vs HC emotional responses revealed that differences were
apparent for both emotion-specific and emotion-agnostic data.
• A key finding is that both emotion and PD recognition can be reliably performed from EEG responses passively compiled during audio-visual stimulus viewing.
• While many EEG differences between PD and HC groups have been noted from resting-state analysis, the exact relation between EEG and motor symptoms remains unknown.