Non-intrusive vision and acoustic
based emotion recognition of
driver in Advanced Driver
Assistance System
Motivation
• Driving is one of the most dangerous tasks in
our everyday lives.
• Some accident statistics for Vijayawada:
http://www.aptransport.org/html/accidents.htm
• The majority of road accidents are caused by
the driver's inattentiveness.
• A major cause of poor driver attention is the
driver's emotional state (for example sadness,
anger, joy, pleasure, despair, or irritation).
• Emotions are generally measured by analyzing
head movement patterns, eyelid movements,
facial expressions, or all of these together.
• In this project, we develop a system that
identifies the driver's emotions using
non-intrusive methods.
Emotion
• Researchers have crisply identified more than 300 emotions;
however, not all of them are experienced in day-to-day life.
• Palette theory states that any emotion is a composition of 6 primary emotions,
just as any color is a combination of 3 primary colors.
• Anger, disgust, fear, happiness, sadness, and surprise are considered the
primary or basic emotions, also referred to as archetypal emotions.

http://2wanderlust.files.wordpress.com/2009/03/picture-2.png
Face recognition techniques
The different face recognition techniques are:
• Model based: a 3D model is constructed from the facial variations in the image.
Disadvantages:
* Requires an expensive (stereo vision) camera.
* Constructing the 3D model is difficult and time-consuming.
• Appearance based: performance depends on the quality of the extracted features.
• Feature based: describes the position and size of each feature
(eyes, nose, mouth, or face outline).
Disadvantages:
* Extracting features under different poses (viewing conditions) and
lighting conditions is a very complex task.
* For applications with a large database and a large set of features with
different sizes and positions, identifying feature points is difficult.
Feature Extraction from the Visual
Information
• Appearance-based linear subspace techniques
extract global features, since they use statistical
properties of the image such as the mean and
variance.
• Challenge: the major difficulty in applying these
techniques to large databases is that the
computational load and memory requirements for
calculating features grow dramatically with
database size.
• Solution: nonlinear feature extraction techniques
are introduced to improve the performance of
feature extraction.
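The linear subspace idea above can be illustrated with a minimal PCA ("eigenfaces") sketch. This is a hypothetical toy example on synthetic data, not the project's actual pipeline: each face image is flattened to a row vector, the mean face is subtracted, and the top eigenvectors of the covariance matrix define the subspace onto which images are projected to obtain global features.

```python
import numpy as np

# Toy data standing in for flattened face images (10 images, 8x8 pixels each).
rng = np.random.default_rng(0)
images = rng.random((10, 64))

mean_face = images.mean(axis=0)        # global statistical property (mean)
centered = images - mean_face

# Eigendecomposition of the covariance matrix; the eigenvectors with the
# largest eigenvalues span the "face subspace".
cov = centered.T @ centered / (len(images) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
top = eigvecs[:, ::-1][:, :5]           # keep the 5 principal components

# Global features = projection coefficients of each image onto the subspace.
features = centered @ top
print(features.shape)                   # (10, 5)
```

Note how the computation touches every pixel of every image; this is exactly the cost that grows dramatically with database size, motivating the nonlinear techniques below.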
Nonlinear feature extraction techniques
• Radon transform
• Wavelet transform
The Radon transform based nonlinear feature
extraction gives the direction of the local
features. When features are extracted using the
Radon transform, the variations in the facial
frequencies are also boosted. The wavelet
transform gives the spatial and frequency
components present in an image.
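The two transforms above can be sketched as follows. This is an illustrative, hand-rolled version (a real system would use a library such as scikit-image for the Radon transform and PyWavelets for wavelets): the Radon transform is shown in degenerate form as projections along only two angles, and a single-level 2-D Haar decomposition produces the LL (approximation), LH (horizontal), HL (vertical), and HH (diagonal) subbands.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar wavelet decomposition of an even-sized image."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low frequency: blurred approximation
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)
ll, lh, hl, hh = haar2d(img)
print(ll.shape)                  # (4, 4): each subband is half-size

# Degenerate Radon projections at 0 and 90 degrees (sums along columns/rows);
# the full Radon transform projects along many angles.
proj0 = img.sum(axis=0)
proj90 = img.sum(axis=1)
```

On this linear-ramp test image, the detail subbands are constant (the gradient is uniform), while on a real face they would concentrate edge information.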
Performance comparison of different
face recognition approaches
Feature Extraction from acoustic information
The important voice features to consider for emotion classification are:
• Fundamental frequency (F0), or pitch,
• Intensity (energy),
• Speaking rate,
• Voice quality.
Many other features may be extracted/calculated from the voice information:
• the formants,
• the vocal tract cross-section areas,
• the Mel frequency cepstral coefficients (MFCC),
• the linear frequency cepstral coefficients (LFCC),
• the linear predictive coding (LPC) coefficients, and
• the Teager energy operator (TEO) based features.
Pitch is the fundamental frequency of the audio signal (the highness or lowness of a sound).
The MFCC is a "spectrum of the spectrum" used to find the number of voices in the
speech.
The Teager energy operator is used to find the number of harmonics due to nonlinear air
flow in the vocal tract.
LPC provides an accurate and economical representation of the envelope of the
short-time power spectrum.
LFCC is similar to MFCC but without the perceptually oriented transformation to
the Mel frequency scale; these coefficients emphasize changes or periodicity in the
spectrum while being relatively robust against noise. These features are measured from
the mean, range, variance, and transmission duration between utterances.
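Two of the features above, pitch (F0) and intensity, can be computed from a speech frame with a few lines of numpy. This is a minimal sketch using autocorrelation-based pitch estimation on a synthetic tone; the search range (60-400 Hz) and frame length are illustrative assumptions, and production systems typically use a toolkit such as librosa or Praat.

```python
import numpy as np

def estimate_pitch(frame, sr, fmin=60.0, fmax=400.0):
    """Estimate F0 (Hz) as the lag of the strongest autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag range for fmin..fmax
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag

sr = 16000
t = np.arange(1024) / sr
frame = np.sin(2 * np.pi * 120.0 * t)   # synthetic 120 Hz "voiced" frame

f0 = estimate_pitch(frame, sr)          # pitch, close to 120 Hz
energy = float(np.mean(frame ** 2))     # intensity: mean frame energy
print(f0, energy)
```

The same per-frame loop, applied over a whole utterance, yields the mean, range, and variance statistics mentioned above.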
Advantages and Disadvantages of using
acoustic features for detecting emotions
Advantages:
• We can often detect a speaker's emotion even if we cannot
understand the language.
• Speech is easy to record even under extreme environmental
conditions (temperature, high humidity, and bright light), and
requires cheap, durable, maintenance-free sensors.
Disadvantages:
• Results depend on age and gender: angry males show higher
levels of energy than angry females, and males have been found
to express anger with a slow speech rate whereas females
employ a fast speech rate.
Previous Work On Emotion Detection
From Speech
• Schuller et al. [3] used a Hidden Markov Model based approach for speech emotion
recognition, achieving an overall accuracy of about 87%.
• In [4], emotion recognition using spectral features and GMM-supervector-based
SVMs reached accuracy levels of more than 90% in some cases.
• Many other approaches to emotion recognition have been tried, such as a decision
tree based approach in [5] and a rough set and SVM based approach in [6].
• Multi-level speech emotion recognition based on ANN and HMM was done in [7].
• Some authors have made comparative studies of two or more approaches for emotion
detection from speech [8] [9].
• Speaker-dependent and speaker-independent studies have also been done [9],
showing that different approaches give different accuracy levels for the two cases.
• The features used affect emotion recognition [10], so a proper feature set must
be chosen.
• Since a large number of features can be extracted from audio, some work on
feature selection methods has also been done [11].
Recent work
• Using 3D shape information:
Increased availability of 3D databases and
affordable 3D sensors.
3D shape information provides invariance
against head pose and illumination conditions.
• Using thermal cameras.
• Integration of audio, video, and body language.
References
[1] H. D. Vankayalapati and K. Kyamakya, "Nonlinear Feature Extraction Approaches for Scalable Face
Recognition Applications," ISAST Transactions on Computers and Intelligent Systems, vol. 2, 2009.
[2] Sandeep Kotte, "Extraction of visual and acoustic features of the driver for real-time driver monitoring system."
[3] B. Schuller, G. Rigoll, M. Lang, "Hidden Markov Model-based Speech Emotion Recognition," IEEE
International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003.
[4] Hao Hu, Ming-Xing Xu, Wei Wu, "GMM Supervector Based SVM with Spectral Features for Speech Emotion
Recognition," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2007.
[5] Chi-Chun Lee, Emily Mower, Carlos Busso, Sungbok Lee, and Shrikanth S. Narayanan, "Emotion recognition
using a hierarchical binary decision tree approach," in: Proceedings of InterSpeech, 2009.
[6] Jian Zhou, Guoyin Wang, Yong Yang, Peijun Chen, "Speech Emotion Recognition Based on Rough Set and
SVM," 5th IEEE International Conference on Cognitive Informatics (ICCI), 2006.
[7] Xia Mao, Lijiang Chen, Liqin Fu, "Multi-Level Speech Emotion Recognition based on HMM and ANN," World
Congress on Computer Science and Information Engineering, 2009.
[8] A. A. Razak, R. Komiya, M. Izani, Z. Abidin, "Comparison between Fuzzy and NN method for Speech
Emotion Recognition," Third International Conference on Information Technology and Applications (ICITA), 2005.
[9] Theodoros Iliou, Christos-Nikolaos Anagnostopoulos, "SVM-MLP-PNN Classifiers on Speech Emotion
Recognition Field: A Comparative Study," Fifth International Conference on Digital Telecommunications, 2010.
[10] Anton Batliner, Stefan Steidl, Bjorn Schuller, Dino Seppi, Thurid Vogt, Johannes Wagner, Laurence
Devillers, Laurence Vidrascu, Vered Aharonson, Loic Kessous, Noam Amir, "Searching for the Most Important
Feature Types Signalling Emotion-Related User States in Speech," Computer Speech & Language, 2009.
[11] Ling Cen, Wee Ser, Zhu Liang Yu, "Speech Emotion Recognition Using Canonical Correlation Analysis and
Probabilistic Neural Network," Seventh International Conference on Machine Learning and Applications, 2008.
[12] Dimitrios Ververidis and Constantine Kotropoulos, "Emotional speech recognition: Resources, features, and
methods," Speech Communication, 48(9):1162-1181, 2006.
[13] K. Sreenivasa Rao and Shashidhar G. Koolagudi, "Emotion Recognition using Speech Features."

Editor's Notes

  1. It should be noted that the principal axis of PCA for an image rotates when the image rotates. The Radon transform, computed with respect to this axis, yields robust features.
  2. LDA is often referred to as Fisher's Linear Discriminant (FLD). The images in the training set are divided into the corresponding classes; LDA then finds a set of vectors such that the Fisher discriminant criterion is maximized. Wavelet decomposition produces four regions: one low-frequency region LL (approximate component) and three high-frequency regions, namely LH (horizontal component), HL (vertical component), and HH (diagonal component). The low-frequency region in decompositions at different levels is a blurred version of the input image, while the high-frequency regions contain the finer detail or edge information of the input image.