Sound Events and Emotions: Investigating the
Relation of Rhythmic Characteristics and Arousal
K. Drossos 1 R. Kotsakis 2 G. Kalliris 2 A. Floros 1
1Digital Audio Processing & Applications Group, Audiovisual Signal Processing
Laboratory, Dept. of Audiovisual Arts, Ionian University, Corfu, Greece
2Laboratory of Electronic Media, Dept. of Journalism and Mass Communication, Aristotle
University of Thessaloniki, Thessaloniki, Greece
Agenda
1. Introduction
Everyday life
Sound and emotions
2. Objectives
3. Experimental Sequence
Experimental Procedure’s Layout
Sound corpus & emotional model
Processing of Sounds
Machine learning tests
4. Results
Feature evaluation
Classification
5. Discussion & Conclusions
6. Future work
Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work
Everyday life
Stimuli
Many stimuli, e.g.
Visual
Aural
Structured form of sound
General sound
Interaction with stimuli
Reactions
Emotions felt
Sound and emotions
Music and emotions
Music
Structured form of sound
Primarily used to mimic and extend voice characteristics
Enhancing the conveyance of emotion(s)
Relation to emotions through (but not only) MIR and MER
Music Information Retrieval
Music Emotion Recognition
Sound and emotions
Music and emotions (cont’d)
Research results & Applications
Accuracy results up to 85%
In large music databases
Opposed to typical “Artist/Genre/Year” classification
Categorisation according to emotion
Retrieval based on emotion
Preliminary applications for synthesis of music based on emotion
Sound and emotions
From music to sound events
Sound events
Non-structured/general form of sound
Additional information regarding:
Source attributes
Environment attributes
Sound-producing mechanism attributes
Present almost any time, if not always, in everyday life
Sound and emotions
From music to sound events (cont’d)
Sound events
Used in many applications:
Audio interactions/interfaces
Video games
Soundscapes
Artificial acoustic environments
Trigger reactions
Elicit emotions
Objectives of current study
Data
Aural stimuli can elicit emotions
Music is a structured form of sound
Sound events are not
Music’s rhythm is arousing
Research question
Is sound events’ rhythm arousing too?
Experimental Procedure’s Layout
Layout
Emotional model selection
Annotated sound corpus collection
Processing of sounds
Pre-processing
Feature extraction
Machine learning tests
Sound corpus & emotional model
General Characteristics
Sound Corpus
International Affective Digital Sounds (IADS)
167 sounds
Duration: 6 seconds
Variable sampling frequency
Emotional model
Continuous model
Sounds annotated using Arousal-Valence-Dominance
Self-Assessment Manikin (S.A.M.) used
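The deck reports that the 167 sounds end up in two arousal classes (24 and 143 members) but not the rule used to split them. As a minimal sketch, assuming a simple threshold on the S.A.M. arousal ratings (the threshold value and variable names here are illustrative, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the IADS annotations: one mean arousal
# rating per sound on the 1-9 S.A.M. scale (real ratings ship with
# the IADS corpus).
arousal = rng.uniform(1.0, 9.0, size=167)

# Assumed threshold split; the deck only reports the resulting class
# sizes (24 vs 143), not the criterion.
threshold = 7.0
class_a = np.flatnonzero(arousal >= threshold)  # high-arousal sounds
class_b = np.flatnonzero(arousal < threshold)   # the remainder

# Every sound lands in exactly one class
print(len(class_a), len(class_b))
```

Whatever the actual criterion was, the essential property is the same: the split is exhaustive and disjoint over the 167 annotated sounds.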
Processing of Sounds
Pre-processing
General procedure
Normalisation
Segmentation / Windowing
Clustering in two classes
Class A: 24 members
Class B: 143 members
Segmentation/Windowing
Different window lengths
Ranging from 0.8 to 2.0 seconds, with 0.2-second increments
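The segmentation step can be sketched as follows. This is a minimal illustration; non-overlapping windows and dropping the trailing partial window are assumptions, since the deck only gives the window-length range:

```python
import numpy as np

def segment(signal: np.ndarray, sr: int, win_s: float) -> np.ndarray:
    """Cut a signal into consecutive, non-overlapping windows of
    win_s seconds, dropping any trailing partial window."""
    win = int(round(win_s * sr))
    n = len(signal) // win
    return signal[: n * win].reshape(n, win)

sr = 44100
sound = np.zeros(6 * sr)  # a 6-second clip, as in the IADS corpus

# Window lengths from 0.8 to 2.0 s in 0.2 s steps, as in the deck
for win_s in np.arange(0.8, 2.01, 0.2):
    frames = segment(sound, sr, win_s)
```

For a 6-second clip and a 1.0-second window this yields six frames; shorter windows yield more frames per sound and therefore more segments to extract features from.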
Processing of Sounds
Feature Extraction
Extracted features
Only rhythm-related features
For each segment
Statistical measures
Total set of 26 features

Extracted Features    Statistical Measures
Beat spectrum         Mean
Onsets                Standard deviation
Tempo                 Gradient
Fluctuation           Kurtosis
Event density         Skewness
Pulse clarity
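Each rhythm descriptor yields a track of per-segment values, which the statistical measures in the table reduce to scalars. A sketch of that reduction, with random stand-ins for the rhythm tracks (in the study these came from rhythm analysis of each sound; the helper name and the gradient-as-mean-slope choice are assumptions):

```python
import numpy as np
from scipy.stats import kurtosis, skew

def summarise(track: np.ndarray) -> dict:
    """Reduce a per-segment feature track to the five statistical
    measures listed in the table."""
    return {
        "mean": float(np.mean(track)),
        "std": float(np.std(track)),
        # Gradient summarised as the mean slope between segments
        "gradient": float(np.mean(np.gradient(track))),
        "kurtosis": float(kurtosis(track)),
        "skewness": float(skew(track)),
    }

rng = np.random.default_rng(1)
# Stand-ins for per-segment rhythm descriptors of one sound
features = {name: rng.normal(size=30) for name in
            ["beat_spectrum", "onsets", "tempo",
             "fluctuation", "event_density", "pulse_clarity"]}

vector = {f"{name}_{stat}": value
          for name, track in features.items()
          for stat, value in summarise(track).items()}
```

Six descriptors times five measures gives 30 candidate values; the deck reports a final set of 26 features, so presumably not every descriptor/measure pair was kept.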
Machine learning tests
Feature Evaluation
Objectives
Most valuable features for arousal detection
Dependencies between selected features and different window lengths
Algorithms used
InfoGainAttributeEval
SVMAttributeEval
WEKA software
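WEKA's InfoGainAttributeEval ranks attributes by their information gain with respect to the class. A rough Python analogue using scikit-learn's mutual information estimator — on synthetic data, so the ranking itself is illustrative only (feature 0 is deliberately made informative):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)

# Synthetic stand-in: 167 sounds x 26 rhythm features, two imbalanced
# arousal classes roughly matching the 24/143 split
X = rng.normal(size=(167, 26))
y = (rng.random(167) < 24 / 167).astype(int)
X[:, 0] += 4.0 * y  # shift feature 0 for one class -> informative

scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]  # most informative feature first
```

On the real data this kind of ranking is what produced the upper/lower feature groups reported in the Results section; SVMAttributeEval ranks by a linear SVM's weights instead and may order the same features differently.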
Machine learning tests
Classification
Objectives
Arousal classification based on rhythm
Dependencies between different window lengths and classification results
Algorithms used
Artificial neural networks
Logistic regression
K Nearest Neighbors
WEKA software
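The three classifier families have close scikit-learn counterparts, so the comparison can be sketched as below. The data here is synthetic (`make_classification` with the corpus's 24/143 class imbalance); the real runs used the 26 rhythm features, once per window length, in WEKA:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for one window length: 167 sounds, 26 features,
# imbalanced classes as in the corpus
X, y = make_classification(n_samples=167, n_features=26,
                           weights=[143 / 167], random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                         random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}

# Mean 10-fold cross-validated accuracy per classifier
accuracy = {}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    accuracy[name] = cross_val_score(pipe, X, y, cv=10).mean()
```

Repeating this once per window length (0.8–2.0 s) yields the accuracy-versus-window-length dependencies examined in the Results section.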
Feature evaluation
Feature Evaluation
General results
Two groups of features formed, 13 features in each group
An upper and a lower group
Ranking within each group not always the same
Upper and lower group membership constant for all window lengths
Feature evaluation
Feature Evaluation
Upper group
Beatspectrum std
Event density std
Onsets gradient
Fluctuation kurtosis
Beatspectrum gradient
Pulse clarity std
Fluctuation mean
Fluctuation std
Fluctuation skewness
Onsets skewness
Pulse clarity kurtosis
Event density kurtosis
Onsets mean
Classification
General results
Features relatively uncorrelated
Variations across different window lengths
Accuracy results
Highest accuracy score: 88.37%
Window length: 1.0 second
Algorithm utilised: Logistic regression
Lowest accuracy score: 71.26%
Window length: 1.4 seconds
Algorithm utilised: Artificial neural network
Feature evaluation
Features
Most informative features:
Rhythm’s periodicity in the auditory channel (fluctuation)
Onsets
Event density
Beat spectrum
Pulse clarity
Independent from semantic content
Classification results
Algorithms related
LR’s minimum score was 81.44%
KNN’s minimum score was 82.05%
Results related
Sound events’ rhythm affects arousal
Thus, sound’s rhythm in general affects arousal
Future work
Features & Dimensions
Other features related to arousal
Connection of features with valence
Thank you!

  • 9. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Everyday life Everyday life Stimuli Many stimuli, e.g. Visual Aural Structured form of sound General sound Interaction with stimuli Reactions Emotions felt
  • 11. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions Music and emotions Music Structured form of sound Primarily used to mimic and extend voice characteristics Enhancing emotion(s) conveyance Relation to emotions through (but not only) MIR and MER Music Information Retrieval Music Emotion Recognition
  • 17. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions Music and emotions (cont’d) Research results & Applications Accuracy results up to 85% In large music databases Opposed to typical “Artist/Genre/Year” classification Categorisation according to emotion Retrieval based on emotion Preliminary applications for synthesis of music based on emotion
  • 23. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions From music to sound events Sound events Non-structured/general form of sound Additional information regarding: Source attributes Environment attributes Sound producing mechanism attributes Present almost any time, if not always, in everyday life
  • 29. Introduction Objectives Experimental Sequence Results Discussion & Conclusions Future work Sound and emotions From music to sound events (cont’d) Sound events Used in many applications: Audio interactions/interfaces Video games Soundscapes Artificial acoustic environments Trigger reactions Elicit emotions
  • 35. 1. Introduction Everyday life Sound and emotions 2. Objectives 3. Experimental Sequence Experimental Procedure’s Layout Sound corpus & emotional model Processing of Sounds Machine learning tests 4. Results Feature evaluation Classification 5. Discussion & Conclusions 6. Feature work
  • 36. Objectives of the current study. Premises: aural stimuli can elicit emotions; music is a structured form of sound, whereas sound events are not; music's rhythm is arousing. Research question: is the rhythm of sound events arousing too?
  • 41. Experimental Procedure's Layout. Emotional model selection; annotated sound corpus collection; processing of sounds (pre-processing, feature extraction); machine learning tests.
  • 47. Sound corpus & emotional model: General characteristics. Sound corpus: the International Affective Digitized Sounds (IADS); 167 sounds; duration: 6 seconds each; variable sampling frequency. Emotional model: a continuous model; sounds annotated on Arousal-Valence-Dominance, using the Self-Assessment Manikin (S.A.M.).
  • 53. Processing of Sounds: Pre-processing. General procedure: normalisation; segmentation/windowing; clustering into two classes (Class A: 24 members; Class B: 143 members). Segmentation/windowing: different window lengths, ranging from 0.8 to 2.0 seconds with a 0.2-second increment.
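The normalisation and segmentation steps described above can be sketched in a few lines. This is a minimal illustration over plain sample lists, not the study's actual pipeline; the function names `normalise` and `segment` and the toy signal are illustrative assumptions.

```python
# Sketch of the pre-processing step: peak-normalise a sound and split it
# into fixed-length windows of 0.8 s to 2.0 s in 0.2 s increments.

def normalise(samples):
    """Peak-normalise a list of samples to the range [-1, 1]."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

def segment(samples, sample_rate, window_sec):
    """Split samples into non-overlapping full windows of window_sec seconds."""
    win = int(window_sec * sample_rate)
    return [samples[i:i + win] for i in range(0, len(samples) - win + 1, win)]

if __name__ == "__main__":
    sr = 1000                         # toy sampling rate (Hz)
    sound = [0.5, -2.0, 1.0] * 2000   # 6-second toy signal, as in IADS
    sound = normalise(sound)
    # window lengths from 0.8 s to 2.0 s, 0.2 s increment
    for w in [0.8 + 0.2 * k for k in range(7)]:
        print(round(w, 1), len(segment(sound, sr, w)))
```

Trailing samples that do not fill a whole window are dropped here; how the study handled the remainder is not stated in the slides.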
  • 59. Processing of Sounds: Feature Extraction. Only rhythm-related features were extracted, with statistical measures computed for each segment, giving a set of 26 features in total. Extracted features: beat spectrum, onsets, tempo, fluctuation, event density, pulse clarity. Statistical measures: mean, standard deviation, gradient, kurtosis, skewness.
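The statistical measures listed above can be illustrated over a per-frame rhythm-feature curve (e.g. a fluctuation or event-density envelope). The exact definitions below (population standard deviation, gradient as the mean first difference, excess kurtosis) are assumptions for illustration, not taken from the study.

```python
# Compute the per-segment statistics named on the slide over a feature curve.
import math

def feature_stats(curve):
    n = len(curve)
    mean = sum(curve) / n
    var = sum((x - mean) ** 2 for x in curve) / n
    std = math.sqrt(var)
    # gradient: average slope between consecutive frames (assumed definition)
    grad = sum(b - a for a, b in zip(curve, curve[1:])) / (n - 1)
    # third and fourth central moments for skewness and excess kurtosis
    m3 = sum((x - mean) ** 3 for x in curve) / n
    m4 = sum((x - mean) ** 4 for x in curve) / n
    skew = m3 / std ** 3 if std else 0.0
    kurt = m4 / std ** 4 - 3 if std else 0.0
    return {"mean": mean, "std": std, "gradient": grad,
            "skewness": skew, "kurtosis": kurt}

if __name__ == "__main__":
    curve = [0.1, 0.4, 0.2, 0.9, 0.3, 0.8]  # toy per-frame feature values
    for name, value in feature_stats(curve).items():
        print(f"{name}: {value:.3f}")
```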
  • 64. Machine learning tests: Feature Evaluation. Objectives: identify the most valuable features for arousal detection, and the dependencies between the selected features and the different window lengths. Algorithms used: InfoGainAttributeEval and SVMAttributeEval (WEKA software).
  • 67. Machine learning tests: Classification. Objectives: arousal classification based on rhythm, and the dependencies between the different window lengths and the classification results. Algorithms used: artificial neural networks, logistic regression, and k-nearest neighbours (WEKA software).
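The classification tests were run in WEKA; as a stand-in, a hand-rolled k-nearest-neighbours classifier with leave-one-out accuracy sketches the same evaluation loop. The data points here are toy two-dimensional vectors, not the IADS rhythm features.

```python
# Minimal k-NN classification with leave-one-out accuracy (illustrative only).
import math

def knn_predict(train, labels, query, k=3):
    """Majority vote among the k training points closest to query."""
    dists = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = [y for _, y in dists[:k]]
    return max(set(votes), key=votes.count)

def leave_one_out_accuracy(data, labels, k=3):
    """Classify each point using all the others as training data."""
    hits = 0
    for i in range(len(data)):
        train = data[:i] + data[i + 1:]
        tlabs = labels[:i] + labels[i + 1:]
        hits += knn_predict(train, tlabs, data[i], k) == labels[i]
    return hits / len(data)

if __name__ == "__main__":
    # toy two-class data: "high arousal" (A) vs "low arousal" (B)
    data = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),
            (0.9, 0.8), (0.8, 0.9), (0.85, 0.75)]
    labels = ["A", "A", "A", "B", "B", "B"]
    print(f"LOO accuracy: {leave_one_out_accuracy(data, labels):.2%}")
```

The study's reported scores come from WEKA's own implementations (ANN, logistic regression, k-NN); this sketch only mirrors the evaluation idea.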
  • 71. Feature Evaluation: General results. The features formed two groups of 13 features each, an upper and a lower group. The rank within each group was not always the same, but the membership of the upper and lower groups was constant for all window lengths.
  • 75. Feature Evaluation: Upper group. Beat spectrum std; event density std; onsets gradient; fluctuation kurtosis; beat spectrum gradient; pulse clarity std; fluctuation mean; fluctuation std; fluctuation skewness; onsets skewness; pulse clarity kurtosis; event density kurtosis; onsets mean.
  • 81. Classification: General results. The features were relatively uncorrelated, with variations across the different window lengths. Accuracy results: highest score 88.37% (window length: 1.0 second; algorithm: logistic regression); lowest score 71.26% (window length: 1.4 seconds; algorithm: artificial neural network).
  • 83. Feature evaluation: Features. The most informative features were rhythm's periodicity in the auditory channel (fluctuation), onsets, event density, beat spectrum, and pulse clarity. These features are independent of semantic content.
  • 89. Classification results. Algorithm-related: LR's minimum score was 81.44%; KNN's minimum score was 82.05%. Results-related: sound events' rhythm affects arousal; thus, sound rhythm in general affects arousal.
  • 94. Future work: Features & Dimensions. Other features related to arousal; connection of features with valence.