Heart Sounds Segmentation and Classification
Li Sun
Lyu Yaopengfei
Lei Zhang
Jie Cao
Cheng Fang
Introduction
Task: detect heart disease from heart sound audio. (Clinically meaningful segments: heart muscle contraction S1 and relaxation S2.)
• Segmentation: the heart sound needs to be segmented into its components, i.e. find the locations and intervals of S1 and S2.
• Classification: use machine learning techniques instead of expert medical diagnosis.
About the Data: from the Classifying Heart Sounds PASCAL Challenge [1]
Four categories:
• Normal: contains the normal human heartbeat sound, with only S1 and S2.
• Murmur: there is noise between S1 and S2 or between S2 and S1; murmurs can be a symptom of many heart disorders.
• Extra Heart Sound: there is an additional sound between S1 and S2 or between S2 and S1; in some situations it is an important sign of disease.
• Artifact: contains a wide range of different sounds.
Recordings have different lengths, between 1 and 10 seconds.
Classification pipeline
1. Signal preprocessing: de-noise by wavelet decomposition; represent the signal by its Shannon energy.
2. Feature selection: time-domain and frequency-domain features.
3. Classification: SVM, random forest.
Challenges: 1. noisy background; 2. identifying S1 and S2; 3. selecting appropriate features.
Signal Preprocessing
De-noise by wavelet decomposition
Represent the signal by its Shannon energy
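A minimal sketch of this preprocessing stage, assuming PyWavelets and NumPy; the wavelet, decomposition level, threshold rule, and frame/hop sizes are illustrative choices, not the settings used in the slides:

```python
import numpy as np
import pywt

def denoise_wavelet(signal, wavelet="db6", level=5):
    """Soft-threshold the detail coefficients of a wavelet decomposition."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def shannon_energy_envelope(signal, frame=441, hop=220):
    """Average Shannon energy per frame: -mean(x^2 * log(x^2))."""
    x = signal / (np.max(np.abs(signal)) + 1e-12)   # normalise to [-1, 1]
    env = []
    for start in range(0, len(x) - frame, hop):
        seg = x[start:start + frame] ** 2
        env.append(-np.mean(seg * np.log(seg + 1e-12)))
    return np.asarray(env)
```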
Time Domain Features
We use k-means to cluster the Y-axis projection (the amplitude values) of the de-noised signal.
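One possible reading of this step, sketched with scikit-learn's KMeans: cluster the envelope amplitudes into two groups and take the midpoint of the two cluster centres as a peak/background threshold (the thresholding rule itself is our assumption):

```python
import numpy as np
from sklearn.cluster import KMeans

def amplitude_threshold(envelope):
    """Two-cluster k-means over the envelope amplitudes; the midpoint of the
    cluster centres separates S1/S2 lobes from the background (assumed rule)."""
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(envelope.reshape(-1, 1))
    return float(km.cluster_centers_.mean())

# usage: mask = envelope > amplitude_threshold(envelope)
```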
Extract Cardiac Cycle by Autocorrelation
A cardiac cycle is one whole period of the heart sound, covering both the S1-S2 and the S2-S1 intervals.
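A rough sketch of estimating the cycle length from the envelope's autocorrelation; fs_env is the envelope sampling rate, and the 0.3-1.5 s lag window is an assumed plausible range for one cardiac cycle:

```python
import numpy as np

def estimate_cycle_length(envelope, fs_env):
    """Return the cardiac-cycle length in seconds from the first prominent
    autocorrelation peak inside an assumed physiological lag window."""
    x = envelope - envelope.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    lo, hi = int(0.3 * fs_env), int(1.5 * fs_env)       # assumed 0.3-1.5 s per cycle
    lag = lo + np.argmax(ac[lo:hi])
    return lag / fs_env
```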
Time Domain Features
Extract S1 and S2 locations by Salman's method [3]
1. Find local minima and maxima by checking where the derivative equals 0.
2. Identify peaks that follow the sequence (min, max, min).
3. Compute the triangle area formed by each (min, max, min) triple.
4. Threshold these areas.
5. Keep only the 2 or 3 remaining peaks in each cardiac cycle.
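A sketch of steps 1-4 above as we read them (not a verified reimplementation of Salman et al. [3]); the area-threshold fraction is an assumed parameter, and step 5 would additionally use the cycle length estimated from the autocorrelation:

```python
import numpy as np

def candidate_s1_s2_peaks(envelope, area_frac=0.1):
    """Steps 1-4 of the peak-picking procedure; step 5 (keeping only the 2-3
    largest peaks per cardiac cycle) is omitted here."""
    d = np.diff(envelope)
    extrema = np.where(np.diff(np.sign(d)) != 0)[0] + 1          # step 1: derivative sign changes
    peaks, areas = [], []
    for i in range(1, len(extrema) - 1):
        a, b, c = extrema[i - 1], extrema[i], extrema[i + 1]
        if envelope[b] > envelope[a] and envelope[b] > envelope[c]:  # step 2: (min, max, min)
            # Step 3: shoelace area of the triangle (a, env[a]), (b, env[b]), (c, env[c]).
            area = 0.5 * abs((b - a) * (envelope[c] - envelope[a])
                             - (c - a) * (envelope[b] - envelope[a]))
            peaks.append(b)
            areas.append(area)
    if not peaks:
        return np.array([], dtype=int)
    areas = np.asarray(areas)
    keep = areas > area_frac * areas.max()                        # step 4: threshold the areas
    return np.asarray(peaks)[keep]
```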
Representing these features
1. Obtain the time differences between S1
and S2 locations.
2. Due to the differences all lies within 0-
1s.
3. Using a 20 bins histogram to
representing it by a 1*20 vector within
range 0-1s.
4. Plus a “mean”and “standard deviation”
of the time differences in the end of the
vector to make it more sufficient as a
1*22 vector.
Time Domain Features
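A small sketch of building the 1*22 vector described above:

```python
import numpy as np

def interval_feature(time_diffs):
    """time_diffs: S1-S2 / S2-S1 interval lengths in seconds (all within 0-1 s)."""
    hist, _ = np.histogram(time_diffs, bins=20, range=(0.0, 1.0))     # 1*20 histogram
    return np.concatenate([hist.astype(float),
                           [np.mean(time_diffs), np.std(time_diffs)]])  # -> 1*22 vector
```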
Frequency domain segmentation
• Heart sounds of the same class show similar frequency-spectrum distributions.
• It therefore makes sense that the frequency distribution could be a feature for classification.
Frequency domain segmentation
1. Apply the Fourier transform to the de-noised signal.
2. Segment the spectrum of each heart sound into 20 parts.
3. Compute the sum of each part as one feature dimension.
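A sketch of this frequency-domain feature; the final normalisation is our own addition so that recordings of different lengths remain comparable:

```python
import numpy as np

def spectral_band_sums(signal, n_bands=20):
    mag = np.abs(np.fft.rfft(signal))               # magnitude spectrum
    bands = np.array_split(mag, n_bands)            # 20 equal-width frequency bands
    feat = np.array([band.sum() for band in bands]) # sum of each band = one dimension
    return feat / (feat.sum() + 1e-12)              # normalise (assumed)
```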
Bag of Visual Words (BoVW) method
Time domain features:
1. Shannon energy (time)
2. Histogram (time)
Classifier: SVM (one-vs-one)
BoVW:
1. Divide the features into N segments of equal length.
2. Construct a dictionary (codebook) from the segments.
3. Use this dictionary to describe each sample.
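A sketch of the BoVW step using k-means as the dictionary learner; the segment length and dictionary size are illustrative (the slides note that dictionary size matters little):

```python
import numpy as np
from sklearn.cluster import KMeans

def bovw_features(sequences, seg_len=20, dict_size=32):
    # 1. Divide each per-recording feature sequence into equal-length segments.
    segments, owners = [], []
    for idx, seq in enumerate(sequences):
        for start in range(0, len(seq) - seg_len + 1, seg_len):
            segments.append(seq[start:start + seg_len])
            owners.append(idx)
    segments = np.asarray(segments)
    # 2. Construct the dictionary (codebook) by clustering the segments.
    km = KMeans(n_clusters=dict_size, n_init=10, random_state=0).fit(segments)
    # 3. Describe each sample as a normalised histogram of codeword counts.
    hists = np.zeros((len(sequences), dict_size))
    for idx, word in zip(owners, km.labels_):
        hists[idx, word] += 1
    return hists / (hists.sum(axis=1, keepdims=True) + 1e-12)
```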
Frequency and time domain features
1. De-noise the original signal by wavelet decomposition, then apply the Fourier transform. (frequency)
2. Shannon energy (time)
3. Histogram (time)
Classifier: SVM (one-vs-one)
Conclusions:
• Combining frequency- and time-domain information improves classifier performance only slightly.
• The choice of dictionary size has little effect on the results.
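A sketch of this combined-feature classifier with scikit-learn; SVC already uses a one-vs-one scheme for multiclass problems, and the train/test split is illustrative:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

def train_combined_svm(time_feats, freq_feats, labels):
    """Concatenate time- and frequency-domain features and fit a one-vs-one SVM."""
    X = np.hstack([time_feats, freq_feats])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3,
                                              stratify=labels, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, decision_function_shape="ovo").fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # held-out accuracy
```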
Results and conclusion
The best accuracy we obtained is about 70%-75%, using 10-dimensional combined features with a random forest classifier.
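A sketch of the reported best setting; how the combined features were reduced to 10 dimensions is not specified in the slides, so X10 is assumed to already hold them:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def evaluate_random_forest(X10, labels):
    """Mean cross-validated accuracy of a random forest on the (assumed)
    10-dimensional combined time + frequency features."""
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(rf, X10, labels, cv=5).mean()
```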
Thank you
References:
[1] P. Bentley, G. Nordehn, M. Coimbra, and S. Mannor, "The PASCAL Classifying Heart Sounds Challenge 2011 (CHSC2011) Results," http://www.peterjbentley.com/heartchallenge/index.html.
[2] Y. Deng and P. J. Bentley, "A robust heart sound segmentation and classification algorithm using wavelet decomposition and spectrogram," Extended Abstract in the First PASCAL ..., 2012.
[3] A. H. Salman, N. Ahmadi, R. Mengko, A. Z. R. Langi, and T. L. R. Mengko, "Automatic segmentation and detection of heart sound components S1, S2, S3 and S4," IEEE, 2015.
[4] D. Gradolewski and G. Redlarski, "Wavelet-based denoising method for real phonocardiography signal recorded by mobile devices in noisy environment," Computers in Biology and Medicine, vol. 52, pp. 119–129, Sep. 2014.
[5] H. Liang, S. Lukkarinen, and I. Hartimo, "Heart sound segmentation algorithm based on heart sound envelogram," in Computers in Cardiology 1997, IEEE, 1997, pp. 105–108.
[6] S. Debbal and F. Bereksi-Reguig, "Computerized heart sounds analysis," vol. 38, no. 2, pp. 263–280, Feb. 2008.
Editor's Notes
1. In some situations it is an important sign of disease.
2. Thanks xxx. In this part, I will introduce frequency domain segmentation, which we can also call frequency domain feature extraction. We believe that every sound consists of signals at different frequencies, and that heart sounds of the same class show similar frequency distributions, so frequency could be a feature for classification. The three parts of the picture show the frequency distributions of the three heart sound classes: extra heart sounds appear spread over the middle of the spectrum, murmurs are concentrated around 50 Hz, and normal sounds are concentrated around 75 Hz. So, how can we extract useful features from a series of frequency values?
3. Here, we segment the frequency axis into N parts: for example, 0-15 Hz as the first part, 15-30 Hz as the second part, and so on. The picture shows a murmur heart sound signal segmented into 20 parts. We then take the sum of each part as one dimension of the frequency feature, so in the end we obtain a 20-dimensional feature vector from one heart sound and use it for training. This seems reasonable.