
Analysis and Classification of ECG Signal using Neural Network


  1. 1. This final year project report is submitted to the Faculty of Engineering, Multimedia University, in partial fulfilment of the requirements for the Bachelor of Engineering. FACULTY OF ENGINEERING, MULTIMEDIA UNIVERSITY, APRIL 2010. ANALYSIS and CLASSIFICATION of ECG SIGNALS using NEURAL NETWORK, by LAM ZHENG YAN (1061108486), B.Eng.(Hons) Electronics Majoring in Nanotechnology, Session 2009/2010.
  2. 2. ANALYSIS and CLASSIFICATION of ECG SIGNALS using NEURAL NETWORK. LAM ZHENG YAN, 1061105869, 2009/2010. FACULTY OF ENGINEERING, MULTIMEDIA UNIVERSITY, APRIL 2010.
  3. 3. The copyright of this report belongs to the author under the terms of the Copyright Act 1987 as qualified by Regulation 4(1) of the Multimedia University Intellectual Property Regulations. Due acknowledgement shall always be made of the use of any material contained in, or derived from, this report.
  4. 4. DECLARATION I hereby declare that this work has been done by myself and no portion of the work contained in this report has been submitted in support of any application for any other degree or qualification of this or any other university or institute of learning. I also declare that pursuant to the provisions of the Copyright Act 1987, I have not engaged in any unauthorised act of copying or reproducing or attempt to copy / reproduce or cause to copy / reproduce or permit the copying / reproducing or the sharing and / or downloading of any copyrighted material or an attempt to do so whether by use of the University's facilities or outside networks / facilities, whether in hard copy or soft copy format, of any material protected under the provisions of sections 3 and 7 of the Act whether for payment or otherwise save as specifically provided for therein. This shall include but not be limited to any lecture notes, course packs, thesis, text books, exam questions, any works of authorship fixed in any tangible medium of expression whether provided by the University or otherwise. I hereby further declare that in the event of any infringement of the provisions of the Act whether knowingly or unknowingly the University shall not be liable for the same in any manner whatsoever, and I undertake to indemnify and keep indemnified the University against all such claims and actions. Signature: ............................................. Name: LAM ZHENG YAN Student ID: 1061108486 Date: APRIL 2010
  5. 5. Acknowledgements I would like to express my most sincere thanks to my project supervisor, Dr. Khazaimatol Shima Subari, for her constant guidance, endless support and encouragement. Her vast technical knowledge, unique insight and experience have inspired and motivated me during my research on this project. Special thanks to the research officers, Mr. Justin Leo Cheang Loong and Mr. Muhammad Kamil bin Abdullah, for their technical assistance with the Matlab computing software. I extend my thanks to all my course mates and friends for their fruitful discussions and suggestions on this project. In particular, I'm grateful to Veren Teoh Chin Lai, Yeong Kai Bin, Faliq Rizal Madzri and Noorain Ismail for sharing their knowledge with me. On top of that, special thanks to Ko Shao Peng for helping me to proofread this thesis. Finally, I would like to thank my parents for their care and love, which kept me going strong throughout the entire process of doing this project. Their words of wisdom and encouragement have provided me with optimism and hope in life.
  6. 6. Abstract This project presents the study and analysis of the electrocardiogram (ECG) classified by a neural network, forming a biometric system. The first part of the project describes the basic theory of the ECG and sets up the ECG signal acquisition experiment. Three test subjects were involved in this first part, which was used to refine the experimental procedure for a larger number of test subjects. Preprocessing of the ECG signal, such as filtering, segmentation and feature extraction, is carried out before classification. A moving average filter is used to remove noise from the ECG. The signal frequency spectrum showed that the characteristic ECG content lies in the range 0 Hz to 45 Hz, and the IIR filter was therefore designed with a cutoff frequency of 45 Hz. In the second part of the project, two experiments were set up: experiment A, in which 11 test subjects each recorded 10 sets of ECG data in one day, and experiment B, in which 4 subjects each recorded 30 sets of ECG data over three days. The ECG is segmented into 15 pieces, one per period of the signal, and features are extracted from each segment by wavelet decomposition. A systematic and flexible data structure was designed to support the analysis and discussion. A feed-forward backpropagation neural network is used to classify the test subjects. The recognition rates of experiments A and B are 80.89% and 93.75% respectively. We conclude that a larger set of training data gives a better recognition rate and that test subjects must follow certain rules to provide better-quality ECG signals.
  7. 7. Contents
     DECLARATION
     Acknowledgements
     Abstract
     1 Project Overview
       1.1 Motivation and Objective
       1.2 Scope of Project
       1.3 Thesis Overview
     2 Introduction
       2.1 Heart Structure
       2.2 Electrocardiogram (ECG)
       2.3 ECG Signal Acquisition
       2.4 Characteristics of ECG
       2.5 Artifacts of ECG
       2.6 Summary
  8. 8. 3 Classification
       3.1 Concepts of Pattern Recognition
       3.2 Brief History of Neural Networks
       3.3 Neural Network-based Classifier
       3.4 Firing Rules
       3.5 Architecture of Neural Networks
         3.5.1 Feed-forward Network
         3.5.2 Feedback Network
         3.5.3 Perceptrons
       3.6 Applications of Neural Network
       3.7 Summary
     4 Experimental Setup and Methodology
       4.1 Experimental Layout
       4.2 Test Subject
       4.3 Filtering Process
       4.4 Features Extraction
       4.5 Methodology
         4.5.1 Data Structure
         4.5.2 Classifier
       4.6 Summary
  9. 9. 5 Result and Discussion
       5.1 Result of Experiment A
       5.2 Result of Experiment B
       5.3 Discussion
       5.4 Conclusion
       5.5 Recommendation
     Appendix A: Experiment Setup
       A.1 Procedure of ECG Acquisition
       A.2 Experiment Result
       A.3 Analysis
       A.4 Summary
     Appendix B: Program Validation
       B.1 Data Construction
       B.2 Filtering Process
       B.3 Segmentation
       B.4 Features Extraction
       B.5 Classification
     Appendix C: Source Code
     Bibliography
  10. 10. List of Figures
     2.1 Structure of the heart
     2.2 Propagation of ECG
     2.3 The leads and Einthoven's triangle
     2.4 The location of chest electrodes
     2.5 A sample of an ECG signal
     3.1 A neuron unit with three inputs and one output
     3.2 Simple neural network structure
     3.3 Feed-forward neural network
     3.4 Feedback neural network
     3.5 Perceptron network
     4.1 Layout of laboratory
     4.2 Noise in ECG signal
     4.3 Frequency response of an IIR filter
     4.4 Wavelet transform
     4.5 Features of subject: MH
  11. 11. 4.6 Features of subject: YN
     4.7 The algorithm of the biometrics system
     4.8 Matrix of data structure in 2D
     4.9 Matrix of data structure in 3D
     4.10 Wavelet decomposition level 4
     4.11 Complete data structure in 3D
     4.12 Complete data structure for classification
     4.13 Target data matrix
     4.14 Structure of neural network
     5.1 Confusion matrix of experiment A
     5.2 Error rate of experiment A
     5.3 Error rate of experiment B
     5.4 Feature comparison from sets 5, 6, 10
     A.1 Default electrode configuration
     A.2 MH ECG signal with noise
     A.3 FQ ECG signal with noise
     A.4 YN ECG signal with noise
     A.5 ECG signal without noise
     A.6 ECG signal frequency spectrum
     A.7 Two different ECG comparison
     B.1 Experiment A data matrix
  12. 12. B.2 Experiment B data matrix
     B.3 Comparison of ECG with and without noise - FQ
     B.4 Comparison of ECG with and without noise - AR
     B.5 ECG before segmentation
     B.6 ECG after segmentation
     B.7 ECG after wavelet decomposition
     B.8 15 peaks of feature coefficients
     B.9 1 peak of feature coefficients
     B.10 11 subjects' feature coefficients
     B.11 Confusion plot for the trained neural network - correct data
     B.12 Confusion plot for the trained neural network - incorrect data
     B.13 Features of subject AR - set 6
     B.14 Features of subject AR - set 5
     B.15 Features of subject AR - set 5
     B.16 Features of subject AA - set 6
     B.17 Features of subject AA - set 5
     B.18 Features of subject AA - set 5
     C.1 The program tree of the biometrics system
  13. 13. List of Tables
     3.1 Truth table of firing rules
     3.2 Truth table after training by the firing rule
     4.1 Data acquisition details from test subjects
     5.1 Table of subjects' names for experiment A
     5.2 Recognition rate for experiment A
     5.3 Table of subjects' names for experiment B
     5.4 Recognition rate of experiment B
     A.1 Summary of test subject details
     A.2 Dissimilarity of ECG peaks
  14. 14. Chapter 1 Project Overview This thesis studies the ECG signal and its use as a tool for classifying people. This is achieved by extracting features of the signal following Cvetkovic's paper [UCC07] and classifying those features with a neural network algorithm [AM01]. In this chapter, we describe the motivation and objectives of a biometric system based on the ECG signal. 1.1 Motivation and Objective The electrocardiogram (ECG) has recently been used to recognize a person. The applications of the ECG are mostly in biomedical detection [FC08]: the ECG of a person can reflect a health condition or disease. A living person's ECG is expected to be unique, which is an advantage compared with other biometric signals such as the fingerprint or the pattern of a retina, so the ECG can serve as a biometric key; using the ECG in a biometric system is a novel idea. There is a large demand for security systems that can detect whether a person is in a database. Some places, such as airports or restricted areas, allow only a small number of people to enter, and the ECG might be an efficient key for them. Beyond controlling access for a small group, a blacklist of criminals or terrorists could be recognized without their fingerprints or faces. Fingerprints
  15. 15. and faces, by contrast, can be removed or surgically altered. The ECG therefore offers advantages such as universality, uniqueness, acceptability and resistance to circumvention. Hence, the objectives of this project include: 1. To study and analyse the electrocardiogram. 2. To record ECG signals from people. 3. To extract features from the electrocardiogram. 4. To classify ECG signals. 1.2 Scope of Project This project is divided into two parts. The first part is theoretical research on using the ECG signal for human classification; it deals primarily with the signal acquisition method and the pre-processing methods. Pre-processing such as filtering, segmentation and feature extraction is done before classification. Experimental testing was carried out to establish a standard procedure before signal acquisition, and the filtering process was completed in this first part. In the second part of the project, the data acquisition experiment was carried out following the procedure established in the first part. Feature extraction was done by the wavelet decomposition method in MatLab, and classification used a feed-forward back-propagation neural network to classify the ECG. The quality of the ECG signal is investigated when low accuracy occurs, and the features of the ECG are also studied.
  16. 16. 1.3 Thesis Overview Chapter 1 provides a brief introduction to this project, which is to study and design a biometric system. Chapter 2 explains the characteristics of the electrocardiogram and the ECG signal acquisition from the standard experiment; artifacts of the ECG as reported in other literature are also reviewed. Chapter 3 introduces the concept of the neural network, explains its working principles and surveys its applications. Chapter 4 describes the experimental setup for data acquisition and the details of all experiment subjects; the pre-processing steps, such as filtering and feature extraction, are explained in detail, together with the data structure constructed for this project and the neural network classifier with its input and training data sets. Chapter 5 presents the results and discussion, summarizes the whole project and provides some recommendations for future work.
  17. 17. Chapter 2 Introduction The electrocardiogram (ECG) signal has been used in many medical applications for the past few decades, and there are several models and novel applications that use the ECG for biometric recognition. In this chapter, we discuss the structure of the heart, the ECG signal and the features of the ECG signal. 2.1 Heart Structure The heart is a powerful muscular pump; its muscular wall, the myocardium, is composed of cardiac muscle fibers that contract and produce a pumping action. The heart is a little larger than your fist and is located at about the left-center of your chest. It consists of two separate pumps that continuously send blood throughout the body, carrying nutrients and oxygen. The right side of the heart receives blood low in oxygen, while the left side receives blood that has been oxygenated by the lungs; this blood is then pumped out into the aorta and to all parts of the body. The heart diagram in Figure 2.1 and the information that follows give a better understanding of the heart structure. The heart is divided into several components, described below:
  18. 18. Figure 2.1: The components of the heart. The arrows show the direction of blood flow (Robert Hall from CMA, 1989). Right Atrium: The right atrium is larger than the left atrium. Two major veins return blood to it from all parts of the body: the superior vena cava, which returns de-oxygenated blood from the upper part of the body, and the inferior vena cava, which returns blood from the lower part of the body. After the blood is collected in the right atrium it is pumped into the right ventricle through the tricuspid valve, also known as the three-leaf valve [Ran02]. Left Atrium: The left atrium receives blood from four pulmonary veins. The blood received from the lungs has been oxygenated. The oxygenated blood collected in the left atrium is then pumped into the left ventricle through the mitral valve [Ran02]. Right Ventricle: The right ventricle receives blood from the right atrium. When the heart contracts the blood is forced out through the pulmonary valve into the pulmonary artery. The pulmonary valve is a three-flap valve that stops the back flow of blood. The walls of the right ventricle are a little thicker than those of the right atrium [Ran02]. Left Ventricle: The chamber of the left ventricle has walls three times the thickness of the right ventricle's. This is important because the oxygenated blood it receives from the left atrium has to be pumped throughout the body. The mitral valve closes and the blood is collected in the left ventricle; the closing of the mitral valve stops the back flow of blood. When the heart muscle contracts the blood is forced through the aortic valve, which has the same features as the pulmonary valve, and passes into the aorta [Ran02]. Aorta: The aorta is the largest blood vessel in the body, with an inner diameter of about 1 inch. It carries oxygenated blood to every other part of the body and receives its blood from the left ventricle [Ran02]. Septum: The septum is a partition that separates the right and left sides of the heart. There are two separate regions: the interatrial septum, which separates the atria, and the interventricular septum, which separates the ventricles. An opening in the interatrial septum (the foramen ovale) is present during the fetal period and closes at about the time of birth [Ran02].
  19. 19. 2.2 Electrocardiogram (ECG) The ECG is the electrical manifestation of the contractile activity of the heart. It can be recorded by surface electrodes on the limbs or chest and is the most commonly known and used biomedical signal. The rhythm of the heart in terms of beats per minute (bpm) may be easily estimated by counting the readily identifiable waves [Ran02]. The electrical system of the heart is a cycle of electrical current flow. Coordinated electrical events and a specialized conduction system, intrinsic and unique to the heart, play major roles in its rhythmic contractile activity. The sinoatrial (SA) node is the basic, natural cardiac pacemaker that triggers its own train of action potentials [Ran02]. Figure 2.2 shows the propagation of the excitation through the heart (a to d); the sequence of events and waves in a cardiac cycle is as follows:
  20. 20. Figure 2.2: Propagation of the ECG through the heart [Rus76].
  21. 21. 1. The SA node fires. 2. Electrical activity is propagated through the atrial musculature at comparatively low rates, causing slow-moving depolarization (contraction) of the atria. This results in the P wave of the ECG. Because of the slow contraction of the atria and their small size, the P wave is a slow, low-amplitude wave, with an amplitude of about 0.1 to 0.2 mV and a duration of about 60 to 80 ms [Ran02]. 3. The excitation wave faces a propagation delay at the atrio-ventricular (AV) node, which results in a normally iso-electric segment of about 60-80 ms after the P wave in the ECG, known as the PQ segment. The pause assists in completing the transfer of blood from the atria to the ventricles [Ran02]. 4. The wave of stimulus spreads rapidly from the apex of the heart upwards, causing rapid depolarization (contraction) of the ventricles. This results in the QRS wave of the ECG, a sharp biphasic or triphasic wave of about 1 mV amplitude and 80 ms duration [Ran02]. 5. Ventricular muscle cells possess a relatively long action potential duration, which causes a normally iso-electric segment of about 100-200 ms after the QRS, known as the ST segment [Ran02]. 6. Repolarization (relaxation) of the ventricles causes the slow T wave, with an amplitude of 0.1 to 0.3 mV and a duration of 120 to 160 ms [Ran02]. 7. The PQRST wave is explained in more detail in section 2.4.
  22. 22. Figure 2.3: The leads and Einthoven's triangle. Figure 2.4: Location of chest electrodes in 4th and 5th intercostal spaces. V1: right 4th intercostal space. V2: left 4th intercostal space. V3: halfway between V2 and V4. V4: left 5th intercostal space, mid-clavicular line. V5: horizontal to V4, anterior axillary line. V6: horizontal to V5, mid-axillary line (cardionet, 2009).
  23. 23. 2.3 ECG Signal Acquisition The standard 12-channel ECG is obtained using four limb leads and chest leads in six positions. The right leg is used for the reference electrode, while the left arm, right arm and left leg are used to obtain leads I, II and III. A combined reference known as Wilson's central terminal is formed by combining the left arm, right arm and left leg leads and is used as the reference for the chest leads. The augmented limb leads, known as aVR, aVL and aVF (aV for augmented lead, R for the right arm, L for the left arm and F for the left foot), are obtained using Wilson's central terminal, minus the exploring limb lead, as the reference. The hypothetical equilateral triangle formed by leads I, II and III is known as Einthoven's triangle, see Figure 2.3. The heart is assumed to be placed at the center of the triangle. The six limb leads measure projections of the 3D cardiac electrical vector onto the axes illustrated in the figure; these axes sample the 0 degree to 180 degree range in steps of approximately 30 degrees, and the projections facilitate viewing and analysis of the electrical activity of the heart from different perspectives in the frontal plane. The six chest leads (V1 to V6) are obtained from six standardized positions on the chest with Wilson's central terminal as the reference, see Figure 2.4. The V1 and V2 leads are placed at the fourth intercostal space just to the right and left of the sternum. The V3 lead is placed half-way between the V2 and V4 leads. The V5 and V6 leads are located at the same level as the V4 lead, but at the anterior axillary line and the mid-axillary line, respectively. The six chest leads permit viewing of the cardiac electrical vector from different orientations: V5 and V6 are most sensitive to left ventricular activity, V3 and V4 depict the septal activity best, and V1 and V2 reflect well the activity in the right half of the heart. The experiment to record the ECG signal takes place in the laboratory; the measurement details are presented in Appendix A. Four subjects were recorded with the same experimental procedure, and their signals are also shown in Appendix A; this constitutes the first part of the experiment in this project.
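The lead definitions above can be written out directly. The recorder used in this project forms these leads internally, so the following is only an illustrative sketch of the standard relations; RA, LA, LL, C1 and C6 are placeholder electrode potentials, not variables from this project's code:

```matlab
% Placeholder electrode potentials (random vectors stand in for recorded data).
N  = 1000;
RA = randn(N,1);  LA = randn(N,1);  LL = randn(N,1);   % limb electrodes
C1 = randn(N,1);  C6 = randn(N,1);                     % two of the six chest electrodes

I   = LA - RA;                 % limb lead I
II  = LL - RA;                 % limb lead II
III = LL - LA;                 % limb lead III
aVR = RA - (LA + LL)/2;        % augmented limb leads
aVL = LA - (RA + LL)/2;
aVF = LL - (RA + LA)/2;
WCT = (RA + LA + LL)/3;        % Wilson's central terminal
V1  = C1 - WCT;                % chest (precordial) leads are taken against WCT
V6  = C6 - WCT;
```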
  24. 24. Figure 2.5: A sample ECG signal showing the various peaks in the ECG (virtualmedicalcentre.com). 2.4 Characteristics of ECG A full cycle of the ECG signal is divided into several parts, shown in Figure 2.5: P wave: Contraction of the atria is triggered by the SA-node impulse. The atria do not possess any specialized conduction nerves as the ventricles do; as such, contraction of the atrial muscles takes place in a slow squeezing manner, with the excitation stimulus being propagated by the muscle cells. The P wave is the epoch related to the event of atrial contraction. PQ segment: The AV node provides a delay to facilitate completion of atrial contraction and transfer of blood to the ventricles before ventricular contraction is initiated. The resulting PQ segment, of about 80 ms duration, is thus a non-event; however, it is important in recognizing the base-line as the interval is almost always iso-electric. QRS wave: The specialized system of Purkinje fibers stimulates contraction of the ventricular muscles in a rapid sequence from the apex upwards. The almost-
  25. 25. simultaneous contraction of the entire ventricular musculature results in a sharp and tall QRS complex of about 1 mV amplitude and 80 to 100 ms duration. The event of ventricular contraction is represented by the QRS epoch. ST segment: The normally flat (iso-electric) ST segment is related to the plateau in the action potential of the left ventricular muscle cells. The ST segment may also be termed a non-event. However, myocardial ischemia or infarction can change the action potentials of a portion of the left ventricular musculature and cause the ST segment to be depressed or elevated. The PQ segment serves as a useful reference when the iso-electric nature of the ST segment needs to be verified. T wave: The T wave appears in a normal ECG signal as a discrete wave separated from the QRS by an iso-electric ST segment. It relates to the phase of the action potential of ventricular muscle cells when the potential returns from the plateau of the depolarized state to the resting potential through the process of repolarization. The T wave is commonly referred to as the wave corresponding to ventricular relaxation. 2.5 Artifacts of ECG The ECG signal contains information about the condition of the heart. However, it also contains noise: because of the sensitivity of the electrodes, any movement of the leads or clips adds noise to the ECG signal. The ECG machine and the mains power supply must also be properly grounded, and such movement should be kept to a minimum. There are several methods to filter out the noise and obtain a more accurate ECG signal for analysis. The filter testing was done in the first part of the project; refer to Appendix A.
  26. 26. 2.6 Summary In this chapter, we have explained how the heart generates the ECG signal and described the characteristics of the ECG signal. The ECG carries unique biometric information that can identify a person. The main challenge of this project is to design a classifier that produces accurate and fast ECG signal matching. The following chapter introduces the classifier that will be used in this project.
  27. 27. Chapter 3 Classification In the previous chapter, we introduced the ECG signal and its characteristics. Classification of the ECG signal, that is, matching an input ECG signal against our ECG database, is the main objective of this project, so an accurate classifier must be designed and tested. In this chapter, we discuss the neural network-based classifier; its algorithm and examples are also discussed. 3.1 Concepts of Pattern Recognition Before discussing the neural network itself, consider pattern recognition: to recognize a class of objects, data about those objects are gathered and a set of feature measurements is extracted from them. Most of the features we will consider are numerical features that act as the input to a mathematical pattern recognizer. The availability of high-speed computers and efficient algorithms has made such methods practical, for example in computer-aided tomography (CAT) and other image-processing applications.
  28. 28. Figure 3.1: A neuron unit with three inputs and one output. 3.2 Brief History of Neural Networks The neural network (NN) grew out of the idea of modelling networks of biological neurons, see Figure 3.1. NNs started from simple logic functions and computer simulations, and researchers have continued to improve NN-based solutions to computational problems such as pattern recognition; currently, NNs are found in many pattern recognition and data classification applications. In 1943, Warren McCulloch and Walter Pitts showed that, in principle, networks of artificial neurons could compute any arithmetic or logical function; this is the origin of the neural network field. Donald Hebb (1949) proposed that classical conditioning is present because of the properties of individual neurons. Frank Rosenblatt (1958) invented the perceptron network and its associated learning rule, but the basic perceptron network could solve only a limited class of problems. Although Bernard Widrow and Ted Hoff (1960) invented a new learning algorithm to train an adaptive linear neural network, the problems and limitations of neural networks remained. Marvin Minsky and Seymour Papert (1969) claimed that further study of neural networks was a dead end, and for a decade neural network research was largely suspended. This changed when Teuvo Kohonen and James Anderson (1972) independently invented new neural networks that could act as memories, overcoming some of these limitations, and Stephen Grossberg (1976) investigated self-organizing networks. Most of the limitations and problems have since been solved by powerful computers and new ideas.
  29. 29. Figure 3.2: Simple neural network structure in which the output is compared with the target and fed back to the NN until the output matches the target (Matlab R2006b & Mathworks, 2006). 3.3 Neural Network-based Classifier A neural network is a model of information processing, and the key element of this model is the structure of the information-processing system. A NN is composed of a large number of interconnected elements (neurons) working in parallel to solve a problem through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons, and this applies to NNs as well [SSB96]. A remarkable ability of NNs is to derive meaning from complicated or inaccurate data that are too complex for humans or other computer techniques; a trained NN can easily analyse data and specify the category of a piece of information. NNs can also learn how to do tasks based on the data given for training, creating their own organization of the information received during learning, and they can carry out parallel computing, which benefits hardware design. A neural network consists of units (neurons) arranged in layers that convert an input vector into an output. Each unit takes an input, often applies a non-linear function to it, and transfers the output to the next layer of units, see Figure 3.2.
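As a minimal illustration of such a unit (the numbers are made up for this sketch), a single neuron forms a weighted sum of its inputs plus a bias and passes it through a non-linear transfer function, here the log-sigmoid:

```matlab
x = [0.2; 0.7; 0.1];                 % three inputs, cf. Figure 3.1
w = [0.5; -1.2; 0.8];                % connection weights (illustrative values)
b = 0.1;                             % bias
y = 1 ./ (1 + exp(-(w' * x + b)));   % log-sigmoid output, a value in (0, 1)
```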
  30. 30. 3.4 Firing Rules An important concept in NNs is the firing rule. It determines the calculation of a neuron, i.e. whether it should fire for a given input pattern, and it relates not just to one node but to all the inputs of the neuron unit. A simple firing rule can be implemented with the Hamming-distance technique. In Figure 3.1, there are three inputs to the neuron and one output. The neuron takes a collection of training patterns: the 1-taught set of patterns causes it to fire, while the 0-taught set of patterns prevents it from firing. Patterns not in either collection may also cause the neuron to fire, by comparing the input with the nearest pattern in the taught sets; if there is a tie, the output remains in an undefined state. For example, with three inputs X1, X2, X3, let 000 and 001 have 0 as the output and 101 and 111 have 1 as the output. Table 3.1 is the truth table before the firing rule is applied. As an example of applying the firing rule, take the pattern 010 and compare it with 000, 001, 101 and 111: 000 is the nearest pattern, differing from 010 in 1 element, 001 differs in 2 elements, and 101 differs in 3 elements. According to the firing rule, 010 is nearer to the 0-taught set than to the 1-taught set, so the neuron should not fire for 010. As another example, take the pattern 011 and compare it with both taught sets: it differs in 2 elements from 000 and 1 element from 001 (0-taught set), and in 2 elements from 101 and 1 element from 111 (1-taught set). Since 011 is equally distant from both taught sets, its output remains undefined. After doing the same comparison for patterns 100 and 110, a new truth table, Table 3.2, is obtained; this is called the generalization of the neuron. The firing rule gives the neuron a sense of similarity and enables it to respond 'sensibly' to patterns not seen during training. A minimal code sketch of this rule is given below, before the truth tables.
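A minimal Matlab sketch of this Hamming-distance firing rule, using the taught sets of the example above ({000, 001} for output 0 and {101, 111} for output 1), reproduces the generalized outputs of Table 3.2:

```matlab
taught0 = [0 0 0; 0 0 1];            % 0-taught patterns
taught1 = [1 0 1; 1 1 1];            % 1-taught patterns
% nearest Hamming distance from pattern x to any row of the taught set T
nearest = @(x, T) min(sum(abs(T - repmat(x, size(T, 1), 1)), 2));
for p = 0:7
    x  = double(dec2bin(p, 3)) - '0';     % candidate pattern, e.g. [0 1 0]
    d0 = nearest(x, taught0);
    d1 = nearest(x, taught1);
    if d1 < d0
        out = '1';                        % closer to the 1-taught set: fire
    elseif d0 < d1
        out = '0';                        % closer to the 0-taught set: do not fire
    else
        out = '0/1';                      % tie: output undefined
    end
    fprintf('%s -> %s\n', char(x + '0'), out);
end
```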
  31. 31. X1:     0  0  0    0    1    1  1    1
     X2:     0  0  1    1    0    0  1    1
     X3:     0  1  0    1    0    1  0    1
     Output: 0  0  0/1  0/1  0/1  1  0/1  1
     Table 3.1: Truth table of the firing rule; two taught sets are given. The 0-taught set is 000 and 001, the 1-taught set is 101 and 111.
     X1:     0  0  0  0    1    1  1  1
     X2:     0  0  1  1    0    0  1  1
     X3:     0  1  0  1    0    1  0  1
     Output: 0  0  0  0/1  0/1  1  1  1
     Table 3.2: Truth table after applying the firing rule, also called the generalization of the neuron.
     3.5 Architecture of Neural Networks In this section, we discuss some architectures of neural networks that will be used in this project. Understanding these architectures is important, as this project builds its own neural network pattern on top of them. 3.5.1 Feed-forward Network A feed-forward NN is defined by the direction in which information travels: signals travel in only one direction, from input to output. There is no feedback from any output, so an output does not affect neurons in the same layer. Feed-forward NNs are widely used in pattern recognition, see Figure 3.3. 3.5.2 Feedback Network A feedback network allows signals to travel in both directions by including loops. Feedback networks are powerful but can be very complicated; the state of the network changes continuously until it reaches an equilibrium point. When the input is changed, the state of the network evolves until it reaches another equilibrium point.
  32. 32. Figure 3.3: The feed-forward network; data or signals move forward until they reach the output layer. Figure 3.4: The feedback network, in which signals can travel back to a previous layer. Figure 3.4 also shows the input, hidden and output layers. The activity of the input units represents the raw information fed into the network. The activity of the hidden units is determined by the activities of the input units and the weights on the connections between input and hidden units. The activity of the output units depends on the activity of the hidden units and the weights between hidden and output units.
  33. 33. Figure 3.5: Simple perceptron network used for pattern recognition. 3.5.3 Perceptrons In the 1950s, Frank Rosenblatt and several other researchers developed a class of NN called perceptrons. The major difference from other NNs was the introduction of a learning rule for training perceptron networks, see Figure 3.5, to solve pattern recognition problems. Rosenblatt proved that the learning rule always converges to the correct network weights, if weights exist that solve the problem. Perceptron networks have a limitation: they are incapable of implementing certain elementary functions, as found by Minsky and Papert; this limitation was overcome by extending perceptron networks to multiple layers with associated learning rules. A learning rule is a procedure for modifying the weights and biases of a network, also referred to as the training algorithm.
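The perceptron learning rule itself is only a few lines. The sketch below (not the classifier used in this project) trains a single perceptron on a toy, linearly separable problem by nudging the weights and bias whenever the hard-limit output disagrees with the target:

```matlab
X = [0 0 1 1; 0 1 0 1];              % toy inputs (logical AND), one column per sample
T = [0 0 0 1];                       % targets
w = zeros(1, size(X, 1));  b = 0;    % initial weights and bias
for epoch = 1:20
    for i = 1:size(X, 2)
        y = double(w * X(:, i) + b >= 0);   % hard-limit (step) activation
        e = T(i) - y;                       % error
        w = w + e * X(:, i)';               % perceptron weight update
        b = b + e;                          % bias update
    end
end
```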
  34. 34. The purpose of the learning rule is to train the network to perform some task [HDB02]. 3.6 Applications of Neural Network Neural networks are widely applied in various fields and have brought great improvements to each of them. NNs are best at identifying patterns or trends in data, so they are well suited for prediction and forecasting, for example in sales forecasting, industrial process control and data validation. A few application examples related to this project follow. A signature verification technique based on a two-stage neural network classifier was published by Baltzakis and Papamarkos (2001). They implemented a special two-stage perceptron OCON (one class, one network) classification structure. In the first stage, the classifier combines the decision results of the neural networks and the Euclidean distances obtained using three feature sets. The results of the first-stage classifier feed a second-stage radial basis function (RBF) neural network structure, which makes the final decision. Feature-based decision aggregation in a modular neural network classifier was published by Wanas et al. (1999). In a modular NN, the individual decisions at each level have to be integrated using a voting scheme. All voting schemes use the output of the individual modules to produce a global output without taking into account the information from the problem feature space, which makes the choice of the combination procedure very subjective. They focus on making decision fusion a more dynamic process by integrating learning into the voting scheme, where dynamic means that the combination procedure has the flexibility to adapt to the input. A minimax classifier based on a NN was published by Alaiz-Rodriguez et al. (2005). The paper discusses the problem of designing a classifier when prior probabilities are not known or are not representative of the underlying data distribution. They address the problem of designing a neural-based minimax classifier
  35. 35. and propose two different algorithms: a learning rate scaling algorithm and a gradient-based algorithm. Experimental results show that both succeed in finding the minimax solution, and the differences between common approaches for coping with this uncertainty in the priors and the minimax classifier are also pointed out. A patient-adaptable ECG beat classifier based on a NN was published by Gaetano et al. (2009). The basic idea behind this paper is to consider an ECG digital recording of two consecutive R-wave segments (RRR interval) as a noisy sample of an underlying function to be approximated by a fixed number of Radial Basis Functions (RBF). The linear expansion coefficients of the RRR interval represent the input signal of a feed-forward neural network which classifies a single beat as normal or ischemic. The system has been evaluated using several patient records taken from the European ST-T database. Experimental results show that the proposed beat classifier is very reliable and that it may be a useful practical tool for the automatic detection of ischemic episodes. 3.7 Summary From the discussion above, we can confirm that NN classifiers are widely and successfully applied in the fields of pattern recognition and ECG signal recognition, so the accuracy of pattern recognition can also be relied on in this project. The structure of the classifier and the learning rule of the NN designed for this project are discussed in a later chapter. The next chapter presents the experimental setup and feature extraction used in the second part of this project.
  36. 36. Chapter 4 Experimental Setup and Methodology Having covered the working principles of the neural network, in this chapter we discuss the experimental setup. The layout of the laboratory and the equipment are introduced; similar results should be obtained if the experiment follows our project procedure. The filtering method and the feature extraction method are also introduced in this chapter. We then explain the methodology of the biometric system in detail, presenting its algorithm in figures, and we classify two sets of ECG data, from experiment A (11 test subjects) and experiment B (4 test subjects). 4.1 Experimental Layout The experiment was conducted in an indoor room that was air-conditioned, well lit and quiet. As shown in Figure 4.1, the test subject is seated facing a monitor. Two dividers limit the view of the test subject to help concentration on the experiment. Two laboratory technicians are seated in front of the control station, which manages the computer that starts or pauses the experiment. The procedure of the experiment is given in Appendix A.
  37. 37. Figure 4.1: The layout of the laboratory where the experiment is conducted. 4.2 Test Subject There were 11 test subjects involved in this experiment, as shown in Table 4.1. All subjects are male, between 21 and 25 years old, and are students and staff of Multimedia University. The test subjects may not take any drink containing caffeine or another stimulant, since that would make their ECG signal differ from its normal condition. The 11 test subjects were required to record 10 sets of ECG signals in one day (experiment A); 4 test subjects were required to record 30 sets of ECG signals over three days (experiment B). 4.3 Filtering Process The recorded ECG signal is contaminated with noise and unwanted signals, so a filtering process is necessary to reduce and eliminate the noise in the ECG. The ECG mainly contains three kinds of noise, as shown in Figure 4.2. In Figure 4.2(a), 60 Hz power-line interference comes from the power line of the ECG measurement system despite proper grounding [ZC06]. In Figure 4.2(b), the electromyogram (EMG) produced by muscle electrical activity appears as rapid fluctuations that vary faster than the ECG waves, with frequency content from DC up to about 10 kHz [FJJ+ 90].
  38. 38. Table 4.1: Data acquisition date and total set of data from test subjects.
     Subject   Day 1        Day 2        Day 3        Total sets
     FQ        19/01/2010   11/02/2010   24/02/2010   30
     YN        21/01/2010   11/02/2010   24/02/2010   30
     VR        20/01/2010   24/02/2010   12/03/2010   30
     YG        20/01/2010   24/01/2010   18/03/2010   30
     AR        22/01/2010   -            -            10
     KSP       20/01/2010   -            -            10
     LJX       21/01/2010   -            -            10
     ML        21/01/2010   -            -            10
     SH        21/01/2010   -            -            10
     AA        22/01/2010   -            -            10
     NA        26/01/2010   -            -            10
     Figure 4.2: Noise of ECG signal: (a) 60 Hz power-line interference; (b) electromyogram (EMG); (c) motion artifacts [YW08].
  39. 39. In Figure 4.2(c), motion artifact results from the motion of the electrodes relative to the test subject's skin and produces noise of large amplitude [FJJ+ 90]. There are many kinds of filters that can be used to remove the noise in the ECG. In this project, a digital infinite-duration impulse response (IIR) filter is used. The primary advantage of an IIR filter is that it has a lower filter order than a finite-duration impulse response (FIR) filter of the same performance. An elliptic filter was selected to preprocess the ECG signal because it has equalized ripple behaviour in both the passband and the stopband and gives the lowest order and the narrowest transition of any supported filter type. In MatLab, the function ellip designs an n-th order low-pass digital elliptic filter with normalized passband edge frequency Wp and the desired passband and stopband ripple specified by the user in dB. The function returns the filter coefficients in length n + 1 row vectors b and a, with coefficients in descending powers of z, as shown in equation 4.1:

     H(z) = \frac{B(z)}{A(z)} = \frac{b(1) + b(2)z^{-1} + \cdots + b(n+1)z^{-n}}{1 + a(2)z^{-1} + \cdots + a(n+1)z^{-n}}    (4.1)

     The filter order n and cutoff frequency Wp are obtained directly from the MATLAB function ellipord, where Wp is the passband corner (cutoff) frequency, Ws is the stopband corner frequency (both normalized to the Nyquist frequency), Rp is the passband ripple, i.e. the maximum permissible passband loss in decibels, and Rs is the stopband attenuation in dB with respect to the passband response. Figure 4.3 shows the resulting IIR elliptic low-pass filter, with an order of 15 and a 41.6 Hz cutoff frequency; Rp and Rs were set to 2 dB and 90 dB following Mathworks (2009). An example of a filtered signal and of how the cutoff frequency is found from the signal spectrum is shown in Appendix A.
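A sketch of this design using the Signal Processing Toolbox is shown below. The sampling rate fs, the stopband edge Ws and the raw signal ecgRaw are assumptions of this sketch; the thesis obtains its cutoff of 41.6 Hz and order 15 from the measured spectrum in Appendix A:

```matlab
fs = 1000;                             % assumed sampling frequency in Hz
Wp = 45/(fs/2);                        % passband edge: 45 Hz, from the ECG spectrum
Ws = 55/(fs/2);                        % assumed stopband edge
Rp = 2;   Rs = 90;                     % passband ripple and stopband attenuation in dB
[n, Wn] = ellipord(Wp, Ws, Rp, Rs);    % minimum filter order and cutoff frequency
[b, a]  = ellip(n, Rp, Rs, Wn);        % low-pass elliptic filter coefficients (eq. 4.1)
ecgRaw      = randn(10*fs, 1);         % placeholder for a recorded ECG trace
ecgFiltered = filtfilt(b, a, ecgRaw);  % zero-phase filtering (a choice of this sketch)
freqz(b, a, 1024, fs);                 % inspect the frequency response, cf. Figure 4.3
```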
  40. 40. Figure 4.3: The frequency response of the IIR elliptic filter used in this project. 4.4 Features Extraction A feature is a distinctive or characteristic measurement, transform or structural component extracted from a segment of a pattern. Features are used to represent a pattern with the goal of minimizing the loss of important information [UCC07]. The feature extraction method can be based either on calculating statistical characteristics or on producing syntactic descriptions. The wavelet transform (WT) provides general techniques that can be applied to many tasks in signal processing. The wavelet transform is ideally suited for the analysis of sudden, short-duration signal changes, and it can compute and manipulate data as compressed parameters, which are often called features [Dau90]. Since the ECG signal is time-varying and consists of many data points, the wavelet transform can compress the ECG signal into a few parameters, and these parameters, which represent the time-varying ECG signal as features, can be used for recognition purposes [UCC07]. The wavelet transform is designed for non-stationary signals. It represents a time function in terms of simple, fixed building blocks, termed wavelets. The building blocks are derived from a single generating function called the mother
  41. 41. Figure 4.4: Subband decomposition of the discrete wavelet transform implementation; g[n] is the high-pass filter, h[n] is the low-pass filter [UCC07]. wavelet by dilation and translation operations. Dilation, also known as scaling, compresses or stretches the mother wavelet, and translation shifts the signal along the time axis [Dau90, UG05, Sol02, UA96]. The wavelet transform can be categorized into continuous and discrete forms. The continuous wavelet transform (CWT) is defined by equation 4.2, where x(t) represents the ECG signal, a and b represent the scaling factor and the translation along the time axis respectively, and the superscript asterisk denotes complex conjugation:

     CWT(a, b) = \int_{-\infty}^{+\infty} x(t)\,\psi^{*}_{a,b}(t)\,dt    (4.2)

     The term \psi_{a,b}(\cdot) is obtained by scaling the wavelet at time b and scale a:

     \psi_{a,b}(t) = \frac{1}{\sqrt{|a|}}\,\psi\!\left(\frac{t - b}{a}\right)    (4.3)

     where \psi(t) represents the mother wavelet [Dau90]. All wavelet transforms can be specified in terms of a low-pass filter h[n], as in equation 4.4,
  42. 42. which satisfies the standard quadrature mirror filter condition:

     H(z)H(z^{-1}) + H(-z)H(-z^{-1}) = 1    (4.4)

     where H(z) is the z-transform of the filter h. Its complementary high-pass filter (see Figure 4.4) can be defined as:

     G(z) = z\,H(-z^{-1})    (4.5)

     A sequence of filters with increasing length (indexed by i) can then be obtained:

     H_{i+1}(z) = H(z^{2^{i}})\,H_{i}(z),  G_{i+1}(z) = G(z^{2^{i}})\,H_{i}(z),  i = 0, \ldots, I-1    (4.6)

     with the initial condition H_{0}(z) = 1. This is expressed as a two-scale relation in the time domain:

     h_{i+1}(k) = [h]_{\uparrow 2^{i}} * h_{i}(k),  g_{i+1}(k) = [g]_{\uparrow 2^{i}} * h_{i}(k)    (4.7)

     where the subscript [\cdot]_{\uparrow m} indicates up-sampling by a factor of m and k is the equally sampled discrete time. The normalized wavelet and scale basis functions \varphi_{i,l}(k), \psi_{i,l}(k) can be defined as:

     \varphi_{i,l}(k) = 2^{i/2} h_{i}(k - 2^{i}l),  \psi_{i,l}(k) = 2^{i/2} g_{i}(k - 2^{i}l)    (4.8)

     where the factor 2^{i/2} is an inner-product normalization, and i and l are the scale parameter and the translation parameter. The discrete wavelet transform (DWT) decomposition can be described as:

     a_{i}(l) = x(k) * \varphi_{i,l}(k),  d_{i}(l) = x(k) * \psi_{i,l}(k)    (4.9)

     where a_{i}(l) and d_{i}(l) are the approximation coefficient and the detail coefficient at resolution i [Dau90].
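The level-4 decomposition with the 'db2' wavelet and the statistical features described around Figures 4.5, 4.6 and 4.10 can be sketched with the Wavelet Toolbox as follows; ecgPeak is an assumed placeholder for one segmented ECG peak:

```matlab
ecgPeak = randn(512, 1);                     % placeholder for one segmented ECG peak
[C, L] = wavedec(ecgPeak, 4, 'db2');         % 4-level DWT with Daubechies-2
cA4 = appcoef(C, L, 'db2', 4);               % approximation coefficients at level 4
cD4 = detcoef(C, L, 4);                      % detail coefficients, levels 4 down to 1
cD3 = detcoef(C, L, 3);
cD2 = detcoef(C, L, 2);
cD1 = detcoef(C, L, 1);
bands = {cA4, cD4, cD3, cD2, cD1};           % the five coefficient sets of Figure 4.10
feat = zeros(4, numel(bands));               % rows: max, min, mean, std per subband
for k = 1:numel(bands)
    c = bands{k};
    feat(:, k) = [max(c); min(c); mean(c); std(c)];
end
```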
  43. 43. In this project, a Daubechies wavelet of order 2 is used; its smoothing property makes the DWT more suitable for detecting changes in the ECG signal [UCC07]. The computed discrete wavelet coefficients provide a compact representation that shows the energy distribution of the signal in time and frequency. The statistical features used to represent the time-frequency distribution of the ECG signal are shown in Figures 4.5 and 4.6, where the first row is the maximum, the second row is the minimum, the third row is the mean and the fourth row is the standard deviation of the wavelet coefficients in each subband. 4.5 Methodology We discussed the experimental setup and preprocessing steps in the previous sections. Here, we explain the methodology of the biometric system in detail; its algorithm is presented in figures. We classify two sets of ECG data, from experiment A (11 test subjects) and experiment B (4 test subjects); the classification results and discussion are given in the next chapter. The algorithm of the biometric system is illustrated in Figure 4.7. The system can be divided into five parts. First, each subject's ECG data are extracted from the recording MAT files that we recorded earlier; there are several ways to extract all the data from the MAT files. 4.5.1 Data Structure It is important to have a systematic and flexible data structure. The data structure includes all test subjects' data sets from every recording and is a three-dimensional matrix, shown in Figure 4.9. Before combining all subjects' data into a single matrix, we store all the file names in a string array (refer to Appendix C). By indexing this array, we can access all the MAT files and extract the data sets. In Figure 4.8, a two-dimensional matrix (length of sample x number of sets) is created for test subject no. 1.
  44. 44. Figure 4.5: Extracted (a) maximum, (b) mean, (c) minimum, (d) standard deviation statistical features of the MH ECG signal from the experiment in Appendix A.
  45. 45. Figure 4.6: Extracted (a) maximum, (b) mean, (c) minimum, (d) standard deviation statistical features of the YN ECG signal from the experiment in Appendix A.
  46. 46. Figure 4.7: The algorithm of the biometrics system.
  47. 47. Figure 4.8: The matrix [length of sample x number of sets] of the data structure in two dimensions, containing one subject's data. Figure 4.9: The matrix [length of sample x number of sets x number of subjects] of the data structure in three dimensions, containing all test subjects' data.
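In code, the layout of Figures 4.8 and 4.9 amounts to stacking each subject's two-dimensional matrix along a third dimension. The sketch below uses synthetic numbers purely to illustrate the shape; the actual loading from the MAT files is in Appendix C:

```matlab
sigLen = 5000;  numSets = 10;  numSubjects = 11;     % assumed dimensions
data = zeros(sigLen, numSets, numSubjects);          % [samples x sets x subjects]
for s = 1:numSubjects
    subjectMatrix = randn(sigLen, numSets);          % stands in for one subject's recordings
    data(:, :, s) = subjectMatrix;                   % placed behind the previous subject
end
size(data)                                           % -> 5000 10 11, cf. Figure 4.9
```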
  48. 48. Figure 4.10: The process of wavelet decomposition converts 1 peak of the ECG signal into five sets of coefficients. Each further subject's data are placed behind those of subject no. 1, as shown in Figure 4.9; the program validation results are given in Appendix B. Through preprocessing (refer to Appendix C), the noise is removed from the three-dimensional data structure of Figure 4.9 and each recording is segmented into 15 peaks [AA09]. Feature extraction uses the wavelet decomposition already explained earlier in this chapter. Figure 4.10 shows the process of wavelet decomposition that converts 1 peak of ECG into five sets of coefficients, and Figure 4.11 shows the five coefficient sets (cA4, cD4, cD3, cD2 and cD1) generated by the wavelet decomposition; the program validation results are in Appendix B. Having built the data structure, we now reconstruct it into a training data set and an input data set.
  49. 49. Figure 4.11: The data structure [number of peaks*5 features x number of sets x number of subjects*4 characteristics] after the features have been extracted by wavelet decomposition. cA4 to cD1 are the features of the ECG signal; maximum, minimum, mean and standard deviation are the characteristics of each feature.
  50. 50. Figure 4.12: The final form of the data structure is a two-dimensional matrix [number of peaks*4 characteristics x number of sets*number of subjects]. The third dimension contains the four characteristics and the number of subjects. The four characteristics are moved down below the last peak of the previous subject, and the next subject is rearranged next to the last data set of the previous subject, as shown in Figure 4.12. 4.5.2 Classifier When the data structure is ready for the classifier, we have to create the training data set and the input data set; finally, a target data set is created to train the neural network. In Figure 4.12, the data structure is further divided into two parts: one randomly picked set of data from every subject forms the input data set and the rest form the training data set. Normalization of the input data set and the training data set is important to obtain a better classification result. In Figure 4.13, the target data matrix [number of subjects x number of training sets*number of subjects] is created by a simple piece of code (refer to Appendix C).
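A sketch of this division and of the target matrix of Figure 4.13 is given below; the feature dimensions are placeholders and the splitting logic is a plain reading of the description above rather than the thesis's Appendix C code:

```matlab
numSubjects = 11;  setsPerSubject = 10;  featLen = 60;          % assumed sizes
features = randn(featLen, setsPerSubject, numSubjects);         % stands in for extracted features
pick = randi(setsPerSubject);                                   % the randomly picked set (Rpick)
inputData = squeeze(features(:, pick, :));                      % one held-out set per subject
trainIdx  = setdiff(1:setsPerSubject, pick);                    % the remaining sets
trainData = reshape(features(:, trainIdx, :), featLen, []);     % subjects laid out side by side
targets   = kron(eye(numSubjects), ones(1, numel(trainIdx)));   % one row per subject, cf. Figure 4.13
```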
  51. 51. Figure 4.13: The target data matrix; the first four columns represent subject 1, the fifth to eighth columns represent subject 2, and so on. The neural network classifier is a feed-forward back-propagation network. The network training function is Levenberg-Marquardt back-propagation (trainlm), which is highly recommended as a first-choice supervised algorithm. The learning function is the gradient descent with momentum weight and bias learning function (learngdm). As shown in Figure 4.14, a neural network with two layers and 50 neurons in each layer is created. The differentiable transfer function is the log-sigmoid (logsig), which is commonly used in back-propagation networks. The data division function (dividerand) randomly divides the training data set into three subsets: 60% for training, 20% for validation and 20% for testing.
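The configuration above can be sketched with the Neural Network Toolbox as follows. The syntax uses the newer feedforwardnet interface rather than the newff call of the thesis era, and "two layers of 50 neurons" is read here as two hidden layers; the exact topology is the one shown in Figure 4.14. trainData, targets and inputData continue the hypothetical variables of the previous sketch:

```matlab
net = feedforwardnet([50 50], 'trainlm');   % two hidden layers of 50 neurons, Levenberg-Marquardt
net.layers{1}.transferFcn = 'logsig';       % log-sigmoid transfer functions
net.layers{2}.transferFcn = 'logsig';
net.divideFcn = 'dividerand';               % random 60/20/20 division of the training data
net.divideParam.trainRatio = 0.60;
net.divideParam.valRatio   = 0.20;
net.divideParam.testRatio  = 0.20;
net = train(net, trainData, targets);       % train on the training data set
outputs = net(inputData);                   % responses to the held-out input sets
[~, predicted] = max(outputs, [], 1);       % predicted subject number for each input set
```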
  52. 52. 4.6. SUMMARY 39 Figure 4.14: The feed-forward back-propagation neural network designed in this project. 4.6 Summary In this chapter, we have discussed the experimental setup for the data acquisition process. The results of the experiment are shown in Appendix A. An IIR elliptic low-pass filter with an order of 15 and a 41.6 Hz cut-off frequency was designed to filter out the noise. The one-dimensional discrete wavelet transform, which computes the energy distribution of the signal in time and frequency, was used for feature extraction. The maximum, minimum, mean and standard deviation of each frequency range were then computed.
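The elliptic filter design summarised here can be written as the following minimal sketch; the parameter values are taken from the IIRfilter function in Appendix C, and raw_ecg stands for one recorded signal.

% Minimal sketch: elliptic IIR low-pass filter, approx. 41.6 Hz cut-off
% at the 256 Hz sampling rate (Wp = 41.6/128 = 0.325).
fs = 256;                               % sampling frequency in Hz
Wp = 0.325;  Ws = 0.33;                 % normalised passband / stopband edges
Rp = 2;      Rs = 90;                   % passband ripple / stopband attenuation in dB
[n, Wn] = ellipord(Wp, Ws, Rp, Rs);     % minimum order meeting the spec (15 quoted above)
[b, a]  = ellip(n, Rp, Rs, Wn, 'low');
clean_ecg = filtfilt(b, a, raw_ecg);    % zero-phase filtering, no waveform delay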
  53. 53. Chapter 5 Result and Discussion In the previous chapter, we saw that the classification was divided into two sections: experiment A (11 test subjects) followed by experiment B (4 test subjects). In this chapter, we present the recognition rates and discuss them. We discuss the recognition rates of experiments A and B, summarise the project against our objectives, and provide recommendations for future research to achieve better outcomes. 5.1 Result of Experiment A The confusion matrix of experiment A is shown in figure 5.1. The first column represents subject FQ, the second column represents subject YN, and so on. A correct recognition of subject FQ is shown in the green box of the first column; a wrong recognition of subject AR is shown in any red box of the fifth column. Table 5.2 gives the result of classification on experiment A with a randomly picked set as input data. The average accuracy over forty trials is 80.89%, and the random pick number ranges from 1 to 10. When the input data set comes from set 9, the recognition rate is 94% over its three trials. Rpick 5 has two extreme recognition rates, 63.6% and 100%, but most of its trials are 81.8%. Rpick 1 and 6 40
  54. 54. 5.1. RESULT OF EXPERIMENT A 41
Subject No.  Subject Name
1            FQ
2            YN
3            VR
4            AA
5            AR
6            KSP
7            LJX
8            ML
9            NA
10           SH
11           YG
Table 5.1: The names of the 11 test subjects, represented by the numbers shown in the results below.
Figure 5.1: The confusion matrix plot from Matlab showing the recognition results and accuracy for the 11 subjects.
  55. 55. 5.1. RESULT OF EXPERIMENT A 42
Rpick  Train Duration(s)  Accuracy(%)  Error subject no.
1      6                  72.7         5, 8, 9
1      9                  90.9         5
1      10                 90.9         5
1      11                 81.8         5, 9
1      7                  81.8         5, 7
1      9                  72.7         5, 6, 9
2      6                  81.8         6, 8
2      5                  81.8         6, 9
2      8                  72.7         3, 5, 6
2      10                 72.7         3, 6, 9
3      6                  90.9         3
3      30                 81.8         3, 6
3      8                  72.7         3, 5, 7
3      5                  72.7         2, 3, 7
3      9                  81.8         3, 6
4      7                  90.9         9
5      6                  63.6         1, 5, 7, 9
5      6                  81.8         7, 11
5      8                  81.8         7, 9
5      5                  100
5      6                  81.8         7, 9
5      40                 81.8         6, 11
5      4                  81.8         6, 7
6      3                  63.6         3, 5, 6, 9
6      14                 81.8         5, 7
6      9                  81.8         5, 7
7      6                  81.8         3, 6
7      9                  81.8         3, 9
7      7                  72.7         3, 5, 6
7      6                  90.9         3
8      8                  81.8         3, 6
8      6                  72.7         3, 6, 7
9      7                  81.8         7, 8
9      12                 100
9      29                 100
10     5                  72.7         1, 3, 8
10     6                  72.7         3, 8, 9
10     19                 81.8         3, 9
10     5                  72.7         3, 7, 9
10     5                  81.8         5, 7
Average Accuracy                       80.89
Table 5.2: The recognition accuracy for experiment A over forty trials. Error subject numbers refer to table 5.1.
  56. 56. 5.2. RESULT OF EXPERIMENT B 43 Figure 5.2: The error rate of experiment A over forty trials.
Subject No.  Subject Name
1            FQ
2            YN
3            VR
4            YG
Table 5.3: The names of the 4 test subjects, represented by the numbers shown in the results below.
cannot recognize subject AR. Rpick 2 cannot recognize subject KSP, and Rpick 3, 7 and 10 cannot recognize subject VR. From figure 5.2, we can observe that the most frequent errors are subjects VR and AR, with 17 and 14 occurrences within the forty trials. Subjects KSP, LJX and NA also have a high error rate compared with subjects FQ, YN, ML and YG. Subjects AA and SH have no recognition errors within the forty trials.
  57. 57. 5.2. RESULT OF EXPERIMENT B 44 Figure 5.3: The error rate of experiment B over forty trials. 5.2 Result of Experiment B We assumed that experiment B should have a better recognition rate than experiment A, because more train data sets are provided to the neural network for training. This assumption holds if the ECG does not change significantly within three months. As shown in table 5.4, the recognition rate of experiment B is 93.75%, which is better than that of experiment A; increasing the number of train data sets therefore also increases the recognition rate. Subject no. 1 has no errors within the forty trials, which suggests that subject no. 1 has better ECG recordings than the others. Figure 5.3 shows that subject YN has the highest error rate of all subjects. When the input data set is picked from set 13, subject YN cannot be recognized by the classifier, and subject YG cannot be recognized when Rpick is 29.
  58. 58. 5.2. RESULT OF EXPERIMENT B 45
Rpick  Train Duration(s)  Accuracy(%)  Error subject no.
1      1                  100
1      2                  100
2      2                  100
3      2                  100
4      5                  100
5      1                  100
5      1                  100
6      3                  100
7      2                  100
7      3                  100
7      4                  100
8      6                  75           3
9      2                  100
10     3                  100
11     3                  75           2
12     4                  100
12     3                  100
13     2                  75           2
13     7                  75           2
13     3                  75           2
14     2                  100          7, 9
15     4                  75           2
16     1                  75           4
17     5                  100
17     3                  100
17     3                  100
18     6                  100
19     3                  100
19     2                  100
20     4                  100
21     3                  100
22     5                  100
22     2                  100
22     8                  100
23     1                  100
26     2                  100
27     2                  100
29     1                  75           4
29     1                  75           4
30     1                  75           2
Average Accuracy                       93.75
Table 5.4: The recognition accuracy for experiment B over forty trials. Test subject numbers refer to table 5.3.
  59. 59. 5.3. DISCUSSION 46 5.3 Discussion The results show that some ECG signals were not recorded well because of the subjects themselves: each subject had to rest without any movement during the experiment period, and it is clear that the quality of the input data set affects the recognition rate of the ECG signal. Following fixed conditions and rules when obtaining the ECG signal from the subjects overcomes this problem. It does not, however, help a subject who develops heart disease after their ECG has been enrolled in the biometric system, since some heart diseases change the ECG signal significantly. The number of train data sets and features is also important for the recognition rate; increasing the number of train data sets and features can improve it. We have now discussed the recognition rates of experiments A and B; the project and our objectives are summarised in this chapter, and recommendations for future research to achieve better outcomes are given at the end. 5.4 Conclusion We have drawn several conclusions from this project. In the first part of the project, we studied the process by which the human heart generates the ECG signal, and we managed to record ECG signals from our subjects. The characteristics of the ECG signal differ from person to person, so we make use of those characteristics to classify our subjects. The ECG has a frequency range of 0 Hz to 45 Hz, which contains the characteristic PQRST complex. We are able to extract the features from the ECG signal by the wavelet transform [UCC07]. The recognition rate is higher when the ECG signal is represented by its important features. Segmentation generated more features from one period of the ECG waveform. We selected some important features, namely the maximum, minimum, mean and standard deviation; instead of taking all features from the wavelet transform, the feature vector is reduced by including only the important ones. We found that constructing a data structure containing all the ECG signals is important
  60. 60. 5.4. CONCLUSION 47 Figure 5.4: Features of subject AR. (a) Set 5 features from the maximum. (b) Set 10 features from the maximum. (c) Set 6 features from the maximum. for an efficient biometric system. In our work, we construct a 3D matrix which contains all the subjects' data, so that any subject or any set of data can be accessed easily by a simple piece of code, as sketched below; this makes the work more efficient and simple. In experiment A, we found that subject AR cannot be recognized when Rpick 6 is used as the input signal. We therefore investigated the features of set 6 and of other sets from subject AR in figure 5.4, and we can observe that figure 5.4(c) has a significant dissimilarity compared to (a) and (b). More feature comparisons for subject AR can be found in Appendix B. Experiment B has a better recognition rate of 93.75%; the recognition rate is improved by the extra train data provided by three days of recording, so it is important to collect enough ECG data from our subjects. The feature comparison plots can be found in Appendix B.
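As an illustration of this simple access, indexing the 3D matrix might look like the sketch below; the matrix name follows the Data_structure function in Appendix C, while the index values are illustrative only.

% Minimal sketch: accessing recordings in the 3D matrix
% [sample length x number of sets x number of subjects].
one_recording     = Data(:, 3, 7);            % set 3 of subject 7
all_of_subject7   = Data(:, :, 7);            % every set of subject 7
set3_all_subjects = squeeze(Data(:, 3, :));   % set 3 of every subject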
  61. 61. 5.5. RECOMMENDATION 48 We conclude that the wavelet transform provides efficient features to represent an ECG signal, that a feed-forward back-propagation neural network can classify large data sets with a high recognition rate, and that the electrocardiogram is a long-term stable biometric signal suitable for biometric recognition. 5.5 Recommendation The combination of a neural network and wavelet decomposition is strongly recommended for its accuracy and its flexibility. The concept of a neural network is easy to understand and to customise for many other applications, and wavelet decomposition is a simple way to extract features from a signal, whereas other methods tend to be easy in concept but complex in algorithm. The project requires as much data as possible to achieve a high recognition rate, and it is important to apply the same experiment procedure to all subjects involved. For future study, we would like to collect ECG data under different conditions, such as after drinking caffeine or after a short walk or strenuous exercise; these conditions, which occur in normal life, change the heart rate and can produce higher R peaks.
  62. 62. Appendix A Experiment Setup This appendix discusses the pre-processing of the ECG signal and the procedure of ECG signal acquisition. A simple filter removes the ECG noise so that the ECG can show the PQRST features clearly. Several ECG signals are shown in this appendix, and the filter coefficients and cut-off frequency are discussed. A.1 Procedure of ECG Acquisition Model of equipment: g.Mobilab+ (gTech) Number of probes: 3 to 5 1) The test subject lies down on a comfortable chair. The subject stays silent and relaxed.
Sample Name:           YN                MH                FQ
Gender:                Male              Male              Male
Age:                   22                23                23
Number of Electrodes:  5 (ch5+/-, GND)   3 (ch5+/-, GND)   3 (ch5+/-, GND)
Number of Channels:    2 (ECG, EMG)      1 (ECG)           1 (ECG)
Condition of sample:   Resting           Resting           Resting
Position of sample:    Lie down          Lie down          Lie down
Sampling frequency:    256 Hz            256 Hz            256 Hz
Table A.1: Summary of test subject details. i
  63. 63. A.1. PROCEDURE OF ECG ACQUISITION ii Figure A.1: (a) The default electrode configuration from the equipment manual, for subject MH. (b) A configuration based on Einthoven's triangle Lead I, for subject FQ. (c) Einthoven's triangle Lead II for subject FQ. (d) Einthoven's triangle Lead III, also for subject FQ. (e) The default configuration from the manual plus ch6 for the EMG signal, for subject YN.
  64. 64. A.2. EXPERIMENT RESULT iii
ECG Peak              P         Q         R         S         T
MH                    0.01007   0.009817  0.01251   0.009725  0.01034
YN                    0.01003   0.009797  0.01217   0.009394  0.01054
Amplitude Difference  0.00004   0.00002   0.00034   0.000331  -0.0002
Table A.2: The dissimilarity of the ECG peaks of two subjects. The details of the subjects are shown in table A.1.
2) Place the electrodes on the subject. The equipment records one ECG channel from three electrodes and one EMG channel from two electrodes. 3) The electrodes are positioned according to the configurations in figure A.1; the negative, positive and ground electrodes are distinguished by colour according to the equipment manual. 4) In the MatLab settings, the sampling frequency is set to 256 Hz for the ECG signals and the duration of each recording is 10 seconds. 5) Three sets of ECG signal were recorded for each sample. The equipment can record two channels of bio-signal: an ECG signal from ch5 and an EMG signal from ch6. In this experiment, we measured the ECG signal as primary data for the pre-processing filter; the filter was tested with several coefficient sets to obtain a better, noise-free signal. A.2 Experiment Result The experiment results are shown in the figures. In order to generate our own ECG database, a standard and simple electrode placement is required, and pre-processing such as filtering and segmentation is required. The filtering results are used to design an FIR or IIR filter.
  65. 65. A.2. EXPERIMENT RESULT iv Figure A.2: MH original ECG signal; in each recording the noise is easily noticeable.
  66. 66. A.2. EXPERIMENT RESULT v Figure A.3: FQ ECG signal. This signal was recorded using the Einthoven's triangle lead configuration. Leads 1, 2 and 3 were recorded separately because of the number of electrodes.
  67. 67. A.2. EXPERIMENT RESULT vi Figure A.4: YN original ECG signal using the default electrode setting from the manual, with an added EMG signal in a different channel shown in the third plot.
  68. 68. A.2. EXPERIMENT RESULT vii Figure A.5: The filtered ECG signal from three samples: top is the MH ECG signal, middle is the FQ ECG signal and bottom is the YN ECG signal.
  69. 69. A.2. EXPERIMENT RESULT viii Figure A.6: Fast Fourier transform of the original ECG signal and the filtered signal.
  70. 70. A.3. ANALYSIS ix Figure A.7: The ECG feature comparison of two samples. Both samples were recorded with the same experiment setting, so the comparison is reliable. A.3 Analysis First, the uniqueness of the ECG signal is shown in figure A.7 and table A.2. We can see that the PQRST amplitudes differ between the subjects. The amplitude difference is about 0.00002 V to 0.0003 V, which is too small on its own to be a significant feature for the classifier to recognize the ECG from our database, so a further process is needed to make these features significant. This uniqueness of the ECG characteristics is nevertheless important for classification in our project. A Fast Fourier Transform (FFT) plot is a useful tool for analysing the frequency spectrum: the frequency ranges of the noise and of the useful features can be observed from the frequency spectrum plots before and after the filtering process. In figure A.6, the frequency spectrum is cut off from 50 Hz upwards; the 0 Hz to 50 Hz range holds the useful features shown in figure A.5. A similar spectrum check is sketched below.
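As an illustration, a spectrum plot like figure A.6 can be produced with a few lines; raw_ecg and clean_ecg stand for one recording before and after filtering and are assumed variable names.

% Minimal sketch: one-sided magnitude spectrum before and after filtering.
fs = 256;                               % sampling frequency in Hz
N  = length(raw_ecg);
f  = (0:N-1)*fs/N;                      % frequency axis in Hz
half = 1:floor(N/2);                    % keep the one-sided spectrum
spec_raw = abs(fft(raw_ecg));
spec_fil = abs(fft(clean_ecg));
plot(f(half), spec_raw(half)); hold on;
plot(f(half), spec_fil(half), 'r');
xlabel('Frequency (Hz)'); ylabel('Magnitude');
legend('original', 'filtered');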
  71. 71. A.4. SUMMARY x A.4 Summary We conclude that the uniqueness of the ECG signal can be depended on for our project. For the creation of the ECG database, we can also set up an efficient and reliable ECG signal acquisition experiment for many subjects. The moving average filter removed some of the noise contained in the signal; however, we had to create an IIR filter to clean the signal properly. The FFT showed the frequency spectrum, from which we predicted a cut-off frequency in the range of 40 Hz to 50 Hz. The ECG is a low-frequency signal, which requires a low-pass filter to remove the noise.
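For reference, the moving average filter mentioned above takes only a few lines; the window length N is an illustrative choice and not a value taken from the project.

% Minimal sketch: FIR moving-average smoothing of one recording.
N = 8;                                   % averaging window in samples (assumed)
b = ones(1, N) / N;                      % equal-weight FIR coefficients
smoothed_ecg = filter(b, 1, raw_ecg);    % simple smoothing; some noise remains,
                                         % hence the elliptic IIR filter used instead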
  72. 72. Appendix B Program Validation In this appendix, we check that the whole system function and its sub-functions work and give the expected results. The system function contains four main sub-functions: first, construction of all data sets from the recorded files (.mat) into a single matrix; second, an IIR filter that cleans the noise from the ECG signal; third, segmentation of the ECG signal into 15 peaks; and fourth, feature extraction from each peak of the ECG signal. The classifier then uses a feed-forward back-propagation neural network for classification. B.1 Data Construction In this function, the program extracts the ECG data from the recorded files (.mat) of each subject. Two kinds of data set can be constructed, chosen by the user: experiment A is 11 subjects with 1 day and 10 sets of ECG recordings, and experiment B is 4 subjects with 3 days and 30 sets of ECG recordings. Figure B.1 shows that each subject has 10 sets of data with a sample length of 7000; figure B.2 shows that 4 subjects have 30 sets of data with a sample length of 7000. The list of files stored in a string for each subject is: 1) FQ file = {FQ 1 11022010.mat, FQ 2 11022010.mat, FQ 1 19012010.mat, FQ 2 19012010.mat, FQ 1 24022010.mat, FQ 2 24022010.mat} xi
  73. 73. B.1. DATA CONSTRUCTION xii Figure B.1: The MATLAB workspace showing all subjects' data combined into a single 3D matrix. Figure B.2: The MATLAB workspace showing the 4 subjects' data combined into a single 3D matrix.
  74. 74. B.2. FILTERING PROCESS xiii 2) YN file = {YN 1 11022010.mat, YN 2 11022010.mat, YN 1 21012010.mat, YN 2 21012010.mat, YN 1 24022010.mat, YN 2 24022010.mat} 3) VR file = {VR 1 20012010.mat, VR 2 20012010.mat, VR 1 24022010.mat, VR 2 24022010.mat, VR 19032010 day3.mat} 4) AR file = {AR 1 22012010.mat, AR 2 22012010.mat} 5) KSP file = {KSP 1 20012010.mat, KSP 2 20012010.mat} 6) YG file = {YG 1 20012010.mat, YG 2 20012010.mat, YG 1 18032010.mat, YG 2 18032010.mat, YG 1 24022010.mat, YG 2 24022010.mat} 7) LJX file = {LJX 1 21012010.mat, LJX 2 21012010.mat} 8) ML file = {ML 1 21012010.mat, ML 2 21012010.mat} 9) SH file = {SH 1 21012010.mat, SH 2 21012010.mat} 10) AA file = {AA 1 22012010.mat, AA 2 22012010.mat} 11) NA file = {NA 1 26012010.mat, NA 2 26012010.mat} B.2 Filtering Process This section shows randomly picked ECG signals from the 11 subjects, to check that the filtering result is free of noise. Figures B.3 and B.4 confirm that all the data sets in the 3D matrix have been filtered by the designed IIR filter. B.3 Segmentation Segmentation of the ECG signal is the very first step in extracting the ECG features for classification. This section shows the result of the segmentation: the same R peaks can be seen in the full ECG and in the ECG segments in figures B.5 and B.6.
  75. 75. B.3. SEGMENTATION xiv Figure B.3: (a) Original ECG signal from the FQ 11/02/2010 recording data. (b) Filtered ECG signal.
  76. 76. B.3. SEGMENTATION xv Figure B.4: (a) Original ECG signal from the AR 24/01/2010 recording data. (b) Filtered ECG signal.
  77. 77. B.3. SEGMENTATION xvi Figure B.5: The full range of the ECG signal that should be segmented by the function. Peak 1 to Peak 15 should have the same values after the segmentation, in each segment shown in figure B.6.
  78. 78. B.3. SEGMENTATION xvii Figure B.6: All 15 peaks from the full range of the ECG signal shown in figure B.5.
  79. 79. B.4. FEATURES EXTRACTION xviii It can thus be confirmed that the function has cut the correct ECG peaks for all data sets from each subject. B.4 Features Extraction This section, in figure B.7, shows the features extracted from one randomly chosen set of data from each subject. After the wavelet decomposition, the signal is down-sampled into several levels covering different frequency ranges. Figure B.8 shows the wavelet decomposition results for the 15 peaks, and figure B.9 takes only 1 peak to show its coefficients. From each peak, we take the maximum, minimum, mean and standard deviation as our final features, shown in figure B.10. B.5 Classification In this section, we test the classification part, which is the most important part of the whole project. We use the experiment B data set (4 test subjects) to test our neural network classifier. In figure B.11(d), the all-data confusion matrix shows a high recognition accuracy of 97.4% with 2.6% error. Inside the confusion matrix, a non-zero number in a red box means that the recognition for that person was wrong during that trial. In figure B.12, before training the neural network we replaced the 10 sets of data for FQ and YN with those of YG and VR, and the all-data confusion matrix in figure B.12(d) then gives an accuracy of about 83.6%; in that confusion matrix, the first column has a total of 8 trials falling into red boxes and the second column also has 8 trials falling into red boxes. A minimal sketch of this swap is given below.
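The swap used for this check can be written as a short sketch; the subject indices follow the experiment B ordering in Appendix C (FQ = 1, YN = 2, VR = 3, YG = 4), while the choice of which 10 sets to overwrite is an assumption rather than the exact test code.

% Minimal sketch: deliberately corrupting two subjects' data before
% training, so the confusion matrix should flag their columns.
Data(:, 1:10, 1) = Data(:, 1:10, 4);   % overwrite 10 of FQ's sets with YG's
Data(:, 1:10, 2) = Data(:, 1:10, 3);   % overwrite 10 of YN's sets with VR's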
  80. 80. B.5. CLASSIFICATION xix Figure B.7: The coefficients of an ECG signal containing 15 peaks, each peak having a length of 211 samples.
  81. 81. B.5. CLASSIFICATION xx Figure B.8: The 15 peaks are shown in the figure, with a length of 211 samples per peak.
  82. 82. B.5. CLASSIFICATION xxi Figure B.9: 1 peak is shown in the figure; cA4 and cD4 each contain 15 samples, cD3 contains 27 samples, cD2 contains 52 samples and cD1 contains 102 samples.
  83. 83. B.5. CLASSIFICATION xxii Figure B.10: The features of the 11 subjects are shown in the figure; the differences between subjects can be seen.
  84. 84. B.5. CLASSIFICATION xxiii Figure B.11: The confusion plot for the trained neural network with the correct data sets. (a) Training confusion matrix. (b) Validation confusion matrix. (c) Test confusion matrix. (d) All confusion matrix.
  85. 85. B.5. CLASSIFICATION xxiv Figure B.12: The confusion plot for the trained neural network with the subject FQ and YN data sets mixed (replaced). (a) Training confusion matrix. (b) Validation confusion matrix. (c) Test confusion matrix. (d) All confusion matrix.
  86. 86. B.5. CLASSIFICATION xxv Figure B.13: The features are different compared to the other data sets - subject AR.
  87. 87. B.5. CLASSIFICATION xxvi Figure B.14: The features are similar compared to set 10 - subject AR.
  88. 88. B.5. CLASSIFICATION xxvii Figure B.15: The features are similar compared to set 5 - subject AR.
  89. 89. B.5. CLASSIFICATION xxviii Figure B.16: The features are similar compared to the other data sets - subject AA.
  90. 90. B.5. CLASSIFICATION xxix Figure B.17: The features are similar compared to set 10 - subject AA.
  91. 91. B.5. CLASSIFICATION xxx Figure B.18: The features are similar compared to set 5 - subject AA.
  92. 92. Appendix C Source Code This appendix shows the source code of the biometric system. Figure C.1 shows the functions and subfunctions in our biometrics system. Here is the main program: 1 % BIOMETRICS SYSTEM USING ECG 2 3 % This is the main program of the whole system. 4 % 1. Run this file. 5 % 2. User input type of data set. 6 % 3. Output will popup confusion matrix for all subjects. 7 % 4. Workspace will show DATA, Input_data, train_data 8 % num_sub, num_set, num_peak. 9 % 5. Command Window will show Rpick. 10 % 6. User input y/n to repick input and classify again. 11 12 % Author: LAM ZHENG YAN 13 % Last Update: 12/4/2010 14 % Input: a - 11 subjects in one day recording 15 % b - 4 subjects in three days recording 16 % y - yes, to repick input and classify 17 % n - no, stop program 18 % Output variable: DATA - All subjects ECG signal matrix 19 % Input_data - Input matrix for neural... 20 % train_data - Training data set for... 21 % neural network. 22 % num_sub - Number of subject. 23 % num_set - Number of data set. 24 % num_peak - Number of peak segment. 25 % Parent: - 26 % Child : pre_processing, random_pick, classifier xxxi
  93. 93. xxxii Figure C.1: The program tree of the biometrics system.
  94. 94. xxxiii 27 %END 28 29 %The ECG signal classifier Program path 30 clear all; 31 32 %Pre-processing ECG signal 33 [DATA,num_sub,num_set,num_peak] = pre_processing; 34 35 %Random Pick one set of data from each subject... 36 for input_data 37 [Input_data,train_data] = random_pick(DATA,num_sub,... 38 num_set,num_peak); 39 40 %Classification ECG signal 41 [Result_float,Result_round] = classifier(Input_data,... 42 train_data, num_sub,num_set); 43 44 %Random Pick one set of data from each subject for... 45 % input_data 46 sta = 0; 47 while 1-sta == 1 48 yesno = input(’Repick the data set and classify... 49 again?(y/n): ’,’s’); 50 switch yesno 51 case ’y’ 52 [Input_data,train_data] = random_pick(DATA,num_sub,... 53 num_set,num_peak); 54 %Classification ECG signal 55 [Result_float,Result_round] = classifier(Input_data,... 56 train_data,num_sub,num_set); 57 sta = 0; 58 case ’n’ 59 sta = 1; 60 end 61 end
  95. 95. xxxiv Here is the child function: 1 % This function contains all the pre-processing steps 2 3 % Before classification, all the ECG data must pass 4 % through these pre-processing steps. 5 % 1. Extract all the ECG from .mat file. 6 % 2. Filter the noise for all ECG signal. 7 % 3. Segmentation process. 8 % 4. Feature extraction. 9 10 % Author: LAM ZHENG YAN 11 % Last Update: 12/4/2010 12 % Input: - 13 % Output variable: DATA - All subjects ECG signal matrix. 14 % num_sub - Number of subject. 15 % num_set - Number of data set. 16 % num_peak - Number of peak segment. 17 % Parent: ECG_Biometric_system 18 % Child : Data_structure, IIRfilter, ecg_segment,... 19 % wavelet_extract 20 21 function [DATA,num_sub,num_set,num_peak] = pre_processing 22 23 [Data,num_sub,num_set,num_peak] = Data_structure; 24 25 fil_ecg = IIRfilter(Data,num_sub,num_set); 26 27 seg = ecg_segment(fil_ecg,num_sub,num_set,num_peak); 28 29 DATA = wavelet_extract(seg,num_sub,num_set,num_peak);
  96. 96. xxxv Here is the child function: 1 % This function randomly pick one set of data as input 2 3 % Randomly pick one set of data from all subjects 4 % for input data set, rest for train data set. 5 % 1. Random pick by function "randperm". 6 % 2. Create input data set. 7 % 3. Create train data set. 8 % 4. Reconstruct 3D matrix to 2D matrix, as neural network 9 % require a 2D matrix as input. 10 11 % Author: LAM ZHENG YAN 12 % Last Update: 12/4/2010 13 % Input: features - all subject features matrix(3D) 14 % num_sub - Number of subject. 15 % num_set - Number of data set. 16 % num_peak - Number of peak segment. 17 % Output: Input_data - Input matrix for neural... 18 % network. 19 % train_data - Training data set for... 20 % neural network. 21 % Parent: ECG_Biometric_system 22 % Child : - 23 24 % Random pick one set of data as classifier input data set. 25 function [Input_data,train_data] = random_pick(features,... 26 num_sub,num_set,num_peak) 27 Rpick = randperm(num_set); 28 Rpick = Rpick(:,1) 29 % Rpick = 10 30 31 % Create Input data set 32 for n = 0:1:num_sub-1 33 for m = 0:1:3 34 p1 = 1+(m*(5*num_peak)); 35 b1 = (5*num_peak)*(m+1); 36 Input_data(p1:b1,n+1) = features(1:(5*num_peak),... 37 Rpick,(m+1)+(4*n)); 38 end 39 end 40 41 if Rpick ˜= num_set 42 features1(:,1:Rpick-1,:) = features(:,1:Rpick-1,:); 43 features1(:,Rpick:num_set-1,:) = features(:,... 44 Rpick+1:num_set,:); 45 46 % Create Training data set
  97. 97. xxxvi 47 for n = 0:1:num_sub-1 48 for m = 0:1:3 49 train_data(1+(m*(5*num_peak)):(5*num_peak)*(m+1),... 50 1+(n*(num_set-1)):(num_set-1)*(n+1)) = ... 51 features1(1:(5*num_peak),1:num_set-1,(m+1)+(4*n)); 52 end 53 end 54 55 else 56 features1(:,1:Rpick-1,:) = features(:,1:Rpick-1,:); 57 58 for n = 0:1:num_sub-1 59 for m = 0:1:3 60 train_data(1+(m*(5*num_peak)):(5*num_peak)*(m+1),... 61 1+(n*(num_set-1)):(num_set-1)*(n+1)) = ... 62 features1(1:(5*num_peak),1:num_set-1,(m+1)+(4*n)); 63 end 64 end 65 end
  98. 98. xxxvii Here is the child function: 1 % Classification function 2 3 % Neural network will be created, classificatiom result 4 % will shown in confusion plot. 5 % 1. Normalize the train data set. 6 % 2. Normalize the input data set. 7 % 3. Define train_parameter. 8 % 4. Create network. 9 % 5. Train network unitl it reach performance. 10 % 6. Classification. 11 12 % Author: LAM ZHENG YAN 13 % Last Update: 12/4/2010 14 % Input: Input_data - Input matrix for neural... 15 % network. 16 % train_data - Training data set for... 17 % neural network. 18 % Output: Confusion plot, Resutl_round 19 % Parent: ECG_Biometric_system 20 % Child : - 21 22 %The ECG signal classifier Program path 23 function [Result_float,Result_round] = ... 24 classifier(Input_data,train_data,num_sub,num_set) 25 26 % ******************************************************** 27 % Normalized the training set and the input_data set 28 % MapStd is used to normalize the data. 29 30 [train_data,PS] = mapstd(train_data); 31 Input_data = mapstd(’apply’,Input_data,PS); 32 33 34 target_data1 = zeros(num_sub,num_sub); 35 target_data = zeros(num_sub,num_sub); 36 for m = 0:1:num_sub-1 37 target_data(m+1,m+1) = 1; 38 target_data1(m+1,1+(m*(num_set-1)):(m+1)*(num_set-1))= 1; 39 end 40 41 % ************************************************************ 42 % Pattern Recognition Network Properties 43 % ************************************************************ 44 % val = 45 % 46 % Neural Network object:
  99. 99. xxxviii 47 % 48 % architecture: 49 % 50 % numInputs: 1 51 % numLayers: 2 52 % biasConnect: [1; 1] 53 % inputConnect: [1; 0] 54 % layerConnect: [0 0; 1 0] 55 % outputConnect: [0 1] 56 % 57 % numOutputs: 1 (read-only) 58 % numInputDelays: 0 (read-only) 59 % numLayerDelays: 0 (read-only) 60 % 61 % subobject structures: 62 % 63 % inputs: {1x1 cell} of inputs 64 % layers: {2x1 cell} of layers 65 % outputs: {1x2 cell} containing 1 output 66 % biases: {2x1 cell} containing 2 biases 67 % inputWeights: {2x1 cell} containing 1 input weight 68 % layerWeights: {2x2 cell} containing 1 layer weight 69 % 70 % functions: 71 % 72 % adaptFcn: ’trains’ 73 % divideFcn: ’dividerand’ 74 % gradientFcn: ’calcgrad’ 75 % initFcn: ’initlay’ 76 % performFcn: ’mse’ 77 % plotFcns: {’plotperform’,’plottrainstate’... 78 % ,’plotconfusion’,’plotroc’} 79 % trainFcn: ’trainscg’ 80 % 81 % parameters: 82 % 83 % adaptParam: .passes 84 % divideParam: .trainRatio, .valRatio, .testRatio 85 % gradientParam: (none) 86 % initParam: (none) 87 % performParam: (none) 88 % trainParam: .show, .showWindow, .showCommandLine,... 89 % .epochs, 90 % .time, .goal, .max_fail, .min_grad, 91 % .sigma, .lambda, .lr, .lr_inc 92 % 93 % weight and bias values:
  100. 100. xxxix 94 % 95 % IW: {2x1 cell} containing 1 input weight matrix 96 % LW: {2x2 cell} containing 1 layer weight matrix 97 % b: {2x1 cell} containing 2 bias vectors 98 % 99 % other: 100 % 101 % name: ’’ 102 % userdata: (user information) 103 % **************************************************************** 104 num_neu = 50 105 net = newpr(train_data, target_data1, num_neu); 106 net.trainParam.epochs = 150; 107 net.trainParam.lr = 0.05; 108 net.trainParam.lr_inc = 3.05; 109 net.trainParam.goal = 2e-3; 110 net.trainParam.max_fail = 50; 111 % net.divideFcn = ’’; 112 [net,tr,Y,E,Pf,Af] = train(net, train_data, target_data1); 113 minPerf = min(tr.perf); 114 goal = tr.goal; 115 116 % Auto retrain network, when performance not reached. 117 while minPerf > goal 118 net = initnw(net,2); 119 [net,tr,Y,E,Pf,Af] = train(net, train_data, target_data1); 120 minPerf = min(tr.perf); 121 goal = tr.goal; 122 end 123 124 Result_float = sim(net,Input_data); 125 Result_round = round(Result_float); 126 127 plotconfusion(target_data,Result_float)
  101. 101. xl Here is the child subfunction: 1 % This subfunction is extract all the .mat file 2 3 % All file is defined in a string seperately for each 4 % subject. Data structure is a 3D matrix: 5 % [num_sample x num_set x num_subject] 6 7 % Author: LAM ZHENG YAN 8 % Last Update: 12/4/2010 9 % Input: .mat files 10 % Output variable: DATA - All subjects ECG signal matrix. 11 % num_sub - Number of subject. 12 % num_set - Number of data set. 13 % num_peak - Number of peak segment. 14 % Parent: pre_processing 15 % Child : - 16 17 function [Data,num_sub,num_set,num_peak] =... 18 Data_structure 19 20 % Add Data structure 21 m = 7000; 22 n = 10; 23 num_sub = 11; 24 % *********************************************************** 25 26 % Faliq ECG signal x 20 set 27 faliq_file = {’faliq_1_11022010.mat’,... 28 ’faliq_2_11022010.mat’,’faliq_1_19012010.mat’,... 29 ’faliq_2_19012010.mat’,’faliq_1_24022010.mat’,... 30 ’faliq_2_24022010.mat’}; 31 for b = 1:1:6 32 faliq1 = load(faliq_file{:,b}); 33 v = 0:5:25; 34 faliq_dat(:,1+v(:,b)) = faliq1.trial1(1:m,5); 35 faliq_dat(:,2+v(:,b)) = faliq1.trial2(1:m,5); 36 faliq_dat(:,3+v(:,b)) = faliq1.trial3(1:m,5); 37 faliq_dat(:,4+v(:,b)) = faliq1.trial4(1:m,5); 38 faliq_dat(:,5+v(:,b)) = faliq1.trial5(1:m,5); 39 end 40 % *********************************************************** 41 42 % Yan ECG signal x 20 set 43 yan_file = {’yan_1_11022010.mat’,... 44 ’yan_2_11022010.mat’,’yan_1_21012010.mat’,... 45 ’yan_2_21012010.mat’,’yan_1_24022010.mat’,... 46 ’yan_2_24022010.mat’};
  102. 102. xli 47 for b = 1:1:6 48 yan1 = load(yan_file{:,b}); 49 v = 0:5:25; 50 yan_dat(:,1+v(:,b)) = yan1.trial1(1:m,5); 51 yan_dat(:,2+v(:,b)) = yan1.trial2(1:m,5); 52 yan_dat(:,3+v(:,b)) = yan1.trial3(1:m,5); 53 yan_dat(:,4+v(:,b)) = yan1.trial4(1:m,5); 54 yan_dat(:,5+v(:,b)) = yan1.trial5(1:m,5); 55 end 56 % *********************************************************** 57 58 % Veren ECG singal x 10 set 59 veren_file = {’veren_1_20012010.mat’,... 60 ’veren_2_20012010.mat’,’veren_1_24022010.mat’,... 61 ’veren_2_24022010.mat’,’veren_19032010_day3.mat’}; 62 for b = 1:1:4 63 veren1 = load(veren_file{:,b}); 64 v = 0:5:15; 65 veren_dat(:,1+v(:,b)) = veren1.trial1(1:m,5); 66 veren_dat(:,2+v(:,b)) = veren1.trial2(1:m,5); 67 veren_dat(:,3+v(:,b)) = veren1.trial3(1:m,5); 68 veren_dat(:,4+v(:,b)) = veren1.trial4(1:m,5); 69 veren_dat(:,5+v(:,b)) = veren1.trial5(1:m,5); 70 end 71 72 veren1 = load(veren_file{:,5}); 73 veren_dat(:,21) = veren1.trial1(1:m,6); 74 veren_dat(:,22) = veren1.trial2(1:m,6); 75 veren_dat(:,23) = veren1.trial3(1:m,6); 76 veren_dat(:,24) = veren1.trial4(1:m,6); 77 veren_dat(:,25) = veren1.trial5(1:m,6); 78 veren_dat(:,26) = veren1.trial6(1:m,6); 79 veren_dat(:,27) = veren1.trial7(1:m,6); 80 veren_dat(:,28) = veren1.trial8(1:m,6); 81 veren_dat(:,29) = veren1.trial9(1:m,6); 82 veren_dat(:,30) = veren1.trial10(1:m,6); 83 % ********************************************************** 84 85 % Asyrani ECG signal x 10 set 86 asyrani_file = {’asyrani_1_22012010.mat’,... 87 ’asyrani_2_22012010.mat’}; 88 for b = 1:1:2 89 asyrani1 = load(asyrani_file{:,b}); 90 v = 0:5:5; 91 asyrani_dat(:,1+v(:,b)) = asyrani1.trial1(1:m,5); 92 asyrani_dat(:,2+v(:,b)) = asyrani1.trial2(1:m,5); 93 asyrani_dat(:,3+v(:,b)) = asyrani1.trial3(1:m,5);
  103. 103. xlii 94 asyrani_dat(:,4+v(:,b)) = asyrani1.trial4(1:m,5); 95 asyrani_dat(:,5+v(:,b)) = asyrani1.trial5(1:m,5); 96 end 97 % ********************************************************** 98 99 % koshaopeng ECG signal x 10 set 100 koshaopeng_file = {’koshaopeng_1_20012010.mat’,... 101 ’koshaopeng_2_20012010.mat’}; 102 for b = 1:1:2 103 koshaopeng1 = load(koshaopeng_file{:,b}); 104 v = 0:5:5; 105 koshaopeng_dat(:,1+v(:,b)) = koshaopeng1.trial1(1:m,5); 106 koshaopeng_dat(:,2+v(:,b)) = koshaopeng1.trial2(1:m,5); 107 koshaopeng_dat(:,3+v(:,b)) = koshaopeng1.trial3(1:m,5); 108 koshaopeng_dat(:,4+v(:,b)) = koshaopeng1.trial4(1:m,5); 109 koshaopeng_dat(:,5+v(:,b)) = koshaopeng1.trial5(1:m,5); 110 end 111 % ********************************************************** 112 113 % yeong ECG signal x 10 set 114 yeong_file = {’yeong_1_20012010.mat’,... 115 ’yeong_2_20012010.mat’,’yeong_1_18032010.mat’,... 116 ’yeong_2_18032010.mat’,’yeong_1_24022010.mat’,... 117 ’yeong_2_24022010.mat’}; 118 for b = 1:1:6 119 yeong1 = load(yeong_file{:,b}); 120 v = 0:5:25; 121 yeong_dat(:,1+v(:,b)) = yeong1.trial1(1:m,5); 122 yeong_dat(:,2+v(:,b)) = yeong1.trial2(1:m,5); 123 yeong_dat(:,3+v(:,b)) = yeong1.trial3(1:m,5); 124 yeong_dat(:,4+v(:,b)) = yeong1.trial4(1:m,5); 125 yeong_dat(:,5+v(:,b)) = yeong1.trial5(1:m,5); 126 end 127 % ********************************************************** 128 129 % lowjinxiang ECG signal x 10 set 130 lowjinxiang_file = {’lowjinxiang_1_21012010.mat’,... 131 ’lowjinxiang_2_21012010.mat’}; 132 for b = 1:1:2 133 lowjinxiang1 = load(lowjinxiang_file{:,b}); 134 v = 0:5:5; 135 lowjinxiang_dat(:,1+v(:,b)) = lowjinxiang1.trial1(1:m,5); 136 lowjinxiang_dat(:,2+v(:,b)) = lowjinxiang1.trial2(1:m,5); 137 lowjinxiang_dat(:,3+v(:,b)) = lowjinxiang1.trial3(1:m,5); 138 lowjinxiang_dat(:,4+v(:,b)) = lowjinxiang1.trial4(1:m,5); 139 lowjinxiang_dat(:,5+v(:,b)) = lowjinxiang1.trial5(1:m,5); 140 end
  104. 104. xliii 141 % ********************************************************** 142 143 % mukhlish ECG signal x 10 set 144 mukhlish_file = {’mukhlish_1_21012010.mat’,... 145 ’mukhlish_2_21012010.mat’}; 146 for b = 1:1:2 147 mukhlish1 = load(mukhlish_file{:,b}); 148 v = 0:5:5; 149 mukhlish_dat(:,1+v(:,b)) = mukhlish1.trial1(1:m,5); 150 mukhlish_dat(:,2+v(:,b)) = mukhlish1.trial2(1:m,5); 151 mukhlish_dat(:,3+v(:,b)) = mukhlish1.trial3(1:m,5); 152 mukhlish_dat(:,4+v(:,b)) = mukhlish1.trial4(1:m,5); 153 mukhlish_dat(:,5+v(:,b)) = mukhlish1.trial5(1:m,5); 154 end 155 % ********************************************************** 156 157 % solehin ECG signal x 10 set 158 solehin_file = {’solehin_1_21012010.mat’,... 159 ’solehin_2_21012010.mat’}; 160 for b = 1:1:2 161 solehin1 = load(solehin_file{:,b}); 162 v = 0:5:5; 163 solehin_dat(:,1+v(:,b)) = solehin1.trial1(1:m,5); 164 solehin_dat(:,2+v(:,b)) = solehin1.trial2(1:m,5); 165 solehin_dat(:,3+v(:,b)) = solehin1.trial3(1:m,5); 166 solehin_dat(:,4+v(:,b)) = solehin1.trial4(1:m,5); 167 solehin_dat(:,5+v(:,b)) = solehin1.trial5(1:m,5); 168 end 169 % ********************************************************** 170 171 % alamin ECG signal x 10 set 172 alamin_file = {’alamin_1_22012010.mat’,... 173 ’alamin_2_22012010.mat’}; 174 for b = 1:1:2 175 alamin1 = load(alamin_file{:,b}); 176 v = 0:5:5; 177 alamin_dat(:,1+v(:,b)) = alamin1.trial1(1:m,5); 178 alamin_dat(:,2+v(:,b)) = alamin1.trial2(1:m,5); 179 alamin_dat(:,3+v(:,b)) = alamin1.trial3(1:m,5); 180 alamin_dat(:,4+v(:,b)) = alamin1.trial4(1:m,5); 181 alamin_dat(:,5+v(:,b)) = alamin1.trial5(1:m,5); 182 end 183 % ********************************************************** 184 185 % nikadam ECG signal x 10 set 186 nikadam_file = {’nikadam_1_26012010.mat’,... 187 ’nikadam_2_26012010.mat’};
  105. 105. xliv 188 for b = 1:1:2 189 nikadam1 = load(nikadam_file{:,b}); 190 v = 0:5:5; 191 nikadam_dat(:,1+v(:,b)) = nikadam1.trial1(1:m,5); 192 nikadam_dat(:,2+v(:,b)) = nikadam1.trial2(1:m,5); 193 nikadam_dat(:,3+v(:,b)) = nikadam1.trial3(1:m,5); 194 nikadam_dat(:,4+v(:,b)) = nikadam1.trial4(1:m,5); 195 nikadam_dat(:,5+v(:,b)) = nikadam1.trial5(1:m,5); 196 end 197 % ********************************************************** 198 199 % Choose 2 type of data set 200 % 1st type: 11 subjects in 1 day of recording 201 % 2nd type: 4 subjects in 3 days of recording 202 203 msg0 = {’The type of data set:’;... 204 ’a. 11 subjects in 1 day of recording’;... 205 ’b. 4 subjects in 3 days of recording’} 206 type_data = input(’Choose the type of data set(a/b): ’,’s’); 207 208 switch type_data 209 case ’a’ 210 % data structure 211 Data(:,:,1) = faliq_dat(:,1:10); 212 Data(:,:,2) = yan_dat(:,1:10); 213 Data(:,:,3) = veren_dat(:,1:10); 214 Data(:,:,4) = alamin_dat; 215 Data(:,:,5) = asyrani_dat; 216 Data(:,:,6) = koshaopeng_dat; 217 Data(:,:,7) = lowjinxiang_dat; 218 Data(:,:,8) = mukhlish_dat; 219 Data(:,:,9) = nikadam_dat; 220 Data(:,:,10) = solehin_dat; 221 Data(:,:,11) = yeong_dat(:,1:10); 222 num_sub = 11; 223 num_set = 10; 224 num_peak = 15; 225 226 case ’b’ 227 Data(:,:,1) = faliq_dat; 228 Data(:,:,2) = yan_dat; 229 Data(:,:,3) = veren_dat; 230 Data(:,:,4) = yeong_dat; 231 num_sub = 4; 232 num_set = 30; 233 num_peak = 15; 234 end
  106. 106. xlv Here is the child subfunction: 1 % This subfunction filters the noise in the ECG. 2 3 % An IIR filter is created for the filtering process. 4 5 % Author: LAM ZHENG YAN 6 % Last Update: 12/4/2010 7 % Input: ecg - data structure(3D) 8 % num_sub - Number of subject. 9 % num_set - Number of data set. 10 % num_peak - Number of peak segment. 11 % Output variable: fil_ecg - ECG without noise 12 % Parent: pre_processing 13 % Child : - 14 15 function fil_ecg = IIRfilter(ecg,num_sub,num_set) 16 % Lowpass Elliptic IIR filter design 17 ftype = ’low’; 18 Rp=2; 19 Rs=90; 20 Wp = 0.325; 21 Ws = 0.33; 22 [n, Wn] =ellipord(Wp,Ws,Rp,Rs); 23 24 % Transfer Function design 25 [b,a] = ellip(n,Rp,Rs,Wn,ftype); 26 27 for n = 1:1:num_sub 28 for m = 1:1:num_set 29 fil_ecg(:,m,n) = filtfilt(b,a,ecg(:,m,n)); 30 end 31 end
  107. 107. xlvi Here is the child subfunction: 1 % This subfunction is segmentation 2 3 % The ECG is segmented into 15 pieces. 4 % 1. Detect the first peak. 5 % 2. Find 15 peak after the first peak. 6 % 3. Define the range of each segment. 7 8 % Author: LAM ZHENG YAN 9 % Last Update: 12/4/2010 10 % Input: w1 - data structure(3D) 11 % num_sub - Number of subject. 12 % num_set - Number of data set. 13 % num_peak - Number of peak segment. 14 % Output variable: segments(3D)- contain 15 pieces of ECG 15 % Parent: pre_processing 16 % Child : - 17 18 function segments = ecg_segment(w1,num_sub,... 19 num_set,num_peak) 20 RRmax = 250; 21 RRmin = 150; 22 RR2max = 500; 23 RR2min = 300; 24 %first double interval, all peak detected 25 for v = 1:1:num_sub 26 for c = 1:1:num_set 27 28 for j = 1:1:RR2max 29 if w1(j,c,v) == max(w1(1:RR2max,c,v)) 30 position = j; 31 int_peak = [position,w1(j,c,v)]; 32 end 33 end 34 35 for m = 1:1:num_peak 36 for k = position:1:position+373 37 if w1(k,c,v) == ... 38 max(w1(position+17:position+390,c,v)) 39 position = k; 40 peak_pos(:,m) = position; 41 peak_val(:,m) = w1(k,c,v); 42 end 43 end 44 end 45 46 %from peak, break each ECG wave in single vector
  108. 108. xlvii 47 % Each peak is 201 samples 48 for n = 0:1:num_peak-1 49 p = 1+(n*201); 50 b = (n+1)*201; 51 segments(p:b,c,v) = ... 52 w1(peak_pos(:,n+1)-50:peak_pos(:,n+1)+150,c,v); 53 end 54 55 end 56 end
  109. 109. xlviii Here is the child subfunction: 1 % This subfunction is feature extraction 2 3 % Feature extraction is done by wavelet decomposition 4 % 1. Find the maximum from each subband. 5 % 2. Find the minimum from each subband. 6 % 3. Find the mean from each subband. 7 % 4. Find the standard deviation from each subband. 8 9 % Author: LAM ZHENG YAN 10 % Last Update: 12/4/2010 11 % Input: ecg_seg - data that already segmented(3D) 12 % num_sub - Number of subject. 13 % num_set - Number of data set. 14 % num_peak - Number of peak segment. 15 % Output variable: all data features (3D) 16 % Parent: pre_processing 17 % Child : - 18 19 function features = ... 20 wavelet_extract(ecg_seg,num_sub,num_set,num_peak) 21 %Discrete Wavelet Transform 22 for v = 0:1:num_sub-1 23 for c = 1:1:num_set 24 for n = 0:1:num_peak-1 25 %Daubechies wavelet order 2, lvl 4 26 p = 1+(n*201); 27 b = (n+1)*201; 28 p1 = 1+(n*211); 29 b1 = (n+1)*211; 30 [C(p1:b1,c,v+1),L]=... 31 wavedec(ecg_seg(p:b,c,v+1),4,’db2’); 32 end 33 end 34 end 35 36 for q = 1:1:5 37 %convert L 38 Lsum(:,1) = 1; 39 Lsum(:,q+1) = sum(L(1:q,:)); 40 end 41 42 %Create the 3D features cube 43 %Define 3rd dimension axis 44 maxi = 1; 45 mini = 2; 46 meanval = 3;
  110. 110. xlix 47 stdval = 4; 48 for v = 0:1:num_sub-1 49 for c = 1:1:num_set 50 for m = 0:1:num_peak-1 51 p1 = 1+(m*211); 52 b1 = (m+1)*211; 53 C1(:,c,v+1) = C(p1:b1,c,v+1); 54 for mn = 1:1:5 55 features(mn+(m*5),c,maxi+(v*4)) =... 56 max(C1(Lsum(:,mn):Lsum(:,mn+1),c,v+1)); 57 features(mn+(m*5),c,mini+(v*4)) =... 58 min(C1(Lsum(:,mn):Lsum(:,mn+1),c,v+1)); 59 features(mn+(m*5),c,meanval+(v*4)) =... 60 mean(C1(Lsum(:,mn):Lsum(:,mn+1),c,v+1)); 61 features(mn+(m*5),c,stdval+(v*4)) =... 62 std(C1(Lsum(:,mn):Lsum(:,mn+1),c,v+1)); 63 end 64 end 65 66 end 67 end
