Measurement of pulse rate of a person using his video
By Sahil Shah
Date: 30-11-2012
•Literature Review: From the literature we know that approaches have been found to extract human pulse information from the video of a stationary person.
•One of the methods uses the mean values of the R, G, B streams from a specific region of interest of the face, plotted over time from the video.
•Analysis using Matlab.
•Two approaches:
1. Object tracking: We use the standard object-tracking implementation in MIRA to detect the face. The ROIs are stated in the configuration file of the Pulse Detector unit as sub-regions of the face. We select the largest detected object as the face and subsequently select the object closest to the last detection as the face.
ADV:
• Faster
• Generalized
DIS:
• Breaks when the first detection is wrong (generally when the face takes up a smaller area in the image)
• 'Jumping' detections
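The selection heuristic above (largest object first, then nearest to the last detection) can be sketched as follows. This is a minimal illustration, not the MIRA implementation; the `Rect` type and function name are assumptions for the example.

```cpp
#include <vector>
#include <limits>

// Illustrative bounding box for a detection (not MIRA's actual type).
struct Rect { double x, y, w, h; };

// Face selection heuristic from the object-tracking approach: on the first
// frame take the largest detection; afterwards take the detection whose
// centre is closest to the previous face.
Rect selectFace(const std::vector<Rect>& detections, const Rect* last) {
    if (last == nullptr) {
        // First frame: assume the largest detected object is the face.
        Rect best = detections.front();
        for (const Rect& r : detections)
            if (r.w * r.h > best.w * best.h) best = r;
        return best;
    }
    // Later frames: the detection closest to the last face wins.
    Rect best = detections.front();
    double bestDist = std::numeric_limits<double>::max();
    for (const Rect& r : detections) {
        const double dx = (r.x + r.w / 2) - (last->x + last->w / 2);
        const double dy = (r.y + r.h / 2) - (last->y + last->h / 2);
        const double d = dx * dx + dy * dy;
        if (d < bestDist) { bestDist = d; best = r; }
    }
    return best;
}
```

The nearest-to-last rule is what makes a wrong first detection so damaging: once a non-face object is locked in, every subsequent frame tracks it.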
2. Active Appearance Model (AAM): We use the active appearance model algorithm to recognize faces based on multiple features. It returns triangles that define different features of the face. We configure the AAM face detector to return some pre-selected triangles as ROIs.
ADV:
• More robust to small movements
• Exact ROIs
DIS:
• No generalized model for all kinds of faces
•Average R, G and B pixel values of the regions of interest from the face for each timestamp
•Interpolation to get RGB values for the timestamps at which we receive images (since detections arrive slightly later than the images)
•The sampling rate can be changed and is not required to be the same as that of the images, because interpolation can also be used to get intensity values for any timestamp
•Interpolation also helps to maintain equal intervals between frames and increases accuracy
•Intensity Normalization:
  rn = r/(r+g+b)
  gn = g/(r+g+b)
  bn = b/(r+g+b)
•Independent Component Analysis
•Hann Window: Reduces resolution but works better when the signal-to-noise ratio is low
•Bayes Filtering: Kernel with a +/-1 bin change (+/-3 bpm for a window of 200 frames at 10 Hz)
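Two of the steps above are simple enough to show directly: the normalization formulas and the Hann window. A minimal sketch; function names are illustrative.

```cpp
#include <array>
#include <vector>
#include <cmath>
#include <cstddef>

// Intensity normalization: divide each channel by the sum r+g+b so that
// global brightness changes cancel out and only relative colour remains.
std::array<double, 3> normalizeRGB(double r, double g, double b) {
    const double s = r + g + b;
    if (s == 0.0) return {0.0, 0.0, 0.0};
    return {r / s, g / s, b / s};
}

// Hann window of length n, multiplied sample-wise onto the signal before
// the DFT; it reduces spectral leakage at the cost of frequency resolution.
std::vector<double> hannWindow(std::size_t n) {
    std::vector<double> w(n);
    const double pi = std::acos(-1.0);
    for (std::size_t i = 0; i < n; ++i)
        w[i] = 0.5 * (1.0 - std::cos(2.0 * pi * i / (n - 1)));
    return w;
}
```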
•Fast Fourier Transform: Discrete Fourier transforms of the processed signals to get their power spectrum
•Band-pass filter: Band-pass filter (0.75 to 1.5 Hz) to get the frequency spectrum for the range in which the human pulse can lie
•Peak Detection: Detects the maximum-power frequency
•Parabola estimation
•Calculate pulse
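The spectrum, band-pass, and peak-detection steps can be combined into one sketch. A real implementation would use an FFT library; this O(n²) DFT is only for clarity, and the function name and default band edges are assumptions taken from the slide above.

```cpp
#include <vector>
#include <cmath>
#include <cstddef>

// Power spectrum of a real signal via a naive DFT, with the peak search
// restricted to the 0.75-1.5 Hz pulse band; the winning frequency is
// converted to beats per minute. `fs` is the (virtual) sampling frequency.
double estimatePulseBpm(const std::vector<double>& x, double fs,
                        double fLo = 0.75, double fHi = 1.5) {
    const std::size_t n = x.size();
    const double pi = std::acos(-1.0);
    double bestPower = -1.0, bestFreq = 0.0;
    for (std::size_t k = 1; k <= n / 2; ++k) {
        const double f = k * fs / n;
        if (f < fLo || f > fHi) continue;          // band-pass
        double re = 0.0, im = 0.0;
        for (std::size_t t = 0; t < n; ++t) {      // DFT bin k
            const double a = 2.0 * pi * k * t / n;
            re += x[t] * std::cos(a);
            im -= x[t] * std::sin(a);
        }
        const double power = re * re + im * im;    // power spectrum
        if (power > bestPower) { bestPower = power; bestFreq = f; }
    }
    return bestFreq * 60.0;                        // Hz -> beats per minute
}
```

With 200 frames at 10 Hz the bin spacing is 0.05 Hz, i.e. 3 bpm, which is why a parabola fit over the peak and its neighbours is used to refine the estimate below bin resolution.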
•The Pulse Detector can be configured with the help of various parameters, such as:
  Number of frames
  Virtual Sampling Frequency
  Regions of Interest
  Use AAM
  Use ICA
  Bayes Filter
  Windowing (Hann)
  Filter Bands
  Parabola Estimation
•We evaluated the Pulse Detector unit on the following factors:
  Motion vs. Stationary
  AAM vs. Object Tracking
  Near vs. Far (Resolution)
  Jumping detections vs. Non-jumping detections
  Different ROIs
  ICA vs. No ICA
•The analysis and testing were done in Matlab, while the entire implementation is in C++ using the Middleware for Robotic Applications (MIRA) framework.
Which algorithm is the most promising for usage?
• The Object Tracking algorithm currently gives better results.
• The AAM tends to lose detections as movement increases.
• However, a better-trained AAM would be more robust, because it is more accurate and gives the exact ROIs, so the effect of small noise becomes negligible.
What is the maximum distance of people in the image from which robust pulse extraction is possible?
• For stationary images taken using the Kinect sensor we got good results even for a face size of 107x107 pixels in a 640x480 image.
• This was around 80 cm from the camera.
To what degree can people move in the image without losing pulse observation?
• A well-trained AAM would almost nullify the noise effects; currently, however, face tracking is not robust to higher noise (>10 pixels), especially when the person is farther from the camera.
What is the minimum duration of a video sequence that allows pulse rate extraction?
• 20-second blocks of video are sufficient for pulse rate extraction. We apply a 20-second sliding window continuously for as long as the video is captured.
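The sliding-window idea above can be sketched as a fixed-capacity buffer: at 10 Hz, a 20-second window holds 200 samples, and each new frame's value evicts the oldest so the pulse estimate can be refreshed continuously. The class name and interface are illustrative, not the MIRA unit's implementation.

```cpp
#include <deque>
#include <cstddef>

// Fixed-capacity sliding window over the incoming per-frame signal values.
// Once full, every push drops the oldest sample, keeping exactly the most
// recent `capacity` samples (e.g. 200 = 20 s at 10 Hz).
class SlidingWindow {
public:
    explicit SlidingWindow(std::size_t capacity) : capacity_(capacity) {}
    void push(double v) {
        buf_.push_back(v);
        if (buf_.size() > capacity_) buf_.pop_front();
    }
    bool full() const { return buf_.size() == capacity_; }
    const std::deque<double>& samples() const { return buf_; }
private:
    std::size_t capacity_;
    std::deque<double> buf_;
};
```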
• Remote plethysmographic imaging using ambient light. Verkruysse, W., Svaasand, L.O. and Nelson, J.S., Optics Express, vol. 16, nr. 26, pp. 21434-21445, Optical Society of America, 2008
• Eulerian video magnification for revealing subtle changes in the world. Wu, H.Y., Rubinstein, M., Shih, E., Guttag, J., Durand, F. and Freeman, W., ACM Transactions on Graphics (TOG), vol. 31, nr. 4, p. 65, ACM, 2012
• Non-contact, automated cardiac pulse measurements using video imaging and blind source separation. Poh, M.Z., McDuff, D.J. and Picard, R.W., Optics Express, vol. 18, nr. 10, pp. 10762-10774, Optical Society of America, 2010