Can Visual Fixation Patterns Improve Image Quality?
Eric C. Larson, Cuong Vu, and Damon M. Chandler, Members, IEEE
Image Coding and Analysis Lab, Department of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078



Introduction

A computer cannot judge image quality. Although current algorithms have made great strides in predicting human ratings of fidelity, we still do not have a foolproof method of judging the quality of distorted images. This experiment explores whether the missing link in image quality assessment is knowing where humans tend to look in an image.

Five common metrics of image fidelity were augmented using two sets of eye fixation data. The first set was obtained under task-free viewing conditions; the second was obtained when viewers were asked specifically to "judge image quality." We then compared the augmented metrics to subjective ratings of the images.

We then asked:
1. Can existing fidelity metrics be improved using eye fixation data?
2. If so, is it more appropriate to use eye fixations obtained under no-task viewing conditions or when viewers were asked to assess quality?
3. Can PSNR be augmented using eye fixation data to perform as well as SSIM, VIF, VSNR, or WSNR?
4. Using a fixation-based segmentation, can we quantify how important each segmented region is for predicting human subjective ratings?

Methods

Two types of visual fixation data were used. The first set of fixations was collected when the viewers were given no task (i.e., they simply looked at the images). The second set was collected when the viewers were asked to assess image fidelity.

The resulting eye-tracking data were used to cluster images from the LIVE database [1] into three regions: the regions where viewers gazed (1) with high frequency, (2) with low frequency, and (3) not at all.

Once the images were segmented using fixation data, we investigated how much each region contributed to the subjective quality of the image, and used the segmentation to augment five image quality metrics (PSNR, WSNR, SSIM [2], VIF [3], and VSNR [4]).

Specifically, we (1) weighted the three segmented regions in the images, (2) used the metrics to calculate a new weighted quality for the image, and (3) calculated the correlation between the new weighted quality predictions and subjective ratings of quality:

    E_tot = α_(1st ROI) · E_(1st ROI) + α_(2nd ROI) · E_(2nd ROI) + α_(non-ROI) · E_(non-ROI)

By adjusting the weights (constrained to sum to one) we were able to examine the "correlation space" of all possible weighting combinations. This was done for both sets of fixation data ("Tasked" and "No Task").

Results

Improvement in correlation over the unweighted metric:

    Metric   No Task Improvement   Tasked Improvement
    PSNR     0.0137                0.0045
    SSIM     0.0344                0.0032
    VIF      0.0794                0.0292
    VSNR     0.0022                0.0100
    WSNR     0.0096                0.0038

Tasked condition:

    Metric   F-Statistic   Residual Skewness   Residual Kurtosis
    PSNR     0.9943        0.6628              -0.1136
    WSNR     0.9920        0.8414               0.5133
    VSNR     0.9730        1.2734               2.1786
    SSIM     0.9724        0.9383               0.7314
    VIF      0.8384        1.5202               2.9391

No Task condition:

    Metric   F-Statistic   Residual Skewness   Residual Kurtosis
    PSNR     0.9500        0.7149              -0.0810
    WSNR     0.9433        0.9463               0.9048
    VSNR     1.1030        1.3209               2.4445
    SSIM     0.8274        0.8110               0.4574
    VIF      0.6285        1.4874               2.7594

[Figures: bar graph of correlation improvements, and "correlation space" plots for the No Task and Tasked fixation conditions.]

From the bar graph, it can be seen that VIF shows the most improvement in correlation and that WSNR ends with the highest correlation. Notice also that the no-task-condition regions were the most useful for augmenting the metrics.

The graphs of the "correlation space" show that the highest correlations generally appear when the region with the highest fixation frequency is weighted most heavily and the region with a mild number of fixations receives some weight. This holds in all cases except VSNR, where most of the weight should be placed on the regions where people do not look (although VSNR has the least to gain from fixations).

None of the improvements is statistically significant over the unweighted metric, except when VIF is weighted by the no-task-condition fixations.

Conclusions

A computational experiment was presented that segmented images based upon eye fixation data and augmented existing image fidelity metrics with the segmentation regions. It was shown that:
1. Existing fidelity metrics can be positively augmented using fixation data, with SSIM and VIF showing the greatest improvements (for common-sense weighting).
2. The no-task fixation condition showed the greatest improvements for all metrics except VSNR.
3. Under no-task conditions, the primary region of eye fixation corresponds to the most important region for PSNR, SSIM, and VIF. For VSNR, the non-ROI is the most important region.
4. PSNR can be augmented to perform better than original VIF, but not SSIM, VSNR, or WSNR (under this image set). When all metrics are augmented, PSNR has the worst performance.

Ultimately, the best way to augment metrics using ROI information, and how to cluster eye-tracking data in the most meaningful manner for image fidelity assessment, remain open questions. However, it is clear from this experiment and others (for example, see [5][6]) that fixation and ROI data are less important for fidelity assessment than expected.

Future Work

Although fixation data proved ineffective when working with images of all quality levels, it was observed over the course of the experiment that region-of-interest information might be useful for very-low-quality images.

References

[1] H. R. Sheikh, Z. Wang, A. C. Bovik, and L. K. Cormack, "Image and video quality assessment research at LIVE." Online: http://live.ece.utexas.edu/research/quality/.
[2] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity," IEEE Trans. Image Process., Vol. 13, pp. 600-612, 2004.
[3] H. R. Sheikh and A. C. Bovik, "Image Information and Visual Quality," IEEE Trans. Image Process., Vol. 15, No. 2, pp. 430-444, 2006.
[4] D. M. Chandler and S. S. Hemami, "VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images," IEEE Trans. Image Process., Vol. 16, No. 9, 2007.
[5] A. Ninassi, O. Le Meur, P. Le Callet, and D. Barba, "Does where you gaze on an image affect your perception of quality? Applying visual attention to image quality," in IEEE ICIP, 2007.
[6] E. C. Larson and D. M. Chandler, "Unveiling relationships between regions of interest and image fidelity metrics," Conference on Visual Communications and Image Processing, 2007.
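As an illustration of the three-region segmentation described in Methods, the sketch below clusters a fixation-density map into a 1st ROI (high-frequency gaze), a 2nd ROI (low-frequency gaze), and a non-ROI (never fixated). The quantile threshold `hi_frac` and the function name are assumptions for illustration; the poster does not specify the actual clustering procedure used on the eye-tracking data.

```python
import numpy as np

def segment_by_fixations(fix_density, hi_frac=0.2):
    """Cluster pixels into three regions by fixation density.

    hi_frac is an illustrative assumption: the top hi_frac fraction of
    fixated pixels (by density) becomes the 1st ROI.

    Returns a label map:
      2 = 1st ROI  (high-frequency gaze)
      1 = 2nd ROI  (low-frequency gaze)
      0 = non-ROI  (never fixated)
    """
    labels = np.zeros(fix_density.shape, dtype=int)
    fixated = fix_density > 0
    if fixated.any():
        # Density cutoff separating the densest fixations from the rest.
        hi_cut = np.quantile(fix_density[fixated], 1.0 - hi_frac)
        labels[fixated] = 1                # 2nd ROI: some fixations
        labels[fix_density >= hi_cut] = 2  # 1st ROI: densest fixations
    return labels
```

A real pipeline would first smooth raw fixation points into a density map (e.g., with a Gaussian kernel) before thresholding.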

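The region-weighting and "correlation space" search from Methods can be sketched as follows. Given hypothetical per-region error scores E_(1st ROI), E_(2nd ROI), E_(non-ROI) for a set of images, the weighted total is E_tot = α1·E_(1st ROI) + α2·E_(2nd ROI) + α3·E_(non-ROI) with the α summing to one, and the correlation with subjective ratings is evaluated over a grid on that simplex. The grid step, the use of Pearson correlation, and all function names are assumptions; the experiment's exact search and correlation measure are not specified here.

```python
import numpy as np

def augmented_scores(E_regions, weights):
    """E_regions: (n_images, 3) per-region metric errors, columns ordered
    [1st ROI, 2nd ROI, non-ROI]; weights: (3,) alphas summing to one.
    Returns the weighted total error E_tot for each image."""
    return E_regions @ np.asarray(weights)

def best_weights(E_regions, subjective, step=0.1):
    """Grid-search the simplex of weights (summing to one) and return the
    weights whose E_tot correlates best (in magnitude, Pearson) with the
    subjective ratings -- one point of the 'correlation space' per weight
    combination."""
    best_w, best_r = None, -np.inf
    for a1 in np.arange(0.0, 1.0 + 1e-9, step):
        for a2 in np.arange(0.0, 1.0 - a1 + 1e-9, step):
            w = np.array([a1, a2, 1.0 - a1 - a2])  # alphas sum to one
            r = abs(np.corrcoef(augmented_scores(E_regions, w),
                                subjective)[0, 1])
            if r > best_r:
                best_w, best_r = w, r
    return best_w, best_r
```

Plotting r over all (α1, α2) pairs reproduces the kind of "correlation space" surface shown on the poster, with the unweighted metric corresponding to equal weights.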
Perceptual Weights Based On Local Energy For Image Quality Assessment
 
A HVS based Perceptual Quality Estimation Measure for Color Images
A HVS based Perceptual Quality Estimation Measure for Color ImagesA HVS based Perceptual Quality Estimation Measure for Color Images
A HVS based Perceptual Quality Estimation Measure for Color Images
 
Session 29 christer ahlström
Session 29 christer ahlströmSession 29 christer ahlström
Session 29 christer ahlström
 
Enhancing Target Efficiency of a Laser by the Integration of a Stabilization ...
Enhancing Target Efficiency of a Laser by the Integration of a Stabilization ...Enhancing Target Efficiency of a Laser by the Integration of a Stabilization ...
Enhancing Target Efficiency of a Laser by the Integration of a Stabilization ...
 
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptxReview A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
Review A DCNN APPROACH FOR REAL TIME UNCONSTRAINED FACE.pptx
 
Blur Parameter Identification using Support Vector Machine
Blur Parameter Identification using Support Vector MachineBlur Parameter Identification using Support Vector Machine
Blur Parameter Identification using Support Vector Machine
 
International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)International Journal of Computational Engineering Research(IJCER)
International Journal of Computational Engineering Research(IJCER)
 
D25014017
D25014017D25014017
D25014017
 
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
A Survey on Retinal Area Detector From Scanning Laser Ophthalmoscope (SLO) Im...
 

Mais de Eric Larson

PupilWare Petra 2015
PupilWare Petra 2015PupilWare Petra 2015
PupilWare Petra 2015Eric Larson
 
Mobile healthforthemasses.2015
Mobile healthforthemasses.2015Mobile healthforthemasses.2015
Mobile healthforthemasses.2015Eric Larson
 
Flipping the clinic: in home health monitoring using mobile phones
Flipping the clinic: in home health monitoring using mobile phonesFlipping the clinic: in home health monitoring using mobile phones
Flipping the clinic: in home health monitoring using mobile phonesEric Larson
 
First world problems: education, options, and impact
First world problems: education, options, and impactFirst world problems: education, options, and impact
First world problems: education, options, and impactEric Larson
 
Recognizing mHealth through phone-as-a-sensor technology
Recognizing mHealth through phone-as-a-sensor technologyRecognizing mHealth through phone-as-a-sensor technology
Recognizing mHealth through phone-as-a-sensor technologyEric Larson
 
Consumer Centered Calibration End Use Water Monitoring
Consumer Centered Calibration End Use Water MonitoringConsumer Centered Calibration End Use Water Monitoring
Consumer Centered Calibration End Use Water MonitoringEric Larson
 
Big Data, Small Data
Big Data, Small DataBig Data, Small Data
Big Data, Small DataEric Larson
 
Phone As A Sensor Technology: mHealth and Chronic Disease
Phone As A Sensor Technology: mHealth and Chronic Disease Phone As A Sensor Technology: mHealth and Chronic Disease
Phone As A Sensor Technology: mHealth and Chronic Disease Eric Larson
 
Commercialization and Broader Impact: mirroring research through commercial d...
Commercialization and Broader Impact: mirroring research through commercial d...Commercialization and Broader Impact: mirroring research through commercial d...
Commercialization and Broader Impact: mirroring research through commercial d...Eric Larson
 
Creating the Dots: Computer Science and Engineering for Good
Creating the Dots: Computer Science and Engineering for GoodCreating the Dots: Computer Science and Engineering for Good
Creating the Dots: Computer Science and Engineering for GoodEric Larson
 
Mobilizing mHealth: interdisciplinary computer science and engineering
Mobilizing mHealth: interdisciplinary computer science and engineeringMobilizing mHealth: interdisciplinary computer science and engineering
Mobilizing mHealth: interdisciplinary computer science and engineeringEric Larson
 
Applications and Derivation of Linear Predictive Coding
Applications and Derivation of Linear Predictive CodingApplications and Derivation of Linear Predictive Coding
Applications and Derivation of Linear Predictive CodingEric Larson
 
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and Water
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and WaterSensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and Water
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and WaterEric Larson
 
Ubicomp2012 spiro smartpresentation
Ubicomp2012 spiro smartpresentationUbicomp2012 spiro smartpresentation
Ubicomp2012 spiro smartpresentationEric Larson
 
Machine Learning Lecture
Machine Learning LectureMachine Learning Lecture
Machine Learning LectureEric Larson
 
Open cv tutorial
Open cv tutorialOpen cv tutorial
Open cv tutorialEric Larson
 

Mais de Eric Larson (20)

PupilWare Petra 2015
PupilWare Petra 2015PupilWare Petra 2015
PupilWare Petra 2015
 
Mobile healthforthemasses.2015
Mobile healthforthemasses.2015Mobile healthforthemasses.2015
Mobile healthforthemasses.2015
 
Flipping the clinic: in home health monitoring using mobile phones
Flipping the clinic: in home health monitoring using mobile phonesFlipping the clinic: in home health monitoring using mobile phones
Flipping the clinic: in home health monitoring using mobile phones
 
First world problems: education, options, and impact
First world problems: education, options, and impactFirst world problems: education, options, and impact
First world problems: education, options, and impact
 
Recognizing mHealth through phone-as-a-sensor technology
Recognizing mHealth through phone-as-a-sensor technologyRecognizing mHealth through phone-as-a-sensor technology
Recognizing mHealth through phone-as-a-sensor technology
 
Consumer Centered Calibration End Use Water Monitoring
Consumer Centered Calibration End Use Water MonitoringConsumer Centered Calibration End Use Water Monitoring
Consumer Centered Calibration End Use Water Monitoring
 
Big Data, Small Data
Big Data, Small DataBig Data, Small Data
Big Data, Small Data
 
Phone As A Sensor Technology: mHealth and Chronic Disease
Phone As A Sensor Technology: mHealth and Chronic Disease Phone As A Sensor Technology: mHealth and Chronic Disease
Phone As A Sensor Technology: mHealth and Chronic Disease
 
Commercialization and Broader Impact: mirroring research through commercial d...
Commercialization and Broader Impact: mirroring research through commercial d...Commercialization and Broader Impact: mirroring research through commercial d...
Commercialization and Broader Impact: mirroring research through commercial d...
 
Creating the Dots: Computer Science and Engineering for Good
Creating the Dots: Computer Science and Engineering for GoodCreating the Dots: Computer Science and Engineering for Good
Creating the Dots: Computer Science and Engineering for Good
 
Mobilizing mHealth: interdisciplinary computer science and engineering
Mobilizing mHealth: interdisciplinary computer science and engineeringMobilizing mHealth: interdisciplinary computer science and engineering
Mobilizing mHealth: interdisciplinary computer science and engineering
 
Applications and Derivation of Linear Predictive Coding
Applications and Derivation of Linear Predictive CodingApplications and Derivation of Linear Predictive Coding
Applications and Derivation of Linear Predictive Coding
 
BreatheSuite
BreatheSuiteBreatheSuite
BreatheSuite
 
Job Talk
Job TalkJob Talk
Job Talk
 
Larson.defense
Larson.defenseLarson.defense
Larson.defense
 
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and Water
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and WaterSensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and Water
Sensing for Sustainability: Disaggregated Sensing of Electricity, Gas, and Water
 
Ubicomp2012 spiro smartpresentation
Ubicomp2012 spiro smartpresentationUbicomp2012 spiro smartpresentation
Ubicomp2012 spiro smartpresentation
 
Machine Learning Lecture
Machine Learning LectureMachine Learning Lecture
Machine Learning Lecture
 
ACEEE 2012
ACEEE 2012ACEEE 2012
ACEEE 2012
 
Open cv tutorial
Open cv tutorialOpen cv tutorial
Open cv tutorial
 

Can visual fixation patterns improve image quality metrics

  • 1. Can Visual Fixation Patterns Improve Image Quality?
Eric C. Larson, Cuong Vu, and Damon M. Chandler, Members IEEE
Image Coding and Analysis Lab, Department of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078

Introduction
A computer cannot judge image quality. Although current algorithms have made great strides in predicting human ratings of fidelity, we still do not have a foolproof method of judging the quality of distorted images. This experiment explores whether the missing link in image quality assessment is knowing where humans tend to look in an image.

Five common metrics of image fidelity were augmented using two sets of eye fixation data. The first set was obtained under task-free viewing conditions; the second was obtained when viewers were asked specifically to "judge image quality." We then compared the augmented metrics to subjective ratings of the images.

We then asked:
1. Can existing fidelity metrics be improved using eye fixation data?
2. If so, is it more appropriate to use eye fixations obtained under no-task viewing conditions or when viewers were asked to assess quality?
3. Can PSNR be augmented using eye fixation data to perform as well as SSIM, VIF, VSNR, or WSNR?
4. Using a fixation-based segmentation, can we quantify how important each segmented region is for predicting human subjective ratings?

Methods
Two types of visual fixation data were used. The first set of fixations was collected when the viewers were given no task (i.e., they simply looked at the images); the second set was collected when the viewers were asked to assess image fidelity.

The resulting eye-tracking data was used to cluster images from the LIVE database [1] into three regions: the regions where viewers gazed (1) with high frequency, (2) with low frequency, and (3) not at all.

Once the images were segmented using fixation data, we investigated how much each region contributed to the subjective quality of the image, and used the regions to augment five image quality metrics (PSNR, WSNR, SSIM [2], VIF [3], and VSNR [4]).

Specifically, we (1) weighted the three segmented regions in the images, (2) used the metrics to calculate a new weighted quality for the image, and (3) calculated the correlation between the new weighted quality predictions and subjective ratings of quality. The total weighted error is

  E_tot = α_1st-ROI · E_1st-ROI + α_2nd-ROI · E_2nd-ROI + α_non-ROI · E_non-ROI

By adjusting the weights (and constraining them to sum to one), we were able to examine the "correlation space" of all possible weighting combinations. This was done for both sets of fixation data ("Tasked" and "No Task").

[Figure panels: No Task Fixation → Correlation Space; Tasked Fixation → Correlation Space; MAD, the Most Apparent Distortion]

Results
From the bar graph, it can be seen that VIF shows the most improvement in correlation and that WSNR ends with the highest correlation. Also notice that the no-task condition regions were the most useful for augmenting the metrics.

Improvement in correlation:

Metric | No Task Improve | Tasked Improve
PSNR   | 0.0137 | 0.0045
SSIM   | 0.0344 | 0.0032
VIF    | 0.0794 | 0.0292
VSNR   | 0.0022 | 0.0100
WSNR   | 0.0096 | 0.0038

The graphs of the "correlation space" show that the highest correlations generally appear when the region with the highest fixation frequency is weighted most, with some weight also placed on the region with a mild number of fixations. This is true in all cases except VSNR, where most of the weight should be placed in the regions that people do not look at (although VSNR has the least to gain from fixations).

None of the improvements is statistically significant over the unweighted metric except when weighting VIF by the no-task condition fixations.

Tasked Condition:

Metric | F-Statistic | Residual Skewness | Residual Kurtosis
PSNR | 0.9943 | 0.6628 | -0.1136
WSNR | 0.9920 | 0.8414 | 0.5133
VSNR | 1.1030 | 1.3209 | 2.4445
SSIM | 0.9724 | 0.9383 | 0.7314
VIF  | 0.8384 | 1.5202 | 2.9391

No Task Condition:

Metric | F-Statistic | Residual Skewness | Residual Kurtosis
PSNR | 0.9500 | 0.7149 | -0.0810
WSNR | 0.9433 | 0.9463 | 0.9048
VSNR | 0.9730 | 1.2734 | 2.1786
SSIM | 0.8274 | 0.8110 | 0.4574
VIF  | 0.6285 | 1.4874 | 2.7594

Conclusions
A computational experiment was presented that segmented images based upon eye fixation data and augmented existing image fidelity metrics with the segmentation regions. It was shown that:
1. Existing fidelity metrics can be positively augmented using fixation data, with SSIM and VIF showing the greatest improvements (for common-sense weighting).
2. The no-task fixation condition showed the greatest improvements for all metrics except VSNR.
3. Under no-task conditions, the primary region of eye fixation corresponds to the most important region for PSNR, SSIM, and VIF. For VSNR, the non-ROI is the most important region.
4. PSNR can be augmented to perform better than the original VIF, but not SSIM, VSNR, or WSNR (under this image set). When all metrics are augmented, PSNR has the worst performance.

Ultimately, the best way to augment metrics using ROI information, and how to cluster eye-tracking data in the most meaningful manner for image fidelity assessment, remains an open question. However, it is clear from this experiment and others (for example, see [5][6]) that fixation and ROI data are less important for fidelity assessment than expected. Although fixation data proved ineffective when working with images across all quality levels, it was observed over the course of the experiment that region-of-interest information might be useful for very low quality images.

References
[1] H. R. Sheikh, Z. Wang, A. C. Bovik, and L. K. Cormack, "Image and video quality assessment research at LIVE." Online. http://live.ece.utexas.edu/research/quality/.
[2] Z. Wang, A. Bovik, H. Sheikh, and E. Simoncelli, "Image Quality Assessment: From Error Visibility to Structural Similarity," IEEE Trans. Image Process., Vol. 13, pp. 600-612, 2004.
[3] H. R. Sheikh and A. C. Bovik, "Image Information and Visual Quality," IEEE Trans. Image Process., Vol. 15, No. 2, pp. 430-444, 2006.
[4] D. M. Chandler and S. S. Hemami, "VSNR: A Wavelet-Based Visual Signal-to-Noise Ratio for Natural Images," IEEE Trans. Image Process., Vol. 16, No. 9, 2007.
[5] A. Ninassi, O. Le Meur, P. Le Callet, and D. Barba, "Does where you gaze on an image affect your perception of quality? Applying visual attention to image quality," in IEEE ICIP 2007.
[6] E. C. Larson and D. M. Chandler, "Unveiling relationships between regions of interest and image fidelity metrics," Conference on Visual Communications and Image Processing, 2007.
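The three-region segmentation described in the Methods can be sketched as a block-wise fixation count followed by thresholding. This is an illustrative reconstruction, not the clustering actually used in the experiment; the function name, block size, and thresholds are assumptions.

```python
import numpy as np

def segment_by_fixations(fixations, shape, block=32, hi_frac=0.5):
    """Label each pixel 2 (1st ROI: gazed at with high frequency),
    1 (2nd ROI: gazed at with low frequency), or 0 (non-ROI)."""
    h, w = shape
    gh, gw = (h + block - 1) // block, (w + block - 1) // block
    counts = np.zeros((gh, gw))
    for y, x in fixations:                # fixation points in pixel coords
        counts[y // block, x // block] += 1
    labels = np.zeros((gh, gw), dtype=int)
    if counts.max() > 0:
        labels[counts > 0] = 1                        # fixated at all
        labels[counts >= hi_frac * counts.max()] = 2  # heavily fixated
    # expand block labels back to pixel resolution
    return np.kron(labels, np.ones((block, block), dtype=int))[:h, :w]
```

Thresholding by a fraction of the peak count is only one plausible clustering; as the poster notes, the most meaningful way to cluster eye-tracking data for fidelity assessment remains open.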
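The weighting and "correlation space" search described in the Methods amounts to sweeping region weights that sum to one and scoring each weighting by its Pearson correlation with the subjective ratings. A minimal sketch under those assumptions (the function names, data layout, and grid step are illustrative):

```python
import numpy as np

def weighted_quality(per_region_error, weights):
    """E_tot = a1*E_1st_ROI + a2*E_2nd_ROI + a3*E_non_ROI."""
    return sum(a * e for a, e in zip(weights, per_region_error))

def correlation_space(per_region_errors, subjective_scores, step=0.05):
    """Sweep all (a1, a2, a3) with a1 + a2 + a3 = 1 and return the
    Pearson correlation with subjective ratings at each grid point."""
    points = []
    for a1 in np.arange(0.0, 1.0 + 1e-9, step):
        for a2 in np.arange(0.0, 1.0 - a1 + 1e-9, step):
            a3 = 1.0 - a1 - a2
            preds = [weighted_quality(e, (a1, a2, a3))
                     for e in per_region_errors]
            if np.std(preds) == 0:        # correlation undefined
                continue
            r = np.corrcoef(preds, subjective_scores)[0, 1]
            points.append(((a1, a2, a3), r))
    return points
```

Each element of `per_region_errors` holds one image's per-region metric errors (1st ROI, 2nd ROI, non-ROI); the weighting that maximizes `r` identifies which regions matter most for predicting subjective quality.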

Editor's Notes

  1. Methods section may need to be shortened considerably, possibly cutting much of the mean spectra discussion and keeping the monitor calibration data on hand but not in the poster. I will be around to explain each. Would be nice to show the Garst image here instead of a wordy methods section.
  2. First two sections are third-person professional. Present tense is used when referring to the study; past tense when referring to steps in the methods.
  3. Could show quantized histograms of intensity, red, green, and blue (or LAB histograms) instead of the mean, variance, skew, and kurtosis.
  4. Bar graph overload!
  5. Would it be good to show the statistics of the environment and animal for the three cases of crypsis?