Normalized averaging using adaptive applicability
functions with applications in image reconstruction from
          sparsely and randomly sampled data

                             Tuan Q. Pham and Lucas J. van Vliet

                  Pattern Recognition Group, Delft University of Technology ,
                         Lorentzweg 1, 2628 CJ Delft, The Netherlands
                           {tuan,lucas}@ph.tn.tudelft.nl



       Abstract. In this paper we describe a new strategy for using local structure
       adaptive filtering in normalized convolution. The shape of the filter, used as the
       applicability function in the context of normalized convolution, adapts to the
       local image structure and avoids filtering across borders. The size of the filter is
       also adaptable to the local sample density to avoid unnecessary smoothing over
       high certainty regions. We compared our adaptive interpolation technique with
       conventional normalized averaging methods. We found that our strategy yields a
       result that is much closer to the original signal both visually and in terms of
       MSE, meanwhile retaining sharpness and improving the SNR.



1 Introduction

Conventional interpolation techniques often rely on only one or two input signal
characteristics such as the signal amplitude (bilinear interpolation), the arrangement of
sampled signals (natural neighbor interpolation [1]), or the certainty of signals (normal-
ized convolution [2]), but never all of them. While the existing framework of normalized
convolution is efficient in finding a local representation of the signal incorporating the
signal uncertainties, it does not take the signal structures into account. In fact, when
interpolation of sparsely sampled signals is concerned, the neighborhood's structural
content should play an important role in shaping the missing values. This paper points
out how adaptive filtering can be used in conjunction with normalized convolution to
take advantage of both signal certainty and structural content in image analysis prob-
lems.
   The structure of the paper is as follows: we start with a short description of normal-
ized averaging. We then introduce adaptive parameters into the applicability function,
starting with the size and then the shape of the filter. The size of the filter varies
according to the local sample density, and the shape is steered towards the local signal
structure, which comprises orientation, anisotropy and curvature. Finally, comparisons
are made between our adaptive method and other interpolation techniques with respect
to mean square error (MSE), peak signal-to-noise ratio (PSNR) and a sharpness measure.
2 Normalized Averaging

Normalized convolution (NC) [2] is a method for local signal modeling that takes signal
uncertainties into account. While the applications of normalized convolution are nu-
merous, the simplest and most striking example is interpolation of incomplete and un-
certain data using the special case called normalized averaging (NA). In this recon-
struction algorithm, an interpolated value is estimated as a weighted sum of neighbor-
ing values considering their certainty and applicability, in which applicability refers to
the influence of the neighbors towards the interpolated value (see equation 1).
                              r = ((s ⋅ c) ∗ a) / (c ∗ a)                              (1)
where r denotes the reconstructed version of the measured signal s with the certainty
c; the normalized averaging is done via two convolutions (*) with the applicability
function a. This applicability function is usually modeled by a localized function such
as the Gaussian function. Since Gaussian filtering has a very fast recursive implementation
[3], the NC reconstruction algorithm is efficient while giving reasonable results
(see figure 1b).
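Equation 1 maps directly onto two Gaussian filterings, one of the certainty-weighted signal and one of the certainty itself. A minimal sketch in Python (NumPy/SciPy and the small regularization constant are implementation choices, not part of the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_averaging(s, c, sigma=1.0):
    """Reconstruct signal s with certainty c via normalized averaging (eq. 1).

    s : 2-D array of measured values (meaningless where c == 0)
    c : 2-D certainty map in [0, 1]
    The Gaussian applicability with sigma = 1 matches figure 1b.
    """
    num = gaussian_filter(s * c, sigma)   # (s . c) * a
    den = gaussian_filter(c, sigma)       # c * a
    eps = 1e-12                           # guard against division by zero far from samples
    return num / np.maximum(den, eps)

# Usage: keep a random 10% of a test image and reconstruct the rest.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
c = (rng.random(img.shape) < 0.1).astype(float)   # random 10% certainty mask
r = normalized_averaging(img * c, c, sigma=1.0)
```

Each output pixel is a certainty- and applicability-weighted mean of its known neighbors, so reconstructed values stay within the range of the observed samples.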




Fig. 1. (a) Original image. (b-d) Results of normalized averaging reconstruction of the image from
10% of the original information, with the applicability function being: (b) the Gaussian func-
tion: σ = 1, (c) the Knutsson applicability function: r⁻³, (d) the local structure adaptive appli-
cability function: σu = ½(1 − A)σdens, σv = ½(1 + A)σdens.



3 Scale adaptive applicability functions based on sample density

Since interpolation favors the contribution of a closer pixel over a distant one, a com-
monly used applicability is the Gaussian function. However, practical Gaussian kernels
decrease rather gradually near the center and are often truncated outside a certain
radius for implementation efficiency. These characteristics impose a constant smooth-
ing factor on the image, while the function support is not large enough to recover big
missing blobs of data. Sharp functions with large support have been suggested [2][4];
however, this is not a real solution to the problem. To minimize unnecessary
smoothing and maximize spatial support, a scale-adaptive filter that shrinks or
grows with the local sample density should be used.
We define the local sample density via σdens(x, y), the radius of a pillbox centered at
position (x, y) that encompasses a total certainty equal to one. For a missing-sample
image with binary certainty c : ℝ² → {0, 1}, σdens is the distance to the nearest pixel with
c = 1, which relates to the Euclidean distance transform [5]. For images with arbitrary
certainty ranging from zero to one, we filter the input certainty with a bank of pillbox
filters of increasing radii. The radius corresponding to a response of one is found by
quadratic interpolation of the filtered results.
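For the binary-certainty case described above, σdens is exactly a Euclidean distance transform. A sketch using SciPy's `distance_transform_edt`; the pillbox filter bank for graded certainties is not shown:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def local_sample_density_scale(c):
    """sigma_dens for a binary certainty map: Euclidean distance to the
    nearest pixel with c == 1 (cf. the distance transform [5]).
    Minimal sketch for the binary case only."""
    # distance_transform_edt measures the distance to the nearest zero,
    # so pass the complement of the certainty map
    return distance_transform_edt(c == 0)

c = np.zeros((7, 7))
c[3, 3] = 1.0                       # a single known sample in the centre
sigma_dens = local_sample_density_scale(c)
# sigma_dens is 0 at the sample and grows with distance from it
```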


4 Shape adaptive applicability functions based on local structure

While a variable-sized applicability function solves the problem of excessive image
smoothing, it does not guarantee the continuity of the signal structure. This drawback
can be seen in figure 3c where the reconstructed image, though visually informative,
shows severe edge jaggedness. A natural approach to solve this shortcoming is mak-
ing the applicability function adaptive to the underlying signal structure. In other
words, it is desirable that the reconstructed image patch shows the same oriented
pattern as its surrounding neighborhood, if there is any. The corrupted samples should
therefore be more similar to the high-confidence pixels along their linear orientation.
As a result, anisotropic filters should be used as the applicability functions. The prob-
lem remains how to estimate the local structure of an image with missing samples.
Fortunately, (differential) normalized convolution [2][6] offers a way to estimate the
image gradients from incomplete and uncertain data. We can then construct the gradi-
ent structure tensor (GST) [7][8] to analyze the local linear structures.
    Local image structure at a certain scale can be characterized by orientation, anisot-
ropy and curvature. These parameters can be extracted using the regularized gradient
structure tensor T as shown in equation 2, where the eigenvector u corresponding to
the largest eigenvalue λu determines the local orientation φ; A and κ denote local
anisotropy and curvature, respectively. Note that the curvature κ is computed from
the differential of the mapping M(φ) = exp(j2φ) rather than on the orientation itself,
which contains jumps between ±π [9].

    T = ∇I ∇Iᵀ = λu u uᵀ + λv v vᵀ     A = (λu − λv) / (λu + λv)     κ = ∂φ/∂v = −(j/2) M(−φ) ∂M(φ)/∂v     (2)
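The orientation and anisotropy of equation 2 (curvature omitted) can be estimated as follows. Gaussian-derivative gradients and the closed-form eigen-analysis of the 2×2 tensor are implementation choices; the scales 1 and 3 follow the setting mentioned later in this section:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_features(I, sigma_grad=1.0, sigma_tensor=3.0):
    """Orientation phi and anisotropy A from the gradient structure tensor (eq. 2)."""
    Ix = gaussian_filter(I, sigma_grad, order=(0, 1))   # d/dx via Gaussian derivative
    Iy = gaussian_filter(I, sigma_grad, order=(1, 0))   # d/dy
    # tensor components, regularized by smoothing at the tensor scale
    Jxx = gaussian_filter(Ix * Ix, sigma_tensor)
    Jxy = gaussian_filter(Ix * Iy, sigma_tensor)
    Jyy = gaussian_filter(Iy * Iy, sigma_tensor)
    # eigenvalues of the 2x2 symmetric tensor in closed form
    trace = Jxx + Jyy
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4 * Jxy ** 2)
    lam_u, lam_v = (trace + diff) / 2, (trace - diff) / 2
    phi = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)          # orientation of the dominant eigenvector
    A = diff / np.maximum(trace, 1e-12)                  # (lam_u - lam_v) / (lam_u + lam_v)
    return phi, A, lam_u, lam_v

# A vertical step edge: the gradient is horizontal, so phi ~ 0 and A ~ 1 on the edge.
x = np.tile(np.linspace(0, 1, 32), (32, 1))
I = (x > 0.5).astype(float)
phi, A, lu, lv = structure_tensor_features(I)
```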

    The idea of a shape-adapted smoothing is not new. Adaptive filtering that shapes
the smoothing kernel after the principal components of the inverse GST has been pro-
posed by a number of authors [10][11]. Here, we introduce curvature to the filter by
bending the same anisotropic Gaussian applicability kernel. The directional scales of
the kernel before curvature bending are given by:

         σu = C(1 − A)^α σdens                      σv = C(1 + A)^α σdens              (3)

where the scaling is based on the adaptive scale parameter σdens (see the previous sec-
tion) and an additional term C that may encompass the local signal-to-noise ratio; the
degree of structure enhancement depends on the anisotropy A through an exponent α.
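Equation 3 is straightforward to evaluate; the values C = 1/2 and α = 1 below reproduce the setting of figure 1d and are otherwise assumptions:

```python
import numpy as np

def directional_scales(A, sigma_dens, C=0.5, alpha=1.0):
    """Directional Gaussian scales of eq. 3. C = 1/2, alpha = 1 match fig. 1d."""
    sigma_u = C * (1 - A) ** alpha * sigma_dens   # across the structure: narrow when anisotropic
    sigma_v = C * (1 + A) ** alpha * sigma_dens   # along the structure: elongated when anisotropic
    return sigma_u, sigma_v

# An isotropic point (A = 0) keeps a round kernel; a strongly oriented
# point (A = 0.8) gets a kernel elongated along the structure.
su, sv = directional_scales(A=np.array([0.0, 0.8]), sigma_dens=np.array([2.0, 2.0]))
# su -> [1.0, 0.2], sv -> [1.0, 1.8]
```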
Special attention needs to be given to the application of the gradient structure ten-
sor to incomplete data. The gradient image can be computed in three ways: (1) recon-
struct the image by normalized convolution and compute the gradient from this result,
(2) use normalized differential convolution (NDC) [2], (3) use the derivative of the
normalized convolution equation (in contrast to the normalized convolution result),
DoNC [6]. In an extensive evaluation [6] on natural images, all three methods (gradient
from the normalized convolution result, NDC and DoNC) showed comparable
performance over a wide range of SNRs. Due to the variable sample density of
our input images, we use the first method, where the gradient is estimated from an ini-
tial image reconstruction using NC with a scale-adaptive isotropic Gaussian applicabil-
ity. The Gaussian scales of the gradient operator and the tensor smoothing for the
GST are set to one and three, respectively.

(Figure 2 panels show the bending parabola y = ½κx², the principal axes u and v at
orientation φ, and the kernel extents 6σu and 6σv.)

Fig. 2. (a-c) Image transformation for the local structure adaptive filtering: curvature bent and
rotated to be resampled (at the white dots) by bilinear interpolation. (d) Look-up Gaussian
kernel with sampled coefficients at the white dots.


4.1. Implementation details

In the reconstruction experiment given in figure 1, all input samples are lined up prop-
erly on the sampling lattice. This makes it very easy to collect the neighbors’ positions
and look up the corresponding filtering coefficients. However, our adaptive normalized
convolution technique also works on randomly sampled signals. The most challenging
task is then how to compute the kernel of an off-grid centered filter. Continuous nor-
malized convolution (CNC) [4] approximates the coefficients by truncating the Taylor
expansion up to the first order. The method, relying on the first-order partial derivative
kernels at every pixel, is therefore not suitable for our varying applicability function.
Furthermore, CNC cannot accurately approximate Gaussian kernels of sigma smaller
than 1, which is often necessary when little smoothing is required.
   Here we present a filter construction approach that uses a large look-up Gaussian
kernel. The size of this master kernel is chosen such that most commonly used smaller
kernels can be extracted from it by pixel-skipping subsampling. For example, a master
kernel of size 121×121 holds the coefficients of any kernel with a size from
{3,5,7,9,11,13,21,25,31,41,61,121}. In fact, we only use the right combination of these
sizes such that the distance between adjacent resampled positions on the input image
is less than one grid unit. In this way, we minimize the kernel size at each pixel while
still satisfying the resampling condition (figure 2). With this simple scheme, Gaussian
coefficients need not be recomputed for on-grid centered kernels. Coefficients for
off-grid centered kernels can also be estimated from the master kernel using bilinear
interpolation. Such an interpolation scheme allows accurate approximation of any
Gaussian kernel with a sigma greater than 0.3 grid units.
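The master-kernel scheme can be sketched as follows. The pixel-skipping rule follows the text; the σ of the master Gaussian and the renormalization after subsampling are hypothetical choices:

```python
import numpy as np

def master_gaussian(size=121, sigma=None):
    """Large look-up Gaussian kernel; sigma scaled to the master size
    (an assumed choice: ~3 sigma per half-width)."""
    if sigma is None:
        sigma = size / 6.0
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def subsample_kernel(master, n):
    """Extract an n x n kernel by pixel-skipping; (n - 1) must divide the
    master span (size - 1), as for n in {3,5,7,9,11,13,21,25,31,41,61,121}."""
    size = master.shape[0]
    assert (size - 1) % (n - 1) == 0, "n incompatible with master size"
    idx = np.arange(n) * (size - 1) // (n - 1)
    k = master[np.ix_(idx, idx)]
    return k / k.sum()                 # renormalize after subsampling

m = master_gaussian()
k7 = subsample_kernel(m, 7)            # a 7x7 kernel read off the 121x121 master
```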

Table 1. MSEs and PSNRs of the reconstructed Lena images from 10% original information

            Random 10% of   Knutsson r⁻³        Gaussian σ = 1,   Scale-adaptive   Shape and scale
            Lena image      applicability       fig. 1b           isotropic        adaptive Gaussian,
                            function, fig. 1c                     Gaussian         fig. 1d
  MSE       31449           126.13              88.23             88.32            43.77
  PSNR      3.15            27.12               28.67             28.67            31.72



5 Experiments on image reconstruction from missing samples

In order to compare the result of the adaptive normalized averaging with the traditional
normalized averaging method, the same experiments as presented in [2] are carried out.
In the first experiment, the famous Lena image has been randomly sampled to a test
image containing only 10% of the original information. The reconstruction results have
been shown earlier in figure 1. A quantitative comparison is presented in table 1. The
second experiment involves reconstruction of the same image having large missing
regions (see figure 3 and table 2). While the standard normalized averaging can fill in
the ‘holes’ with ‘plausible’ data [2], the adaptive filtering approach really proves its
strength by being capable of extending local linear structures into the missing regions.




Fig. 3. (a) Lena ‘hole’ image with large missing blobs. (b-d) Results of normalized averaging
with the applicability function as (b) the Knutsson applicability function, (c) the isotropic
scale-adaptive Gaussian function, (d) the local structure adaptive Gaussian function (eq. 3).

Table 2. MSEs and PSNRs of the reconstructed regions from the Lena ‘hole’ image

            Lena image      Knutsson r⁻³        Gaussian σ = 3    Scale-adaptive      Shape and scale
            with holes,     applicability                         isotropic           adaptive Gaussian,
            fig. 3a         function, fig. 3b                     Gaussian, fig. 3c   fig. 3d
  MSE       7235            133.21              106.11            83.63               41.05
  PSNR      9.54            26.89               27.87             28.91               32.00
With the lowest MSE and the highest PSNR (equation 4), the adaptive applicability
function clearly outperforms the other listed applicability functions. It is also interesting
to note that the Knutsson applicability function scores poorly in both quantitative tests
even though its results are claimed to be better than the Gaussian function's [2]. This is
due to the fact that the fixed Knutsson applicability function is only significant at the
very center while having near-zero values elsewhere. Though this is desirable for high-
certainty regions, low-certainty regions are not reconstructed very well.

        MSE = (1/N) ∑_{j=1}^{N} (I_j − Î_j)²            PSNR = 10 log₁₀ (MAX² / MSE)            (4)
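Equation 4 in code, assuming 8-bit images so that MAX = 255:

```python
import numpy as np

def mse_psnr(I, I_hat, max_val=255.0):
    """MSE and PSNR of eq. 4; max_val = 255 assumes 8-bit images."""
    mse = np.mean((I.astype(float) - I_hat.astype(float)) ** 2)
    psnr = 10.0 * np.log10(max_val ** 2 / mse)
    return mse, psnr

# A tiny worked example: two pixels off by 10 gray levels each.
I = np.array([[0.0, 255.0], [255.0, 0.0]])
I_hat = np.array([[10.0, 245.0], [255.0, 0.0]])
mse, psnr = mse_psnr(I, I_hat)   # mse = (100 + 100 + 0 + 0) / 4 = 50
```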
     To demonstrate the superior sharpness, we use the sharpness comparison
measure of Dijk et al. [12]. The method is as follows: a scatter diagram composed
of a set of data points is plotted such that the coordinates of each data point
correspond to the gradient magnitudes of the two tested images at the same image
location. Data points below the line y = x correspond to pixels with a higher gradient
magnitude in the first image than in the second; those points favor the sharpness of
the first image over the second. Pixels above the line y = x, on the other hand, favor the
smoothness of the first image over the second. Both types of data points are normally
present in a scattergram because an image is filtered for both detail enhancement
and noise suppression. As a result, the gradient magnitudes of edge pixels increase
while those of noisy background pixels decrease. However, due to the higher gradient
energy that edge pixels possess, they form a cluster of data points that extends further
away from the origin than the cluster formed by the noisy pixels. A line through the
origin can be fitted to each cluster, yielding a slope for the degree of edge sharpening
or noise smoothing, respectively.
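A simplified sketch of this measure; splitting the two clusters by the diagonal y = x (rather than a proper cluster fit as in [12]) is an assumption made here for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude(I, sigma=1.0):
    gx = gaussian_filter(I, sigma, order=(0, 1))
    gy = gaussian_filter(I, sigma, order=(1, 0))
    return np.hypot(gx, gy)

def scattergram_slopes(I1, I2, sigma=1.0):
    """Through-origin line slopes for the points below and above y = x.
    A slope below 1 on the 'below' cluster favors the sharpness of I1."""
    x = gradient_magnitude(I1, sigma).ravel()   # horizontal axis: first image
    y = gradient_magnitude(I2, sigma).ravel()   # vertical axis: second image
    below = y < x                               # first image sharper at these pixels
    def slope(mask):
        xm, ym = x[mask], y[mask]
        denom = np.dot(xm, xm)
        return np.dot(xm, ym) / denom if denom > 0 else np.nan
    return slope(below), slope(~below)          # (edge cluster, noise cluster)

# Sharp step edge vs. a blurred copy: the sharp image wins below the diagonal.
I1 = (np.tile(np.linspace(0, 1, 32), (32, 1)) > 0.5).astype(float)
I2 = gaussian_filter(I1, 2.0)
s_edge, s_noise = scattergram_slopes(I1, I2)
```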




Fig. 4. (left) Gradient magnitude scattergram of the reconstructed Lena images from a random
10% of the original information using adaptive Gaussian (horizontal axis) and Knutsson appli-
cability function (vertical axis). (right) similar scattergram for the Lena ‘hole’ experiment.
   As can be seen from both scattergrams in figure 4, the majority of data points form a
cluster below the y=x line. The result from the diagram on the right is not as clear
though because what is really reconstructed in the Lena 'hole' experiment is the con-
tinuation of the image linear structures rather than the image gradient. This sharpness
measure agrees with the subjective perception of the better quality of the recon-
structed images using our adaptive method over Knutsson’s fixed applicability func-
tion.



6 Conclusion and further research directions

We have demonstrated that normalized convolution with adaptive applicability is an
attractive framework for signal analysis that takes signal uncertainties as well as local
structures into account.
   The application of this framework to the interpolation of incomplete and uncertain
image data yields a method that sharpens edges, suppresses noise and avoids edge
jaggedness by extending the local image orientation into low-certainty regions. The
comparisons of visual sharpness, MSE and PSNR on the reconstructed images show a
significant improvement of the new method over existing normalized convolution
based interpolation techniques.
     Normalized averaging is, of course, not the only technique for image reconstruc-
tion. Image inpainting [13] also reconstructs images with missing samples by extending
isophote lines from the exterior into the missing regions. However, unlike our non-
iterative adaptive normalized averaging method, image inpainting is a diffusion proc-
ess that takes many iterations to propagate orientation and image grayscale into the
restored areas. The information to be propagated is computed from the intact sur-
rounding area. As a result, image inpainting is only suitable for the restoration of dam-
aged photographs and movies or the removal of small objects where the majority of
image information is present. For example, the algorithm would give a satisfactory
result in the Lena 'hole' image experiment, but not when 90 per cent of the image data is
unavailable (see figure 5). In addition, due to the discrete implementation of diffusion,
the computation of the gradient vector fields is not as robust to noise as with the
gradient structure tensor. Image inpainting also cannot handle irregularly sampled
input, as our adaptive filtering approach can.




Fig. 5. Comparison of our image reconstruction with Sapiro’s image inpainting: (a) our recon-
struction from 10% original information, (b) an unsuccessful inpainting result for the same
experiment as in (a), (c) our reconstruction of one of the missing holes in the Lena ‘hole’ experi-
ment, (d) the same hole reconstructed by inpainting. Notice the continuation of the hair structure
reconstructed by our algorithm as opposed to the mean gray level filling-in from inpainting.

    Future research will focus on the application of adaptive normalized convolution to
other problems such as image denoising, video restoration, data fusion, optic flow
computation and compensation, and super-resolution. The local structure adaptive
filtering scheme presented in this paper can also be extended to local structure
adaptive neighborhood operators, in which more robust estimators than the weighted
mean (as used in normalized convolution) can be used to approximate the local data.



Acknowledgements

  This research is partly supported by the IOP Beeldverwerkings project of Senter,
Agency of the Ministry of Economic Affairs of the Netherlands.



References

1. A. Okabe, B. Boots, K. Sugihara, Spatial tessellations concepts and application of Voronoi
   diagrams, Wiley, 1992.
2. H. Knutsson, C-F Westin, Normalized and differential convolution: Methods for interpola-
   tion and filtering of incomplete and uncertain data, in CVPR’93, 1993. 515–523.
3. I.T. Young, L.J. van Vliet, Recursive implementation of the Gaussian filters, Signal Process-
   ing, 44(2), 1995. 139–151.
4. K. Andersson, H. Knutsson, Continuous Normalized Convolution, ICME'02, 2002. 725–
   728.
5. P.E. Danielsson. Euclidean distance mapping. CGIP, 14, 1980. 227–248.
6. F. de Jong, L.J. van Vliet, P.P. Jonker, Gradient estimation in uncertain data, MVA'98 IAPR
   Workshop on Machine Vision Applications, Makuhari, Chiba, Japan, Nov. 1998. 144–147.
7. L.J. van Vliet, P.W. Verbeek, Estimators for orientation and anisotropy in digitized images, in
   J. van Katwijk et al. (eds.), ASCI'95, Proc. First Annual Conf. of the Advanced School for
   Computing and Imaging, Heijen, the Netherlands, 1995. 442–450.
8. H. Knutsson, Representing local structure using tensors, SCIA'89, Oulu, Finland, 1989. 244–
   251.
9. M. van Ginkel, J. van de Weijer, L.J. van Vliet, P.W. Verbeek, Curvature estimation from
   orientation fields, SCIA'99, Greenland, 1999. 545–551.
10. M. Nitzberg, T. Shiota, Nonlinear Image Filtering with Edge and Corner Enhancement. IEEE
   Trans. on PAMI 14(8), 1992. 826–833.
11. A. Almansa and T. Lindeberg, Enhancement of Fingerprint Images using Shape-Adapted
   Scale-Space Operators, in J. Sporring, M. Nielsen, L. Florack and P. Johansen (eds.), Gaus-
   sian Scale-Space Theory, Kluwer Academic Publishers, 1997. 21–30.
12. J. Dijk, D. de Ridder, P.W. Verbeek, J. Walraven, I.T. Young, L.J. van Vliet, A new measure
   for the effect of sharpening and smoothing filters on images, in SCIA’99, Greenland,
   1999. 213–220.
13. C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, J. Verdera, Filling-in by Joint Interpola-
   tion of Vector Fields and Gray Levels, IEEE Transactions on Image Processing, 10 (8),
   2001. 1200–1211.

A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUESA STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
A STUDY AND ANALYSIS OF DIFFERENT EDGE DETECTION TECHNIQUES
 
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
Multi-hypothesis projection-based shift estimation for sweeping panorama reco...
 
Wavelet
WaveletWavelet
Wavelet
 
International Journal of Engineering Research and Development
International Journal of Engineering Research and DevelopmentInternational Journal of Engineering Research and Development
International Journal of Engineering Research and Development
 
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...
 
videoMotionTrackingPCA
videoMotionTrackingPCAvideoMotionTrackingPCA
videoMotionTrackingPCA
 
Modified adaptive bilateral filter for image contrast enhancement
Modified adaptive bilateral filter for image contrast enhancementModified adaptive bilateral filter for image contrast enhancement
Modified adaptive bilateral filter for image contrast enhancement
 
Image Denoising Using WEAD
Image Denoising Using WEADImage Denoising Using WEAD
Image Denoising Using WEAD
 
channel_mzhazbay.pdf
channel_mzhazbay.pdfchannel_mzhazbay.pdf
channel_mzhazbay.pdf
 
BNL_Research_Report
BNL_Research_ReportBNL_Research_Report
BNL_Research_Report
 
Data-Driven Motion Estimation With Spatial Adaptation
Data-Driven Motion Estimation With Spatial AdaptationData-Driven Motion Estimation With Spatial Adaptation
Data-Driven Motion Estimation With Spatial Adaptation
 
G234247
G234247G234247
G234247
 
40120140501004
4012014050100440120140501004
40120140501004
 

Último

Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Farhan Tariq
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesBernd Ruecker
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integrationmarketing932765
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...itnewsafrica
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxfnnc6jmgwh
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical InfrastructureVarsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructureitnewsafrica
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxLoriGlavin3
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxLoriGlavin3
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfNeo4j
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfpanagenda
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxLoriGlavin3
 

Último (20)

Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...Genislab builds better products and faster go-to-market with Lean project man...
Genislab builds better products and faster go-to-market with Lean project man...
 
QCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architecturesQCon London: Mastering long-running processes in modern architectures
QCon London: Mastering long-running processes in modern architectures
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS:  6 Ways to Automate Your Data IntegrationBridging Between CAD & GIS:  6 Ways to Automate Your Data Integration
Bridging Between CAD & GIS: 6 Ways to Automate Your Data Integration
 
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...Abdul Kader Baba- Managing Cybersecurity Risks  and Compliance Requirements i...
Abdul Kader Baba- Managing Cybersecurity Risks and Compliance Requirements i...
 
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptxGenerative AI - Gitex v1Generative AI - Gitex v1.pptx
Generative AI - Gitex v1Generative AI - Gitex v1.pptx
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical InfrastructureVarsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
Varsha Sewlal- Cyber Attacks on Critical Critical Infrastructure
 
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity Plan
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
The State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptxThe State of Passkeys with FIDO Alliance.pptx
The State of Passkeys with FIDO Alliance.pptx
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Connecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdfConnecting the Dots for Information Discovery.pdf
Connecting the Dots for Information Discovery.pdf
 
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdfSo einfach geht modernes Roaming fuer Notes und Nomad.pdf
So einfach geht modernes Roaming fuer Notes und Nomad.pdf
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 

Normalized averaging using adaptive applicability functions improves image reconstruction

take advantage of both signal certainty and structural content in image analysis problems.
The structure of the paper is as follows: we start with a short description of normalized averaging. We then introduce adaptive parameters into the applicability function, first its size and then its shape. The size of the filter varies according to the local sample density, while the shape is steered towards the local signal structure, which comprises orientation, anisotropy and curvature. Finally, comparisons are made between our adaptive method and other interpolation techniques with respect to mean square error (MSE), peak signal-to-noise ratio (PSNR) and a sharpness measure.
2 Normalized Averaging

Normalized convolution (NC) [2] is a method for local signal modeling that takes signal uncertainties into account. While the applications of normalized convolution are numerous, the simplest and most striking example is the interpolation of incomplete and uncertain data using the special case called normalized averaging (NA). In this reconstruction algorithm, an interpolated value is estimated as a weighted sum of neighboring values considering their certainty and applicability, where applicability refers to the influence of the neighbors on the interpolated value (see equation 1):

    r = ((s · c) ∗ a) / (c ∗ a)        (1)

where r denotes the reconstructed version of the measured signal s with certainty c; the normalized averaging is done via two convolutions (∗) with the applicability function a. This applicability function is usually modeled by a localized function such as a Gaussian. Since Gaussian filtering has a very fast recursive implementation [3], the NC reconstruction algorithm is efficient while giving reasonable results (see figure 1b).

Fig. 1. (a) Original image. (b-d) Results of normalized averaging reconstruction of the image from 10% of the original information, with the applicability function being: (b) the Gaussian function, σ = 1; (c) the Knutsson applicability function, r^(−3); (d) the local structure adaptive applicability function, σ_u = ½(1 − A)σ_dens, σ_v = ½(1 + A)σ_dens.

3 Scale adaptive applicability functions based on sample density

Since interpolation favors the contribution of a nearby pixel over a distant one, a commonly used applicability is the Gaussian function. However, practical Gaussian kernels decrease rather gradually near the center and are often truncated outside a certain radius for implementation efficiency.
These characteristics impose a constant smoothing factor on the image, while the function support is not large enough to recover big missing blobs of data. Sharp functions with large support have been suggested [2][4]; however, this does not really solve the problem. To minimize unnecessary smoothing and maximize the spatial support, a scale-adaptive filter that shrinks or grows depending on the local sample density should be used.
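As a concrete illustration, the normalized averaging of equation (1) can be sketched in a few lines of Python. This is a minimal sketch, not the paper's implementation: `scipy.ndimage.gaussian_filter` stands in for the fast recursive Gaussian of [3], and the helper name, image and certainty map are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_averaging(s, c, sigma):
    """Eq. (1): r = ((s.c) * a) / (c * a), with a Gaussian applicability a."""
    num = gaussian_filter(s * c, sigma)    # (s . c) convolved with a
    den = gaussian_filter(c, sigma)        # c convolved with a
    return num / np.maximum(den, 1e-12)    # guard against empty neighborhoods

# toy run: reconstruct a smooth ramp from 10% random samples
rng = np.random.default_rng(0)
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
cert = (rng.random(img.shape) < 0.1).astype(float)
rec = normalized_averaging(img * cert, cert, sigma=2.0)
```

Since the result is a certainty-weighted average of the observed samples, each reconstructed value stays within the range of its contributing neighbors.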
We define the local sample density σ_dens(x, y) such that σ_dens equals the radius of a pillbox, centered at position (x, y), that encompasses a total certainty equal to one. For a missing-sample image c : ℝ² → {0, 1}, σ_dens is the distance to the nearest pixel with c = 1, which relates to the Euclidean distance transform [5]. For images with arbitrary certainty ranging from zero to one, we filter the input certainty with a bank of pillbox filters of increasing radii. The radius corresponding to a response of one is found by quadratic interpolation of the filtered results.

4 Shape adaptive applicability functions based on local structure

While a variable-sized applicability function solves the problem of excessive image smoothing, it does not guarantee the continuity of the signal structure. This drawback can be seen in figure 3c, where the reconstructed image, though visually informative, shows severe edge jaggedness. A natural approach to overcome this shortcoming is to make the applicability function adaptive to the underlying signal structure. In other words, it is desirable that a reconstructed image patch shows the same oriented pattern as its surrounding neighborhood, if there is any. The corrupted samples should therefore be more similar to the high confidence pixels along their linear orientation. As a result, anisotropic filters should be used as applicability functions. The problem remains how to estimate the local structure of an image with missing samples. Fortunately, (differential) normalized convolution [2][6] offers a way to estimate the image gradients from incomplete and uncertain data. We can then construct the gradient structure tensor (GST) [7][8] to analyze the local linear structures. Local image structure at a certain scale can be characterized by orientation, anisotropy and curvature.
These parameters can be extracted using the regularized gradient structure tensor T as shown in equation 2, where the eigenvector u corresponding to the largest eigenvalue λ_u determines the local orientation φ; A and κ denote the local anisotropy and curvature, respectively. Note that the curvature κ is computed from the derivative of the mapping M(φ) = exp(j2φ) rather than of the orientation itself, which contains jumps between ±π [9]:

    T = ∇I ∇Iᵀ = λ_u u uᵀ + λ_v v vᵀ,    A = (λ_u − λ_v) / (λ_u + λ_v),    κ = ∂φ/∂v = −(j/2) M(−φ) ∂M(φ)/∂v        (2)

The idea of shape-adapted smoothing is not new. Adaptive filtering that shapes the smoothing kernel after the principal components of the inverse GST has been proposed by a number of authors [10][11]. Here, we introduce curvature to the filter by bending the same anisotropic Gaussian applicability kernel. The directional scales of the kernel before curvature bending are given by:

    σ_u = C(1 − A)^α σ_dens,    σ_v = C(1 + A)^α σ_dens        (3)

where the scaling is based on the adaptive scale parameter σ_dens (see the previous section), an additional factor C that may encompass the local signal-to-noise ratio, and the degree of structure enhancement depends on the anisotropy A through the exponent α.
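To make the tensor machinery of equations (2) and (3) concrete, here is a minimal NumPy/SciPy sketch of the per-pixel scale selection. The helper name, the use of `distance_transform_edt` for σ_dens (valid for a binary certainty map, per the previous section), and the default values of C and α are illustrative assumptions; the paper's actual pipeline first reconstructs the image with NC before taking gradients.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, distance_transform_edt

def adaptive_scales(img, cert, C=0.5, alpha=1.0, grad_sigma=1.0, tensor_sigma=3.0):
    """Per-pixel applicability scales of eq. (3) via the gradient structure tensor."""
    # sigma_dens: for a binary certainty map, distance to the nearest known sample
    sigma_dens = distance_transform_edt(cert == 0)
    # Gaussian-derivative image gradients
    Ix = gaussian_filter(img, grad_sigma, order=(0, 1))
    Iy = gaussian_filter(img, grad_sigma, order=(1, 0))
    # regularized gradient structure tensor, eq. (2)
    Jxx = gaussian_filter(Ix * Ix, tensor_sigma)
    Jxy = gaussian_filter(Ix * Iy, tensor_sigma)
    Jyy = gaussian_filter(Iy * Iy, tensor_sigma)
    # closed-form eigenvalues of the symmetric 2x2 tensor
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam_u = 0.5 * (Jxx + Jyy + diff)          # largest eigenvalue
    lam_v = 0.5 * (Jxx + Jyy - diff)
    A = (lam_u - lam_v) / np.maximum(lam_u + lam_v, 1e-12)  # anisotropy
    phi = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)            # local orientation
    sigma_u = C * (1.0 - A) ** alpha * sigma_dens  # narrow across the structure
    sigma_v = C * (1.0 + A) ** alpha * sigma_dens  # wide along the structure
    return sigma_u, sigma_v, phi, A

# oriented test pattern: vertical stripes, observed at 20% random samples
rng = np.random.default_rng(1)
img = np.tile(np.sin(np.linspace(0.0, 8.0 * np.pi, 64)), (64, 1))
cert = (rng.random(img.shape) < 0.2).astype(float)
su, sv, phi, A = adaptive_scales(img, cert)
```

Because A is non-negative, σ_u ≤ σ_v everywhere: the kernel is always at least as wide along the structure as across it.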
Special attention needs to be given to the application of the gradient structure tensor to incomplete data. Computation of the gradient image can be done in one of three ways: (1) reconstruct the image by normalized convolution and compute the gradient from this result, (2) use normalized differential convolution (NDC) [2], (3) use the derivative of the normalized convolution equation (in contrast to the normalized convolution result), DoNC [6]. In an extensive evaluation [6] on natural images, all three methods show comparable performance over a wide range of SNRs. Due to the variable sample density of our input images, we use the first method, where the gradient is estimated from an initial image reconstruction using NC with a scale-adaptive isotropic Gaussian applicability. The Gaussian scales of the gradient operator and of the tensor smoothing for the GST are set to one and three, respectively.

Fig. 2. (a-c) Image transformation for the local structure adaptive filtering: the 6σ_u × 6σ_v sampling grid is curvature bent (along y = ½κx²) and rotated over the orientation φ, to be resampled (at the white dots) by bilinear interpolation. (d) Look-up Gaussian kernel with sampled coefficients at the white dots.

4.1 Implementation details

In the reconstruction experiment given in figure 1, all input samples are lined up properly on the sampling lattice. This makes it very easy to collect the neighbors' positions and look up the corresponding filtering coefficients. However, our adaptive normalized convolution technique also works on randomly sampled signals. The most challenging task is then how to compute the kernel of an off-grid centered filter. Continuous normalized convolution (CNC) [4] approximates the coefficients by truncating the Taylor expansion at the first order. The method, relying on the first-order partial derivative kernels at every pixel, is therefore not suitable for our varying applicability function.
Furthermore, CNC cannot accurately approximate Gaussian kernels with a sigma smaller than 1, which is often necessary when little smoothing is required. Here we present a filter construction approach that uses a large look-up Gaussian kernel. The size of this kernel is chosen such that most commonly used smaller kernels can easily be extracted from the master kernel by pixel-skipping subsampling. For example, a master kernel of size 121×121 holds the coefficients of any kernel with a size from {3, 5, 7, 9, 11, 13, 21, 25, 31, 41, 61, 121}. In fact, we only use kernels with a combination of these sizes such that the distance between adjacent resampled positions on the input image is less than the grid resolution. In this way, we can minimize the kernel size at each pixel while still satisfying the resampling condition (figure 2). With this simple
scheme, Gaussian coefficients need not be recomputed for on-grid centered kernels. Coefficients for off-grid centered kernels can also be estimated from the master kernel using bilinear interpolation. Such an interpolation scheme allows accurate approximation of any Gaussian kernel with a sigma greater than 0.3 times the grid size.

Table 1. MSEs and PSNRs of the reconstructed Lena images from 10% of the original information

            Random 10% of  Knutsson r^(−3)     Gaussian      Scale-adaptive  Shape and scale
            Lena image     applicability       σ = 1,        isotropic       adaptive
                           function, fig. 1c   fig. 1b       Gaussian        Gaussian, fig. 1d
    MSE     31449          126.13              88.23         88.32           43.77
    PSNR    3.15           27.12               28.67         28.67           31.72

5 Experiments on image reconstruction from missing samples

In order to compare the result of adaptive normalized averaging with the traditional normalized averaging method, the same experiments as presented in [2] are carried out. In the first experiment, the famous Lena image has been randomly sampled to a test image containing only 10% of the original information. The reconstruction results were shown earlier in figure 1; a quantitative comparison is presented in table 1. The second experiment involves reconstruction of the same image with large missing regions (see figure 3 and table 2). While standard normalized averaging can fill in the 'holes' with 'plausible' data [2], the adaptive filtering approach really proves its strength by being capable of extending local linear structures into the missing regions.

Fig. 3. (a) Lena 'hole' image with large missing blobs. (b-d) Results of normalized averaging with the applicability function being: (b) the Knutsson applicability function; (c) the isotropic scale-adaptive Gaussian function; (d) the local structure adaptive Gaussian function (eq. 3).

Table 2. MSEs and PSNRs of the reconstructed regions from the Lena 'hole' image

            Lena image     Knutsson r^(−3)     Gaussian      Scale-adaptive     Shape and scale
            with holes,    applicability       σ = 3         isotropic          adaptive
            fig. 3a        function, fig. 3b                 Gaussian, fig. 3c  Gaussian, fig. 3d
    MSE     7235           133.21              106.11        83.63              41.05
    PSNR    9.54           26.89               27.87         28.91              32.00
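The MSE and PSNR figures reported in the tables above follow the standard definitions of equation (4). A direct transcription (the function name is illustrative; MAX = 255 is assumed for 8-bit images):

```python
import numpy as np

def mse_psnr(original, reconstructed, max_val=255.0):
    """MSE and PSNR per eq. (4); max_val is the peak signal value (255 for 8-bit)."""
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    mse = float(np.mean(diff ** 2))
    psnr = 10.0 * np.log10(max_val ** 2 / mse)
    return mse, psnr

# sanity check on a constant offset of 10 gray levels
a = np.zeros((8, 8))
m, p = mse_psnr(a, a + 10.0)   # mse = 100, psnr = 10*log10(65025/100)
```

Note that PSNR is undefined (infinite) for a perfect reconstruction, since the MSE in the denominator is then zero.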
With the lowest MSE and the highest PSNR (equation 4), the adaptive applicability function clearly outperforms the other listed applicability functions. It is also interesting to note that the Knutsson applicability function scores poorly in both quantitative tests, even though its results were claimed to be better than those of the Gaussian function [2]. This is because the fixed Knutsson applicability function is only significant near the very center while having near-zero values elsewhere. Though this is desirable for high certainty regions, low certainty regions are not reconstructed very well.

    MSE = (1/N) Σ_{j=1}^{N} (I_j − Î_j)²,    PSNR = 10 log₁₀(MAX² / MSE)        (4)

To demonstrate the superior sharpness, we use the sharpness comparison measure of Dijk et al. [12]. The method is as follows: a scatter diagram is plotted such that the coordinates of each data point correspond to the gradient magnitudes of the two tested images at the same image location. Data points below the line y = x correspond to pixels with a higher gradient magnitude in the first image than in the second; these points indicate that the first image is sharper than the second. Data points above the line y = x, on the other hand, indicate that the first image is smoother than the second. Normally, both types of data points are present in a scattergram, because an image is filtered for both detail enhancement and noise suppression. As a result, the gradient magnitudes of edge pixels increase while those of noisy background pixels decrease. However, due to the higher gradient energy of edge pixels, they form a cluster of data points that extends further away from the origin than the cluster formed by the noisy pixels. A line through the origin can be fitted to each cluster, resulting in a slope for the degree of edge sharpening or noise smoothing, respectively. Fig. 4.
(left) Gradient magnitude scattergram of the Lena images reconstructed from a random 10% of the original information, using the adaptive Gaussian (horizontal axis) and the Knutsson applicability function (vertical axis). (right) Similar scattergram for the Lena 'hole' experiment.

As can be seen from both scattergrams in figure 4, the majority of the data points form a cluster below the y = x line. The result in the diagram on the right is not as clear
though, because what is really reconstructed in the Lena 'hole' experiment is the continuation of the image's linear structures rather than the image gradient. This sharpness measure agrees with the subjective perception that the images reconstructed by our adaptive method are of better quality than those from Knutsson's fixed applicability function.

6 Conclusion and further research directions

We have demonstrated that normalized convolution with adaptive applicability is an attractive framework for signal analysis that takes signal uncertainties as well as local structures into account. The application of this framework to the interpolation of incomplete and uncertain image data yields a method that sharpens edges, suppresses noise and avoids edge jaggedness by extending the local image orientation into the low certainty regions. The comparisons of visual sharpness, MSE and PSNR on the reconstructed images show a significant improvement of the new method over existing normalized convolution based interpolation techniques.

Normalized averaging is, of course, not the only technique for image reconstruction. Image inpainting [13] also reconstructs images with missing samples, by extending isophote lines from the exterior into the missing regions. However, unlike our non-iterative adaptive normalized averaging method, image inpainting is a diffusion process that takes many iterations to propagate orientation and image grayscale into the restored areas. The information to be propagated is computed from the intact surrounding area. As a result, image inpainting is only suitable for the restoration of damaged photographs and movies, or for the removal of small objects, where the majority of the image information is present. For example, the algorithm would give a satisfactory result in the Lena 'hole' experiment, but not when 90 percent of the image data is unavailable (see figure 5).
In addition, due to the discrete implementation of diffusion, the computation of the gradient vector fields is not as robust to noise as with the gradient structure tensor. Image inpainting also cannot handle irregularly sampled input, as is possible in our adaptive filtering approach.
Fig. 5. Comparison of our image reconstruction with Sapiro's image inpainting: (a) our reconstruction from 10% of the original information; (b) an unsuccessful inpainting result for the same experiment as in (a); (c) our reconstruction of one of the missing holes in the Lena 'hole' experiment; (d) the same hole reconstructed by inpainting. Notice the continuation of the hair structure reconstructed by our algorithm, as opposed to the mean gray level filling-in from inpainting.

Future research will focus on the application of adaptive normalized convolution to other problems such as image denoising, video restoration, data fusion, optic flow computation and compensation, and super-resolution. The local structure adaptive filtering scheme presented in this paper can also be extended to local structure neighborhood operators, in which estimators more robust than the weighted mean (as used in normalized convolution) can be used to approximate the local data.

Acknowledgements

This research is partly supported by the IOP Beeldverwerkings project of Senter, Agency of the Ministry of Economic Affairs of the Netherlands.

References

1. A. Okabe, B. Boots, K. Sugihara, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, Wiley, 1992.
2. H. Knutsson, C-F. Westin, Normalized and differential convolution: Methods for interpolation and filtering of incomplete and uncertain data, CVPR'93, 1993. 515–523.
3. I.T. Young, L.J. van Vliet, Recursive implementation of the Gaussian filters, Signal Processing, 44(2), 1995. 139–151.
4. K. Andersson, H. Knutsson, Continuous normalized convolution, ICME'02, 2002. 725–728.
5. P.E. Danielsson, Euclidean distance mapping, CGIP, 14, 1980. 227–248.
6. F. de Jong, L.J. van Vliet, P.P. Jonker, Gradient estimation in uncertain data, MVA'98, IAPR Workshop on Machine Vision Applications, Makuhari, Chiba, Japan, Nov. 1998. 144–147.
7. L.J. van Vliet, P.W. Verbeek, Estimators for orientation and anisotropy in digitized images, in J.
van Katwijk et al. (eds.), ASCI'95, Proc. First Annual Conf. of the Advanced School for Computing and Imaging, Heijen, the Netherlands, 1995. 442–450.
8. H. Knutsson, Representing local structure using tensors, SCIA'89, Oulu, Finland, 1989. 244–251.
9. M. van Ginkel, J. van de Weijer, L.J. van Vliet, P.W. Verbeek, Curvature estimation from orientation fields, SCIA'99, Greenland, 1999. 545–551.
10. M. Nitzberg, T. Shiota, Nonlinear image filtering with edge and corner enhancement, IEEE Trans. on PAMI, 14(8), 1992. 826–833.
11. A. Almansa, T. Lindeberg, Enhancement of fingerprint images using shape-adapted scale-space operators, in J. Sporring, M. Nielsen, L. Florack, P. Johansen (eds.), Gaussian Scale-Space Theory, Kluwer Academic Publishers, 1997. 21–30.
12. J. Dijk, D. de Ridder, P.W. Verbeek, J. Walraven, I.T. Young, L.J. van Vliet, A new measure for the effect of sharpening and smoothing filters on images, SCIA'99, Greenland, 1999. 213–220.
13. C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro, J. Verdera, Filling-in by joint interpolation of vector fields and gray levels, IEEE Transactions on Image Processing, 10(8), 2001. 1200–1211.