International Journal of Computer Science & Information Technology (IJCSIT) Vol 5, No 2, April 2013
DOI : 10.5121/ijcsit.2013.5202 19
OBTAINING SUPER-RESOLUTION IMAGES
BY COMBINING LOW-RESOLUTION IMAGES
WITH HIGH-FREQUENCY INFORMATION
DERIVED FROM TRAINING IMAGES
Emil Bilgazyev (1), Nikolaos Tsekos (2), and Ernst Leiss (3)
(1,2,3) Department of Computer Science, University of Houston, TX, USA
(1) eismailov2@uh.edu, (2) ntsekos@cs.uh.edu, (3) eleiss@uh.edu
ABSTRACT
In this paper, we propose a new algorithm to estimate a super-resolution image from a given low-resolution
image by adding high-frequency information extracted from natural high-resolution images in a
training dataset. The selection of the high-frequency information from the training dataset is accomplished in
two steps: a nearest-neighbor search algorithm, which can be implemented on the GPU, selects the closest
images from the training dataset, and a sparse-representation algorithm estimates a weight
parameter to combine the high-frequency information of the selected images. This simple but very powerful
super-resolution algorithm can produce state-of-the-art results. We demonstrate, qualitatively and
quantitatively, that the proposed algorithm outperforms existing state-of-the-art super-resolution algorithms.
KEYWORDS
Super-resolution, face recognition, sparse representation.
1. INTRODUCTION
Recent advances in electronics, sensors, and optics have led to a widespread availability of
video-based surveillance and monitoring systems. Some imaging devices, such as cameras,
camcorders, and surveillance cameras, have limited achievable resolution due to factors such as the
quality of the lenses, the limited number of sensors in the camera, etc. Increasing the quality of the
lenses or the number of sensors will also increase the cost of the device; in some cases the
desired resolution may still not be achievable with current technology. However, many
applications, ranging from security to broadcasting, are driving the need for higher-resolution
images or videos for better visualization [1].
The idea behind super-resolution is to enhance the low-resolution input image such that both the
spatial resolution (the amount of independent detail within the image) and the pixel resolution
(the total number of pixels) are improved.
In this paper, we propose a new approach to estimate super-resolution by combining a given
low-resolution image with high-frequency information obtained from training images (Fig. 1). The
nearest-neighbor-search algorithm is used to select the closest images from the training dataset, and a
sparse representation algorithm is used to estimate a weight parameter to combine the
high-frequency information of the selected images. The main motivation of our approach is that the
high-frequency information helps to obtain sharp edges on the reconstructed images (see Fig. 1).
(a) (b) (c)
Figure 1: Depiction of (a) the super-resolution image obtained by combining (b) a given low-resolution
image and (c) high-frequency information estimated from a natural high-resolution training dataset.
The rest of the paper is organized as follows: previous work is presented in Section 2, a description
of our proposed method is presented in Section 3, the implementation details are presented in
Section 4, and experimental results of the proposed algorithm as well as other algorithms are
presented in Section 5. Finally, Section 6 summarizes our findings and concludes the paper.
2. PREVIOUS WORK
In this section, we briefly review existing techniques for super-resolution of low-resolution images
for general and domain-specific purposes. In recent years, several methods have been proposed that
address the issue of image resolution. Existing super-resolution (SR) algorithms can be classified
into two classes: multi-frame-based and example-based algorithms [8]. Multi-frame-based
methods compute a high-resolution (HR) image from a set of low-resolution (LR) images from
any domain [6]. The key assumption of multi-frame-based super-resolution methods is that the
input LR images overlap and each LR image contains information not present in the other LR
images. Multi-frame-based SR methods then combine this set of LR images into one image, so
that all the information is contained in a single output SR image. Additionally, these methods perform
super-resolution with the general goal of improving the quality of the image, so that the resulting
higher-resolution image is also visually pleasing. The example-based methods compute an HR
counterpart of a single LR image from a known domain [2, 13, 18, 10, 14]. These methods learn
observed information targeted to a specific domain and thus can exploit prior knowledge to obtain
superior results specific to that domain. Our approach belongs to this category, as we use a
training database to improve the reconstruction output.
Moreover, the domain-specific SR methods targeting the same domain differ considerably from
each other in the way they model and apply a priori knowledge about natural images. Yang et al.
[22] introduced a method to reconstruct SR images using a sparse representation of the input LR
images. However, the performance of these example-based SR methods degrades rapidly if the
magnification factor is more than 2. In addition, the performance of these SR methods is highly
dependent on the size of the training database.
Freeman et al. [8] proposed an example-based learning strategy that applies to generic images
where the LR to HR relationship is learned using a Markov Random Field (MRF). Sun et al. [17]
extended this approach by using the Primal Sketch priors to reconstruct edge regions and corners by
deblurring them. The main drawback of these methods is that they require a large database of LR
and HR image pairs to train the MRF.

Figure 2: Depiction of the pipeline for the proposed super-resolution algorithm.

Chang et al. [3] used the Locally Linear Embedding (LLE)
manifold learning approach to map the local geometry of HR images to LR images, with the
assumption that the manifolds of LR and HR images are similar. In addition, they
reconstructed an SR image using K neighbors. However, the manifold of synthetic LR images
generated from HR images is not similar to the manifold of real-world LR images, which are
captured under different environments and camera settings. Also, using a fixed number of
neighbors to reconstruct an SR image usually results in blurring effects, such as artifacts along the
edges, due to over- or under-fitting.
Another approach is derived from the multi-frame-based approach: reconstructing an SR image from a
single LR image [7, 12, 19]. These approaches learn the co-occurrence of a patch within the image,
from which the correspondence between LR and HR is predicted. They cannot be used to
reconstruct an SR image from a single LR facial image, due to the limited number of similar patches
within a facial image.
SR reconstruction based on wavelet analysis has been shown to be well suited for reconstruction,
denoising and deblurring, and has been used in a variety of application domains including
biomedical [11], biometrics [5], and astronomy [20]. In addition, it provides an accurate and sparse
representation of images that consist of smooth regions with isolated abrupt changes [16]. In our
method, we propose to take advantage of the wavelet decomposition-based approach in conjunction
with compressed sensing techniques to improve the quality of the super-resolution output.
3. METHODOLOGY

Table 1. Notations used in this paper

  Symbol        Description
  X             collection of training images
  X_i           i-th training image (X_i ∈ R^{m×n})
  x_{i,j}       j-th patch of the i-th image (x_{i,j} ∈ R^{k×l})
  ε             threshold value
  α_i           sparse representation of the i-th patch
  ||·||_p       l_p-norm
  D             dictionary
  W, W^{-1}     forward and inverse wavelet transforms
  φ(·), ψ(·)    low- and high-frequency information of an image
  [·,·]         concatenation of vectors or matrices
Let X_i ∈ R^{m×n} be the i-th image of the training dataset X = {X_i : i = 0...N}, and let
x_{i,j} ∈ R^{k×l} be the j-th patch of an image X_i = {x_{i,j} : j = 0...M}. The wavelet transform
of an image patch x returns its low- and high-frequency information:

    W(x) = [φ(x), ψ(x)] ,                                  (1)

where W is the forward wavelet transform, φ(x) is the low-frequency information, and ψ(x) is
the high-frequency information of an image patch x. Taking the inverse wavelet transform of the
high- and low-frequency information of the original image (without any processing on them)
returns the original image:

    x = W^{-1}([φ(x), ψ(x)]) ,                             (2)

where W^{-1} is the inverse wavelet transform. If we use the Haar wavelet transform with its
coefficients being 0.5 instead of 1/√2 (a non-quadrature-mirror filter), then the low-frequency
information of an image x will actually be a low-resolution version of x, in which four
neighboring pixels are averaged; in other words, it is similar to down-sampling the image x by a
factor of 2 with nearest-neighbor interpolation, and the high-frequency information ψ(x) of an
image x will be similar to the horizontal, vertical, and diagonal gradients of x.
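The Haar-style decomposition described above can be sketched as follows. This is a minimal
illustration, not the authors' implementation: with 0.5-valued filters, the low band reduces to a
2x2 block average, and the function names are our own.

```python
import numpy as np

def haar_decompose(x):
    """One-level 2-D Haar-style decomposition with 0.5 coefficients
    (instead of the orthonormal 1/sqrt(2)), as described in the text."""
    a = x[0::2, 0::2]  # top-left pixel of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    low   = 0.25 * (a + b + c + d)   # phi(x): 2x2 block average (low-resolution)
    horiz = 0.25 * (a - b + c - d)   # psi(x): horizontal detail
    vert  = 0.25 * (a + b - c - d)   # vertical detail
    diag  = 0.25 * (a - b - c + d)   # diagonal detail
    return low, (horiz, vert, diag)

def haar_reconstruct(low, details):
    """Direct inversion of haar_decompose. The x4 gain mentioned in Sec. 4
    (needed when the same 0.5 filters are reused for synthesis) is absorbed
    here by solving for the four pixels of each block directly."""
    h, v, g = details
    out = np.empty((2 * low.shape[0], 2 * low.shape[1]))
    out[0::2, 0::2] = low + h + v + g
    out[0::2, 1::2] = low - h + v - g
    out[1::2, 0::2] = low + h - v - g
    out[1::2, 1::2] = low - h - v + g
    return out
```

Round-tripping any even-sized patch through `haar_decompose` and `haar_reconstruct` returns the
original patch, matching Eq. (2).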
Assume that, for a given low-resolution image patch y_i, which is the i-th patch of an image y, we
can find a similar patch x_j, j ∈ {0 ... NM}, among the natural image patches; then, by combining
y_i with the high-frequency information ψ(x_j) of the high-resolution patch x_j and taking the
inverse wavelet transform, we will get the super-resolution patch y_i^* (see Fig. 1):

    y_i^* = W^{-1}([y_i, ψ(x_j)]) ,   ||y_i − φ(x_j)||_2^2 ≤ ε_0 ,      (3)

where ε_0 is a small nonnegative value.

It is not guaranteed that we will always find an x_j such that ||y_i − φ(x_j)||_2^2 ≤ ε_0; thus, we
introduce an approach that selects a few of the closest low-resolution patches φ(x_j) from the
training dataset and then estimates a weight for each patch φ(x_j), which is used to combine the
high-frequency information ψ(x_j) of the training patches.
To find the closest matches to the low-resolution input patch y_i, we use a nearest-neighbor search
algorithm:

    c = {c_i : ||y_i − φ(x_{c_i})||_2^2 ≤ ε_1 , ∀ x_{c_i} ∈ X} ,        (4)

where c is a vector containing the indexes c_i of the training patches that are the closest matches
to the input patch y_i, and ε_1 is the radius threshold of the nearest-neighbor search. After
selecting the closest matches to y_i, we build two dictionaries from the selected patches x_j: the
first dictionary is the concatenation of the low-frequency information of the training patches
φ(x_j), and is used to estimate the weight parameter; the second dictionary is the concatenation of
their high-frequency information ψ(x_j):

    D_i^φ = {φ(x_j) : j ∈ c} ,   D_i^ψ = {ψ(x_j) : j ∈ c} .             (5)
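The radius search of Eq. (4) and the dictionary construction of Eq. (5) can be sketched as below.
The array layout (one vectorized patch per row), the function names, and the normalization detail
(taken from Sec. 4) are our assumptions.

```python
import numpy as np

def select_neighbors(y_i, low_patches, high_patches, eps1=0.5):
    """Eqs. (4)-(5) sketch: radius nearest-neighbor search over the
    low-frequency training patches, then dictionary construction.
    low_patches / high_patches: (num_patches, patch_dim) arrays."""
    def normalize(v):
        n = np.linalg.norm(v, axis=-1, keepdims=True)
        return v / np.maximum(n, 1e-12)
    # squared Euclidean distance on normalized low-frequency patches
    d2 = np.sum((normalize(low_patches) - normalize(y_i)) ** 2, axis=1)
    c = np.flatnonzero(d2 <= eps1)   # indexes of the closest matches, Eq. (4)
    D_phi = low_patches[c].T         # low-frequency dictionary, Eq. (5)
    D_psi = high_patches[c].T        # high-frequency dictionary, Eq. (5)
    return c, D_phi, D_psi
```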
(5)
We use a sparse representation algorithm [21] to estimate the weight parameter. The
sparse representation α_i of an input image patch y_i with respect to the dictionary D_i^φ is used
as the weight for the fusion of the high-frequency information of the training patches (D_i^ψ):

    α_i = argmin_{α_i} ||y_i − D_i^φ α_i||_2 + λ ||α_i||_1 .            (6)

The sparse representation algorithm (Eq. 6) tries to estimate y_i by fusing a few atoms (columns)
of the dictionary D_i^φ, assigning non-zero weights to these atoms. The result is the sparse
representation α_i, which has only a few non-zero elements. In other words, the input image
patch y_i can be represented by combining a few atoms of D_i^φ (y_i ≈ D_i^φ α_i) with the weight
parameter α_i; similarly, the high-frequency information of the training patches D_i^ψ can be
combined with the same weight parameter α_i to estimate the unknown high-frequency
information of the input image patch y_i:

    y_i^* = W^{-1}([y_i, D_i^ψ α_i]) ,                                  (7)

where y_i^* is the output (super-resolution) image patch, and W^{-1} is the inverse wavelet transform.
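A sketch of the weight estimation and fusion steps follows. The paper uses the solver of [21];
here we substitute ISTA (iterative soft thresholding), a standard solver for the closely related
squared-loss form of Eq. (6), as a stand-in of our own choosing.

```python
import numpy as np

def sparse_weights(y_i, D_phi, lam=0.1, n_iter=200):
    """ISTA stand-in for Eq. (6): minimizes 0.5*||y - D a||_2^2 + lam*||a||_1.
    D_phi: (patch_dim, num_atoms) low-frequency dictionary."""
    L = np.linalg.norm(D_phi, 2) ** 2 + 1e-12  # Lipschitz constant of the gradient
    alpha = np.zeros(D_phi.shape[1])
    for _ in range(n_iter):
        grad = D_phi.T @ (D_phi @ alpha - y_i)
        z = alpha - grad / L
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return alpha

def fuse_high_freq(alpha, D_psi):
    """Eq. (7) fusion term: combine the high-frequency atoms with the
    sparse weights; the result is passed to the inverse wavelet transform."""
    return D_psi @ alpha
```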
Figure 2 depicts the pipeline of the proposed algorithm. In the training step, we extract patches
from the high-resolution training images, then compute the low-frequency information (which
becomes the low-resolution training image patches) and the high-frequency information for each
patch in the training dataset. In the reconstruction step, given an input low-resolution image y, we
extract a patch y_i and find its nearest neighbors c within the given radius ε_1 (this can be sped up
using a GPU). From the selected neighbors c we construct the low-frequency dictionary D_i^φ and
the high-frequency dictionary D_i^ψ; the low-frequency dictionary is used to estimate the sparse
representation α_i of the input low-resolution patch y_i with respect to the selected neighbors, and
the atoms (columns) of the high-frequency dictionary D_i^ψ are fused using the sparse
representation α_i as the weight parameter. Finally, by taking the inverse wavelet transform W^{-1}
of the given low-resolution image patch y_i together with the fused high-frequency information, we
obtain the super-resolution patch y_i^*. Iteratively repeating the reconstruction step (red-dotted
block in Fig. 2) for each patch of the low-resolution image y, we obtain the super-resolution
image y^*.
4. IMPLEMENTATION DETAILS

In this section we explain the implementation details. As pointed out in Sec. 3, for each training
image we extract patches X_i = {x_{i,j} : j = 0...M} with x_{i,j} ∈ R^{k×l}. The number M depends
on the window function, which determines how we select the patches. There are two ways to select
patches from an image: one is to select distinct patches, where two consecutive patches do not
overlap, and the other is to select overlapping patches (sliding window), where two consecutive
patches do overlap. Since the l_2-norm in the nearest-neighbor search is sensitive to shifts, we
slide the window by one pixel in the horizontal or vertical direction, so that two consecutive
patches overlap by (k−1)×l or k×(l−1), where x_{i,j} ∈ R^{k×l}. Storing these patches requires an
enormous amount of space, N × (m−k) × (n−l) × k × l, where N is the number of training images
and X_i ∈ R^{m×n}. For example, if we have 1000 natural images in the training dataset, each with
a resolution of 1000×1000 pixels, then storing the 40×40 patches requires 1.34 TB of storage
space, which would be inefficient and computationally expensive. To reduce the number of
patches, we removed patches that contain no gradients, or very few gradients, keeping only those
with ||∇x_{i,j}||_2^2 ≥ ε_2, where ∇ is the sum of the gradients along the vertical and horizontal
directions (∇x_{i,j} = ∂x_{i,j}/∂x + ∂x_{i,j}/∂y), and ε_2 is the threshold value that filters out
patches with little gradient variation. Similarly, we calculate the gradients of the input
low-resolution patches y_i, and if they are below the threshold ε_2, we upsample them using
bicubic interpolation, and no super-resolution reconstruction is performed on those patches. To
improve the computation speed, the nearest-neighbor search can be performed on the GPU, and
since all given low-resolution patches are processed independently of each other, multi-threaded
processing can be used for each super-resolution patch reconstruction.
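The gradient-based patch filter and the storage estimate above can be sketched as follows. The
threshold value and function name are placeholders of ours; the paper selects its thresholds
experimentally.

```python
import numpy as np

def has_enough_gradient(patch, eps2=1e-3):
    """Keep only patches with sufficient gradient energy (Sec. 4 filter).
    The gradient measure follows the text: sum of vertical and horizontal
    derivatives, then squared l2-norm compared against eps2."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.sum((gx + gy) ** 2)) >= eps2

# Storage needed for dense sliding-window patches (the paper's example):
# N = 1000 images of 1000x1000 pixels, 40x40 patches, 1 byte per pixel.
N, m, n, k, l = 1000, 1000, 1000, 40, 40
bytes_needed = N * (m - k) * (n - l) * k * l
print(bytes_needed / 2**40)  # ~1.34 TiB, matching the figure in the text
```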
In the wavelet transform, for the low-pass and high-pass filters we used [0.5, 0.5] and
[−0.5, 0.5], from which the 2D filters for the wavelet transform are created. These filters are not
quadrature-mirror filters (they are nonorthogonal); thus, during the inverse wavelet transform we
need to multiply the output by 4. The reason for choosing these filter values is that the
low-frequency information (analysis part) of the forward wavelet transform is then the same as
down-sampling the signal by a factor of 2 with nearest-neighbor interpolation, which is what is
used in the nearest-neighbor search. During the experiments, all color images are converted to
YCbCr, and only the luminance component (Y) is used. For display, the blue- and red-difference
chroma components (Cb and Cr) of the input low-resolution image are up-sampled and combined
with the super-resolution image to obtain the color image y^* (Fig. 2).
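The color-handling step can be sketched as below. The paper only states that YCbCr is used, so
the exact conversion constants (ITU-R BT.601) and the nearest-neighbor stand-in for the chroma
upsampling are our assumptions.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion (assumed variant).
    rgb: float array of shape (..., 3) with values in [0, 255]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return np.stack([y, cb, cr], axis=-1)

def upsample2(channel):
    """Chroma upsampling for display; nearest-neighbor stand-in for the
    bicubic interpolation mentioned in the text."""
    return np.kron(channel, np.ones((2, 2)))
```

Only the Y channel goes through the super-resolution pipeline; Cb and Cr are upsampled and
recombined with the super-resolved Y for display.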
Note that we can reduce the storage space for the patches to zero by extracting the patches of the
training images during reconstruction. This can be accomplished by changing the neighbor-search
algorithm, and can be implemented on the GPU. During the neighbor search, each GPU thread is
assigned to extract the low-frequency information φ(x_{l,j}) and the high-frequency information
ψ(x_{l,j}) at an assigned position j of a training image X_l; we compute the distance to the input
low-resolution image patch y_i, and if the distance is less than the threshold ε_1, the GPU thread
returns the high-frequency information ψ(x_{l,j}), which is used to construct D_i^ψ:

    D_i^ψ = {ψ(x_{c_i}) : ||y_i − φ(x_{c_i})||_2^2 ≤ ε_1 , ∀ x_{c_i} ∈ X} .   (8)

As the threshold (radius) value for the nearest-neighbor search algorithm we used 0.5 for natural
images and 0.3 for facial images. Both the low-frequency information of the training image patches
and the input image patches are normalized before calculating the Euclidean distance. We selected
these values experimentally, as they gave the highest SNR and lowest MSE. The Euclidean distance
(in the nearest-neighbor search) is known to be sensitive to noise, but in our approach the main
goal of this step is only to reduce the number of training patches to those close to the input
patch. Thus, we take a relatively high threshold value for the nearest-neighbor search to select the
closest matches, and then the sparse representation is estimated from them. Note that the sparse
representation estimation (Eq. 6) tends to estimate the input patch from the training patches,
where noise is taken care of [14]. Reducing the storage space will slightly increase the
super-resolution reconstruction time, since the wavelet transform is computed during the
reconstruction.
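The zero-storage variant of Eq. (8) can be sketched serially as below: patches are extracted and
decomposed on the fly instead of being precomputed, mimicking what each GPU thread would do.
The non-overlapping stride, the residual-based proxy for ψ(x), and all names are our assumptions.

```python
import numpy as np

def on_the_fly_dictionary(y_i, train_images, k=8, eps1=0.5):
    """Eq. (8) sketch: build D_psi during the search, storing nothing
    in advance. y_i: (k/2, k/2) low-resolution input patch;
    train_images: list of 2-D high-resolution arrays."""
    y_n = y_i.ravel() / max(np.linalg.norm(y_i), 1e-12)
    D_psi = []
    for X_l in train_images:
        m, n = X_l.shape
        for r in range(0, m - k + 1, k):
            for s in range(0, n - k + 1, k):
                x = X_l[r:r + k, s:s + k]
                # phi(x): 2x2 block average, computed on demand
                low = 0.25 * (x[0::2, 0::2] + x[0::2, 1::2] +
                              x[1::2, 0::2] + x[1::2, 1::2])
                lo = low.ravel()
                lo_n = lo / max(np.linalg.norm(lo), 1e-12)
                if np.sum((lo_n - y_n) ** 2) <= eps1:
                    # psi(x) proxy: residual after removing the low band
                    high = x - np.kron(low, np.ones((2, 2)))
                    D_psi.append(high.ravel())
    return np.array(D_psi).T if D_psi else np.empty((0, 0))
```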
5. EXPERIMENTAL RESULTS

We performed experiments on a variety of images to test the performance of our approach (HSR)
as well as of other super-resolution algorithms: BCI [1], SSR [22], and MSR [8]. We conducted two
types of experiments.

For the first one, we used the Berkeley Segmentation Dataset 500 [15]. It contains natural images,
which are divided into two groups: the first group (Fig. 3(a)) is used to train the super-resolution
algorithms (except BCI), and the second group (Fig. 3(b)) is used to test their performance. To
measure the performance of the algorithms, we use the mean-square error (MSE) and the
signal-to-noise ratio (SNR) as distance metrics. These metrics measure the difference between the
ground-truth and the reconstructed images.
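The two metrics can be sketched as follows. The paper does not spell out its exact SNR
definition, so the signal-power-over-error-power form used here is one common choice, not
necessarily the authors'.

```python
import numpy as np

def mse(ref, rec):
    """Mean-square error between ground truth and reconstruction."""
    return float(np.mean((ref.astype(float) - rec.astype(float)) ** 2))

def snr_db(ref, rec):
    """SNR in dB: mean signal power over mean error power (assumed form)."""
    err = mse(ref, rec)
    sig = float(np.mean(ref.astype(float) ** 2))
    return 10.0 * np.log10(sig / max(err, 1e-12))
```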
The second type of experiment is performed on facial images (Fig. 4), where a face recognition
system is used as the distance metric to demonstrate the performance of the super-resolution
algorithms.
(a) (b)
Figure 3: Depiction of Berkeley Segmentation Dataset 500 images used for (a) training and (b) testing the
super-resolution algorithms.
(a) (b)
Figure 4: Depiction of facial images used for (a) training and (b) testing the super-resolution algorithms.
5.1 Results on Natural Images

In Figure 5, we show the outputs of the proposed super-resolution algorithm, BCI, SSR, and MSR.
The red rectangle is zoomed in and displayed in Figure 6. In this figure we focus on the effect of
super-resolution algorithms on low-level patterns (the fur of the bear). Most super-resolution
algorithms tend to improve the sharpness of the edges along the borders of objects, which looks
good to human eyes, while the low-level patterns are ignored. One can see that the output of BCI is
smooth (Fig. 5(b)), and from the zoomed-in region (Fig. 6(b)) it can be noticed that the edges along
the border of the object are smoothed; similarly, the pattern inside the regions is also smooth.
This is because BCI interpolates the neighboring pixel values in the lower resolution to introduce
a new pixel value in the higher resolution. This is the same as taking the inverse wavelet transform
of a given low-resolution image with its high-frequency information set to zero; thus the
reconstructed image will not contain any sharp edges. The result of MSR has sharp edges; however,
it contains block artifacts (Fig. 5(c)). One can see that the edges around the border of the object
are sharp, but the patterns inside the region are smoothed, and block artifacts are introduced
(Fig. 6(c)). On the other hand, the result of SSR does not contain sharp edges along the border of
the object, but it contains sharper patterns compared to BCI and MSR (Fig. 5(d)). The result of the
proposed super-resolution algorithm has sharp edges, sharp patterns, and fewer artifacts compared
to the other methods (Fig. 5(e) and Fig. 6(e)), and visually it looks more similar to the
ground-truth image (Fig. 5(f) and Fig. 6(f)).
Figure 7 shows the performance of the super-resolution algorithms on a different image with fewer
patterns. One can see that the output of BCI is still smooth along the borders, and inside the
region it is clearer. The output of MSR looks better for images with fewer patterns, where it tends
to reconstruct the edges along the borders.

(a) (b) (c)
(d) (e) (f)
Figure 5: Depiction of the low-resolution, super-resolution, and original high-resolution images. (a)
Low-resolution image, (b) output of BCI, (c) output of SSR, (d) output of MSR, (e) output of the proposed
algorithm, and (f) original high-resolution image. The solid red rectangle represents the region that is
magnified and displayed in Figure 6 for better visualization. One can see that the output of the proposed
algorithm has sharper patterns compared to the other SR algorithms.
(a) (b) (c)
(d) (e) (f)
Figure 6: Depiction of a region (red rectangle in Figure 5) for (a) the low-resolution image, and the outputs
of (b) BCI, (c) SSR, (d) MSR, (e) the proposed algorithm, and (f) the original high-resolution image. Notice
that the proposed algorithm produces sharper patterns than the other SR algorithms.
(a) (b) (c)
(d) (e) (f)
Figure 7: Depiction of low-resolution, super-resolution and original high-resolution images. (a)
Low-resolution image, (b) output of BCI, (c) output of SSR, (d) output of MSR, (e) output of proposed
algorithm, and (f) original high-resolution image. The solid rectangle boxes in yellow and red colors
represent the regions that were magnified and displayed on the right side of each image for better
visualization. One can see that the output of the proposed algorithm has better visual quality compared to
other SR algorithms.
In the output of SSR, one can see that the edges along the borders are smooth, and inside the
regions it has ringing artifacts. The SSR algorithm builds dictionaries from high-resolution and
low-resolution image patches by reducing the number of atoms (columns) of the dictionaries under
the constraint that these dictionaries can still represent the image patches in the training dataset
with minimal difference. This is similar to compression or dimensionality reduction, where we try
to preserve the structure of the signal rather than its details, and sometimes we get artifacts
during the reconstruction¹.
We also computed the average SNR and MSE to quantitatively measure the performance of the
super-resolution algorithms. Table 5.1 shows the average SNR and MSE values for BCI, MSR,
SSR, and HSR. Notice that the proposed algorithm has the highest signal-to-noise ratio and the
lowest mean-square error.
Table 5.1: Experimental Results
5.2. Results on Facial Images
We conducted experiments on the Surveillance Cameras Face database (SCface) [9]. This database
contains 4,160 static images of 130 subjects. The images were acquired in an uncontrolled indoor
environment using five surveillance cameras of various qualities and ranges. For each of these
cameras, one image per subject was acquired at each of three distances: 4.2 m, 2.6 m, and 1.0 m.
Another set of images was acquired by a mug-shot camera: nine images per subject, ranging from
the left to the right profile in equal steps of 22.5 degrees, including a frontal mug-shot image at
0 degrees. The database contains images in the visible and infrared spectra. Images from cameras
of different quality mimic real-world conditions. The high-resolution images are used as the
gallery (Fig. 4(a)), while the images captured by a visible-spectrum camera from a 4.2 m distance
are used as the probe (Fig. 4(b)). Since the SCface dataset consists of two types of images,
high-resolution images and surveillance images, we used the high-resolution images to train the
SR methods and the surveillance images as the probe.
We used the Sparse Representation-based Face Recognition method proposed by Wright et al. [21] to test the performance of the super-resolution algorithms. It has been shown that the performance of face recognition systems relies on the low-level (high-frequency) information of facial images [4]. High-level information, i.e., the structure of the face, affects the performance of face recognition systems less than low-level information does, unless we compare human faces with other objects such as monkeys, lions, or cars, where the structures between them
¹ The lower frequencies of a signal contribute more to the difference between the original and reconstructed signals than the higher frequencies do. For example, if we remove the DC component (0 Hz) from either signal, the original or the reconstructed one, the difference between them becomes very large. Thus, keeping the lower frequencies of the signal helps preserve its structure and keeps the difference between the original and reconstructed signals small.
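The footnote's DC-component argument can be checked numerically (a sketch with an arbitrary test signal of our own choosing): removing the 0 Hz component produces a far larger MSE than perturbing a high-frequency component of the same signal.

```python
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)
signal = 5.0 + np.sin(2 * np.pi * 8 * t)        # DC offset + low-frequency content

no_dc = signal - signal.mean()                   # remove the DC (0 Hz) component
mse_dc = np.mean((signal - no_dc) ** 2)          # large: equals the squared DC offset

fine = signal + 0.1 * np.sin(2 * np.pi * 60 * t) # perturb only a high frequency
mse_hf = np.mean((signal - fine) ** 2)           # small: high-frequency energy is tiny
```

Both versions differ from the original only in one frequency band, yet the DC change dominates the MSE by orders of magnitude, which is why dictionary-learning methods that minimize reconstruction error tend to keep structure (low frequencies) and sacrifice detail (high frequencies).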
Dist. Metric \ SR Algorithm    BCI      SSR      MSR      HSR
SNR (dB)                       23.08    24.76    18.46    25.34
MSE                             5.45     5.81    12.01     3.95
are very different. Most human faces share a similar structure: two eyes, one nose, two eyebrows, etc., and in low-resolution facial images the edges (high-frequency information) around the eyes, eyebrows, nose, mouth, etc., are lost, which decreases the performance of face recognition systems [21]. Even for humans it is very difficult to recognize a person from a low-resolution image (see Fig. 4). Figure 8 depicts the given low-resolution images and the corresponding super-resolution outputs. The rank-1 face recognition accuracies for LR, BCI, SSR, MSR, and the proposed algorithm are 2%, 18%, 13%, 16%, and 21%, respectively. Overall, the face recognition accuracy is low, but compared with the performance on the raw low-resolution images, the super-resolution algorithms clearly improve the recognition accuracy.
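The classifier of Wright et al. [21] can be sketched as follows (a minimal illustrative version; the ISTA solver for the l1 step and all variable names are our own choices, not taken from [21]). A probe face is coded sparsely over a dictionary whose columns are training faces, and it is assigned to the class whose training columns yield the smallest reconstruction residual:

```python
import numpy as np

def src_classify(D, labels, y, lam=0.01, iters=300):
    """Sparse-representation classification (SRC), sketched after [21].
    D: (d, n) dictionary, one training face per (unit-normalized) column;
    labels: class id for each column; y: (d,) probe vector.
    Solves min ||Dx - y||^2 + lam * ||x||_1 with ISTA, then assigns the
    class with the smallest class-wise reconstruction residual."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):                        # ISTA: gradient step + soft-threshold
        x = x - (D.T @ (D @ x - y)) / L
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    labels = np.asarray(labels)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```

This makes concrete why high-frequency detail matters for recognition: the residual comparison is driven by exactly the fine structure that low-resolution probes have lost.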
6. CONCLUSION
We have proposed a novel approach to reconstruct super-resolution images for better visual quality as well as for face recognition, which can also be applied to other fields. We presented a sparse representation-based SR method to recover the high-frequency components of an
Figure 8: Depiction of the LR input and the SR outputs. (a) Low-resolution image; output of (b) BCI, (c) SSR, and (d) the proposed method. For this experiment we used a patch size of 10×10 pixels; increasing the patch size introduces ringing artifacts, which can be seen in the reconstructed image (d). Quantitatively, in terms of face recognition, our proposed super-resolution algorithm outperforms the other super-resolution algorithms.
SR image. We demonstrated the superiority of our method over existing state-of-the-art super-resolution methods for the task of face recognition in low-resolution images obtained from real-world surveillance data, as well as better performance in terms of MSE and SNR. We conclude that having more than one training image per subject can significantly improve both the visual quality of the proposed super-resolution output and the recognition accuracy.
REFERENCES
[1] M. Ao, D. Yi, Z. Lei, and S. Z. Li. Handbook of remote biometrics, chapter Face Recognition at a
Distance: System Issues, pages 155–167. Springer London, 2009.
[2] S. Baker and T. Kanade. Hallucinating faces. In Proc. IEEE International Conference on
Automatic Face and Gesture Recognition, pages 83–88, Grenoble, France, March 28-30, 2002.
[3] H. Chang, D. Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. In Proc.
IEEE International Conference on Computer Vision and Pattern Recognition, pages
275–282, Washington, DC, 27 June-2 July 2004.
[4] G. Chen and W. Xie. Pattern recognition using dual-tree complex wavelet features and svm. In
Proc. Canadian Conference on Electrical and Computer Engineering, pages 2053–2056, 2008.
[5] A. Elayan, H. Ozkaramanli, and H. Demirel. Complex wavelet transform-based face recognition.
EURASIP Journal on Advances in Signal Processing, 10(1):1–13, Jan 2008.
[6] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar. Fast and robust multiframe super resolution.
IEEE Transactions on Image Processing, 13(10):1327–1344, 2004.
[7] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Learning low-level vision. International
Journal of Computer Vision, 40(1):25–47, 2000.
[8] W. T. Freeman and C. Liu. Markov random fields for super-resolution and texture synthesis, chapter
10, pages 1–30. MIT Press, 2011.
[9] M. Grgic, K. Delac, and S. Grgic. SCface - surveillance cameras face database. Multimedia Tools
and Applications Journal, 51(3):863–879, 2011.
[10] P. H. Hennings-Yeomans. Simultaneous super-resolution and recognition. PhD thesis, Carnegie
Mellon University, Pittsburgh, PA, USA, 2008.
[11] J. T. Hsu, C. C. Yen, C. C. Li, M. Sun, B. Tian, and M. Kaygusuz. Application of wavelet-based
POCS superresolution for cardiovascular MRI image enhancement. In Proc. International
Conference on Image and Graphics, pages 572 – 575, Hong Kong, China, Dec. 18-20, 2004.
[12] K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image
prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1127–1133, 2010.
[13] C. Liu, H. Y. Shum, and C. S. Zhang. A two-step approach to hallucinating faces: global parametric
model and local nonparametric model. In Proc. IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pages 192–198, San Diego, CA, USA, Jun. 20-26, 2005.
[14] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE
Transactions on Image Processing, 17(1):53–69, Jan. 2008.
[15] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its
application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th
International Conference on Computer Vision, volume 2, pages 416–423, July 2001.
[16] G. Pajares and J. M. Cruz. A wavelet based image fusion tutorial. Pattern Recognition,
37(9):1855–1872, Sep. 2004.
[17] J. Sun, N. N. Zheng, H. Tao, and H. Shum. Image hallucination with primal sketch priors. In Proc.
IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages II-729–II-736,
vol. 2, Madison, WI, Jun. 18-20, 2003.
[18] J. Wang, S. Zhu, and Y. Gong. Resolution enhancement based on learning the sparse association of
image patches. Pattern Recognition Letters, 31(1):1–10, Jan. 2010.
[19] Q. Wang, X. Tang, and H. Shum. Patch based blind image super-resolution. In Proc. IEEE
International Conference on Computer Vision, Beijing, China, Oct. 17-20, 2005.
[20] R. Willet, I. Jermyn, R. Nowak, and J. Zerubia. Wavelet based super resolution in astronomy. In
Proc. Astronomical Data Analysis Software and Systems, volume 314, pages 107–116, Strasbourg,
France, 2003.
[21] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse
representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227,
February 2009.
[22] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution as sparse representation of raw
image patches. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern
Recognition, Anchorage, AK, Jun. 23-28, 2008.
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
Bun (KitWorks Team Study 노별마루 발표 2024.4.22)
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmaticsKotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
Kotlin Multiplatform & Compose Multiplatform - Starter kit for pragmatics
 
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage CostLeverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
Leverage Zilliz Serverless - Up to 50X Saving for Your Vector Storage Cost
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko"Debugging python applications inside k8s environment", Andrii Soldatenko
"Debugging python applications inside k8s environment", Andrii Soldatenko
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 

However, many applications, ranging from security to broadcasting, are driving the need for higher-resolution images or videos for better visualization [1]. The idea behind super-resolution is to enhance the low-resolution input image, such that the spatial-resolution (total number of independent pixels within the image) as well as pixel-resolution (total number of pixels) are improved.
In this paper, we propose a new approach to estimate a super-resolution image by combining a given low-resolution image with high-frequency information obtained from training images (Fig. 1). A nearest-neighbor-search algorithm is used to select the closest images from the training dataset, and a sparse-representation algorithm is used to estimate a weight parameter to combine the high-frequency information of the selected images. The main motivation of our approach is that the high-frequency information helps to obtain sharp edges in the reconstructed images (see Fig. 1).

Figure 1: Depiction of (a) the super-resolution image obtained by combining (b) a given low-resolution image and (c) high-frequency information estimated from a natural high-resolution training dataset.

The rest of the paper is organized as follows: previous work is presented in Section 2, a description of our proposed method is presented in Section 3, the implementation details are presented in Section 4, and experimental results of the proposed algorithm as well as other algorithms are presented in Section 5. Finally, Section 6 summarizes our findings and concludes the paper.

2. PREVIOUS WORK

In this section, we briefly review existing techniques for super-resolution of low-resolution images for general and domain-specific purposes. In recent years, several methods have been proposed that address the issue of image resolution. Existing super-resolution (SR) algorithms can be classified into two classes: multi-frame-based and example-based algorithms [8]. Multi-frame-based methods compute a high-resolution (HR) image from a set of low-resolution (LR) images from any domain [6]. The key assumption of multi-frame-based super-resolution methods is that the input LR images overlap and each LR image contains information not present in the other LR images. Multi-frame-based SR methods then combine this set of LR images into a single output SR image that contains all of the information. Additionally, these methods perform super-resolution with the general goal of improving the quality of the image so that the resulting higher-resolution image is also visually pleasing.

Example-based methods compute an HR counterpart of a single LR image from a known domain [2, 13, 18, 10, 14]. These methods learn observed information targeted to a specific domain and thus can exploit prior knowledge to obtain superior results specific to that domain. Our approach belongs to this category: we use a training database to improve the reconstruction output. Domain-specific SR methods targeting the same domain differ considerably from each other in the way they model and apply a priori knowledge about natural images. Yang et al. [22] introduced a method to reconstruct SR images using a sparse representation of the input LR images. However, the performance of these example-based SR methods degrades rapidly if the magnification factor is more than 2. In addition, the performance of these SR methods is highly dependent on the size of the training database.
Freeman et al. [8] proposed an example-based learning strategy that applies to generic images, where the LR-to-HR relationship is learned using a Markov Random Field (MRF). Sun et al. [17] extended this approach by using Primal Sketch priors to reconstruct edge regions and corners by deblurring them. The main drawback of these methods is that they require a large database of LR and HR image pairs to train the MRF. Chang et al. [3] used the Locally Linear Embedding (LLE) manifold learning approach to map the local geometry of HR images to LR images, with the assumption that the manifolds of LR and HR images are similar. In addition, they reconstructed an SR image using K neighbors. However, the manifold of synthetic LR images generated from HR images is not similar to the manifold of real-world LR images, which are captured under different environments and camera settings. Also, using a fixed number of neighbors to reconstruct an SR image usually results in blurring effects, such as artifacts at the edges, due to over- or under-fitting.

Figure 2: Depiction of the pipeline for the proposed super-resolution algorithm.

Another family of approaches is derived from the multi-frame-based approach and reconstructs an SR image from a single LR image [7, 12, 19]. These approaches learn the co-occurrence of patches within the image, from which the correspondence between LR and HR is predicted. They cannot be used to reconstruct an SR image from a single LR facial image, due to the limited number of similar patches within a facial image. SR reconstruction based on wavelet analysis has been shown to be well suited for reconstruction, denoising, and deblurring, and has been used in a variety of application domains including biomedical imaging [11], biometrics [5], and astronomy [20]. In addition, it provides an accurate and sparse representation of images that consist of smooth regions with isolated abrupt changes [16]. In our method, we propose to take advantage of a wavelet decomposition-based approach in conjunction with compressed sensing techniques to improve the quality of the super-resolution output.
3. METHODOLOGY

Table 1. Notations used in this paper

Symbol | Description
$X$ | collection of training images
$X_i$ | $i$-th training image ($X_i \in R^{m \times n}$)
$x_{i,j}$ | $j$-th patch of the $i$-th image ($x_{i,j} \in R^{k \times l}$)
$\tau$ | threshold value
$\alpha_i$ | sparse representation of the $i$-th patch
$\| \cdot \|_p$ | $\ell_p$-norm
$D$ | dictionary
$W$, $W^{-1}$ | forward and inverse wavelet transforms
$\psi$, $\phi$ | high- and low-frequency information of an image
$[\,\cdot\,,\,\cdot\,]$ | concatenation of vectors or matrices

Let $X_i \in R^{m \times n}$ be the $i$-th image of the training dataset $X = \{X_i : i = 0 \dots N\}$, and let $x_{i,j} \in R^{k \times l}$ be the $j$-th patch of an image, $X_i = \{x_{i,j} : j = 0 \dots M\}$. The wavelet transform of an image patch $x$ returns low- and high-frequency information:

$W(x) = [\phi(x), \psi(x)]$ ,   (1)

where $W$ is the forward wavelet transform, $\phi(x)$ is the low-frequency information, and $\psi(x)$ is the high-frequency information of the image patch $x$. Taking the inverse wavelet transform of the high- and low-frequency information of the original image (without any processing on them) returns the original image:

$x = W^{-1}([\phi(x), \psi(x)])$ ,   (2)

where $W^{-1}$ is the inverse wavelet transform. If we use the Haar wavelet transform with its coefficients being 0.5 instead of $1/\sqrt{2}$ (not a quadrature-mirror filter), then the low-frequency information of an image $x$ is actually a low-resolution version of $x$ in which four neighboring pixels are averaged; in other words, it is similar to down-sampling $x$ by a factor of 2 with nearest-neighbor interpolation, and the high-frequency information $\psi(x)$ is similar to the horizontal, vertical, and diagonal gradients of $x$.

Assume that, for a given low-resolution image patch $y_i$, the $i$-th patch of an image $y$, we can find a similar patch $x_j$, $j = \{0 \dots NM\}$, among the natural image patches. Then, by combining $y_i$ with the high-frequency information $\psi(x_j)$ of the high-resolution patch $x_j$ and taking the inverse wavelet transform, we obtain the super-resolution patch $y^*_i$ (see Fig. 1):

$y^*_i = W^{-1}([y_i, \psi(x_j)])$ ,  $\| y_i - \phi(x_j) \|^2_2 \leq \tau_0$ ,   (3)

where $\tau_0$ is a small nonnegative value. It is not guaranteed that we will always find an $x_j$ such that $\| y_i - \phi(x_j) \|^2_2 \leq \tau_0$; thus, we introduce an approach that selects a few of the closest low-resolution patches $\phi(x_j)$ from the training dataset and then estimates a weight for each patch, which is used to combine the high-frequency information $\psi(x_j)$ of the training patches. To find the closest matches to the low-resolution input patch $y_i$, we use a nearest-neighbor search:

$c = \{c_i : \| y_i - \phi(x_{c_i}) \|^2_2 \leq \tau_1 ,\ \forall x_{c_i} \in X\}$ ,   (4)

where $c$ is a vector containing the indexes $c_i$ of the training patches closest to the input patch $y_i$, and $\tau_1$ is the radius threshold of the nearest-neighbor search. After selecting the closest matches to $y_i$, we build two dictionaries from the selected patches $x_j$: the first is the concatenation of the low-frequency information of the training patches, $\phi(x_j)$, and is used to estimate the weight parameter; the second is the concatenation of the high-frequency information of the training patches, $\psi(x_j)$:

$D^{\phi}_i = \{\phi(x_j) : j \in c\}$ ,  $D^{\psi}_i = \{\psi(x_j) : j \in c\}$ .   (5)

We use a sparse representation algorithm [21] to estimate the weight parameter. The sparse representation $\alpha_i$ of an input image patch $y_i$ with respect to the dictionary $D^{\phi}_i$ is used as the weight for the fusion of the high-frequency information of the training patches ($D^{\psi}_i$):

$\alpha_i = \arg\min_{\alpha_i} \| y_i - D^{\phi}_i \alpha_i \|_2 + \lambda \| \alpha_i \|_1$ .   (6)

The sparse representation algorithm (Eq. 6) tries to estimate $y_i$ by fusing a few atoms (columns) of the dictionary $D^{\phi}_i$, assigning non-zero weights to these atoms. The result is the sparse representation $\alpha_i$, which has only a few non-zero elements. In other words, the input image patch $y_i$ can be represented by combining a few atoms of $D^{\phi}_i$ ($y_i \approx D^{\phi}_i \alpha_i$) with the weight parameter $\alpha_i$; similarly, the high-frequency information of the training patches, $D^{\psi}_i$, can be combined with the same weight parameter $\alpha_i$ to estimate the unknown high-frequency information of the input image patch $y_i$:

$y^*_i = W^{-1}([y_i, D^{\psi}_i \alpha_i])$ ,   (7)

where $y^*_i$ is the output (super-resolution) image patch and $W^{-1}$ is the inverse wavelet transform.

Figure 2 depicts the pipeline of the proposed algorithm. In the training step, we extract patches from the high-resolution training images and compute the low-frequency information (which becomes the low-resolution training patches) and the high-frequency information for each patch. In the reconstruction step, given an input low-resolution image $y$, we extract a patch $y_i$ and find the nearest neighbors $c$ within the radius $\tau_1$ (this step can be sped up using a GPU). From the selected neighbors $c$ we construct the low-frequency dictionary $D^{\phi}_i$ and the high-frequency dictionary $D^{\psi}_i$; the low-frequency dictionary is used to estimate the sparse representation $\alpha_i$ of the input low-resolution patch $y_i$ with respect to the selected neighbors, and the atoms (columns) of the high-frequency dictionary $D^{\psi}_i$ are fused using the sparse representation $\alpha_i$ as the weight parameter. Finally, taking the inverse wavelet transform $W^{-1}$ of the low-resolution image patch $y_i$ together with the fused high-frequency information yields the super-resolution patch $y^*_i$. Iteratively repeating the reconstruction step (red-dotted block in Fig. 2) for each patch of the low-resolution image $y$, we obtain the super-resolution image $y^*$.

4. IMPLEMENTATION DETAILS

In this section we explain the implementation details. As pointed out in Sec. 3, we extract patches from each training image, $X_i = \{x_{i,j} : j = 0 \dots M\}$ with $x_{i,j} \in R^{k \times l}$. The number $M$ depends on the window function, which determines how we select the patches.
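The non-orthonormal Haar-style transform of Eqs. (1) and (2) can be sketched as follows. This is an illustrative numpy implementation under our reading of the filters [0.5, 0.5] and [-0.5, 0.5], not the authors' code; note that its low-frequency band equals 2x average-downsampling, exactly as the text claims.

```python
import numpy as np

def haar_forward(x):
    """Forward transform with averaging filters [0.5, 0.5] / [-0.5, 0.5].

    Returns (phi, psi): phi is the low-frequency band (identical to 2x
    average-downsampling of x), psi = (lh, hl, hh) are the detail bands.
    """
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    phi = 0.25 * (a + b + c + d)    # low-frequency band (low-res image)
    lh  = 0.25 * (-a + b - c + d)   # horizontal detail
    hl  = 0.25 * (-a - b + c + d)   # vertical detail
    hh  = 0.25 * (a - b - c + d)    # diagonal detail
    return phi, (lh, hl, hh)

def haar_inverse(phi, psi):
    """Inverse transform. The factor 4 the paper mentions is absorbed
    here, since the analysis filters are not orthonormal."""
    lh, hl, hh = psi
    h, w = phi.shape
    x = np.empty((2 * h, 2 * w))
    x[0::2, 0::2] = phi - lh - hl + hh
    x[0::2, 1::2] = phi + lh - hl - hh
    x[1::2, 0::2] = phi - lh + hl - hh
    x[1::2, 1::2] = phi + lh + hl + hh
    return x
```

In Eq. (7), `haar_inverse(y_i, D_psi @ alpha_i)` would then fuse the input patch with the estimated high-frequency bands.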
There are two ways to select patches from an image: one is to select distinct patches, where two consecutive patches do not overlap, and the other is to select overlapping patches (sliding window). Since the $\ell_2$-norm in the nearest-neighbor search is sensitive to shifts, we slide the window by one pixel in the horizontal or vertical direction, so that two consecutive patches overlap by $(k-1) \times l$ or $k \times (l-1)$ pixels, where $x_{i,j} \in R^{k \times l}$. Storing these patches requires an enormous amount of space, $N \times (m-k) \times (n-l) \times k \times l$, where $N$ is the number of training images and $X_i \in R^{m \times n}$. For example, if the training dataset contains 1,000 natural images, each with a resolution of $1000 \times 1000$ pixels, storing all $40 \times 40$ patches requires 1.34 TB of space, which would be inefficient and computationally expensive. To reduce the number of patches, we remove patches that contain no or very few gradients, keeping only those with $\| \nabla x_{i,j} \|^2_2 \geq \tau_2$, where $\nabla$ is the sum of the gradients along the vertical and horizontal directions ($\nabla x_{i,j} = \frac{\partial x_{i,j}}{\partial x} + \frac{\partial x_{i,j}}{\partial y}$) and $\tau_2$ is the threshold value used to filter out patches with little gradient variation. Similarly, we calculate the gradients of the input low-resolution patches $y_i$; if they are below the threshold $\tau_2$, we upsample them using bicubic interpolation, and no super-resolution reconstruction is performed on that patch. To improve the computation speed, the nearest-neighbor search can be performed on the GPU, and since all low-resolution patches are processed independently of each other, multi-threaded processing can be used for each super-resolution patch reconstruction.

In the wavelet transform, we use $[0.5, 0.5]$ as the low-pass filter and $[-0.5, 0.5]$ as the high-pass filter, from which the 2D wavelet filters are created. These filters are not quadrature-mirror filters (they are non-orthogonal); thus, during the inverse wavelet transform we need to multiply the output by 4. The reason for choosing these filter values is that the low-frequency (analysis) part of the forward wavelet transform is then the same as down-sampling the signal by a factor of 2 with nearest-neighbor interpolation, which is what the nearest-neighbor search operates on. During the experiments, all color images are converted to YCbCr, and only the luminance component ($Y$) is used. For display, the blue- and red-difference chroma components ($Cb$ and $Cr$) of the input low-resolution image are up-sampled and combined with the super-resolution image to obtain the color image $y^*$ (Fig. 2).

Note that we can reduce the storage space for the patches to zero by extracting the patches of the training images during reconstruction. This can be accomplished by changing the neighbor-search algorithm, and it can be implemented on the GPU.
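The gradient-based pruning step described above can be sketched as follows; a minimal numpy version, assuming simple forward differences for the gradients (the paper does not specify its gradient operator) and an illustrative threshold value for $\tau_2$.

```python
import numpy as np

def gradient_energy(patch):
    """Squared l2-norm of the horizontal and vertical forward-difference
    gradients -- one plausible reading of the paper's criterion."""
    gx = np.diff(patch, axis=1)  # horizontal gradient
    gy = np.diff(patch, axis=0)  # vertical gradient
    return float((gx ** 2).sum() + (gy ** 2).sum())

def keep_patch(patch, tau2=1.0):
    """Keep a training patch only if it has enough gradient variation."""
    return gradient_energy(patch) >= tau2
```

Input patches failing the same test would be routed to plain bicubic upsampling instead of the full reconstruction.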
During the neighbor search, each GPU thread is assigned to extract the low-frequency information $\phi(x_{l,j})$ and the high-frequency information $\psi(x_{l,j})$ at its assigned position $j$ of a training image $X_l$; we compute the distance to the input low-resolution image patch $y_i$, and if the distance is less than the threshold $\tau_1$, the GPU thread returns the high-frequency information $\psi(x_{l,j})$, which is used to construct $D^{\psi}_i$:

$D^{\psi}_i = \{\psi(x_{c_i}) : \| y_i - \phi(x_{c_i}) \|^2_2 \leq \tau_1 ,\ \forall x_{c_i} \in X\}$ .   (8)

As the threshold (radius) value for the nearest-neighbor search we use 0.5 for natural images and 0.3 for facial images. The low-frequency information of both the training image patches and the input image patches is normalized before calculating the Euclidean distance. We selected these values experimentally: at these values we obtain the highest SNR and the lowest MSE. The Euclidean distance in the nearest-neighbor search is sensitive to noise, but in our approach the main goal of this step is only to reduce the number of training patches considered close to the input patch. Thus, we take a relatively large threshold value for the nearest-neighbor search, select the closest matches, and then perform the sparse representation on them. Note that the sparse representation estimation (Eq. 6) tends to estimate the input patch from the training patches in a way that accounts for noise [14]. Reducing the storage space slightly increases the super-resolution reconstruction time, since the wavelet transform has to be computed during the reconstruction.
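A CPU-side sketch of this selection plus the weight estimation of Eq. (6): the neighbor search of Eqs. (4)/(8) as a vectorized distance test, and the l1-regularized fit solved with ISTA. The paper only cites a generic sparse-representation solver [21], so ISTA here is our substitution, and the dictionary and threshold values are illustrative.

```python
import numpy as np

def select_neighbors(y_i, phi_atoms, tau1=0.5):
    """Eqs. (4)/(8): indices of training patches whose (normalized)
    low-frequency band lies within squared distance tau1 of y_i.
    phi_atoms: rows are flattened phi(x_j) patches."""
    d2 = ((phi_atoms - y_i) ** 2).sum(axis=1)
    return np.where(d2 <= tau1)[0]

def sparse_weights(y_i, D_phi, lam=0.05, n_iter=500):
    """Eq. (6) solved with ISTA (iterative soft-thresholding).
    D_phi: columns are the selected low-frequency atoms."""
    alpha = np.zeros(D_phi.shape[1])
    step = 1.0 / (np.linalg.norm(D_phi, 2) ** 2 + 1e-12)  # 1/L, L = ||D||^2
    for _ in range(n_iter):
        grad = D_phi.T @ (D_phi @ alpha - y_i)            # gradient of the l2 term
        z = alpha - step * grad
        alpha = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # soft threshold
    return alpha
```

The returned weights are then reused unchanged to fuse the high-frequency atoms, $D^{\psi}_i \alpha_i$, before the inverse wavelet transform of Eq. (7).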
5. EXPERIMENTAL RESULTS

We performed experiments on a variety of images to test the performance of our approach (HSR) as well as other super-resolution algorithms: BCI [1], SSR [22], and MSR [8]. We conducted two types of experiments. The first was performed on the Berkeley Segmentation Dataset 500 [15]. It contains natural images, which are divided into two groups: the first group of images (Fig. 3(a)) is used to train the super-resolution algorithms (except BCI), and the second group (Fig. 3(b)) is used to test their performance. To measure the performance of the algorithms, we use mean squared error (MSE) and signal-to-noise ratio (SNR) as distance metrics; they measure the difference between the ground-truth and the reconstructed images. The second type of experiment was performed on facial images (Fig. 4), where a face recognition system is used as the metric to demonstrate the performance of the super-resolution algorithms.

Figure 3: Depiction of Berkeley Segmentation Dataset 500 images used for (a) training and (b) testing the super-resolution algorithms.

Figure 4: Depiction of facial images used for (a) training and (b) testing the super-resolution algorithms.
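The two quality metrics can be computed as below. The SNR formula is one common definition (signal power over error power, in dB); the paper does not spell out which exact variant it uses, so this is an assumption.

```python
import numpy as np

def mse(gt, rec):
    """Mean squared error between ground truth and reconstruction."""
    return float(np.mean((gt - rec) ** 2))

def snr_db(gt, rec):
    """Signal-to-noise ratio in dB: 10*log10(signal power / error power)."""
    noise = np.mean((gt - rec) ** 2)
    signal = np.mean(gt ** 2)
    return float(10.0 * np.log10(signal / noise))
```

Higher SNR and lower MSE both indicate a reconstruction closer to the ground-truth image.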
5.1 Results on Natural Images

In Figure 5, we show the output of the proposed super-resolution algorithm, BCI, SSR, and MSR. The red rectangle is zoomed in and displayed in Figure 6. In this figure we focus on the effect of the super-resolution algorithms on low-level patterns (the fur of the bear). Most super-resolution algorithms tend to improve the sharpness of the edges along the borders of objects, which looks good to human eyes, while the low-level patterns are ignored. One can see that the output of BCI is smooth (Fig. 5(b)); from the zoomed-in region (Fig. 6(b)) it can be noticed that the edges along the border of the object are smoothed, and the pattern inside the region is likewise smooth. This is because BCI interpolates neighboring pixel values in the low-resolution image to introduce a new pixel value in the high-resolution image. This is the same as taking the inverse wavelet transform of a given low-resolution image with its high-frequency information set to zero; thus the reconstructed image does not contain any sharp edges. The result of MSR has sharp edges, but it contains block artifacts (Fig. 5(c)). One can see that the edges around the border of the object are sharp, but the patterns inside the region are smoothed and block artifacts are introduced (Fig. 6(c)). On the other hand, the result of SSR does not contain sharp edges along the border of the object, but it contains sharper patterns compared to BCI and MSR (Fig. 5(d)). The result of the proposed super-resolution algorithm has sharp edges, sharp patterns, and fewer artifacts compared to the other methods (Fig. 5(e) and Fig. 6(e)), and visually it is more similar to the ground-truth image (Fig. 5(f) and Fig. 6(f)). Figure 7 shows the performance of the super-resolution algorithms on a different image with fewer patterns.
One can see that the output of BCI is still smooth along the borders, while inside the region it is clearer. The output of MSR looks better for images with fewer patterns, where it tends to reconstruct the edges along the borders.

Figure 5: Depiction of low-resolution, super-resolution, and original high-resolution images. (a) Low-resolution image, (b) output of BCI, (c) output of SSR, (d) output of MSR, (e) output of the proposed algorithm, and (f) original high-resolution image. The solid red rectangles represent the regions that are magnified and displayed in Figure 6 for better visualization. One can see that the output of the proposed algorithm has sharper patterns compared to the other SR algorithms.
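The equivalence noted above, interpolation behaving like an inverse wavelet transform with zeroed detail bands, can be illustrated with a one-level orthonormal Haar synthesis step. This sketch is ours, not the paper's implementation; with all three detail bands at zero, every low-resolution pixel simply spreads into a constant 2×2 block, so no sharp edge can appear:

```python
import numpy as np

def haar_upsample(lr):
    """One inverse orthonormal Haar step with the LH, HL, and HH detail
    bands set to zero. Each input coefficient a becomes a constant 2x2
    block of value a/2, i.e. no new high-frequency content is created."""
    lr = np.asarray(lr, dtype=np.float64)
    hr = np.zeros((2 * lr.shape[0], 2 * lr.shape[1]))
    hr[0::2, 0::2] = lr / 2.0
    hr[0::2, 1::2] = lr / 2.0
    hr[1::2, 0::2] = lr / 2.0
    hr[1::2, 1::2] = lr / 2.0
    return hr
```

Because each 2×2 output block is constant, the result is smooth everywhere, mirroring the behavior of BCI described above.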
Figure 6: Depiction of the region (red rectangle in Figure 5) for (a) the low-resolution image, and the output of (b) BCI, (c) SSR, (d) MSR, (e) the proposed algorithm, and (f) the original high-resolution image. Notice that the proposed algorithm produces sharper patterns than the other SR algorithms.

Figure 7: Depiction of low-resolution, super-resolution, and original high-resolution images. (a) Low-resolution image, (b) output of BCI, (c) output of SSR, (d) output of MSR, (e) output of the proposed algorithm, and (f) original high-resolution image. The solid rectangles in yellow and red represent the regions that are magnified and displayed on the right side of each image for better visualization. One can see that the output of the proposed algorithm has better visual quality than the other SR algorithms.
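SSR reconstructs each patch as a sparse combination of dictionary atoms. A minimal numpy sketch of such sparse coding, using orthogonal matching pursuit as the solver (our choice for illustration; the paper does not prescribe a particular solver):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with at most k atoms
    (columns) of the dictionary D. Returns the sparse coefficient vector."""
    y = np.asarray(y, dtype=np.float64)
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit of y on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - D @ x
    return x
```

In an SSR-style pipeline, the coefficients found for a low-resolution patch are applied to the corresponding high-resolution dictionary to synthesize the output patch; the small atom count is what makes reconstruction artifacts possible.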
In the output of SSR, one can see that the edges on the borders are smooth, and inside the regions there are ringing artifacts. The SSR algorithm builds dictionaries from high-resolution and low-resolution image patches by reducing the number of atoms (columns) of the dictionaries under the constraint that these dictionaries can still represent the image patches in the training dataset with minimal difference. This is similar to compression or dimensionality reduction, where we try to preserve the structure of the signal rather than its details, and sometimes artifacts arise during the reconstruction.¹

¹ The lower frequencies of a signal affect the difference between the original and reconstructed signals more than the higher frequencies. For example, if we remove the DC component (0 Hz) from either signal, original or reconstructed, the difference between them becomes very large. Thus keeping the lower frequencies helps to preserve the structure and yields a minimal difference between the original and reconstructed signals.

We also computed the average SNR and MSE to quantitatively measure the performance of the super-resolution algorithms. Table 5.1 lists the average SNR and MSE values for BCI, SSR, MSR, and HSR. Notice that the proposed algorithm has the highest signal-to-noise ratio and the lowest mean-square error.

Table 5.1: Experimental Results

Dist. Metric    BCI      SSR      MSR      HSR
SNR (dB)        23.08    24.76    18.46    25.34
MSE             5.45     5.81     12.01    3.95

5.2. Results on Facial Images

We conducted experiments on the Surveillance Cameras Face database (SCFace) [9]. This database contains 4,160 static images of 130 subjects. The images were acquired in an uncontrolled indoor environment using five surveillance cameras of various qualities and ranges. For each of these cameras, one image per subject was acquired at three distances: 4.2 m, 2.6 m, and 1.0 m. Another set of images was acquired by a mug-shot camera: nine images per subject, ranging from left to right profile in equal steps of 22.5 degrees, including a frontal mug-shot image at 0 degrees. The database contains images in the visible and infrared spectrum; images from cameras of different quality mimic real-world conditions. The high-resolution images are used as a gallery (Fig. 4(a)), while the images captured by a visible-spectrum camera at a 4.2 m distance are used as a probe (Fig. 4(b)). Since the SCFace dataset consists of two types of images, high-resolution images and surveillance images, we used the high-resolution images to train the SR methods and the surveillance images as a probe.

We used the sparse-representation-based face recognition proposed by Wright et al. [21] to test the performance of the super-resolution algorithms. It has been shown that the performance of face recognition systems relies on the low-level information (high-frequency information) of the facial images [4]. The high-level information, i.e., the structure of the face, affects the performance of face recognition systems less than the low-level information, unless we compare human faces with other objects such as a monkey, a lion, or a car, where the structures between them
are very different. Most human faces have similar structures: two eyes, one nose, two eyebrows, etc., and in low-resolution facial images the edges (high-frequency information) around the eyes, eyebrows, nose, mouth, etc., are lost, which decreases the performance of face recognition systems [21]. Even for humans it is very difficult to recognize a person from a low-resolution image (see Fig. 4). Figure 8 depicts the given low-resolution images and the corresponding super-resolution outputs. The rank-1 face recognition accuracies for LR, BCI, SSR, MSR, and the proposed algorithm are 2%, 18%, 13%, 16%, and 21%, respectively. Overall, the face recognition accuracy is low, but compared to the face recognition performance on the low-resolution images, we can conclude that the super-resolution algorithms improve the recognition accuracy.

Figure 8: Depiction of the LR images and the output of the SR algorithms. (a) Low-resolution image, output of (b) BCI, (c) SSR, and (d) the proposed method. For this experiment we used a patch size of 10×10 pixels; increasing the patch size introduces ringing artifacts, which can be seen in the reconstructed image (d). Quantitatively, in terms of face recognition, the proposed super-resolution algorithm outperforms the other super-resolution algorithms.

6. CONCLUSION

We have proposed a novel approach to reconstruct super-resolution images for better visual quality as well as for face recognition purposes, which can also be applied to other fields. We presented a sparse-representation-based SR method to recover the high-frequency components of an SR image. We demonstrated the superiority of our method over existing state-of-the-art super-resolution methods for the task of face recognition in low-resolution images obtained from real-world surveillance data, as well as its better performance in terms of MSE and SNR.
We conclude that having more than one training image per subject can significantly improve the visual quality of the proposed super-resolution output, as well as the recognition accuracy.

REFERENCES

[1] M. Ao, D. Yi, Z. Lei, and S. Z. Li. Handbook of Remote Biometrics, chapter Face Recognition at a Distance: System Issues, pages 155–167. Springer London, 2009.
[2] S. Baker and T. Kanade. Hallucinating faces. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 83–88, Grenoble, France, March 28-30, 2002.
[3] H. Chang, D. Y. Yeung, and Y. Xiong. Super-resolution through neighbor embedding. In Proc. IEEE International Conference on Computer Vision and Pattern Recognition, pages 275–282, Washington, DC, 27 June-2 July 2004.
[4] G. Chen and W. Xie. Pattern recognition using dual-tree complex wavelet features and SVM. In Proc. Canadian Conference on Electrical and Computer Engineering, pages 2053–2056, 2008.
[5] A. Elayan, H. Ozkaramanli, and H. Demirel. Complex wavelet transform-based face recognition. EURASIP Journal on Advances in Signal Processing, 10(1):1–13, Jan. 2008.
[6] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar. Fast and robust multiframe super resolution. IEEE Transactions on Image Processing, 13(10):1327–1344, 2004.
[7] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Learning low-level vision. International Journal of Computer Vision, 40(1):25–47, 2000.
[8] W. T. Freeman and C. Liu. Markov Random Fields for Super-Resolution and Texture Synthesis, chapter 10, pages 1–30. MIT Press, 2011.
[9] M. Grgic, K. Delac, and S. Grgic. SCface - surveillance cameras face database. Multimedia Tools and Applications, 51(3):863–879, 2011.
[10] P. H. Hennings-Yeomans. Simultaneous super-resolution and recognition. PhD thesis, Carnegie Mellon University, Pittsburgh, PA, USA, 2008.
[11] J. T. Hsu, C. C. Yen, C. C. Li, M. Sun, B. Tian, and M. Kaygusuz. Application of wavelet-based POCS super-resolution for cardiovascular MRI image enhancement. In Proc. International Conference on Image and Graphics, pages 572–575, Hong Kong, China, Dec. 18-20, 2004.
[12] K. I. Kim and Y. Kwon. Single-image super-resolution using sparse regression and natural image prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1127–1133, 2010.
[13] C. Liu, H. Y. Shum, and C. S. Zhang. A two-step approach to hallucinating faces: global parametric model and local nonparametric model. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 192–198, San Diego, CA, USA, Jun. 20-26, 2005.
[14] J. Mairal, M. Elad, and G. Sapiro. Sparse representation for color image restoration. IEEE Transactions on Image Processing, 17(1):53–69, Jan. 2008.
[15] D. Martin, C. Fowlkes, D. Tal, and J. Malik.
A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. 8th International Conference on Computer Vision, volume 2, pages 416–423, July 2001.
[16] G. Pajares and J. M. Cruz. A wavelet-based image fusion tutorial. Pattern Recognition, 37(9):1855–1872, Sep. 2004.
[17] J. Sun, N. N. Zheng, H. Tao, and H. Shum. Image hallucination with primal sketch priors. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 729–736, Madison, WI, Jun. 18-20, 2003.
[18] J. Wang, S. Zhu, and Y. Gong. Resolution enhancement based on learning the sparse association of image patches. Pattern Recognition Letters, 31(1):1–10, Jan. 2010.
[19] Q. Wang, X. Tang, and H. Shum. Patch based blind image super-resolution. In Proc. IEEE International Conference on Computer Vision, Beijing, China, Oct. 17-20, 2005.
[20] R. Willet, I. Jermyn, R. Nowak, and J. Zerubia. Wavelet-based super resolution in astronomy. In Proc. Astronomical Data Analysis Software and Systems, volume 314, pages 107–116, Strasbourg, France, 2003.
[21] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, February 2009.
[22] J. Yang, J. Wright, T. Huang, and Y. Ma. Image super-resolution as sparse representation of raw image patches. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Anchorage, AK, Jun. 23-28, 2008.