International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 02 Issue: 01 | Apr-2015 www.irjet.net p-ISSN: 2395-0072
© 2015, IRJET.NET- All Rights Reserved Page 240
LOSSLESS IMAGE COMPRESSION AND DECOMPRESSION
USING HUFFMAN CODING
Anitha. S, Assistant Professor, GAC for Women, Tiruchendur, Tamil Nadu, India.
Abstract - This paper proposes a novel image compression method based on the Huffman encoding and decoding technique. Image files contain redundant and irrelevant information, and image compression addresses the problem of reducing the amount of data required to represent an image. Huffman encoding and decoding are easy to implement and reduce memory complexity. The major goal of this paper is to provide practical ways of exploring the Huffman coding technique using MATLAB.
Keywords - Image compression, Huffman encoding, Huffman decoding, Symbol, Source reduction
1. INTRODUCTION
An image commonly contains redundant information because neighboring pixels are correlated. The main objective of image compression [1][9] is the reduction of redundancy and irrelevancy: an image should be represented with as few redundancies as possible while keeping the resolution and visual quality of the compressed image as close as possible to the original. Decompression is the inverse process of compression, i.e. recovering the original image from the compressed image. The compression ratio is defined as the ratio of information units in the original image to those in the compressed image.
Compression exploits three kinds of redundancies:
(1) coding redundancy
(2) Spatial or temporal or inter pixel redundancy
(3) Psycho visual Redundancy
Compression techniques are further divided into predictive and transform coding. In transform coding, most of the image information is packed into a small number of transform coefficients; one of the best-known examples of a transform coding technique is the wavelet transform. Predictive coding reduces redundancy by predicting each pixel from its neighbors (a training set) and coding only the prediction error. Context-based compression algorithms use the predictive technique [2][3].
2. HISTORY
Morse code, invented in 1838 for use in telegraphy, is an early example of data compression: it uses shorter codewords for letters such as "e" and "t" that are more common in English. Modern work on data compression began in the late 1940s with the development of information theory. In 1949 Claude Shannon and Robert Fano devised a systematic way to assign codewords based on the probabilities of blocks; an optimal method for doing this was then found by David Huffman in 1951. In the 1980s a joint committee, known as the Joint Photographic Experts Group (JPEG), developed the first international compression standard for continuous-tone images. The JPEG algorithm includes several modes of operation. The main steps in JPEG are:
1. The color image is converted from RGB to YCbCr.
2. The resolution of the chroma components is reduced by a factor of 2 or 3, reflecting the fact that the eye is less sensitive to fine color detail than to brightness detail.
3. The image is divided into 8x8 blocks.
4. The DCT and quantization are applied to each block, and the result is finally compressed (a minimal sketch follows this list).
5. Decoding reverses these steps, except quantization, which is irreversible.
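A minimal MATLAB sketch of steps 1, 3 and 4 above, not the full JPEG codec: it assumes the Image Processing Toolbox is available, the test image name is only an example, and a single uniform quantizer step q stands in for the JPEG quantization tables.

rgb  = imread('peppers.png');           % any RGB test image
ycc  = rgb2ycbcr(rgb);                  % step 1: RGB -> YCbCr
y    = double(ycc(:,:,1)) - 128;        % luminance channel, zero-centred
q    = 16;                              % hypothetical uniform quantizer step
fun  = @(b) round(dct2(b.data)/q);      % step 4: 8x8 DCT plus quantization
coef = blockproc(y, [8 8], fun);        % step 3: process the image in 8x8 blocks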
3. CODING REDUNDANCY:
Let rk, for k = 1,2,…,L, be the discrete random variable representing the gray levels of an image, with related probability
Pr(rk) = nk/n, k = 1,2,…,L,
where nk is the number of times the kth gray level occurs in the image and n is the total number of pixels in the image. If l(rk) is the number of bits used
to represent the gray level rk, then the average number of bits required to represent each pixel [4] is
Lavg = Σ_{k=1}^{L} l(rk) Pr(rk)
That is, the average code-word length is obtained by summing, over all gray levels, the product of the number of bits used to represent a gray level and the probability of that gray level occurring in the image.
Table 1: Coding redundancy

rk    Pr(rk)    Code 1   l1(rk)   Code 2   l2(rk)
r1    0.1875    00       2        011      3
r2    0.5       01       2        1        1
r3    0.1250    10       2        010      3
r4    0.1875    11       2        00       2

Lavg = 2 for Code 1; Lavg = 1.8125 for Code 2.
Coding redundancy is always present when the gray levels of an image are coded using a natural binary code. Table 1 shows both a fixed-length and a variable-length encoding of a four-level image. Column two gives the gray-level distribution. The 2-bit binary encoding (Code 1) is shown in column three; it has an average length of 2 bits. The average number of bits required by Code 2 (column five) is
Lavg = Σ_{k=1}^{4} l2(rk) Pr(rk) = 3(0.1875) + 1(0.5) + 3(0.125) + 2(0.1875) = 1.8125
The corresponding compression ratio is
Cr = 2/1.8125 = 1.103
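These numbers can be checked directly in MATLAB; the snippet below is a plain arithmetic check of the Table 1 figures, not part of the compression code.

p    = [0.1875 0.5 0.125 0.1875];   % Pr(rk) for r1..r4 from Table 1
l2   = [3 1 3 2];                   % code-word lengths of Code 2
Lavg = sum(l2 .* p)                 % 1.8125
Cr   = 2 / Lavg                     % 1.1034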
4. INTER PIXEL REDUNDANCY:
To reduce inter-pixel redundancy, the 2-D pixel array normally used for human viewing and interpretation must be transformed into a more efficient format. For example, the differences between adjacent pixels can be used to represent an image; this is called mapping. A simple mapping procedure is lossless predictive coding, which eliminates the inter-pixel redundancy of closely spaced pixels by extracting and coding only the new information in each pixel. The new information of a pixel is defined as the difference between the actual and the predicted value of that pixel.
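A minimal sketch of this difference mapping for a grey-level image, where the previous pixel in each row serves as the predictor; the file name is only an example of a MATLAB test image.

f = imread('cameraman.tif');       % any 8-bit grey test image
d = diff(double(f), 1, 2);         % new information: f(x,y) - f(x,y-1)
e = [double(f(:,1)) d];            % keep the first column so the mapping is invertible
g = uint8(cumsum(e, 2));           % inverse mapping recovers the image
isequal(g, f)                      % 1 => lossless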
Fig 1: sample (a)   Fig 2: sample (b)
Histograms of Fig 1 and Fig 2
These observations highlight the fact that variable-length coding alone does not exploit the structural relationships between the aligned matches in Fig 2, even though the pixel-to-pixel correlations are more prominent in Fig 1. Because the values of the pixels in either image can be predicted from their neighbors, the information carried by individual pixels is relatively small; much of the visual contribution of a single pixel is redundant, since it could be predicted from its neighbors.
5. PSYCHO VISUAL REDUNDANCY:
In general, an observer searches for distinguishing features
such as edges or textual regions and mentally combines
them into recognizable groupings. The brain then correlates
these groupings with prior knowledge in order to complete
the image interpretation process. Thus the eye does not respond with equal sensitivity to all visual information. Certain
information simply has less relative importance than other
information in normal visual processing. This information is
said to be psycho visually redundant. Unlike coding and
inter pixel redundancy psycho visual redundancy is
associated with real or quantifiable visual information. Its
elimination is desirable because the information itself is not
essential for normal visual processing. Since the elimination
of psycho visually redundant data results in a loss of
quantitative information, it is called quantization.
Fig 4: original   Fig 5: 16 gray levels   Fig 6: 16 gray levels / random noise
Fig 4 shows a monochrome image with 256 gray levels. Fig 5 is the same image after uniform quantization to four bits, or 16 possible levels. Fig 6 uses improved gray-scale (IGS) quantization: it recognizes the eye's inherent sensitivity to edges and breaks them up by adding to each pixel a pseudo-random number, generated from the low-order bits of neighboring pixels, before quantizing the result. Fig 6 removes a great deal of psychovisual redundancy with little impact on perceived image quality.
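A minimal sketch of the uniform quantization used for Fig 5 (the IGS dithering of Fig 6 is not reproduced here); it assumes an 8-bit grey image and uses a MATLAB test image name only as an example.

f  = imread('cameraman.tif');
q4 = bitshift(f, -4);          % keep the 4 most significant bits (16 levels)
g  = bitshift(q4, 4);          % map back to the 0-255 range for display
imshow(g)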
6. COMPRESSION RATIO:
The compression ratio is calculated by the formula
Cr = n1/n2
where n1 is the total number of bits in the original image and n2 is the total number of bits in the encoded image. Compression consists of encoding and decoding stages. Encoding creates unique symbols that specify the original image; the output of the encoder is the compressed image. The input to the decoder is the compressed image, and the decoder stage recovers the original image, ideally without any error. In terms of storage,
Cr = bytes(f1)/bytes(f2)
where f1 is the total number of bytes in the original image and f2 is the total number of bytes in the encoded image.
7. ROOT MEAN SQUARE ERROR:
To analyze a compressed image, it must be fed into a decoder; f'(x,y) denotes the decompressed image generated by the decoder. If the decoded image is identical to the original, the compression is lossless; otherwise it is lossy. The error between the original and the decompressed image is calculated using the formula
e(x,y)=f’(x,y)-f(x,y)
so the total error between the two images is
Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f'(x,y) - f(x,y)]
and the root-mean-square (rms) error, which measures the error between f'(x,y) and the original image averaged over the M×N array, is
erms = [ (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f'(x,y) - f(x,y)]^2 ]^{1/2}
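A minimal MATLAB sketch of this formula for two grey images f and g of equal size, with g standing for the decompressed image f'; it corresponds to the quantity reported by the compare function used later in Section 17.

e    = double(g) - double(f);           % pixel-wise error e(x,y)
rmse = sqrt(sum(e(:).^2) / numel(f))    % root-mean-square error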
8. HUFFMAN CODES:
Huffman codes [5] are used when the source has a small number of possible symbols, and all the source symbols are coded one at a time.
STEPS:
1. The symbols are sorted by their probability values in decreasing order.
2. While more than one node remains:
* a new node is formed by merging the two nodes with the smallest probabilities;
* the values '0' and '1' are assigned to the two branches of each merged pair.
3. Starting from the root, the code of each symbol is read off according to its location in the tree.
Table 2: Huffman source reduction

Symbol   Probability   Reduction 1   Reduction 2
a2       0.5           0.5           0.5
a4       0.1875        0.3125        0.5
a1       0.1875        0.1875
a3       0.125
In Table 2 the source symbols are arranged in decreasing order of their probability values. To develop the first source reduction, the bottom two probabilities, 0.125 and 0.1875, are combined to form a compound symbol with probability 0.3125. The compound symbol and its probability are placed in the first source-reduction column, and the probabilities are again sorted from most probable to least probable. This process is repeated until a reduced source with only two symbols remains.
Table 3: Code assignment procedure

Symbol   Probability   Code   Reduction 1      Reduction 2
a2       0.5           1      0.5     1        0.5   1
a4       0.1875        00     0.3125  01       0.5   0
a1       0.1875        011    0.1875  00
a3       0.125         010
In the Huffman’s second step consists[ 5] of the reduced
source ,which are from the very small source to large one.
The reduced source probability symbol 0.5 on the right is
generated by merging the two reduced probability symbols
in the left. The symbols 0 and 1 is appended to each code for
differentiate with each other. This is then repeated for each
reduced source until the original source is reached.
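The reduction and code-assignment procedure of Tables 2 and 3 can be sketched in a few lines of MATLAB. This is an illustrative sketch only, not the toolbox huffman function used in Section 17; the 0/1 labelling it produces may differ from Table 3, but the code lengths (1, 2, 3, 3 bits) and hence Lavg = 1.8125 are the same.

p = [0.5 0.1875 0.1875 0.125];        % probabilities of a2, a4, a1, a3
codes = repmat({''}, 1, numel(p));    % code string accumulated per original symbol
groups = num2cell(1:numel(p));        % the original symbols contained in each node
while numel(p) > 1
    [p, idx] = sort(p, 'descend');    % keep the nodes in decreasing probability order
    groups = groups(idx);
    for s = groups{end-1}, codes{s} = ['0' codes{s}]; end   % prepend 0 in one merged node
    for s = groups{end},   codes{s} = ['1' codes{s}]; end   % prepend 1 in the other
    p = [p(1:end-2), p(end-1) + p(end)];                    % merge the two smallest nodes
    groups = [groups(1:end-2), {[groups{end-1} groups{end}]}];
end
codes   % e.g. {'0','11','100','101'} for a2, a4, a1, a3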
A Huffman code tree [6] is built from the ordered list, and the code table can be derived by traversing the tree. The code for each symbol is obtained by following the path from the root to the symbol's leaf, appending '1' for each right branch and '0' for each left branch. For example, a symbol reached by branching left twice and then right is represented by the pattern '001'. The figure below depicts codes for nodes of a sample tree.
        *
       / \
     (0) (1)
         / \
      (10) (11)
            / \
        (100) (101)
Once a Huffman tree is built, it can be converted into canonical Huffman codes, which require less information to rebuild at the decoder.
9. DEVELOPMENT OF HUFFMAN CODING AND
DECODING ALGORITHM:
First, the color image is read and converted to grey level. The number of unique symbols needed to represent the image is then determined, together with the probability of each symbol. A symbol table is created, assigning the shortest codes to the most probable symbols; this is Huffman encoding. Huffman decoding is the reverse process and is used to decompress the image. Finally, the quality of the reconstructed image is measured by the root-mean-square error: if it is zero there is no difference between the original and the decompressed image, otherwise some difference remains.
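A minimal end-to-end sketch of this procedure; it assumes the mat2huff, huff2mat and compare functions from the toolbox accompanying [1] are on the MATLAB path, and the input file name is only an example.

rgb  = imread('peppers.png');     % any RGB test image
f    = rgb2gray(rgb);             % colour image converted to grey level
c    = mat2huff(f);               % Huffman encoding of the grey image
g    = huff2mat(c);               % Huffman decoding (decompression)
rmse = compare(f, g)              % 0 => lossless reconstruction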
10. ADAPTIVE HUFFMAN CODING
Adaptive Huffman coding (dynamic Huffman coding) is based on Huffman coding, but the code is built with no prior knowledge of the source distribution. One-pass encoding is used in adaptive Huffman coding [7]: source symbols are encoded in real time, but the scheme is more sensitive to errors, since a single loss can corrupt the whole code.
11. IMAGE COMPRESSION TECHNIQUES
Image compression techniques are broadly classified into two categories, depending on whether or not an exact replica of the original image can be reconstructed from the compressed image. These are:
1. Lossless techniques
2. Lossy techniques
11.1 Lossless compression techniques
In lossless compression techniques, the original image can be perfectly recovered from the compressed (encoded) image. These techniques are also called noiseless, since they do not add noise to the signal (image), and are also known as entropy coding, since they use statistical/decomposition techniques to eliminate or minimize redundancy. Lossless compression is used mainly for applications with stringent requirements, such as medical imaging. The following techniques are included in lossless compression:
1. Run length encoding
2. Huffman encoding
3. Dictionary Techniques
a)LZ77
b)LZ78
c)LZW
4. Arithmetic coding
5. BitPlane coding
11.2 Lossy compression technique
Lossy schemes provide much higher compression ratios
than lossless schemes. Lossy schemes are widely used since
the quality of the reconstructed images is adequate for most
applications. By this scheme, the decompressed image is not
identical to the original image, but reasonably close to it.
Following techniques are included in lossy compression:
1. Lossy predictive coding
a) DPCM
b) ADPCM
c) Delta modulation
2. Transform coding
a) DFT
b) DCT
c) Haar
d) Hadamard
3. Subband coding
4. Wavelet coding
5. Fractal coding
6. Vector quantization
12. LZ77 :
In the LZ77 approach, the dictionary is simply a portion of the previously encoded sequence. The encoder examines the input sequence through a sliding window that consists of two parts: a search buffer containing a portion of the recently encoded sequence, and a look-ahead buffer containing the next portion of the sequence to be encoded [8]. To encode, a pointer is moved back through the search buffer until it reaches a symbol that matches the first symbol in the look-ahead buffer; the encoder then examines the symbols following the symbol at the pointer location to see if they match consecutive symbols in the look-ahead buffer. The number of consecutive symbols in the search buffer that match consecutive symbols in the look-ahead buffer, starting with the first symbol, is called the length of the match, and the encoder searches the search buffer for the longest match. Once the longest match has been found, the encoder encodes it with a triple <o, l, c>, where o is the offset, l is the length of the match, and c is the codeword corresponding to the symbol in the look-ahead buffer that follows the match.
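A toy MATLAB sketch of this triple encoding, to be saved as a file such as lz77_encode.m; the function name and interface are hypothetical, an unbounded look-ahead buffer is assumed, and only the search-buffer size W is modelled, so this is an illustration rather than an optimized coder.

function triples = lz77_encode(s, W)
    % Encode character vector s into LZ77 triples <offset, length, next symbol>.
    triples = {};  i = 1;  n = numel(s);
    while i <= n
        best_len = 0;  best_off = 0;
        for off = 1:min(W, i-1)                          % candidate start in the search buffer
            len = 0;
            while i+len <= n-1 && s(i-off+len) == s(i+len)
                len = len + 1;                           % extend the match symbol by symbol
            end
            if len > best_len, best_len = len; best_off = off; end
        end
        triples{end+1} = [best_off, best_len, double(s(i+best_len))]; %#ok<AGROW>
        i = i + best_len + 1;                            % move past the match and the literal
    end
end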
13. LZ78:
LZ77 has the drawback of a finite view of the past. The LZ78 algorithm solves this problem by dropping the reliance on the search buffer and keeping an explicit dictionary, which has to be built identically at both the encoder and the decoder. The input is coded as a pair <i, c>, with i being the index of the dictionary entry that was the longest match to the input, and c being the code for the character in the input that follows the matched portion.
14. LZW :
LZW encoding is an example of a category of algorithms called dictionary-based encoding. The idea is to create a dictionary (a table) of strings used during the communication session. If both the sender and the receiver have a copy of the dictionary, then previously encountered strings can be substituted by their index in the dictionary to reduce the amount of information transmitted. During compression there are two concurrent events: building an indexed dictionary and compressing a string of symbols. The
algorithm extracts the smallest substring that cannot be
found in the dictionary from the remaining uncompressed
string. It then stores a copy of this substring in the
dictionary as a new entry and assigns it an index value.
Compression occurs when the substring, except for the last
character, is replaced with the index found in the dictionary.
The process then inserts the index and the last character of
the substring into the compressed string. Decompression is
the inverse of the compression process. The process extracts
the substrings from the compressed string and tries to
replace the indexes with the corresponding entry in the
dictionary, which is empty at first and built up gradually.
The idea is that when an index is received, there is already
an entry in the dictionary corresponding to that index.
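A minimal sketch of the LZW encoding loop described above, applied to a character string rather than image data and to be saved as a file such as lzw_encode.m; the function name is hypothetical, and seeding the dictionary only with the symbols present in the message is a simplification of the usual 256-entry initial dictionary.

function codes = lzw_encode(msg)
    alphabet = unique(msg);                       % seed the dictionary with the message symbols
    dict = containers.Map();                      % substring -> index
    for k = 1:numel(alphabet), dict(alphabet(k)) = k; end
    next = numel(alphabet) + 1;
    w = '';  codes = [];
    for c = msg
        wc = [w c];
        if isKey(dict, wc)
            w = wc;                               % keep growing the current match
        else
            codes(end+1) = dict(w);               %#ok<AGROW> emit index of longest match
            dict(wc) = next;  next = next + 1;    % add the new substring to the dictionary
            w = c;
        end
    end
    if ~isempty(w), codes(end+1) = dict(w); end   % flush the final match
end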
15. ARITHMETIC ENCODING:
Arithmetic coding yields better compression because it encodes a whole message as a single codeword rather than coding each symbol separately. Most of the computations in arithmetic coding use floating-point arithmetic, whereas most hardware can only support finite precision [10], and as each new symbol is coded, the precision required to represent the shrinking range grows. Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a normative part of the ITU-T/ISO/IEC H.264/AVC standard. By combining an adaptive binary arithmetic coding technique with context modeling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a low-complexity method for binary arithmetic coding and probability estimation that is well suited for efficient hardware and software implementations.
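A minimal numeric sketch of the interval-narrowing idea behind arithmetic coding, reusing the Table 1 probabilities; the four-symbol message is illustrative only, and this is not an implementation of CABAC.

p   = [0.1875 0.5 0.125 0.1875];     % Pr(rk) for r1..r4 from Table 1
cum = [0 cumsum(p)];                 % cumulative interval boundaries
msg = [2 2 1 4];                     % a short symbol sequence (indices into p)
lo = 0;  hi = 1;
for s = msg
    w  = hi - lo;                    % the interval width shrinks at each step
    hi = lo + w*cum(s+1);
    lo = lo + w*cum(s);
end
tag = (lo + hi)/2                    % any number in [lo, hi) identifies the message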
16. FILE FORMATS WITH LOSSLESS COMPRESSION
• TIFF, Tagged Image File Format: a flexible format, often supporting up to 16 bits/pixel in 4 channels. It can use several different compression methods, e.g. Huffman and LZW [11].
• GIF, Graphics Interchange Format: supports 8 bits/pixel in one channel, that is, only 256 colors. It uses LZW compression and supports animations.
• PNG, Portable Network Graphics: supports up to 16 bits/pixel in 4 channels (RGB + transparency). It uses Deflate compression (a combination of LZ77 and Huffman coding) and works well when inter-pixel redundancy is present.
17. EXPERIMENTAL RESULTS
Consider a simple 4x4 image whose histogram models the symbol probabilities in Table 1. The following command-line sequence generates such an image in MATLAB [12][13].
f2=uint8([1 4 6 8;1 6 3 4;2 5 1 4;1 2 3 4])
f2 =
1 4 6 8
1 6 3 4
2 5 1 4
1 2 3 4
>> whos('f2')
Name Size Bytes Class Attributes
f2 4x4 16 uint8
Each pixel in f2 is an 8-bit byte, so 16 bytes are used to represent the entire image. The function huffman is used to reduce the amount of memory required to represent the image [14][15].
>>c=huffman(hist(double(f2(:)),4))
c =
'01'
'1'
'001'
'000'
To compress f2, code c must be applied at the bit level, with several encoded pixels packed into a single byte. The function mat2huff returns a structure, c1, requiring 528 bytes of memory.
c1=mat2huff(f2)
c1=
size: [4 4]
min: 32769
hist: [4 2 2 4 1 2 0 1]
code: [3639 16066 27296]
>> whos('c1')
Name Size Bytes Class Attributes
c1 1x1 528 struct
The 16 8-bit pixels of f2 are compressed into three 16-bit words; the elements in the code field of c1 are:
hcode=c1.code;
>> dec2bin(double(hcode))
ans =
000111000110111
011111011000010
110101010100000
The function dec2bin has been employed to display the individual bits of c1.code.
>> f=imread('E:ph.dlena512.bmp');
>> c=mat2huff(f);
>> cr1=imratio(f,c)
cr1 =1.0692
>> ntrop(f)
ans = 7.4455
>> save squeezeimage c;
>> load squeezeimage;
>> g=huff2mat(c);
rmse=compare(f,g)
rmse = 0
The overall encoding and decoding process is information preserving; the root-mean-square error between the original and decompressed images is 0.
Fig 7:Before Huffman Fig 8:After Huffman
>> f=imread('E:ph.dcman.tif');
>> c=mat2huff(f);
>>c1=huff2mat(c);
Bits Per Pixel = c/f = 0.8959
Fig 9 : Before Huffman Fig 10: After Huffman
18. RESULT ANALYSIS
The experimental results of the implemented algorithms, Huffman and arithmetic coding, for compression ratio and execution time are depicted in Table 4. As this table shows, on one hand, the compression ratio of arithmetic coding for different image sizes is higher than that of Huffman coding; on the other hand, arithmetic coding needs more execution time than Huffman coding. This means that the high compression ratio of the arithmetic algorithm is not free: it needs more resources than the Huffman algorithm.
Table 4. Average of compression results on test image set.
Another behavior that can be seen in Table 4 is that, as the image size increases from 128x128 to 2048x2048, the compression ratio of arithmetic coding improves more than that of Huffman coding. For instance, the compression ratio of the Huffman algorithm for image sizes of 1024x1024 and 2048x2048 is 5.64 and 6.37, respectively, while for arithmetic coding it is 7.73 and 12.02, respectively. Figure 11 depicts a comparison of the compression ratio for the arithmetic and Huffman algorithms; in other words, the figure is another representation of the results presented in Table 4.
Figure 11. Comparison of compression ratio for Huffman and arithmetic algorithms using different image sizes.
19. CONCLUSION:
This image compression method is well suited for gray-scale bitmap images. Huffman coding suffers from the fact that the decompressor needs some knowledge of the probabilities of the symbols in the compressed file, which can require more bits to encode the file. This work may be extended to achieve a better compression rate than other compression techniques. The performance of the proposed compression technique using hashing and Huffman coding was evaluated on GIF and TIFF formats. This technique can be applied to the luminance and chrominance of color images to obtain better compression.
REFERENCES
[1] Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins, "Digital Image Processing Using MATLAB", Second Edition, McGraw Hill Education Private Limited, pp. 378-380, 2013.
[2] Ming Yang and Nikolaos Bourbakis, "An overview of lossless digital image compression techniques", Circuits & Systems, 2005, 48th Midwest Symposium, Vol. 2, IEEE, pp. 1099-1102, 7-10 Aug. 2005.
[3] Khalid Sayood, "Introduction to Data Compression", Third Edition, Morgan Kaufmann Publishers, pp. 121-125, 2006.
[4] Jagadish H. Pujar, Lohit M. Kadlaskar, "A new lossless method of image compression and decompression using Huffman coding techniques", Journal of Theoretical and Applied Information Technology.
[5] Dr. T. Bhaskara Reddy, Miss. Hema Suresh Yaragunti, Dr. S. Kiran, Mrs. T. Anuradha, "A novel approach of lossless image compression using hashing and Huffman coding", International Journal of Engineering Research and Technology, Vol. 2, Issue 3, March 2013.
[6] Mridul Kumar Mathur, Seema Loonker, Dr. Dheeraj Saxena, "Lossless Huffman coding technique for image compression and reconstruction using binary trees", IJCTA, Jan-Feb 2012.
[7] Dr. T. Bhaskar Reddy, S. Mahaboob Basha, Dr. B. Sathyanarayana, "Image Compression Using Binary Plane Technique", Library Progress, Vol. 27, No. 1, June 2007, pp. 59-64.
[8] A. Said, "Arithmetic coding", in Lossless Compression Handbook, K. Sayood, Ed., San Diego, CA, USA: Academic, 2003.
[9] Anil K. Jain, "Fundamentals of Digital Image Processing", PHI Publication, p. 483, 2001.
[10] Mamta Sharma, "Compression Using Huffman Coding", IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 5, May 2010.
[11] Steven W. Smith, "The Scientist and Engineer's Guide to Digital Signal Processing".
[12] G. C. Chang, Y. D. Lin, "An Efficient Lossless ECG Compression Method Using Delta Coding and Optimal Selective Huffman Coding", IFMBE Proceedings 2010, Volume 31, Part 6, pp. 1327-1330, DOI: 10.1007/978-3-642-14515-5_338.
[13] Wei-Yi Wei, "An Introduction to Image Compression".
[14] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion", in IRE Nat. Conv. Rec., Vol. 4, pp. 142-163, Mar. 1959.
BIOGRAPHIES
ANITHA. S received her Bachelor's degree in Computer Science from Govindammal Aditanar College for Women, Tiruchendur, in 2006. She did her Master of Computer Applications at Noorul Islam College of Engineering, Nagercoil, and completed her M.Phil at MS University, Tirunelveli. At present she is working as an Assistant Professor in Govindammal Aditanar College for Women, Tiruchendur.
Mais conteúdo relacionado

Mais procurados

Presentation of Lossy compression
Presentation of Lossy compressionPresentation of Lossy compression
Presentation of Lossy compressionOmar Ghazi
 
Interpixel redundancy
Interpixel redundancyInterpixel redundancy
Interpixel redundancyNaveen Kumar
 
Seminar Report on image compression
Seminar Report on image compressionSeminar Report on image compression
Seminar Report on image compressionPradip Kumar
 
Image compression
Image compressionImage compression
Image compressionAle Johnsan
 
Introduction to Image Compression
Introduction to Image CompressionIntroduction to Image Compression
Introduction to Image CompressionKalyan Acharjya
 
image basics and image compression
image basics and image compressionimage basics and image compression
image basics and image compressionmurugan hari
 
Digital image compression techniques
Digital image compression techniquesDigital image compression techniques
Digital image compression techniqueseSAT Publishing House
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABIJEEE
 
A Secure Color Image Steganography in Transform Domain
A Secure Color Image Steganography in Transform Domain A Secure Color Image Steganography in Transform Domain
A Secure Color Image Steganography in Transform Domain ijcisjournal
 
Chapter 8 image compression
Chapter 8 image compressionChapter 8 image compression
Chapter 8 image compressionasodariyabhavesh
 
Image compression
Image compressionImage compression
Image compressionHuda Seyam
 

Mais procurados (20)

Presentation of Lossy compression
Presentation of Lossy compressionPresentation of Lossy compression
Presentation of Lossy compression
 
Image compression .
Image compression .Image compression .
Image compression .
 
Interpixel redundancy
Interpixel redundancyInterpixel redundancy
Interpixel redundancy
 
Presentation on Image Compression
Presentation on Image Compression Presentation on Image Compression
Presentation on Image Compression
 
Seminar Report on image compression
Seminar Report on image compressionSeminar Report on image compression
Seminar Report on image compression
 
Image compression
Image compressionImage compression
Image compression
 
Introduction to Image Compression
Introduction to Image CompressionIntroduction to Image Compression
Introduction to Image Compression
 
Image compression
Image compressionImage compression
Image compression
 
Image compression models
Image compression modelsImage compression models
Image compression models
 
image basics and image compression
image basics and image compressionimage basics and image compression
image basics and image compression
 
Digital image compression techniques
Digital image compression techniquesDigital image compression techniques
Digital image compression techniques
 
Data Redundacy
Data RedundacyData Redundacy
Data Redundacy
 
Image compression
Image compressionImage compression
Image compression
 
Design of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLABDesign of Image Compression Algorithm using MATLAB
Design of Image Compression Algorithm using MATLAB
 
A Secure Color Image Steganography in Transform Domain
A Secure Color Image Steganography in Transform Domain A Secure Color Image Steganography in Transform Domain
A Secure Color Image Steganography in Transform Domain
 
Chapter 8 image compression
Chapter 8 image compressionChapter 8 image compression
Chapter 8 image compression
 
Bit plane coding
Bit plane codingBit plane coding
Bit plane coding
 
Image compression
Image compressionImage compression
Image compression
 
Image compression and jpeg
Image compression and jpegImage compression and jpeg
Image compression and jpeg
 
Image compression
Image compressionImage compression
Image compression
 

Semelhante a IRJET-Lossless Image compression and decompression using Huffman coding

Image compression 14_04_2020 (1)
Image compression 14_04_2020 (1)Image compression 14_04_2020 (1)
Image compression 14_04_2020 (1)Joel P
 
Lec_8_Image Compression.pdf
Lec_8_Image Compression.pdfLec_8_Image Compression.pdf
Lec_8_Image Compression.pdfnagwaAboElenein
 
Matlab Based Image Compression Using Various Algorithm
Matlab Based Image Compression Using Various AlgorithmMatlab Based Image Compression Using Various Algorithm
Matlab Based Image Compression Using Various Algorithmijtsrd
 
Low Complex-Hierarchical Coding Compression Approach for Arial Images
Low Complex-Hierarchical Coding Compression Approach for Arial ImagesLow Complex-Hierarchical Coding Compression Approach for Arial Images
Low Complex-Hierarchical Coding Compression Approach for Arial ImagesIDES Editor
 
Blank Background Image Lossless Compression Technique
Blank Background Image Lossless Compression TechniqueBlank Background Image Lossless Compression Technique
Blank Background Image Lossless Compression TechniqueCSCJournals
 
notes_Image Compression.ppt
notes_Image Compression.pptnotes_Image Compression.ppt
notes_Image Compression.pptHarisMasood20
 
notes_Image Compression.ppt
notes_Image Compression.pptnotes_Image Compression.ppt
notes_Image Compression.pptHarisMasood20
 
A Critical Review of Well Known Method For Image Compression
A Critical Review of Well Known Method For Image CompressionA Critical Review of Well Known Method For Image Compression
A Critical Review of Well Known Method For Image CompressionEditor IJMTER
 
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSION
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSIONA REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSION
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSIONNancy Ideker
 
Image Quality Feature Based Detection Algorithm for Forgery in Images
Image Quality Feature Based Detection Algorithm for Forgery in Images  Image Quality Feature Based Detection Algorithm for Forgery in Images
Image Quality Feature Based Detection Algorithm for Forgery in Images ijcga
 
AN ENHANCED CHAOTIC IMAGE ENCRYPTION
AN ENHANCED CHAOTIC IMAGE ENCRYPTIONAN ENHANCED CHAOTIC IMAGE ENCRYPTION
AN ENHANCED CHAOTIC IMAGE ENCRYPTIONijcseit
 
Performance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithmPerformance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithmsipij
 
Survey paper on image compression techniques
Survey paper on image compression techniquesSurvey paper on image compression techniques
Survey paper on image compression techniquesIRJET Journal
 
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...cscpconf
 
notes_Image Compression_edited.ppt
notes_Image Compression_edited.pptnotes_Image Compression_edited.ppt
notes_Image Compression_edited.pptHarisMasood20
 
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHM
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHMPIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHM
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHMijcsit
 
Comparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression TechniquesComparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression Techniquessipij
 
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...CSCJournals
 
Digital image compression techniques
Digital image compression techniquesDigital image compression techniques
Digital image compression techniqueseSAT Journals
 

Semelhante a IRJET-Lossless Image compression and decompression using Huffman coding (20)

Image compression 14_04_2020 (1)
Image compression 14_04_2020 (1)Image compression 14_04_2020 (1)
Image compression 14_04_2020 (1)
 
Lec_8_Image Compression.pdf
Lec_8_Image Compression.pdfLec_8_Image Compression.pdf
Lec_8_Image Compression.pdf
 
Matlab Based Image Compression Using Various Algorithm
Matlab Based Image Compression Using Various AlgorithmMatlab Based Image Compression Using Various Algorithm
Matlab Based Image Compression Using Various Algorithm
 
B070306010
B070306010B070306010
B070306010
 
Low Complex-Hierarchical Coding Compression Approach for Arial Images
Low Complex-Hierarchical Coding Compression Approach for Arial ImagesLow Complex-Hierarchical Coding Compression Approach for Arial Images
Low Complex-Hierarchical Coding Compression Approach for Arial Images
 
Blank Background Image Lossless Compression Technique
Blank Background Image Lossless Compression TechniqueBlank Background Image Lossless Compression Technique
Blank Background Image Lossless Compression Technique
 
notes_Image Compression.ppt
notes_Image Compression.pptnotes_Image Compression.ppt
notes_Image Compression.ppt
 
notes_Image Compression.ppt
notes_Image Compression.pptnotes_Image Compression.ppt
notes_Image Compression.ppt
 
A Critical Review of Well Known Method For Image Compression
A Critical Review of Well Known Method For Image CompressionA Critical Review of Well Known Method For Image Compression
A Critical Review of Well Known Method For Image Compression
 
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSION
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSIONA REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSION
A REVIEW ON LATEST TECHNIQUES OF IMAGE COMPRESSION
 
Image Quality Feature Based Detection Algorithm for Forgery in Images
Image Quality Feature Based Detection Algorithm for Forgery in Images  Image Quality Feature Based Detection Algorithm for Forgery in Images
Image Quality Feature Based Detection Algorithm for Forgery in Images
 
AN ENHANCED CHAOTIC IMAGE ENCRYPTION
AN ENHANCED CHAOTIC IMAGE ENCRYPTIONAN ENHANCED CHAOTIC IMAGE ENCRYPTION
AN ENHANCED CHAOTIC IMAGE ENCRYPTION
 
Performance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithmPerformance analysis of image compression using fuzzy logic algorithm
Performance analysis of image compression using fuzzy logic algorithm
 
Survey paper on image compression techniques
Survey paper on image compression techniquesSurvey paper on image compression techniques
Survey paper on image compression techniques
 
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
PERFORMANCE EVALUATION OF JPEG IMAGE COMPRESSION USING SYMBOL REDUCTION TECHN...
 
notes_Image Compression_edited.ppt
notes_Image Compression_edited.pptnotes_Image Compression_edited.ppt
notes_Image Compression_edited.ppt
 
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHM
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHMPIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHM
PIXEL SIZE REDUCTION LOSS-LESS IMAGE COMPRESSION ALGORITHM
 
Comparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression TechniquesComparison of different Fingerprint Compression Techniques
Comparison of different Fingerprint Compression Techniques
 
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...
An Image Steganography Algorithm Using Huffman and Interpixel Difference Enco...
 
Digital image compression techniques
Digital image compression techniquesDigital image compression techniques
Digital image compression techniques
 

Mais de IRJET Journal

TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...IRJET Journal
 
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURESTUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTUREIRJET Journal
 
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...IRJET Journal
 
Effect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsEffect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsIRJET Journal
 
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...IRJET Journal
 
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...IRJET Journal
 
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...IRJET Journal
 
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...IRJET Journal
 
A REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASA REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASIRJET Journal
 
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...IRJET Journal
 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProIRJET Journal
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...IRJET Journal
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemIRJET Journal
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesIRJET Journal
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web applicationIRJET Journal
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...IRJET Journal
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.IRJET Journal
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...IRJET Journal
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignIRJET Journal
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...IRJET Journal
 

Mais de IRJET Journal (20)

TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
TUNNELING IN HIMALAYAS WITH NATM METHOD: A SPECIAL REFERENCES TO SUNGAL TUNNE...
 
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURESTUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
STUDY THE EFFECT OF RESPONSE REDUCTION FACTOR ON RC FRAMED STRUCTURE
 
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
A COMPARATIVE ANALYSIS OF RCC ELEMENT OF SLAB WITH STARK STEEL (HYSD STEEL) A...
 
Effect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil CharacteristicsEffect of Camber and Angles of Attack on Airfoil Characteristics
Effect of Camber and Angles of Attack on Airfoil Characteristics
 
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
A Review on the Progress and Challenges of Aluminum-Based Metal Matrix Compos...
 
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
Dynamic Urban Transit Optimization: A Graph Neural Network Approach for Real-...
 
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
Structural Analysis and Design of Multi-Storey Symmetric and Asymmetric Shape...
 
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
A Review of “Seismic Response of RC Structures Having Plan and Vertical Irreg...
 
A REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADASA REVIEW ON MACHINE LEARNING IN ADAS
A REVIEW ON MACHINE LEARNING IN ADAS
 
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
Long Term Trend Analysis of Precipitation and Temperature for Asosa district,...
 
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD ProP.E.B. Framed Structure Design and Analysis Using STAAD Pro
P.E.B. Framed Structure Design and Analysis Using STAAD Pro
 
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
A Review on Innovative Fiber Integration for Enhanced Reinforcement of Concre...
 
Survey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare SystemSurvey Paper on Cloud-Based Secured Healthcare System
Survey Paper on Cloud-Based Secured Healthcare System
 
Review on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridgesReview on studies and research on widening of existing concrete bridges
Review on studies and research on widening of existing concrete bridges
 
React based fullstack edtech web application
React based fullstack edtech web applicationReact based fullstack edtech web application
React based fullstack edtech web application
 
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
A Comprehensive Review of Integrating IoT and Blockchain Technologies in the ...
 
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
A REVIEW ON THE PERFORMANCE OF COCONUT FIBRE REINFORCED CONCRETE.
 
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
Optimizing Business Management Process Workflows: The Dynamic Influence of Mi...
 
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic DesignMultistoried and Multi Bay Steel Building Frame by using Seismic Design
Multistoried and Multi Bay Steel Building Frame by using Seismic Design
 
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
Cost Optimization of Construction Using Plastic Waste as a Sustainable Constr...
 

Último

Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...Booking open Available Pune Call Girls Koregaon Park  6297143586 Call Hot Ind...
Booking open Available Pune Call Girls Koregaon Park 6297143586 Call Hot Ind...Call Girls in Nagpur High Profile
 
Porous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingPorous Ceramics seminar and technical writing
Porous Ceramics seminar and technical writingrakeshbaidya232001
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...ranjana rawat
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxupamatechverse
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls in Nagpur High Profile
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
UNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular ConduitsUNIT-II FMM-Flow Through Circular Conduits
UNIT-II FMM-Flow Through Circular Conduitsrknatarajan
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxpranjaldaimarysona
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxupamatechverse
 
High Profile Call Girls Nagpur Isha Call 7001035870 Meet With Nagpur Escorts
each pixel in the still image, then the average number of bits required to represent each pixel [4] is

Lavg = Σ_{k=1}^{L} l(rk) Pr(rk).

The average length of the code words is obtained by summing, over all gray levels, the number of bits used to represent each gray level weighted by the probability with which that gray level occurs in the image.

Table 1: Coding redundancy

Rk    Pr(rk)    Code 1    l1(rk)    Code 2    l2(rk)
R1    0.1875    00        2         011       3
R2    0.5       01        2         1         1
R3    0.1250    10        2         010       3
R4    0.1875    11        2         00        2

Lavg = 2 for code 1; Lavg = 1.8125 for code 2.

Coding redundancy is always present when the gray levels of an image are coded using a natural binary code. Table 1 shows both a fixed-length and a variable-length encoding of a four-level image. Column two gives the gray-level distribution. The 2-bit binary encoding (code 1) is shown in column 3; it has an average length of 2 bits. The average number of bits required by code 2 (in column 5) is

Lavg = Σ_{k=1}^{4} l2(rk) Pr(rk) = 3(0.1875) + 1(0.5) + 3(0.125) + 2(0.1875) = 1.8125,

so the compression ratio is Cr = 2/1.8125 = 1.103. (A short MATLAB sketch of this calculation is given after Section 5 below.)

4. INTER PIXEL REDUNDANCY:

In order to reduce inter-pixel redundancy, the 2-D pixel array normally used for human viewing and interpretation must be transformed into a more efficient format. For example, the differences between adjacent pixels can be used to represent an image; this is called mapping. A simple mapping procedure is lossless predictive coding, which eliminates the inter-pixel redundancies of closely spaced pixels by extracting and coding only the new information in each pixel. The new information of a pixel is defined as the difference between the actual and the predicted value of that pixel.

Fig 1: sample (a)    Fig 2: sample (b)    (histograms of Fig 1 and Fig 2)

These observations highlight the fact that variable-length coding is not designed to take advantage of the structural relationships between the aligned matches in Fig 2, although the pixel-to-pixel correlations are more important in Fig 1. Because the values of the pixels in either image can be predicted from their neighbors, the information carried by individual pixels is relatively small: the visual contribution of a single pixel to an image is largely redundant, since it can be predicted from its neighbors.

5. PSYCHO VISUAL REDUNDANCY:

In general, an observer searches for distinguishing features such as edges or textural regions and mentally combines them into recognizable groupings. The brain then correlates these groupings with prior knowledge in order to complete the image interpretation process. Thus the eye does not respond with equal sensitivity to all visual information; certain information simply has less relative importance than other information in normal visual processing. Such information is said to be psycho-visually redundant. Unlike coding and inter-pixel redundancy, psycho-visual redundancy is associated with real or quantifiable visual information. Its elimination is desirable because the information itself is not essential for normal visual processing. Since the elimination of psycho-visually redundant data results in a loss of quantitative information, it is called quantization.
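The Table 1 calculation in Section 3 can be checked with a few lines of MATLAB. The following is a minimal sketch (only base MATLAB is used; the probabilities and code lengths are copied from Table 1):

% Average code length and compression ratio for the two codes of Table 1.
p  = [0.1875 0.5 0.1250 0.1875];   % Pr(rk) for r1..r4
l1 = [2 2 2 2];                    % bit lengths of the fixed-length code 1
l2 = [3 1 3 2];                    % bit lengths of the variable-length code 2

Lavg1 = sum(l1 .* p);              % = 2 bits/pixel
Lavg2 = sum(l2 .* p);              % = 1.8125 bits/pixel
Cr    = Lavg1 / Lavg2;             % = 1.1034, the compression ratio of code 2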
Fig 4: original    Fig 5: 16 gray levels    Fig 6: 16 gray levels / random noise

Fig 4 shows a monochrome image with 256 gray levels. Fig 5 is the same image after uniform quantization to four bits, or 16 possible levels. Fig 6 illustrates improved gray-scale quantization: it recognizes the eye's inherent sensitivity to edges and breaks them up by adding to each pixel a pseudorandom number, generated from the low-order bits of neighboring pixels, before quantizing the result. The method of Fig 6 removes a great deal of psycho-visual redundancy with little impact on perceived image quality.

6. COMPRESSION RATIO:

The compression ratio is calculated by the formula

Cr = n1 / n2,

where n1 is the total number of bits in the original image and n2 is the total number of bits in the encoded image. Compression consists of encoding and decoding stages. Encoding creates a set of unique symbols that represent the original image, and the output of the encoder is the compressed image. The input of the decoder is the compressed image, and the decoder stage recovers the original image without introducing any error. In terms of file sizes,

Cr = bytes(f1) / bytes(f2),

where f1 is the total number of bytes in the original image and f2 is the total number of bytes in the encoded image.

7. ROOT MEAN SQUARE ERROR:

To analyse a compressed image, it must be fed into a decoder. Let f'(x,y) be the decompressed image generated by the decoder. If the decoded and the original images are identical, the compression is lossless; otherwise it is lossy. The error between the original and the decompressed image is

e(x,y) = f'(x,y) - f(x,y),

so the total error between the two images is

Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} [f'(x,y) - f(x,y)],

and the root-mean-square (rms) error between f'(x,y) and f(x,y) is the square root of the squared error averaged over the M x N array:

rmse = [ (1/MN) Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} ( f'(x,y) - f(x,y) )^2 ]^{1/2}.

8. HUFFMAN CODES:

Huffman codes [5] deal with a small number of possible symbols, and all the source symbols are coded one at a time.

STEPS:
1. The symbols are listed in order of decreasing probability.
2. While more than one node remains:
   * a new node is formed by merging the two nodes with the smallest probabilities;
   * the values '0' and '1' are assigned to the two members of each merged pair.
3. Starting from the root, the code of each symbol is read off according to its location in the tree.
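The merging procedure described in the steps above can be written in a few lines of MATLAB. The following is a minimal sketch (it is not the DIPUM huffman function used later in Section 17), applied to the four-symbol source of Table 2 below; the exact 0/1 labels depend on how ties between equal probabilities are broken, but the code lengths agree with Table 3.

% Huffman code construction by repeated merging of the two least
% probable nodes (a minimal sketch for the source of Table 2).
p     = [0.5 0.1875 0.1875 0.125];      % probabilities of a2, a4, a1, a3
codes = repmat({''}, 1, numel(p));      % code string built up for each symbol
group = num2cell(1:numel(p));           % each node starts as a single symbol

while numel(p) > 1
    [p, order] = sort(p, 'descend');    % keep the nodes ordered by probability
    group = group(order);
    for s = group{end},   codes{s} = ['1' codes{s}]; end  % least probable node
    for s = group{end-1}, codes{s} = ['0' codes{s}]; end  % second least probable
    group = [group(1:end-2), {[group{end-1} group{end}]}]; % merge the two nodes
    p     = [p(1:end-2), p(end-1) + p(end)];
end
% codes now holds a prefix-free code for a2, a4, a1, a3, e.g. {'0','11','100','101'}
% (the exact bits depend on tie-breaking); the code lengths {1,2,3,3} match Table 3.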
Table 2: Huffman source reduction

Original source                Source reduction
Symbol   Probability           1         2
A2       0.5                   0.5       0.5
A4       0.1875                0.3125    0.5
A1       0.1875                0.1875
A3       0.125

In Table 2, the source symbols are arranged in order of decreasing probability. To form the first source reduction, the bottom two probabilities, 0.125 and 0.1875, are combined to create a compound symbol with probability 0.3125. The compound symbol and its probability are placed in the first source-reduction column, and the probabilities are again sorted from the most probable to the least probable value. This process is repeated until a reduced source with only two symbols remains.

Table 3: Code assignment procedure

Original source                       Source reduction
Symbol   Probability   Code          1                 2
A2       0.5           1             0.5      1        0.5   1
A4       0.1875        00            0.3125   01       0.5   0
A1       0.1875        011           0.1875   00
A3       0.125         010

Huffman's second step [5] works backwards through the reduced sources, from the smallest source to the original one. The reduced source symbol with probability 0.5 on the right is generated by merging the two lower-probability symbols to its left. The bits 0 and 1 are appended to the codes of the two merged symbols to distinguish them from each other. This is repeated for each reduced source until the original source is reached.

A Huffman code tree [6] can also be built from the ordered list, and the code table can be derived by tree traversal. The code for each symbol is obtained by following the path from the root to the leaf of that symbol: the value '1' is assigned to each right branch and the value '0' to each left branch. For example, a symbol reached by branching left twice and then right once is represented by the pattern '001'. The sketch below depicts codes for the nodes of a sample tree:

              *
            /   \
         (0)     (1)
                /   \
            (10)     (11)
           /    \
       (100)    (101)

Once a Huffman tree has been built, it can be converted into a canonical Huffman code, which requires less information to rebuild.

9. DEVELOPMENT OF HUFFMAN CODING AND DECODING ALGORITHM:

First the color image is read and converted to gray level. The total number of unique symbols needed to represent the image is then determined, together with the probability of each symbol. A symbol table is created and the shortest codes are assigned to the most probable symbols; this is Huffman encoding. Huffman decoding is the reverse process of encoding and is used to decompress the image. Finally, the quality of the result is measured with the root mean square error: a value of zero means there is no difference between the original and decompressed images, otherwise some difference remains. (A short MATLAB sketch of these steps is given after Section 10 below.)

10. ADAPTIVE HUFFMAN CODING

Adaptive Huffman coding (dynamic Huffman coding) is based on Huffman coding but assumes no prior knowledge of the source distribution. It uses one-pass encoding [7], so source symbols can be encoded in real time. It is, however, more sensitive to transmission errors, since a single loss can corrupt the whole code.
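As an illustration of the steps in Section 9, the following minimal MATLAB sketch computes the symbol set and probabilities for a gray-level image, assuming the Image Processing Toolbox sample image peppers.png is on the path; the Huffman table itself would then be built as in Section 8.

% Symbols and probabilities for Huffman coding of a gray-level image
% (a sketch of the steps in Section 9).
rgb  = imread('peppers.png');                 % read the colour image
g    = rgb2gray(rgb);                         % convert to gray level
syms = unique(g(:));                          % distinct symbols (gray levels)
cnt  = arrayfun(@(v) sum(g(:) == v), syms);   % occurrences of each symbol
p    = cnt / numel(g);                        % probability of each symbol
% The symbol table assigns the shortest code words to the largest p values;
% decoding reverses the table look-up, and quality is checked with the rms error.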
11. IMAGE COMPRESSION TECHNIQUES

Image compression techniques are broadly classified into two categories, depending on whether or not an exact replica of the original image can be reconstructed from the compressed image:

1. Lossless techniques
2. Lossy techniques

11.1 Lossless compression techniques

In lossless compression techniques, the original image can be perfectly recovered from the compressed (encoded) image. These are also called noiseless techniques, since they do not add noise to the signal (image). Lossless compression is also known as entropy coding, since it uses statistical/decomposition techniques to eliminate or minimize redundancy. Lossless compression is used mainly for applications with stringent requirements, such as medical imaging. The following techniques are included in lossless compression:

1. Run-length encoding
2. Huffman encoding
3. Dictionary techniques
   a) LZ77
   b) LZ78
   c) LZW
4. Arithmetic coding
5. Bit-plane coding

11.2 Lossy compression techniques

Lossy schemes provide much higher compression ratios than lossless schemes and are widely used, since the quality of the reconstructed images is adequate for most applications. With these schemes, the decompressed image is not identical to the original image, but reasonably close to it. The following techniques are included in lossy compression:

1. Lossy predictive coding
   a) DPCM
   b) ADPCM
   c) Delta modulation
2. Transform coding
   a) DFT
   b) DCT
   c) Haar
   d) Hadamard
3. Subband coding
4. Wavelet coding
5. Fractal coding
6. Vector quantization

12. LZ77:

In the LZ77 approach, the dictionary is simply a portion of the previously encoded sequence. The encoder examines the input sequence through a sliding window. The window consists of two parts: a search buffer that contains a portion of the recently encoded sequence, and a look-ahead buffer that contains the next portion of the sequence to be encoded [8]. The encoder examines the symbols following the symbol at the pointer location to see whether they match consecutive symbols in the look-ahead buffer. The number of consecutive symbols in the search buffer that match consecutive symbols in the look-ahead buffer, starting with the first symbol, is called the length of the match. The encoder searches the search buffer for the longest match. Once the longest match has been found, the encoder encodes it with a triple <o, l, c>, where o is the offset, l is the length of the match, and c is the codeword corresponding to the symbol in the look-ahead buffer that follows the match.

13. LZ78:

LZ77 has the drawback of a finite view of the past. The LZ78 algorithm solves this problem by dropping the reliance on the search buffer and keeping an explicit dictionary, which has to be built at both the encoder and the decoder. The input is coded as a double <i, c>, with i being an index corresponding to the dictionary entry that was the longest match to the input, and c being the code for the character in the input following the matched portion.
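To make the <i, c> doubles concrete, the following is a minimal MATLAB sketch that encodes a short character string with an LZ78-style dictionary (an illustrative fragment, not code from the paper; the input string and variable names are arbitrary):

% LZ78-style encoding of a character string into <index, character> pairs.
s     = 'abababbababb';
dict  = containers.Map('KeyType','char','ValueType','double');  % phrase -> index
pairs = {};                           % encoded <i, c> doubles
w     = '';                           % phrase matched so far
for c = s
    if isKey(dict, [w c])
        w = [w c];                    % the extended phrase is still in the dictionary
    else
        if isempty(w), idx = 0; else, idx = dict(w); end
        pairs{end+1} = {idx, c};      % emit <index of w, next character>
        dict([w c]) = dict.Count + 1; % store the new phrase w+c
        w = '';
    end
end
if ~isempty(w), pairs{end+1} = {dict(w), ''}; end   % flush a trailing phrase
% For this input, pairs is <0,a>, <0,b>, <1,b>, <3,b>, <3,a>, <2,b>: repeated
% phrases collapse to dictionary indices built identically at the decoder.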
14. LZW:

LZW encoding is an example of a category of algorithms called dictionary-based encoding. The idea is to create a dictionary (a table) of strings used during the communication session. If both the sender and the receiver have a copy of the dictionary, then previously encountered strings can be substituted by their index in the dictionary to reduce the amount of information transmitted. During encoding there are two concurrent events: building an indexed dictionary and compressing a string of symbols. The algorithm extracts the smallest substring that cannot be found in the dictionary from the remaining uncompressed string. It then stores a copy of this substring in the dictionary as a new entry and assigns it an index value. Compression occurs when the substring, except for its last character, is replaced with the index found in the dictionary; the process then inserts the index and the last character of the substring into the compressed string. Decompression is the inverse of the compression process: the substrings are extracted from the compressed string and the indexes are replaced with the corresponding entries in the dictionary, which is empty at first and built up gradually. The idea is that by the time an index is received, there is already an entry in the dictionary corresponding to that index.

15. ARITHMETIC ENCODING:

Arithmetic coding yields better compression because it encodes a message as a whole rather than as separable symbols. Most of the computations in arithmetic coding use floating-point arithmetic, yet most hardware can only support finite precision [10]; as each new symbol is coded, the precision required to represent the shrinking range grows. Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a normative part of the ITU-T/ISO/IEC H.264/AVC standard. By combining an adaptive binary arithmetic coding technique with context modelling, a high degree of adaptation and redundancy reduction is achieved. The CABAC framework also includes a low-complexity method for binary arithmetic coding and probability estimation that is well suited to efficient hardware and software implementations.

16. FILE FORMATS WITH LOSSLESS COMPRESSION

• TIFF (Tagged Image File Format): a flexible format, often supporting up to 16 bits/pixel in 4 channels. It can use several different compression methods, e.g. Huffman and LZW [11].
• GIF (Graphics Interchange Format): supports 8 bits/pixel in one channel, that is only 256 colors. Uses LZW compression. Supports animations.
• PNG (Portable Network Graphics): supports up to 16 bits/pixel in 4 channels (RGB + transparency). Uses Deflate compression (~LZ77 and Huffman). Good when inter-pixel redundancy is present.

17. EXPERIMENTAL RESULTS

Consider a simple 4x4 image whose histogram models the symbol probabilities in Table 1. The following command-line sequence generates the image in MATLAB [12][13]:

f2 = uint8([1 4 6 8; 1 6 3 4; 2 5 1 4; 1 2 3 4])

f2 =
     1     4     6     8
     1     6     3     4
     2     5     1     4
     1     2     3     4

>> whos('f2')
  Name    Size    Bytes    Class    Attributes
  f2      4x4     16       uint8

Each pixel in f2 is an 8-bit byte, so 16 bytes are used to represent the entire image. The function huffman is used to generate a code that reduces the amount of memory required to represent the image [14][15]:

>> c = huffman(hist(double(f2(:)), 4))

c =
    '01'
    '1'
    '001'
    '000'

To compress f2, the code c must be applied at the bit level, with several encoded pixels packed into a single byte. The function mat2huff returns a structure, c1, requiring 528 bytes of memory:

c1 = mat2huff(f2)

c1 =
    size: [4 4]
     min: 32769
    hist: [4 2 2 4 1 2 0 1]
    code: [3639 16066 27296]

>> whos('c1')
  Name    Size    Bytes    Class     Attributes
  c1      1x1     528      struct
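It is worth noting that huffman, mat2huff, huff2mat, imratio, ntrop and compare are utility functions from the DIPUM toolbox that accompanies reference [1]; they are not part of base MATLAB. Continuing the session above, the structure c1 can be decoded and checked directly (a minimal round-trip sketch):

>> g2 = huff2mat(c1);      % decode the Huffman structure produced by mat2huff
>> rmse = compare(f2, g2)  % returns 0: the encode/decode round trip is lossless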
The 16 8-bit pixels of f2 are compressed into three 16-bit words, the elements of field code of c1:

>> hcode = c1.code;
>> dec2bin(double(hcode))

ans =
000111000110111
011111011000010
110101010100000

Here dec2bin has been employed to display the individual bits of c1.code.

>> f = imread('E:ph.dlena512.bmp');
>> c = mat2huff(f);
>> cr1 = imratio(f, c)

cr1 = 1.0692

>> ntrop(f)

ans = 7.4455

>> save squeezeimage c;
>> load squeezeimage;
>> g = huff2mat(c);
>> rmse = compare(f, g)

rmse = 0

The overall encoding and decoding process is information preserving; the root mean square error between the original and decompressed images is 0.

Fig 7: before Huffman    Fig 8: after Huffman

>> f = imread('E:ph.dcman.tif');
>> c = mat2huff(f);
>> c1 = huff2mat(c);

Bits per pixel = c/f = 0.8959

Fig 9: before Huffman    Fig 10: after Huffman

18. RESULT ANALYSIS

The experimental results of the implemented algorithms, Huffman and arithmetic coding, for compression ratio and execution time are summarized in Table 4. As the table shows, on one hand the compression ratio of arithmetic coding is higher than that of Huffman coding for every image size; on the other hand, arithmetic coding needs more execution time than Huffman coding. This means that the higher compression ratio of the arithmetic algorithm is not free: it needs more resources than the Huffman algorithm.

Table 4. Average of compression results on the test image set.

Another behaviour that can be seen in Table 4 is that, as the image size increases from 128x128 to 2048x2048, the compression ratio of arithmetic coding improves faster than that of Huffman coding. For instance, the compression ratio of the Huffman algorithm for image sizes of 1024x1024 and 2048x2048 is 5.64 and 6.37, respectively, while for arithmetic coding it is 7.73 and 12.02; the advantage of the arithmetic coder thus grows from roughly 1.37x to 1.89x. Figure 11 depicts the comparison of compression ratio for the arithmetic and Huffman algorithms, i.e. another representation of the results presented in Table 4.
Figure 11. Comparison of compression ratio for Huffman and arithmetic algorithms using different image sizes.

19. CONCLUSION:

This image compression method is well suited for gray-scale bitmap images. Huffman coding suffers from the fact that the decompressor needs some knowledge of the probabilities of the symbols in the compressed file, which can require extra bits to encode; this work may be extended to obtain a better compression rate than other compression techniques. The performance of the proposed compression technique using hashing and Huffman coding was evaluated on GIF and TIFF formats. The technique can also be applied to the luminance and chrominance components of color images to obtain better compression.

REFERENCES

[1] Rafael C. Gonzalez, Richard E. Woods, Steven L. Eddins, "Digital Image Processing Using MATLAB", 2nd edition, McGraw Hill Education Private Limited, pp. 378-380, 2013.
[2] Ming Yang and Nikolaos Bourbakis, "An overview of lossless digital image compression techniques", 48th Midwest Symposium on Circuits and Systems, IEEE, vol. 2, pp. 1099-1102, 7-10 Aug. 2005.
[3] Khalid Sayood, "Introduction to Data Compression", 3rd edition, Morgan Kaufmann Publishers, pp. 121-125, 2006.
[4] Jagadish H. Pujar, Lohit M. Kadlaskar, "A new lossless method of image compression and decompression using Huffman coding techniques", Journal of Theoretical and Applied Information Technology.
[5] Dr. T. Bhaskara Reddy, Miss. Hema Suresh Yaragunti, Dr. S. Kiran, Mrs. T. Anuradha, "A novel approach of lossless image compression using hashing and Huffman coding", International Journal of Engineering Research and Technology, vol. 2, issue 3, March 2013.
[6] Mridul Kumar Mathur, Seema Loonker, Dr. Dheeraj Saxena, "Lossless Huffman coding technique for image compression and reconstruction using binary trees", IJCTA, Jan-Feb 2012.
[7] Dr. T. Bhaskar Reddy, S. Mahaboob Basha, Dr. B. Sathyanarayana, "Image Compression Using Binary Plane Technique", Library Progress, vol. 27, no. 1, pp. 59-64, June 2007.
[8] A. Said, "Arithmetic coding", in Lossless Compression Handbook, K. Sayood, Ed., San Diego, CA, USA: Academic, 2003.
[9] Anil K. Jain, "Fundamentals of Digital Image Processing", PHI Publication, p. 483, 2001.
[10] Mamta Sharma, "Compression Using Huffman Coding", IJCSNS International Journal of Computer Science and Network Security, vol. 10, no. 5, May 2010.
[11] Steven W. Smith, "The Scientist and Engineer's Guide to Digital Signal Processing".
[12] G. C. Chang, Y. D. Lin, "An Efficient Lossless ECG Compression Method Using Delta Coding and Optimal Selective Huffman Coding", IFMBE Proceedings 2010, vol. 31, part 6, pp. 1327-1330, DOI: 10.1007/978-3-642-14515-5_338.
[13] Wei-Yi Wei, "An Introduction to Image Compression".
[14] C. E. Shannon, "Coding theorems for a discrete source with a fidelity criterion", IRE Nat. Conv. Rec., vol. 4, pp. 142-163, Mar. 1959.

BIOGRAPHIES

ANITHA. S received her Bachelor's degree in Computer Science from Govindammal Aditanar College for Women, Tiruchendur, in 2006. She obtained her Master of Computer Applications from Noorul Islam College of Engineering, Nagercoil, and completed her M.Phil at MS University, Tirunelveli. At present she is working as an Assistant Professor in Govindammal Aditanar College for Women, Tiruchendur.