Huffman Coding Technique For Image Compression: ISSN:2320-0790
Abstract: Image compression is one of the most important steps in image transmission and storage. "A picture is worth more than a thousand words" is a common saying. Images play an indispensable role in representing vital information and need to be saved for further use or transmitted over a medium. For efficient utilization of disk space and transmission rate, images need to be compressed. Image compression is the technique of reducing the file size of an image without compromising image quality below an acceptable level. Image compression has been in use for a long time, and many algorithms have been devised. In this paper we convert an image into an array using the Delphi image control tool. An algorithm is created in Delphi to implement the Huffman coding method, which removes redundant codes from the image and compresses a BMP image file (especially a grayscale image), and the image is successfully reconstructed. The reconstructed image is an exact representation of the original because Huffman coding is a lossless compression technique.
COMPUSOFT, An international journal of advanced computer technology, 4 (4), April-2015 (Volume-IV, Issue-IV)

Computer technology, however, provides flexible principles for processing large amounts of information. Among the algorithms available is image data reduction, or "image compression".

This 8-bit representation (00000000) will code the integer 0. Similarly, 11111111 will code the integer 255. As we move from right to left in the string, the significance of each bit doubles: the LSB holds a value of 2^0 (= 1), the next bit holds 2^1 (= 2), and so on, so the MSB holds 2^7 (= 128). The 8-bit representation of 142 therefore looks like this:

142 = 1 0 0 0 1 1 1 0
128 + 0 + 0 + 0 + 8 + 4 + 2 + 0 = 142

So now we are able to encode these images using an 8-bit representation of each pixel to code a specific shade of gray. Once the image has been coded this way, image compression is used to reduce its size so that fewer bits per pixel are needed in the coding of the image. The idea behind image compression is to use the fewest possible bits per pixel to code the image while still keeping the compressed image comparable to the original image.

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs, in applications where a minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a substantial reduction in bit rate. Lossy compression that produces imperceptible differences may be called visually lossless.

C. Lossy And Lossless Compression
Methods for lossless image compression are:
Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA, and TIFF
Area image compression
DPCM and predictive coding
Entropy encoding
Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
Deflation – used in PNG, MNG, and TIFF
Chain codes

Methods for lossy compression:
Reducing the color space to the most common colors in the image. The selected colors are specified in the color palette in the header of the compressed image; each pixel then just references the index of a color in the palette. This method can be combined with dithering to avoid posterization.
Chroma subsampling. This takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image.
Transform coding. This is the most commonly used method. In particular, a Fourier-related transform such as the Discrete Cosine Transform (DCT) is widely used (N. Ahmed, T. Natarajan and K. R. Rao, "Discrete Cosine Transform," IEEE Trans. Computers, pp. 90-93, Jan. 1974). The DCT is sometimes referred to as "DCT-II" in the context of the family of discrete cosine transforms. The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.
Fractal compression.

2. HUFFMAN CODING
Proposed by Dr. David A. Huffman in 1952 as a method for the construction of minimum-redundancy codes, Huffman coding is a technique for compressing data. Huffman's greedy algorithm looks at the frequency of occurrence of each character and assigns it a binary string in an optimal way. Huffman coding is a form of statistical coding which attempts to reduce the number of bits required to represent a string of symbols. The algorithm accomplishes this by allowing code words to vary in length: shorter codes are assigned to the most frequently used symbols, and longer codes to the symbols which appear less frequently in the string (that is where the "statistical" part comes in). Code word lengths are no longer fixed, as in ASCII; they vary, and are shorter for the more frequently used characters.

3. ALGORITHM AND WORKING OF HUFFMAN CODING
The following steps describe the working and algorithm of Huffman coding.
1. Read a BMP image using the image box control in the Delphi language. The TImage control can be used to display a graphical image: Icon (ICO), Bitmap (BMP), Metafile (WMF), GIF, JPEG, etc. This control will read the image and convert it into a text file.
2. Call a function that will sort or prioritize the characters based on the frequency count of each character in the file.
3. Call a function that will create an initial heap, then re-heap that tree according to the occurrence of each node in the tree; the lower the occurrence, the earlier the node is attached to the heap. Create a new node whose left child is the lowest in the sorted list and whose right child is the second lowest in the sorted list.
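The frequency-counting and heap-based tree-building steps above can be sketched in Python (the paper's implementation is in Delphi; this sketch, including the helper names build_huffman_tree and code_table and the sample input, is illustrative only, not the authors' code):

```python
import heapq
from collections import Counter
from itertools import count

def build_huffman_tree(data: bytes):
    """Build a Huffman tree; a node is a (symbol, left, right) tuple."""
    freq = Counter(data)                  # step 2: frequency count of each symbol
    tiebreak = count()                    # unique ints keep heap entries comparable
    # step 3: initial heap of leaf nodes, lowest occurrence popped first
    heap = [(f, next(tiebreak), (sym, None, None)) for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)     # lowest occurrence
        f2, _, right = heapq.heappop(heap)    # second lowest occurrence
        # new internal node with the two lightest nodes as children
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (None, left, right)))
    return heap[0][2]

def code_table(node, prefix=""):
    """Walk the tree: a left edge appends '0', a right edge appends '1'."""
    sym, left, right = node
    if sym is not None:
        return {sym: prefix or "0"}       # single-symbol edge case
    table = {}
    table.update(code_table(left, prefix + "0"))
    table.update(code_table(right, prefix + "1"))
    return table

codes = code_table(build_huffman_tree(b"aaaabbc"))
```

For the sample string "aaaabbc", the most frequent symbol 'a' receives the shortest code, so the seven 8-bit characters compress from 56 bits to 10 bits of code data (plus the code table needed for decoding).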
8. Once a Huffman tree is built, Canonical Huffman codes, which require less information to rebuild, may be generated by the following steps:
Step 1. Remember the lengths of the codes resulting from a Huffman tree generated per above.
Step 2. Sort the symbols to be encoded by the lengths of their codes (use symbol value to break ties).
Step 3. Initialize the current code to all zeros and assign code values to symbols from longest to shortest code as follows:
A. If the current code length is greater than the length of the code for the current symbol, right shift off the extra bits. Assign the code to the current symbol.
B. Increment the code value.
C. Get the symbol with the next longest code.
D. Repeat from A until all symbols are assigned codes.
9. Encoding Data - Once a Huffman code has been generated, data may be encoded simply by replacing each symbol with its code.
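Steps 1-3 (and the encoding of step 9) can be sketched in Python; the function names canonical_codes and encode and the sample code lengths are illustrative assumptions, not the paper's Delphi implementation:

```python
def canonical_codes(code_lengths):
    """Rebuild canonical Huffman codes from code lengths alone.

    code_lengths maps symbol -> bit length taken from an ordinary
    Huffman tree (Step 1); only these lengths are needed.
    """
    # Step 2: sort symbols by code length, symbol value breaking ties,
    # processed from the longest code to the shortest.
    symbols = sorted(code_lengths, key=lambda s: (-code_lengths[s], s))
    codes = {}
    code = 0                              # Step 3: current code = all zeros
    length = code_lengths[symbols[0]]
    for sym in symbols:
        if length > code_lengths[sym]:
            # A: current code is longer than this symbol's code,
            #    so right-shift off the extra bits.
            code >>= length - code_lengths[sym]
            length = code_lengths[sym]
        codes[sym] = format(code, f"0{length}b")  # A: assign the code
        code += 1                          # B: increment the code value
    return codes                           # C/D: loop handles the rest

def encode(data, codes):
    """Step 9: replace each symbol with its code."""
    return "".join(codes[sym] for sym in data)
```

For example, the lengths {a: 1, b: 2, c: 2} yield the canonical codes a = 1, b = 00, c = 01, which remain prefix-free; a decoder can regenerate exactly this table from the three lengths alone.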