
Algorithm for lossy image compression using FPGA

05 Apr 2013  | K. Rajesh Kumar


Due to advances in the medical and security fields, there is a growing need to store very large volumes of image data. The main challenges in image compression and decompression are the choice of method and the storage area required. In this paper we present a novel algorithm for lossy compression. In addition, the Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), power and area have been calculated and compared with those of various fractal compression algorithms.

Lossy compression is a data encoding method that compresses data by discarding (losing) some of it. The procedure aims to minimise the amount of data that needs to be stored, handled and/or transmitted by a computer.
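Because some of the data is discarded, the quality of the reconstructed image is normally quantified with the MSE and PSNR figures mentioned above. A minimal computation of both metrics, assuming 8-bit images (peak value 255), is sketched below in Python.

```python
import numpy as np

def mse_psnr(original, reconstructed, peak=255.0):
    # Mean Squared Error and Peak Signal-to-Noise Ratio between two images.
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    psnr = 10.0 * np.log10(peak ** 2 / mse) if mse > 0 else float("inf")
    return mse, psnr
```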


Types of lossy compression
There are three major types of lossy data compression techniques:

 • Lossy transform codecs
 • Lossy predictive codecs
 • Chroma subsampling
Let us now discuss these types of lossy compression in detail.

Lossy transform codecs: Transform coding is generally used for JPEG images. Samples of the picture are taken, chopped into smaller segments (blocks) and then transformed into a new representation. The resulting image has fewer colours than the original, hence the decrease in quality.
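To make this concrete, the sketch below applies an orthonormal 8x8 DCT to an image block, coarsely quantises the coefficients and then reconstructs the block. The quantisation step is where information is irreversibly lost; the block size and quantisation value are illustrative and not tied to any particular JPEG setting.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: row k holds the k-th cosine basis vector.
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def compress_block(block, q=20.0):
    # Forward 2-D DCT, coarse quantisation, inverse DCT.
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T              # transform into the frequency domain
    coeffs = np.round(coeffs / q) * q     # quantise: this is the lossy step
    return C.T @ coeffs @ C               # reconstruct the (degraded) block

block = np.random.randint(0, 256, (8, 8)).astype(float)
recon = compress_block(block)
print(np.abs(block - recon).max())        # reconstruction error after quantisation
```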

Lossy predictive codecs: In predictive codecs, previously and/or subsequently decoded data is used to predict the frame being compressed, so that only the prediction residual needs to be encoded.
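A minimal illustration of this idea is the DPCM-style loop below: each sample is predicted from the previously decoded sample and only the quantised prediction residual is kept. The quantisation step q and the one-sample predictor are illustrative simplifications.

```python
import numpy as np

def dpcm_encode(row, q=8.0):
    # Predict each sample from the previously *decoded* sample and keep
    # only the quantised prediction residual (the lossy step).
    residuals = np.empty(len(row))
    prev = 0.0
    for i, x in enumerate(row):
        r = np.round((x - prev) / q)
        residuals[i] = r
        prev += r * q                     # mirror the decoder's reconstruction
    return residuals

def dpcm_decode(residuals, q=8.0):
    # Rebuild the samples by accumulating the de-quantised residuals.
    out, prev = [], 0.0
    for r in residuals:
        prev += r * q
        out.append(prev)
    return np.array(out)

row = np.linspace(10, 200, 16)
print(dpcm_decode(dpcm_encode(row)))      # close to, but not equal to, the input
```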

Chroma subsampling: Chroma subsampling is another type of lossy compression. It takes into account that the human eye perceives changes in brightness more sharply than changes in colour, and exploits this by dropping or averaging some chroma (colour) information while keeping the luma (brightness) information intact. It is commonly used in video encoding schemes and in JPEG images. In some schemes the above techniques are combined to compress the image more effectively.
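The sketch below shows 4:2:0 subsampling on a YCbCr image held as a NumPy array: the luma plane is kept at full resolution while each 2x2 block of the chroma planes is averaged, quartering the number of chroma samples. It assumes even image dimensions; 4:2:0 is only one of several common subsampling patterns.

```python
import numpy as np

def subsample_420(ycbcr):
    # ycbcr: H x W x 3 array with even H and W.
    h, w, _ = ycbcr.shape
    y = ycbcr[:, :, 0]                    # luma kept at full resolution
    cb = ycbcr[:, :, 1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    cr = ycbcr[:, :, 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return y, cb, cr                      # each chroma plane now holds 1/4 of the samples

img = np.random.rand(8, 8, 3)
y, cb, cr = subsample_420(img)
print(y.shape, cb.shape, cr.shape)        # (8, 8) (4, 4) (4, 4)
```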

This paper presents a new algorithm for lossy compression and compares it with existing techniques such as the standard LMS, the normalised LMS (NLMS), the MVSS, the conventional TDLMS, the DCT-LMS, the TDVSS and the VSSTDLMS.

Several existing compression schemes are analysed in the next section. Thereafter, the proposed method is described in detail, followed by the performance comparisons. Finally, the conclusion is given in the last section.


Analysis of existing method
Tyseer Aboulnasr and K. Mayyas presented a robust variable step-size LMS-type algorithm [1] that provides fast convergence in the early stages of adaptation while ensuring small final misadjustment. The performance of the algorithm is not affected by uncorrelated noise disturbances. An approximate analysis of convergence and steady-state performance is provided for zero-mean stationary Gaussian inputs and for a non-stationary optimal weight vector.
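The fragment below is a generic variable step-size LMS sketch in the spirit of [1], not the authors' exact update: the step size is driven by a smoothed estimate of the correlation between successive errors, so it stays large during early adaptation and shrinks once the errors become uncorrelated near steady state. All constants and the system-identification example are illustrative.

```python
import numpy as np

def vss_lms(x, d, taps=8, alpha=0.97, gamma=1e-3, beta=0.99,
            mu_min=1e-4, mu_max=0.05):
    # p tracks the correlation of successive errors, so mu stays large while
    # the filter is still converging and shrinks near steady state.
    w = np.zeros(taps)
    mu, p, e_prev = mu_max, 0.0, 0.0
    for n in range(taps, len(x)):
        u = x[n - taps + 1:n + 1][::-1]        # most recent input samples
        e = d[n] - w @ u                       # a-priori estimation error
        p = beta * p + (1 - beta) * e * e_prev
        mu = np.clip(alpha * mu + gamma * p * p, mu_min, mu_max)
        w += mu * e * u                        # LMS weight update
        e_prev = e
    return w

# Identify an unknown 4-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
h = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.standard_normal(4000)
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(vss_lms(x, d, taps=4))                   # should approach h
```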

S. Shankar, Allen M. Peterson and Madihally [2] showed that filtering in the transform domain results in great improvements in convergence rate over conventional time-domain adaptive filtering.
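A simplified transform-domain LMS loop is sketched below: the input vector is rotated by an orthonormal DCT and each coefficient's update is normalised by a running estimate of that coefficient's power, which is what improves convergence for correlated inputs. Parameter values and the example filter are illustrative.

```python
import numpy as np

def tdlms(x, d, taps=8, mu=0.05, beta=0.95, eps=1e-6):
    # Transform-domain LMS with per-coefficient power normalisation.
    k = np.arange(taps)
    C = np.sqrt(2.0 / taps) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * taps))
    C[0, :] = np.sqrt(1.0 / taps)              # orthonormal DCT matrix
    w = np.zeros(taps)
    power = np.ones(taps)                      # running power estimate per coefficient
    for n in range(taps, len(x)):
        u = C @ x[n - taps + 1:n + 1][::-1]    # DCT of the current input vector
        power = beta * power + (1 - beta) * u * u
        e = d[n] - w @ u
        w += mu * e * u / (power + eps)        # power-normalised update
    return C.T @ w                             # map weights back to the time domain

rng = np.random.default_rng(1)
x = np.convolve(rng.standard_normal(4000), np.ones(3) / 3, mode="same")   # correlated input
d = np.convolve(x, [0.5, -0.4, 0.2, 0.1], mode="full")[:len(x)]
print(tdlms(x, d, taps=4))                     # should approach [0.5, -0.4, 0.2, 0.1]
```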

K. Mayyas performed an analysis of the DCT-LMS adaptive filtering algorithm [3], showing how its integrity is preserved, and in another paper, "Mean-Square Analysis of a Variable Step-Size Transform Domain LMS Adaptive Algorithm" [5], presented an MSE analysis of a new variable step-size TDLMS algorithm. The analysis yielded a set of difference equations that describe the mean-square behaviour of the algorithm, and a formula for the steady-state excess MSE was derived. The steady-state MSE analysis, supported by experimental results, indicated that the algorithm misadjustment depends essentially on γ, with very little effect from the input signal statistics and the adaptive filter length. Consequently, the steady-state performance of the algorithm can easily be predicted from the knowledge of γ.

Radu Ciprian Bilcu, Pauli Kuosmanen and Karen Egiazarian, in their paper "A Transform Domain LMS Adaptive Filter With Variable Step-Size" [4], introduced a new transform-domain LMS (least mean square) algorithm with variable step size. The step sizes vary in time according to the power estimate of each transform coefficient. In their approach, each step size has a local component given by power normalisation and a global component that is the same for every filter coefficient.
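One plausible reading of such a variable step-size TDLMS is sketched below: every coefficient keeps a local, power-normalised component, while a single global step size shared by all coefficients is adapted from the instantaneous squared error (a Kwong-Johnston-style rule, used here only as an illustrative stand-in for the exact updates of [4] and [5]). Constants are illustrative.

```python
import numpy as np

def vss_tdlms(x, d, taps=8, mu0=0.05, alpha=0.97, rho=5e-4, beta=0.95,
              mu_min=1e-4, mu_max=0.2, eps=1e-6):
    # Effective step per coefficient = global step / local power estimate.
    k = np.arange(taps)
    C = np.sqrt(2.0 / taps) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * taps))
    C[0, :] = np.sqrt(1.0 / taps)
    w = np.zeros(taps)
    power = np.ones(taps)                      # local component per coefficient
    mu_g = mu0                                 # global component shared by all coefficients
    for n in range(taps, len(x)):
        u = C @ x[n - taps + 1:n + 1][::-1]
        power = beta * power + (1 - beta) * u * u
        e = d[n] - w @ u
        mu_g = np.clip(alpha * mu_g + rho * e * e, mu_min, mu_max)
        w += mu_g * e * u / (power + eps)
    return C.T @ w
```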


Proposed image compression method

Figure 1: Block diagram of proposed scheme.


