Papers by Aleksej Avramovic
There are many examples of digital image processing where lossless image compression is necessary due to the cost of data acquisition or legal issues, such as aerial and medical imaging. The need for lossless compression of large amounts of data demands speed and efficiency, so predictive methods are chosen over transform-based methods. Predictive methods rely on prediction, context modeling and entropy coding. The predictor is the first and most important step, as it removes a large amount of spatial redundancy. The most representative predictors are the median edge detection (MED) predictor used in the JPEG-LS standard and the gradient adjusted predictor (GAP) used in CALIC. This paper presents a novel threshold-controlled gradient edge detection (GED) predictor, which combines the simplicity of MED with the efficiency of GAP. The amount of removed redundancy is estimated by the entropy after prediction. Analysis shows that GED yields entropies comparable to those of the much more complicated GAP.
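For reference, the MED predictor below follows the standard JPEG-LS definition; the GED variant is only a minimal sketch of a threshold-controlled predictor in the spirit the abstract describes: the particular gradient estimates and the threshold T are illustrative assumptions, not the exact formulation from the paper.

```python
def med_predict(W, N, NW):
    """JPEG-LS median edge detection (MED) predictor.
    W = left neighbor, N = above neighbor, NW = above-left."""
    if NW >= max(W, N):
        return min(W, N)      # edge detected: take the smaller neighbor
    elif NW <= min(W, N):
        return max(W, N)      # edge detected: take the larger neighbor
    else:
        return W + N - NW     # smooth region: planar prediction

def ged_predict(W, N, NW, T=8):
    """Sketch of a threshold-controlled gradient edge detection (GED)
    predictor; gradient estimates and threshold T are assumptions."""
    gh = abs(W - NW)          # assumed horizontal gradient estimate
    gv = abs(N - NW)          # assumed vertical gradient estimate
    if gv - gh > T:           # strong vertical gradient: predict from W
        return W
    elif gh - gv > T:         # strong horizontal gradient: predict from N
        return N
    else:
        return W + N - NW     # otherwise fall back to planar prediction
```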
In this paper, a novel predictive-based lossless image compression algorithm is presented. Lossless compression must be applied when data acquisition is important and expensive, as in aerial, medical and space imaging. Besides the requirement for compression ratios as high as possible, lossless image coding algorithms must be fast. The proposed algorithm is developed for efficient and fast processing of 12-bit medical images. A comparison with the standardized lossless compression algorithm JPEG-LS is carried out on a set of 12-bit medical images with different statistical features. It is shown that the proposed solution achieves approximately the same bitrates as JPEG-LS even though it is much simpler.
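As a rough illustration of how such predictive schemes are commonly evaluated, the sketch below estimates the first-order entropy of prediction residuals, a standard proxy for the achievable lossless bitrate. It reuses the med_predict sketch above; the synthetic 12-bit test image and border handling are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def residual_entropy(image, predictor):
    """First-order entropy (bits/pixel) of the prediction residuals;
    lower entropy means more spatial redundancy was removed."""
    h, w = image.shape
    residuals = []
    for r in range(h):
        for c in range(w):
            W  = int(image[r, c - 1]) if c > 0 else 0
            N  = int(image[r - 1, c]) if r > 0 else 0
            NW = int(image[r - 1, c - 1]) if r > 0 and c > 0 else 0
            residuals.append(int(image[r, c]) - predictor(W, N, NW))
    _, counts = np.unique(residuals, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic 12-bit ramp (values well below 4096): prediction
# removes almost all redundancy, so the residual entropy is near zero.
img = (np.add.outer(np.arange(64), np.arange(64)) * 16).astype(np.int32)
print(residual_entropy(img, med_predict))
```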
It is often the case in image classification tasks that image descriptors are of high dimensionality. While adding new, independent features generally improves the performance of a classifier, it increases its cost and complexity. In this paper we investigate how descriptor dimensionality reduction techniques, namely principal component analysis and independent component analysis, affect classification accuracy. We test their performance on the task of semantic classification of aerial images. We show that, even with much lower-dimensional descriptors, classification accuracy remains near 90%.
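A minimal sketch of the general shape of such an experiment, using scikit-learn; the synthetic data, number of components, and SVM kernel are illustrative placeholders, not the paper's actual descriptors or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA, FastICA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Synthetic stand-in for high-dimensional image descriptors.
X, y = make_classification(n_samples=400, n_features=256,
                           n_informative=40, n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reduce descriptor dimensionality with PCA / ICA before classifying.
for name, reducer in [("PCA", PCA(n_components=32)),
                      ("ICA", FastICA(n_components=32, random_state=0))]:
    clf = make_pipeline(reducer, SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```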
This paper presents a new multiplier with the possibility to achieve arbitrary accuracy. The multiplier is based on the same idea of number representation as Mitchell's algorithm, but does not use the logarithm approximation. The proposed iterative algorithm is simple and efficient, achieving an error percentage as small as required, down to the exact result. The hardware solution involves only adders and shifters, so it is not gate- and power-consuming. Parallel circuits are used for error correction. The error summary for operands ranging from 8 bits to 16 bits indicates a very low error percentage with only two parallel correction circuits.
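A minimal sketch of the iterative scheme the abstract describes, assuming the standard decomposition of each operand as 2^k plus a residue: the basic approximation drops the residue product, and each correction iteration multiplies the two residues the same way, so enough iterations reach the exact product using only shifts and adds. The function name and iteration count are illustrative.

```python
def ilm_multiply(a, b, iterations=2):
    """Iterative multiplier sketch: write n = 2^k + x (x = residue),
    keep 2^(k1+k2) + x1*2^k2 + x2*2^k1 as the basic approximation,
    and treat the dropped term x1*x2 with further iterations."""
    def approx(n1, n2, iters):
        if n1 == 0 or n2 == 0:
            return 0
        k1, k2 = n1.bit_length() - 1, n2.bit_length() - 1
        x1, x2 = n1 - (1 << k1), n2 - (1 << k2)          # residues
        p = (1 << (k1 + k2)) + (x1 << k2) + (x2 << k1)   # shifts and adds only
        if iters > 0:
            p += approx(x1, x2, iters - 1)   # correction term x1 * x2
        return p
    return approx(a, b, iterations)

# 3 * 3: basic approximation gives 8; one correction recovers 9 exactly.
print(ilm_multiply(3, 3, iterations=0), ilm_multiply(3, 3, iterations=1))
```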
Microprocessors and Microsystems, 2011
The paper presents a new multiplier enabling the achievement of arbitrary accuracy. It follows the same idea of number representation as Mitchell's algorithm, but does not use the logarithm approximation. The proposed iterative algorithm is simple and efficient, and its error percentage is as small as required. As its hardware solution involves only adders and shifters, it is not gate- and power-consuming. Parallel circuits are used for error correction. The error summary for operands ranging from 8 bits to 16 bits indicates a very low error percentage with only two parallel correction circuits.
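This journal version describes the same multiplier; as a complement, the sketch below shows one way an error summary like the one reported could be gathered empirically, reusing the ilm_multiply sketch above. The random sampling, trial count and two-iteration setting are illustrative assumptions, not the paper's evaluation procedure.

```python
import random

def max_relative_error(bits, iterations, trials=10_000, seed=0):
    """Empirical worst-case relative error of ilm_multiply (sketch above)
    over random operands of the given bit width."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        a = rng.randrange(1, 1 << bits)
        b = rng.randrange(1, 1 << bits)
        err = abs(a * b - ilm_multiply(a, b, iterations)) / (a * b)
        worst = max(worst, err)
    return worst

for bits in (8, 16):
    print(bits, "bits, two corrections:", max_relative_error(bits, 2))
```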
Digital signal processing algorithms often rely heavily on a large number of multiplications, which is both time- and power-consuming. However, there are many practical solutions that simplify multiplication, such as truncated and logarithmic multipliers. These methods consume less time and power but introduce errors. Nevertheless, they can be used in situations where a shorter time delay is more important than accuracy. In digital signal processing these conditions are often met, especially in video compression and tracking, where integer arithmetic gives satisfactory results. This paper presents and compares different multipliers in the logarithmic number system. For the hardware implementation assessment, the multipliers are implemented on a Spartan-3 FPGA chip and compared in terms of speed, resources required for implementation, power consumption and error rate. We also propose a simple and efficient logarithmic multiplier with the possibility to achieve arbitrary accuracy through an iterative procedure. In this way, the error correction can be done almost in parallel with the basic multiplication (in practice, through pipelining). The hardware solution involves only adders and shifters, so it is not gate- and power-consuming. The error of the proposed multiplier for operands ranging from 8 bits to 16 bits indicates a very low relative error percentage.
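For context, here is a minimal sketch of the classic Mitchell approximate multiplier that logarithmic-number-system multipliers build on: log2(N) is approximated as k + f for N = 2^k * (1 + f), the approximate logs are added, and the sum is anti-logged the same way. This is the textbook formulation, with up to roughly 11% relative error and no correction stage.

```python
def mitchell_multiply(a, b):
    """Classic Mitchell approximate multiplier (no correction stage)."""
    if a == 0 or b == 0:
        return 0
    k1, k2 = a.bit_length() - 1, b.bit_length() - 1
    x1, x2 = a - (1 << k1), b - (1 << k2)       # residues: f = x / 2^k
    frac_sum = (x1 << k2) + (x2 << k1)          # (f1 + f2) scaled by 2^(k1+k2)
    if frac_sum < (1 << (k1 + k2)):             # f1 + f2 < 1: no carry
        return (1 << (k1 + k2)) + frac_sum
    return frac_sum << 1                        # carry into the integer part

# 5 * 3 = 15, Mitchell gives 14; the error never exceeds ~11.1%.
print(mitchell_multiply(5, 3))
```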
Over the years, image classification has gained significant practical importance, especially in the fields of digital radiology, remote sensing and image retrieval. A typical image classification algorithm consists of a descriptor extraction phase, a learning phase and a testing phase. The testing phase calculates the accuracy of the classifier on a predetermined set of labelled images. This paper analyses the performance of texture descriptors combined with SVMs in the case when the test dataset contains images not belonging to any predetermined class. The robustness of texture descriptors to such outsiders is analysed, to see whether a descriptor is able to separate the outsiders into a specific class. A medical dataset containing various radiology images is used for testing. It is shown that it is possible to separate images not belonging to any class at the cost of a few percent decrease in performance.
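A hedged sketch of one way such outsider rejection can be wired up with scikit-learn: test samples whose best one-vs-rest decision score falls below a threshold are assigned to a separate outsider class. The synthetic data, descriptor dimensionality and threshold value are illustrative assumptions, and the paper's actual scheme may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in for texture descriptors of labelled radiology images.
X, y = make_classification(n_samples=300, n_features=64, n_informative=20,
                           n_classes=3, random_state=0)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)

# Reject samples whose best one-vs-rest score is below a threshold
# (0.0 here is an illustrative value): label them -1 ("outsider").
scores = clf.decision_function(X)               # shape: n_samples x n_classes
best_class = clf.classes_[scores.argmax(axis=1)]
predictions = np.where(scores.max(axis=1) < 0.0, -1, best_class)
print(np.unique(predictions, return_counts=True))
```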