Multi-exposure fusion (MEF) is a technique that combines different snapshots of the same scene, captured with different exposure times, into a single image. This combination process (also known as fusion) is performed in such a way that the better-exposed parts of each input image have a stronger influence. Therefore, in the resulting image all areas are well exposed. In this paper, we propose a new method that performs MEF and noise removal. Rather than denoising each input image individually and then fusing the obtained results, the proposed strategy jointly performs fusion and denoising in the Discrete Cosine Transform (DCT) domain, which leads to a very efficient algorithm. The method takes advantage of spatio-temporal patch selection and collaborative 3D thresholding. Several experiments show that the obtained results are significantly superior to the existing state of the art.
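The joint fusion-and-denoising idea can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the patch size, the well-exposedness weight, and the 3-sigma hard threshold are illustrative assumptions, and the spatio-temporal patch selection is omitted.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def fuse_denoise_patch(patches, sigma):
    """Fuse same-location patches from several exposures (values in [0,1])
    and denoise by hard-thresholding the fused DCT coefficients."""
    n = patches[0].shape[0]
    C = dct_matrix(n)
    # Well-exposedness weight: favor patches whose mean is near mid-gray.
    w = np.array([np.exp(-((p.mean() - 0.5) ** 2) / (2 * 0.2 ** 2))
                  for p in patches])
    w /= w.sum()
    # Fusion: weighted average directly in the DCT domain.
    coeffs = sum(wi * (C @ p @ C.T) for wi, p in zip(w, patches))
    # Denoising: zero small coefficients, always keeping the DC term.
    mask = np.abs(coeffs) > 3 * sigma
    mask[0, 0] = True
    coeffs = coeffs * mask
    return C.T @ coeffs @ C  # inverse orthonormal DCT
```

Performing both steps on the same transform coefficients is what makes the joint approach cheap: each patch is transformed once, and fusion and thresholding are simple per-coefficient operations.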
In this note, we propose a general definition of shape which is both compatible with the one proposed in phenomenology (gestaltism) and with a computer vision implementation. We reverse the usual order in Computer Vision. We do not define “shape recognition” as a task which requires a “model” pattern which is searched in all images of a certain kind. We give instead a “blind” definition of shapes relying only on invariance and repetition arguments. Given a set of images I, we call shape of this set any spatial pattern which can be found at several locations of some image, or in several different images of I. (This means that the shapes of a set of images are defined without any a priori assumption or knowledge.) The definition is powerful when it is invariant and we prove that the following invariance requirements can be matched in theory and in practice: local contrast invariance, robustness to blur, noise and sampling, affine deformations. We display experiments with single images...
2017 IEEE International Conference on Image Processing (ICIP), 2017
We propose a new video denoising algorithm combining state-of-the-art image and video denoising algorithms. We extend the DDID [1] algorithm to video sequences and then combine it with the SPTWO [2] method. The experiments illustrate how the new method keeps the best of each algorithm, being superior both visually and numerically to other state-of-the-art techniques.
The color histogram (or color cloud) of a digital image displays the colors present in an image regardless of their spatial location and can be visualized in (R,G,B) coordinates. Therefore, it contains essential information about the structure of colors in natural scenes. The analysis and visual exploration of this structure is difficult. The color cloud being thick, its densest points are hidden in the clutter. Thus, it is impossible to properly visualize the cloud density. This paper proposes a visualization method that also enables one to validate a general model for color clouds. It argues first by physical arguments that the color cloud must be essentially a two-dimensional (2D) manifold. A color cloud-filtering algorithm is proposed to reveal this 2D structure. A quantitative analysis shows that the reconstructed 2D manifold is strikingly close to the color cloud and only marginally depends on the filtering parameter. Thanks to this algorithm, it is finally possible to visualize the color cloud density as a gray-level function defined on the 2D manifold.
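The cloud-filtering step can be sketched with a simple mean-shift-style iteration. This is a hypothetical stand-in for the paper's algorithm, not its actual formulation; the neighborhood radius and iteration count are illustrative parameters.

```python
import numpy as np

def filter_color_cloud(colors, radius=0.05, iters=5):
    """Filter an (N, 3) RGB point cloud with values in [0, 1].

    Each point is repeatedly replaced by the centroid of its neighbors
    within `radius`, which concentrates the thick cloud onto its
    underlying low-dimensional (ideally 2D) structure.
    """
    pts = colors.astype(float).copy()
    for _ in range(iters):
        # Pairwise squared distances (brute force; fine for small clouds).
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
        nbr = d2 < radius ** 2          # each point is its own neighbor
        pts = nbr @ pts / nbr.sum(1, keepdims=True)
    return pts
```

After a few iterations the filtered points lie much closer to the dense core of the cloud, so its density can be rendered as a function on the recovered surface rather than lost inside a thick volume.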
Methods and systems for performing object-directed recognition based on two-dimensional images and three-dimensional models. Transforms are used to map features of an object to an image seen from a vantage point, and to map features of images best seen from the vantage points to three-dimensional models. Mapping features and images to the three-dimensional model and then to the image planes of other images, and comparing corresponding features from the mapped images with the original images, allows the coherence between the mapped images and the original images to be determined.
We present a method for the automatic estimation of the minimum set of colors needed to describe an image. We call this minimal set “color palette”. The proposed method combines the well-known K-Means clustering technique with a thorough analysis of the color information of the image. The initial set of cluster seeds used in K-Means is automatically inferred from this analysis. Color information is analyzed by studying the 1D histograms associated with the hue, saturation and intensity components of the image colors. In order to achieve a proper parsing of these 1D histograms, a new histogram segmentation technique is proposed. The experimental results seem to endorse the capacity of the method to obtain the most significant colors in the image, even if they belong to small details in the scene. The obtained palette can be combined with a dictionary of color names in order to provide a qualitative image description.
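The histogram-driven seeding idea can be sketched on a single 1D channel. Picking seeds from the most populated histogram bins is an illustrative stand-in for the paper's histogram segmentation technique, and the bin count and iteration limit are assumed parameters.

```python
import numpy as np

def histogram_peak_seeds(channel, bins=32, k=4):
    """Pick k K-Means seeds from the k most populated bins of a 1D
    histogram of channel values in [0, 1] (stand-in for the paper's
    histogram-segmentation step)."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    top = np.argsort(hist)[-k:]
    return (edges[top] + edges[top + 1]) / 2  # bin centers

def kmeans_palette(pixels, seeds, iters=10):
    """Plain 1D K-Means on channel values, initialized from the seeds
    instead of random starts, so clusters land on histogram modes."""
    centers = np.sort(np.asarray(seeds, dtype=float))
    for _ in range(iters):
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for j in range(len(centers)):
            members = pixels[labels == j]
            if members.size:
                centers[j] = members.mean()
    return centers
```

Seeding from histogram peaks rather than at random is what lets small but visually distinct color details survive: a narrow histogram mode still gets its own seed, and K-Means then only refines it.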
When analyzing the RGB distribution of colors in natural images we notice that they are organized into spatial structures. This observation is not new, quoting Omer and Werman in [5]: “... when looking at the RGB histogram of
One of the aims of computer vision in the past 30 years has been to recognize shapes by numerical algorithms. Now, what are the geometric features on which shape recognition can be based? In this paper, we review the mathematical arguments leading to a unique definition of planar shape elements. This definition is derived from the requirement of invariance to no fewer than five classes of perturbations, namely noise, affine distortion, contrast changes, occlusion, and background. This leads to a single possibility: shape elements as the normalized, affine-smoothed pieces of level lines of the image. As a main possible application, we show the existence of a generic image comparison technique able to find all shape elements common to two images.
Papers by J. Lisani