Mixture of the Robust L1 Distributions and Its Applications

  • Conference paper
AI 2007: Advances in Artificial Intelligence (AI 2007)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4830)

Abstract

A robust probabilistic L1-PCA model was recently introduced in [1] by replacing the conventional Gaussian noise model with the Laplacian (L1) model. Owing to the heavy tails of the L1 distribution, the model is more robust against data outliers. In this paper, we generalize L1-PCA to a mixture of L1 distributions so that the model can handle data with multiple clusters. For model learning we use the property that the L1 density can be expanded as an infinite superposition of Gaussian densities, which yields tractable Bayesian learning and inference based on a variational EM-type algorithm.
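As context for the Gaussian expansion mentioned in the abstract: a Laplace density is exactly a continuous mixture of zero-mean Gaussians whose variance is exponentially distributed. The Python sketch below is only an illustrative numerical check of this identity; the variable names and parameter values are ours, and it is not the paper's variational EM implementation.

```python
import numpy as np
from scipy import stats

# Gaussian scale-mixture identity behind the L1 (Laplace) noise model:
# if v ~ Exponential(mean = 2 * b**2) and x | v ~ N(0, v), then
# marginally x ~ Laplace(0, b). All names and values here are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
b = 1.5          # Laplace scale parameter (illustrative choice)
n = 200_000      # number of Monte Carlo samples

v = rng.exponential(scale=2 * b**2, size=n)  # latent Gaussian variances
x = rng.normal(loc=0.0, scale=np.sqrt(v))    # Gaussian draw given each variance

# Compare the empirical density of x with the Laplace(0, b) pdf.
hist, edges = np.histogram(x, bins=200, range=(-8, 8), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
err = np.abs(hist - stats.laplace.pdf(centers, loc=0.0, scale=b)).max()
print(f"max |empirical - Laplace pdf| = {err:.4f}")  # small, e.g. ~0.01
```

Because each L1 noise term can be traded for a Gaussian term plus a latent exponential variance, those variances can be treated as extra hidden variables, which is the kind of augmentation that makes variational EM-type updates tractable.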

References

  1. Gao, J.: Robust L1 principal component analysis and its Bayesian variational inference. Neural Computation (to appear, 2008)

  2. McLachlan, G., Peel, D.: Finite Mixture Models. John Wiley, New York (2000)

  3. Dempster, A., Laird, N., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society, Ser. B 39, 1–38 (1977)

  4. Tipping, M., Bishop, C.: Mixtures of probabilistic principal component analyzers. Neural Computation 11, 443–482 (1999)

  5. Verbeek, J.: Learning nonlinear image manifolds by global alignment of local linear models. IEEE Trans. on Pattern Analysis and Machine Intelligence 28(8), 1236–1250 (2006)

  6. Bhowmick, D., Davison, A., Goldstein, D., Ruffieux, Y.: A Laplace mixture model for identification of differential expression in microarray experiments. Biostatistics 7, 630–641 (2006)

  7. Jordan, M.: Graphical models. Statistical Science (Special Issue on Bayesian Statistics) 19, 140–155 (2004)

  8. Peel, D., McLachlan, G.: Robust mixture modelling using the t distribution. Statistics and Computing 10, 339–348 (2000)

  9. de Ridder, D., Franc, V.: Robust subspace mixture models using t-distributions. In: Harvey, R., Bangham, A. (eds.) Proceedings of the 14th British Machine Vision Conference (BMVC 2003), pp. 319–328 (2003)

  10. Archambeau, C.: Probabilistic models in noisy environments and their application to a visual prosthesis for the blind. Doctoral dissertation, Université Catholique de Louvain, Belgium (2005)

  11. Tibshirani, R.: Regression shrinkage and selection via the lasso. J. Royal Statistical Society, Ser. B 58, 267–288 (1996)

  12. Ng, A.: Feature selection, L1 vs. L2 regularization, and rotational invariance. In: Proceedings of the International Conference on Machine Learning (2004)

  13. Jolliffe, I.: Principal Component Analysis, 2nd edn. Springer, New York (2002)

  14. Zou, H., Hastie, T., Tibshirani, R.: Sparse principal component analysis. Technical report, Statistics Department, Stanford University (2004)

  15. Park, H.J., Lee, T.W.: Modeling nonlinear dependencies in natural images using mixture of Laplacian distribution. In: Saul, L.K., Weiss, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems 17, pp. 1041–1048. MIT Press, Cambridge (2005)

  16. Gao, J., Gunn, S., Kandola, J.: Adapting kernels by variational approach in SVM. In: McKay, B., Slaney, J.K. (eds.) AI 2002. LNCS (LNAI), vol. 2557, pp. 395–406. Springer, Heidelberg (2002)

  17. Tipping, M., Lawrence, N.: Variational inference for Student-t models: Robust Bayesian interpolation and generalized component analysis. Neurocomputing 69, 123–141 (2005)

  18. Pontil, M., Mukherjee, S., Girosi, F.: On the noise model of support vector machine regression. A.I. Memo 1651, AI Laboratory, MIT, Cambridge (1998)

  19. Guo, Y., Gao, J.B., Kwan, P.W.: Kernel Laplacian eigenmaps for visualization of non-vectorial data. In: Sattar, A., Kang, B.-H. (eds.) AI 2006. LNCS (LNAI), vol. 4304, pp. 1179–1183. Springer, Heidelberg (2006)

  20. Guo, Y., Gao, J.B., Kwan, P.W.: Visualization of non-vectorial data using twin kernel embedding. In: Ong, K., Smith-Miles, K., Lee, V., Ng, W. (eds.) Proceedings of the International Workshop on Integrating AI and Data Mining (AIDM 2006), pp. 11–17. IEEE Computer Society Press, Los Alamitos (2006)

Editor information

Mehmet A. Orgun, John Thornton

Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gao, J., Xu, R.Y. (2007). Mixture of the Robust L1 Distributions and Its Applications. In: Orgun, M.A., Thornton, J. (eds.) AI 2007: Advances in Artificial Intelligence. Lecture Notes in Computer Science (LNAI), vol. 4830. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-76928-6_5

  • DOI: https://doi.org/10.1007/978-3-540-76928-6_5

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-76926-2

  • Online ISBN: 978-3-540-76928-6

  • eBook Packages: Computer Science, Computer Science (R0)
