
FMD-cGAN: Fast Motion Deblurring Using Conditional Generative Adversarial Networks

Conference paper in Computer Vision and Image Processing (CVIP 2021)

Abstract

In this paper, we present a Fast Motion Deblurring-Conditional Generative Adversarial Network (FMD-cGAN) that performs blind motion deblurring of a single image. FMD-cGAN delivers impressive structural similarity and visual appearance after deblurring an image. Like other deep neural network architectures, GANs suffer from large model size (parameters) and heavy computation, which makes them hard to deploy on resource-constrained devices such as mobile phones and robots. Using a MobileNet [1]-based architecture built on depthwise separable convolutions, we reduce the model size and inference time without losing image quality. More specifically, we reduce the model size by 3x to 60x compared to the nearest competitor. The resulting compressed deblurring cGAN is faster than its closest competitors, and its qualitative and quantitative results outperform various recently proposed state-of-the-art blind motion deblurring models. Our model can also be used for real-time image deblurring tasks. Experiments on standard datasets demonstrate the effectiveness of the proposed method.
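As a rough illustration of why depthwise separable convolutions shrink a model (a sketch of the MobileNet [1] idea, not the paper's exact layer configuration; the channel and kernel sizes below are hypothetical), the weight counts of a standard convolution and its depthwise separable replacement can be compared:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ds_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution, as in MobileNet [1]."""
    depthwise = c_in * k * k
    pointwise = c_in * c_out
    return depthwise + pointwise

# Hypothetical layer: 128 -> 128 channels, 3x3 kernel.
standard = conv_params(128, 128, 3)      # 147456 weights
separable = ds_conv_params(128, 128, 3)  # 17536 weights
print(standard, separable, round(standard / separable, 1))
```

The reduction factor is roughly 1/c_out + 1/k^2, so for 3x3 kernels the per-layer savings approach 9x; the 3x to 60x model-size reductions reported above also reflect other architectural choices.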


Notes

  1. https://pytorch.org/
  2. https://github.com/sksq96/pytorch-summary
  3. https://github.com/zhijian-liu/torchprofile
  4. https://github.com/SeungjunNah/DeepDeblur-PyTorch

References

  1. Howard, A.G., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)

  2. Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. arXiv e-prints (2015)

  3. Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., Matas, J.: DeblurGAN: blind motion deblurring using conditional adversarial networks. In: CVPR (2018)

  4. Gong, D., et al.: From motion blur to motion flow: a deep learning solution for removing heterogeneous motion blur. IEEE (2017)

  5. Sun, J., Cao, W., Xu, Z., Ponce, J.: Learning a convolutional neural network for non-uniform motion blur removal. In: CVPR (2015)

  6. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016)

  7. Isola, P., Zhu, J.Y., Zhou, T., Efros, A.: Image-to-image translation with conditional adversarial networks. In: CVPR (2017)

  8. Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9906, pp. 694–711. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46475-6_43

  9. Hansen, P.C., Nagy, J.G., O'Leary, D.P.: Deblurring Images: Matrices, Spectra, and Filtering. SIAM (2006)

  10. Almeida, M.S.C., Almeida, L.B.: Blind and semi-blind deblurring of natural images. IEEE (2010)

  11. Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding blind deconvolution algorithms. IEEE (2011)

  12. Szeliski, R.: Computer Vision: Algorithms and Applications. Springer, London (2011). https://doi.org/10.1007/978-1-84882-935-0

  13. Richardson, W.H.: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 62(1), 55–59 (1972)

  14. Wiener, N.: Extrapolation, Interpolation, and Smoothing of Stationary Time Series, with Engineering Applications. Technology Press of MIT (1950)

  15. Cai, X., Song, B.: Semantic object removal with convolutional neural network feature-based inpainting approach. Multimedia Syst. 24(5), 597–609 (2018)

  16. Chen, J., Tan, C.H., Hou, J., Chau, L.P., Li, H.: Robust video content alignment and compensation for rain removal in a CNN framework. In: CVPR (2018)

  17. Luan, F., Paris, S., Shechtman, E., Bala, K.: Deep photo style transfer. In: CVPR (2017)

  18. Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE (2017)

  19. Ledig, C., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: CVPR (2017)

  20. Zhang, J., et al.: Dynamic scene deblurring using spatially variant recurrent neural networks. In: CVPR (2018)

  21. Sun, J., Cao, W., Xu, Z., Ponce, J.: Learning a convolutional neural network for non-uniform motion blur removal. In: CVPR (2015)

  22. Nah, S., Kim, T.H., Lee, K.M.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: CVPR (2017)

  23. Huang, G., Liu, Z., van der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR (2017)

  24. Ramakrishnan, S., Pachori, S., Gangopadhyay, A., Raman, S.: Deep generative filter for motion deblurring. In: ICCVW (2017)

  25. Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein GAN. arXiv preprint arXiv:1701.07875 (2017)

  26. Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.C.: Improved training of Wasserstein GANs. In: Advances in Neural Information Processing Systems (2017)

  27. Kupyn, O., Martyniuk, T., Wu, J., Wang, Z.: DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better. In: ICCV (2019)

  28. Lin, T., Dollár, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: CVPR (2017)

  29. Lai, W., Huang, J.B., Hu, Z., Ahuja, N., Yang, M.H.: A comparative study for single image blind deblurring. In: CVPR (2016)

  30. Goodfellow, I.J., et al.: Generative adversarial nets. In: NIPS (2014)

  31. Lim, J.H., Ye, J.C.: Geometric GAN. arXiv preprint arXiv:1705.02894 (2017)

  32. Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-attention generative adversarial networks. arXiv:1805.08318 (2019)

  33. Ulyanov, D., Vedaldi, A., Lempitsky, V.S.: Instance normalization: the missing ingredient for fast stylization. CoRR, abs/1607.08022 (2016)

  34. Agarap, A.F.: Deep learning using rectified linear units (ReLU). arXiv:1803.08375 (2019)

  35. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)

  36. He, K., Zhang, X., Ren, S., Sun, J.: Identity mappings in deep residual networks. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9908, pp. 630–645. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46493-0_38

  37. Liang, C.H., Chen, Y.A., Liu, Y.C., Hsu, W.H.: Raw image deblurring. IEEE Trans. Multimedia (2020)

  38. Nah, S., et al.: NTIRE 2019 challenge on video deblurring and super-resolution: dataset and study. In: CVPR Workshops (2019)

  39. Tao, X., Gao, H., Shen, X., Wang, J., Jia, J.: Scale-recurrent network for deep image deblurring. In: CVPR (2018)

  40. Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)

  41. Xu, L., Zheng, S., Jia, J.: Unnatural L0 sparse representation for natural image deblurring. In: CVPR (2013)

  42. Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z.: Least squares generative adversarial networks. arXiv:1611.04076 (2016)

  43. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR (2015)

  44. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. CoRR, abs/1311.2901 (2013)

  45. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. CoRR, abs/1412.6980 (2014)

  46. Ren, W., Cao, X., Pan, J., Guo, X., Zuo, W., Yang, M.H.: Image deblurring via enhanced low-rank prior. IEEE Trans. Image Process. 25(7), 3426–3437 (2016)

  47. Pan, J., Sun, D., Pfister, H., Yang, M.H.: Blind image deblurring using dark channel prior. In: CVPR (2016)

  48. Nah, S.: DeepDeblur-PyTorch. https://github.com/SeungjunNah/DeepDeblur-PyTorch

  49. Gong, X., Chang, S., Jiang, Y., Wang, Z.: AutoGAN: neural architecture search for GANs. In: ICCV (2019)


Author information

Correspondence to Jatin Kumar.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 1297 KB)


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kumar, J., Mastan, I.D., Raman, S. (2022). FMD-cGAN: Fast Motion Deblurring Using Conditional Generative Adversarial Networks. In: Raman, B., Murala, S., Chowdhury, A., Dhall, A., Goyal, P. (eds) Computer Vision and Image Processing. CVIP 2021. Communications in Computer and Information Science, vol 1568. Springer, Cham. https://doi.org/10.1007/978-3-031-11349-9_32


  • DOI: https://doi.org/10.1007/978-3-031-11349-9_32

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-11348-2

  • Online ISBN: 978-3-031-11349-9

  • eBook Packages: Computer Science, Computer Science (R0)
