ICRA19_porav_Images_Restoration_De_raining
I. INTRODUCTION
If we want machines to work outdoors and see while doing so, they have to work in the rain. When rain and lenses interact, computer vision becomes harder: wild local distortions of the image appear which dramatically impede image understanding tasks. However, the distortions are not noise; they are structured: the light field is simply bent and attenuated, and accordingly can be modelled and reversed. In this work we develop a filter which, as a pre-processing step, removes the effect of raindrops on lenses. Several tasks are affected by the presence of adherent water droplets on camera lenses or enclosures, such as semantic segmentation [1], localisation using segmentation [2], [3] or road marking segmentation [4]. In this paper we choose to use segmentation as an example task by which to test the effectiveness of our method. Many approaches so far have reached for multi-modal data [5], domain adaptation [6], [7] or training on synthetic data [8]; however, this can become awkward as:
1) Acquiring rainy images is time-consuming, expensive or impossible for many tasks or setups, especially in the case of supervised training, where ground truth data is needed.
2) Training, domain-adapting or fine-tuning each individual task with augmented data is intractable.
We take a different approach and build a system as an image preprocessor, the output of which is a cleaned, de-rained image that improves the performance of many tasks performed on the image.
We begin by creating a bespoke real-world small-baseline stereo dataset where one lens is affected by real water droplets and the other is kept dry. The methodology and apparatus for doing so are presented in section IV-A. Using this dataset, we train a de-raining generator and show that it is able to both drastically improve the visual quality of images and restore performance on road marking segmentation tasks.
Secondly, we describe a way of efficiently adding computer-generated adherent rain droplets and adherent streaks to any image using GPU shaders. This system is presented in section III-A. As the Cityscapes dataset provides a good ground truth for segmentation but does not contain images with significant rain on the lens, we modify it using this technique and use it as a proxy to study the effects of rain on general semantic segmentation. Additionally, we create a synthetic rain dataset by adding computer-generated rain drops to the full Oxford RobotCar dataset [9] and to the CamVid [10] dataset.
Our main contributions include:
• a de-raining model that produces state-of-the-art results;
• using computer-generated water drops as a proxy to study the effects of rain on segmentation for datasets that provide a ground truth but do not normally contain rainy images; and
• a real-world very-narrow-baseline stereo dataset with rainy & clear images covering a wide array of dynamic scenes.
Our aim is to show that pre-processing the image leads to better performance as compared to training, retraining or fine-tuning a task-specific model with rain-augmented data.

Fig. 1. We learn a de-noising generator that can remove noise and artefacts induced by the presence of adherent rain droplets and streaks. On the top left, input images that are affected by real rain drops. On the top right, the cleaned, de-rained images. On the bottom left, input images that are affected by computer-generated rain drops. On the bottom right, the cleaned, de-rained images.

Authors are from the Oxford Robotics Institute, University of Oxford, UK. {horia,tombruls,pnewman}@robots.ox.ac.uk
A proto-raindrop is created using a simple refractive model that assumes a pinhole camera. The refraction angle is encoded following a scheme similar to normal mapping [29] by using a 2D look-up table represented by the RED and GREEN channels of a texture T, with the thickness of the drop encoded in the BLUE channel of the same texture. This texture T is then masked using an alpha layer that allows blending of the water drops with the background image and other drops, as shown in Figure 3a. With the drop acting as a simple lens, the coordinate (x_r, y_r) of the world point that is rendered at the location (u, v) on the surface of a drop is given by the following simplified distortion model:

x_r = u + (R * B)    (1)
y_r = v + (G * B).   (2)

Each image location (u, v) has a probability P_r of becoming the center of a proto-raindrop whose dimensions are scaled along the horizontal and vertical directions by a tuple of random values S_x and S_y. For each timestep, the center of a droplet may undergo a slip of D_x pixels along the horizontal and D_y pixels along the vertical direction as a function of the droplet diameter d:

D_x, D_y = \begin{cases} 0,\ 0 & d \le 4\,\text{mm} \\ x \sim \mathcal{N}(0, 3),\ P_d * 5 & d > 4\,\text{mm}, \end{cases}

where P_d represents the probability of slip along the vertical direction and x denotes the random deviation of the slip along the horizontal direction. We empirically choose a maximum of 5 pixels of vertical displacement.
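As an illustration of how the look-up-table distortion of equations (1)-(2) can be applied outside a GPU shader, the sketch below warps a background image through the R, G and B channels of a drop texture and alpha-blends the result. It is a minimal NumPy approximation under our own assumptions: the offset scale, the centring of the look-up values around 0.5 and the blending are not taken from the paper.

```python
import numpy as np

def apply_drop_texture(image, texture, offset_scale=30.0):
    """Warp `image` through the drop texture described above.

    image:        HxWx3 float array in [0, 1] (the background image).
    texture:      HxWx4 float array in [0, 1]; R and G hold the refraction
                  look-up table, B the drop thickness, A the blending mask.
    offset_scale: assumed pixel scale of the refraction offsets (not taken
                  from the paper).
    """
    h, w = image.shape[:2]
    v, u = np.mgrid[0:h, 0:w]                      # per-pixel (row, column) grid
    r, g, b, a = (texture[..., i] for i in range(4))

    # Equations (1)-(2): deflect each pixel by the look-up-table entry,
    # scaled by the drop thickness; 0.5 is treated as "no deflection".
    x_r = np.clip(np.rint(u + (r - 0.5) * b * offset_scale), 0, w - 1).astype(int)
    y_r = np.clip(np.rint(v + (g - 0.5) * b * offset_scale), 0, h - 1).astype(int)

    refracted = image[y_r, x_r]                    # sample the scene through the drops
    alpha = a[..., None]
    return alpha * refracted + (1.0 - alpha) * image   # blend drops over the background
```

In the shader implementation described in the text this sampling happens per fragment on the GPU; the NumPy version above only serves to make the geometry of the look-up table concrete.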
For each timestep, droplets that are close to each other are merged using the metaballs approach [20], as shown in Figure 3b. By default, each texture location T(u, v) that does not fall under a droplet encodes a normal that is perpendicular to the background image. Finally, the image is sampled using the normal map defined by the texture T to produce a result similar to the one in the top-left corner of Fig. 1.
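The merging step can be sketched with the standard metaball formulation of [20]: each droplet contributes an r^2/d^2 field and the union of droplets is the region where the summed field exceeds an iso-level. The threshold and field shape below are generic choices, not the paper's exact parameters.

```python
import numpy as np

def metaball_mask(height, width, centers, radii, threshold=1.0):
    """Merge nearby droplets into single blobs with Blinn's metaballs [20].

    centers:   iterable of (cx, cy) droplet centres in pixels.
    radii:     iterable of droplet radii in pixels.
    threshold: iso-level of the summed field; 1.0 is the conventional choice
               and is our assumption, not a value from the paper.
    """
    y, x = np.mgrid[0:height, 0:width].astype(np.float64)
    field = np.zeros((height, width))
    for (cx, cy), r in zip(centers, radii):
        d2 = (x - cx) ** 2 + (y - cy) ** 2
        field += (r ** 2) / np.maximum(d2, 1e-6)   # classic r^2 / d^2 falloff
    return field >= threshold                      # overlapping drops fuse smoothly
```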
Using this technique we have created three synthetic rain datasets:
• synthetic rain added to CamVid, complete with road marking ground truth;
• synthetic rain added to Cityscapes, complete with semantic segmentation ground truth; and
• synthetic rain added to the dry images from our stereo dataset, complete with road marking ground truth.
We motivate the addition of skip connections by observing that most of the structure of the input image should be kept, along with illumination levels and fine details.
To promote better generalization and inpainting, we refrain from using any direct pixel-wise loss and instead use a combination of adversarial, perceptual, and multi-scale discriminator feature losses. The discriminator architecture is a CNN with 5 layers, similar to PatchGAN [32]. We present the full structure of the losses in the next section.

C. Losses
Similar to [33], we apply an adversarial loss through a discriminator on the output of the generator. This loss is formulated as:

L_{adv} = (D(G(I_{rainy})) - 1)^2.    (3)

The discriminator is trained to minimize the following loss:

L_{disc} = (D(I_{clear}) - 1)^2 + (D(I_{de-rained}))^2,    (4)

where I_{de-rained} is sampled from a pool of previously de-rained images.
The perceptual loss [34] is applied between the label and reconstructed image:

L_{perc} = \sum_{i=1}^{n_{VGG}} \frac{1}{w_i^{perc}} \| VGG(I_{clear})_i - VGG(G(I_{rainy}))_i \|_1,    (5)

where n_{VGG} represents the number of VGG layers that are used to compute the loss and w_i^{perc} = 2^{(n_{VGG} - i)} weighs the importance of each layer.
Additionally, a multi-scale discriminator feature loss [30] is applied between the label and reconstructed image:

L_{msadv} = \sum_{i=1}^{n_{ADV}} \frac{1}{w_i^{adv}} \| D(I_{clear})_i - D(G(I_{rainy}))_i \|_1,    (6)

where n_{ADV} represents the number of discriminator layers that are used to compute the loss and w_i^{adv} = 2^{(n_{ADV} - i)} weighs the importance of each layer.
The complete generator objective L_{gen} becomes:

L_{gen} = \lambda_{adv} * L_{adv} + \lambda_{perc} * L_{perc} + \lambda_{msadv} * L_{msadv}.    (7)

Each λ term is a hyperparameter that weights the importance of each term of the loss equation. We wish to estimate the generator G and discriminator D functions such that:

G, D = \arg\min_{G, D} L_{gen} + L_{disc}.    (8)
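For concreteness, the sketch below expresses losses (3)-(7) in PyTorch. It assumes the VGG activations and the per-layer discriminator activations are already available as lists of tensors; the function names, the per-element mean used for the L1 terms, and this arrangement are our illustrative reading of the equations, not the released implementation.

```python
import torch

def adversarial_loss(d_fake):
    """Eq. (3): least-squares adversarial loss on the de-rained output."""
    return torch.mean((d_fake - 1.0) ** 2)

def discriminator_loss(d_clear, d_derained):
    """Eq. (4): clear images pushed towards 1, pooled de-rained images towards 0."""
    return torch.mean((d_clear - 1.0) ** 2) + torch.mean(d_derained ** 2)

def weighted_feature_loss(feats_clear, feats_fake):
    """Eqs. (5)-(6): L1 distance between corresponding feature maps, with
    layer i divided by w_i = 2^(n - i) so that deeper layers weigh more.
    The per-element mean used for the L1 term is our simplification."""
    n = len(feats_clear)
    loss = 0.0
    for i, (fc, ff) in enumerate(zip(feats_clear, feats_fake), start=1):
        loss = loss + torch.mean(torch.abs(fc - ff)) / (2.0 ** (n - i))
    return loss

def generator_loss(d_fake, vgg_clear, vgg_fake, disc_clear, disc_fake,
                   lam_adv=1.0, lam_perc=1.0, lam_msadv=1.0):
    """Eq. (7): weighted sum of the adversarial, perceptual and multi-scale
    discriminator feature losses; there is no direct pixel-wise term."""
    return (lam_adv * adversarial_loss(d_fake)
            + lam_perc * weighted_feature_loss(vgg_clear, vgg_fake)
            + lam_msadv * weighted_feature_loss(disc_clear, disc_fake))
```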
Fig. 4. CamVid road marking segmentation results. From left to right: rainy input image, segmentation result on rainy image, derained input image,
segmentation result on derained image.
Fig. 5. RobotCar road marking segmentation results. First column shows a RobotCar(R) real rain image and segmentation result. Second column shows
the derained real rain image and segmentation result. Third column shows a RobotCar(S) computer-generated rain image and segmentation result. Fourth
column shows the derained computer-generated rain image and segmentation result.
Fig. 6. Cityscapes semantic segmentation results. The first row shows a rainy image on the left and its corresponding semantic segmentation on the right.
The second row shows the derained image on the left and its corresponding semantic segmentation on the right.
Fig. 7. An example from our stereo dataset. The image on the left is produced by the left lens, which is affected by water drops. The image in the middle
is produced by the dry right hand lens. The image on the right is the road marking segmentation ground truth.
the right-hand section is sprayed with water droplets using an internal nozzle fitted at the top of the chamber. The angle of this chamber with respect to the axes of the cameras can be modified to simulate a slanted windscreen or enclosure, and the distance from the lenses can be increased or decreased accordingly to replicate different levels of focus or blur on the droplets.
The nozzle spans the entire width of the right chamber and is capable of producing water droplets with a diameter between 1 mm and 8 mm, as well as streaks of water. This variability is achieved by modulating the water pressure using a number of pulse width modulation regimes. The water is drained from the bottom of the chamber and is returned to a storage tank for recirculation. The cameras used are Point Grey Grasshopper 2 with 4.5 mm F/1.4 lenses, a baseline of 29 mm and automatic synchronisation. The system is fully portable and the water is completely contained within the circuit formed by the right chamber, pump and tank.
We have collected approximately 50000 pairs of images by driving in and around the city of Oxford. The image pairs are undistorted, cropped and aligned. We have selected 4818 image pairs to form a training, validation and testing dataset. From the testing partition, we have created ground truth road marking segmentations for 500 images. An example from our dataset is shown in Figure 7.
Compared to the painstakingly-collected dataset of [11], our setup is a set-and-forget approach: once the stereo camera has been mounted on a vehicle, it is trivial to collect large amounts of well-synchronised and well-aligned pairs of images.

Fig. 8. Our small-baseline stereo camera setup. A bi-partite chamber with acrylic clear panels is placed in front of the lenses, with the left-hand section being kept dry at all times, while the right-hand section is sprayed with water droplets using an internal nozzle.

B. Training
We used a network training regimen similar to [30]. For each iteration we first trained the discriminator on a clear image and a de-rained image from a previous iteration with the goal of minimizing L_disc, and then trained the generator on rainy input images to minimize L_gen. We used the Adam solver [35] with an initial learning rate set at 0.0002, a batch size of 1, λ_adv = 1, λ_perc = 1 and λ_msadv = 1.
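A minimal PyTorch-style sketch of this alternating regimen, reusing the loss helpers sketched earlier, is given below. `Generator`, `Discriminator`, `ImagePool`, `loader`, `vgg_features` and `disc_features` are hypothetical placeholders for the components described in the paper; only the update order and the optimiser settings (Adam, learning rate 0.0002, batch size 1, all lambdas equal to 1) come from the text.

```python
import torch

# Hypothetical components standing in for the networks and data described
# in the paper; only the update order and the settings below come from the text.
netG, netD = Generator(), Discriminator()          # hypothetical classes
pool = ImagePool()                                  # pool of past de-rained images
opt_G = torch.optim.Adam(netG.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(netD.parameters(), lr=2e-4)

for rainy, clear in loader:                         # paired loader, batch size 1
    # 1) Discriminator step: a clear image vs. a previously de-rained image.
    with torch.no_grad():
        derained = netG(rainy)
    loss_D = discriminator_loss(netD(clear), netD(pool.query(derained)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # 2) Generator step on the rainy input, minimising L_gen (Eq. 7).
    derained = netG(rainy)
    loss_G = generator_loss(
        netD(derained),
        vgg_features(clear), vgg_features(derained),        # hypothetical helpers
        disc_features(netD, clear), disc_features(netD, derained),
        lam_adv=1.0, lam_perc=1.0, lam_msadv=1.0)
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()
```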
C. Segmentation Tasks
We used the trained generator G to de-rain all of the rainy input images. To benchmark both the images with computer-generated water drops and the images with real water drops in the context of road marking segmentation, we used the approach of [4], which trains a U-Net to segment road markings in a binary way. To benchmark the computer-generated water drop images in the context of semantic segmentation, we used DeepLab v3 [36], which has achieved state-of-the-art performance on the Cityscapes dataset.
The generator runs at approximately 1 Hz for images with a resolution of 1280 × 960, and at approximately 3 Hz for images with a resolution of 640 × 480 on an Nvidia Titan X GPU.
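The de-raining network is used purely as a front end: any downstream model consumes its output unchanged. A schematic sketch of that usage is shown below; `derain_generator` and `segmentation_model` are hypothetical handles for the trained generator G and for a segmentation network such as the U-Net of [4] or DeepLab v3 [36].

```python
import torch

def segment_with_deraining(rainy_batch, derain_generator, segmentation_model):
    """De-rain first, then segment; both models are assumed trained and in eval mode.

    rainy_batch: Nx3xHxW tensor of rain-affected images.
    Returns the cleaned images and the segmentation output computed on them.
    """
    with torch.no_grad():
        cleaned = derain_generator(rainy_batch)      # our preprocessing step
        prediction = segmentation_model(cleaned)     # e.g. road-marking or semantic logits
    return cleaned, prediction
```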
V. RESULTS
We benchmark our results taking into consideration several metrics across several tasks, and also present results on the quality of the image reconstruction.

A. Quantitative results
Table I presents results for road marking segmentation in the case of RobotCar with real water drops (R), RobotCar with computer-generated water drops (S) and CamVid with computer-generated water drops (S). Our baseline is represented by the performance of clear images tested on models that were trained using clear images (REFERENCE). For the RobotCar (R), RobotCar (S) and CamVid (S) datasets, the results show a severely degraded performance when testing rainy images on models that were trained using clear images (RAINY). Retraining the road marking segmentation models with a dataset augmented with rainy images leads to an improvement in performance (AUGM). However, de-raining the images using our method and testing them on a model trained using clear images (DERAINED) restores the performance of the segmentation to levels that are close to the baseline recorded on clear images. Figure 4 shows road marking segmentation results on CamVid, before and after deraining. Figure 5 shows road marking segmentation results on RobotCar (R) & (S), before and after deraining.
As expected, re-training the segmentation model with a dataset that is augmented with rainy images helps to improve performance; however, using a specialised de-raining preprocessing step significantly outperforms this approach, even when tested on a model trained exclusively with clear images. This is the expected advantage of having a model dedicated, in its entirety, to a specific image-to-image mapping task (de-raining), which narrows the variety of images fed to the segmentation task.
Table II presents results for semantic segmentation on the Cityscapes dataset. We benchmark 4 different combinations of models and datasets:
• Cityscapes-clear images tested on a model trained using Cityscapes-clear images;
• Cityscapes-rainy images tested on a model trained using Cityscapes-clear images;
• Cityscapes-rainy images tested on a model trained using Cityscapes-clear and Cityscapes-rainy images; and
• Cityscapes-derained images (Cityscapes-rainy images preprocessed using our de-raining model) tested on a model trained using Cityscapes-clear images.
TABLE I
ROAD MARKING SEGMENTATION RESULTS

            |   REFERENCE (CLEAR)     |          RAINY          |          AUGM.          |        DERAINED
Dataset     | Prec.  Rec.   F1    IOU | Prec.  Rec.   F1    IOU | Prec.  Rec.   F1    IOU | Prec.  Rec.   F1    IOU
RobotCar(R) | 0.627  0.918 0.734 0.594| 0.512  0.628 0.550 0.396| 0.486  0.807 0.593 0.434| 0.603  0.841 0.689 0.544
RobotCar(S) | 0.627  0.918 0.734 0.594| 0.364  0.595 0.437 0.287| 0.654  0.770 0.690 0.541| 0.661  0.816 0.715 0.569
CamVid(S)   | 0.576  0.927 0.699 0.551| 0.353  0.576 0.425 0.279| 0.457  0.771 0.563 0.405| 0.520  0.755 0.603 0.444
TABLE II
CITYSCAPES SEMANTIC SEGMENTATION RESULTS

Cityscapes Model vs. Dataset | mIOU
CLEAR on CLEAR               | 0.692
RAINY on CLEAR               | 0.405
RAINY on AUGMENTED           | 0.611
DERAINED on CLEAR            | 0.651

TABLE III
RECONSTRUCTION RESULTS

                    |      RAW       |    DERAINED
Dataset             | PSNR    SSIM   | PSNR    SSIM
RobotCar-Rainy(R)   | 13.02   0.5574 | 22.82   0.8188
RobotCar-Rainy(S)   | 16.80   0.6134 | 25.17   0.8699
CamVid-Rainy(S)     | 16.89   0.6064 | 22.11   0.7524
Qian et al. [11](R) | 24.09   0.8518 | 31.55   0.9020

TABLE IV
RECONSTRUCTION QUALITY COMPARISON TO STATE OF THE ART (DATASET FROM [11])

Model vs. Dataset            | PSNR  | SSIM
Original                     | 24.09 | 0.8518
Eigen13 [21]                 | 28.59 | 0.6726
Pix2Pix [37]                 | 30.14 | 0.8299
Qian et al. (no att.) [11]   | 30.88 | 0.8670
Qian et al. (full att.) [11] | 31.51 | 0.9213
Ours (no att.)               | 31.55 | 0.9020
Similar to the case of road marking segmentation, we notice the same severe degradation of performance when testing with rainy images (RAINY on CLEAR) as compared to the baseline (CLEAR on CLEAR). Again, the performance of derained images tested on a model trained using clear images (DERAINED on CLEAR) is significantly better than the performance of rainy images tested on a model trained using a dataset augmented with rainy images (RAINY on AUGMENTED). Figure 6 shows semantic segmentation results on Cityscapes, before and after deraining.

B. Reconstruction results
Table III presents results on the quality of the image reconstruction using two widely used image-quality metrics, peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We benchmark our model on our real-world RobotCar-Rainy (R) dataset, RobotCar-Rainy with computer-generated rain (S), CamVid-Rainy with computer-generated rain (S), and on the dataset provided by [11]. The RAW column shows the quality of the rainy images, while the DERAINED column shows the quality of the de-rained images, all relative to their clear ground truth. We show that in all cases, de-raining the rain-affected images using our preprocessor significantly increases the quality of the images, as compared to the reference case where raw rainy images are used. Both the real-world rainy dataset images and the images with computer-generated rain are significantly more degraded than the rainy images provided by [11], as seen in column RAW.
Table IV presents reconstruction results on the reference rainy dataset provided by [11]. We show that we achieve state-of-the-art PSNR reconstruction results on images affected by real water drops and only slightly lower SSIM, while, in contrast to [11], not requiring an attention [28] mechanism, which simplifies and speeds up inference and training.
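For reference, the two metrics can be computed per image pair as in the sketch below: PSNR is written out directly from its definition, while SSIM is delegated to scikit-image. The function and argument names are ours, and the exact evaluation protocol (data range, channel handling) is assumed rather than taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio of `test` against the clear `reference`."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def reconstruction_quality(clear, derained):
    """PSNR and SSIM of a de-rained image against its clear ground truth.

    Both inputs are HxWx3 uint8 arrays. `channel_axis=-1` requires
    scikit-image >= 0.19; older releases use `multichannel=True` instead.
    """
    ssim = structural_similarity(clear, derained, data_range=255, channel_axis=-1)
    return psnr(clear, derained), ssim
```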
VI. CONCLUSIONS
We have presented a system that restores performance of images affected by adherent raindrops on important segmentation tasks. Our results show that road marking segmentation, an important task for autonomous driving systems, is severely affected by adherent rain and that performance can be restored by first running the images through a de-raining preprocessor. Similarly, we show the same reduction and restoration of performance in the case of semantic segmentation, a task that is important in many fields. Additionally, we produce state-of-the-art results in terms of the quality of image restoration, while being able to run in real time. Finally, our system processes the image streams outside of the segmentation pipeline, either offline or online, and hence can be used naturally as a front end to many existing systems. The dataset will be made available at https://ciumonk.github.io/RobotCar-rainy/, along with a video describing our results at https://ciumonk.github.io/RobotCar-rainy/video.html.

VII. FUTURE WORK
Future work may involve designing a mechanism for producing computer-generated rain that is indistinguishable from real rain in terms of its usefulness in training models that quantitatively rather than qualitatively improve performance on image-based tasks.

VIII. ACKNOWLEDGEMENTS
This work was supported by Oxford-Google DeepMind Graduate Scholarships and Programme Grant EP/M019918/1. The authors wish to thank Valentina Musat for labelling the road markings in our dataset.

REFERENCES
[1] M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele, "The cityscapes dataset for semantic urban scene understanding," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3213-3223.
[2] E. Stenborg, C. Toft, and L. Hammarstrand, "Long-term visual localization using semantically segmented images," CoRR, vol. abs/1801.05269, 2018.
[3] J. L. Schönberger, M. Pollefeys, A. Geiger, and T. Sattler, "Semantic visual localization," CoRR, vol. abs/1712.05773, 2017.
[4] T. Bruls, W. Maddern, A. A. Morye, and P. Newman, "Mark yourself: Road marking segmentation via weakly-supervised annotations from multimodal data," in Robotics and Automation (ICRA), 2018 IEEE International Conference on. IEEE, 2018, in press.
[5] A. Valada, J. Vertens, A. Dhall, and W. Burgard, "Adapnet: Adaptive semantic segmentation in adverse environmental conditions," in 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, pp. 4644-4651.
[6] Y. Chen, W. Li, C. Sakaridis, D. Dai, and L. V. Gool, "Domain adaptive faster R-CNN for object detection in the wild," CoRR, vol. abs/1803.03243, 2018.
[7] M. Wulfmeier, A. Bewley, and I. Posner, "Addressing appearance change in outdoor robotics with adversarial domain adaptation," CoRR, vol. abs/1703.01461, 2017.
[8] G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, "The SYNTHIA dataset: A large collection of synthetic images for semantic segmentation of urban scenes," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[9] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 Year, 1000km: The Oxford RobotCar Dataset," The International Journal of Robotics Research (IJRR), vol. 36, no. 1, pp. 3-15, 2017.
[10] G. J. Brostow, J. Fauqueur, and R. Cipolla, "Semantic object classes in video: A high-definition ground truth database," Pattern Recognition Letters, vol. 30, no. 2, pp. 88-97, 2009.
[11] R. Qian, R. T. Tan, W. Yang, J. Su, and J. Liu, "Attentive generative adversarial network for raindrop removal from a single image," CoRR, vol. abs/1711.10098, 2017.
[12] J. Chen and L. Chau, "A rain pixel recovery algorithm for videos with highly dynamic scenes," IEEE Transactions on Image Processing, vol. 23, no. 3, pp. 1097-1104, March 2014.
[13] J. Kim, J. Sim, and C. Kim, "Stereo video deraining and desnowing based on spatiotemporal frame warping," in 2014 IEEE International Conference on Image Processing (ICIP), Oct 2014, pp. 5432-5436.
[14] J. Kim, J. Sim, and C. Kim, "Video deraining and desnowing using temporal correlation and low-rank matrix completion," IEEE Transactions on Image Processing, vol. 24, no. 9, pp. 2658-2670, Sept 2015.
[15] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, "Single image dehazing via multi-scale convolutional neural networks," in European Conference on Computer Vision. Springer, 2016, pp. 154-169.
[16] X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, "Clearing the skies: A deep network architecture for single-image rain removal," CoRR, vol. abs/1609.02087, 2016.
[17] M. Roser and A. Geiger, "Video-based raindrop detection for improved image registration," in 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Sept 2009, pp. 570-577.
[18] M. Roser, J. Kurz, and A. Geiger, "Realistic modeling of water droplets for monocular adherent raindrop recognition using Bézier curves," in Computer Vision - ACCV 2010 Workshops, R. Koch and F. Huang, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 235-244.
[19] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi, "Waterdrop stereo," CoRR, vol. abs/1604.00730, 2016.
[20] J. F. Blinn, "A generalization of algebraic surface drawing," ACM Trans. Graph., vol. 1, no. 3, pp. 235-256, July 1982.
[21] D. Eigen, D. Krishnan, and R. Fergus, "Restoring an image taken through a window covered with dirt or rain," in 2013 IEEE International Conference on Computer Vision, Dec 2013, pp. 633-640.
[22] S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi, "Adherent raindrop modeling, detection and removal in video," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 9, pp. 1721-1733, Sept 2016.
[23] A. Yamashita, M. Kuramoto, T. Kaneko, and K. T. Miura, "A virtual wiper - restoration of deteriorated images by using multiple cameras," in IROS. IEEE, 2003, pp. 3126-3131.
[24] A. Yamashita, T. Kaneko, and K. T. Miura, "A virtual wiper - restoration of deteriorated images by using a pan-tilt camera," in IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA '04, vol. 5, April 2004, pp. 4724-4729.
[25] A. Yamashita, Y. Tanaka, and T. Kaneko, "Removal of adherent waterdrops from images acquired with stereo camera," in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Aug 2005, pp. 400-405.
[26] M. Kuramoto, A. Yamashita, T. Kaneko, and K. T. Miura, "Removal of adherent waterdrops in images by using multiple cameras," in MVA, 2002, pp. 80-83.
[27] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, ser. NIPS'14. Cambridge, MA, USA: MIT Press, 2014, pp. 2672-2680.
[28] V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu, "Recurrent models of visual attention," CoRR, vol. abs/1406.6247, 2014.
[29] J. Cohen, M. Olano, and D. Manocha, "Appearance-preserving simplification," in Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '98. New York, NY, USA: ACM, 1998, pp. 115-122.
[30] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz, and B. Catanzaro, "High-resolution image synthesis and semantic manipulation with conditional GANs," in Computer Vision and Pattern Recognition (CVPR), 2018 IEEE Conference on. IEEE, 2018, pp. 1-13.
[31] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.
[32] C. Li and M. Wand, "Precomputed real-time texture synthesis with markovian generative adversarial networks," in European Conference on Computer Vision. Springer, 2016, pp. 702-716.
[33] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
[34] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision. Springer, 2016, pp. 694-711.
[35] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[36] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-decoder with atrous separable convolution for semantic image segmentation," in ECCV, 2018.
[37] P. Isola, J. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," CoRR, vol. abs/1611.07004, 2016.