

MVA2007 IAPR Conference on Machine Vision Applications, May 16-18, 2007, Tokyo, JAPAN

Shadow Elimination in Traffic Video Segmentation

Hong Liu, Jintao Li, Yueliang Qian, Qun Liu
Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100080, China
Graduate University of Chinese Academy of Sciences
hliu@ict.ac.cn

Abstract

Shadow detection is critical for robust and reliable vision-based systems for traffic analysis. Shadow points are often misclassified as object points, causing errors in the localization, segmentation, tracking and classification of moving vehicles. This paper proposes a novel shadow elimination method, SEBG, for resolving shadow occlusion problems in vehicle analysis. Unlike traditional methods that consider only intensity properties, this method introduces a gradient feature to eliminate shadows. Moving foreground regions are first segmented from the background with a background subtraction technique; for all moving pixels, the SEBG approach then uses the gradient feature to detect shadow pixels. The method is based on the observation that shadow regions present the same textural characteristics in each frame of the video as in the corresponding adaptive background model. The gradient feature is robust to illumination changes, and the method needs no predefined parameters, so it adapts well to other video scenes. Results validate the algorithm's good performance on traffic video.

1. Introduction

Design of vision-based systems for traffic analysis is an important and challenging problem. In Intelligent Transportation Systems (ITS), the information added by image processing techniques is very useful and has a very low computational load [1]. Many works on ITS aim at helping traffic flow management by providing information on how many vehicles are in the scene. Moreover, incident detection, intersection management, and many other applications could exploit the information provided by such visual tasks. All the above-mentioned ITS applications aim, as a first step, at detecting vehicles in the scene in order to count them, detect unauthorized operations, or simply track them. This task can be achieved by means of motion segmentation, and segmenting moving objects is at the core of many video processing applications. However, neither motion segmentation nor change detection methods can distinguish between moving foreground objects and moving shadows, since the shadow intensity differs from the background and the shadow moves with the foreground object. Shadows are therefore misclassified as foreground objects, which causes unwanted effects such as object shape distortion and object merging, degrading surveillance capabilities such as target counting and identification. For this reason, shadow identification is a fundamental and critical step in visual surveillance and monitoring systems and has become an active research area in recent years [2].

Shadows are categorized into self-shadows and cast shadows [3]; we are concerned only with moving cast shadows. To solve the problems caused by shadows, [4] proposed a pyramid model and a fuzzy neural network approach to eliminate shadows found along the road, and [5] used two cameras to eliminate the shadows of pedestrian-like moving objects based on object heights. In addition, [6] proposed a disparity model that is invariant to arbitrarily rapid changes in illumination for modeling the background; however, to overcome rapid illumination changes, at least three cameras are required. All these approaches model shadows based only on color features: photometric constraints are imposed locally on individual points, and shadow pixels are then identified with a local a priori threshold. Some other shadow removal approaches are based on the assumption that shadow pixels have the same chrominance as the background but lower luminance. In [7, 8] a brightness/chromaticity distortion model is evaluated, so a pixel is classified as shaded background or shadow if it has similar chromaticity but lower brightness than the corresponding background pixel. In [9, 10] the adoption of hue/saturation/value information and the ratio between image and corresponding background luminance improves shadow detection; this method may suffer from dynamic scene changes, especially reflections on highly specular surfaces. Unfortunately, the assumptions of these approaches are difficult to justify in general. When vehicles are handled, color features cannot provide enough information to discriminate black vehicles from shadows, and detection based on luminance fails when pixels of foreground objects are darker than the background and have a uniform gain with respect to the reference surface they cover. Some methods also need several predefined parameters for shadow detection, which makes robust shadow elimination over a wide spectrum of conditions impossible. It is therefore not surprising that only very limited results have been achieved by these approaches.

In this paper, we present a novel approach, SEBG, to detect moving cast shadows based on a gradient feature. The method rests on the observation that shadow regions present the same textural characteristics in each frame of the video as in the corresponding adaptive background model. First, an adaptive background subtraction approach using a Gaussian Mixture Model (GMM) performs motion segmentation. For all moving pixels, the gradient feature is then used to detect shadow pixels: most of the gradient of moving vehicles is preserved and most of the gradient of moving cast shadows is removed by the gradient difference. Connected components analysis with morphological processing is then used to reduce noise and fill holes, finally yielding the moving vehicles. Our algorithm is simple and robust for traffic video. For comparison, we also implement the method DNM1 [9, 10]. In the next section we describe our moving cast shadow elimination method, Section 3 presents experimental results, and the final section gives concluding remarks.

2. Moving Shadow Elimination

For moving shadow elimination, all moving foreground must be detected first. We therefore present a robust and automatic segmentation approach based on background subtraction, and then describe the methodology for shadow detection based on the gradient feature in detail.

2.1. Moving Foreground Detection

In vision-based surveillance systems, moving region extraction is the first step of video processing, and background subtraction provides a simple yet useful solution. In recent years, time-adaptive per-pixel mixture-of-Gaussians background models have been a popular choice for modeling complex, time-varying backgrounds, with the advantage that multi-modal backgrounds can be modeled. In [11], each pixel is modeled as a pixel process, and each process consists of a mixture of K adaptive Gaussian distributions; the distributions with the smallest variance and the largest weight are isolated as the background. The probability of observing pixel value X_t at time t is

    P(X_t) = \sum_{i=1}^{K} \omega_{i,t} \, \eta(X_t, \mu_{i,t}, \Sigma_{i,t})    (1)

where K is the number of Gaussian distributions, \omega_{i,t} is the weight estimate of the i-th Gaussian in the mixture at time t, \mu_{i,t} and \Sigma_{i,t} are the mean and covariance matrix of the i-th Gaussian at time t, and \eta is the Gaussian probability density function. We set K = 3.

An on-line k-means approximation algorithm is used for the mixture model. Every new pixel value X_t is checked against the K existing Gaussian distributions; a match is found if the pixel value lies within L = 2.5 standard deviations of a distribution. This is effectively a per-pixel, per-distribution threshold and can model regions with periodically changing lighting conditions. If the current pixel value matches none of the distributions, the least probable distribution is replaced by one centred on the current pixel value with a high variance and a low prior weight. The prior weights of the K distributions are updated at time t according to

    \omega_{k,t} = (1 - \alpha)\,\omega_{k,t-1} + \alpha\, M_{k,t}    (2)

where \alpha is the learning rate and M_{k,t} is 1 for the model that matched the pixel and 0 for the remaining models. We set the learning rate \alpha = 0.002. The parameters \mu and \sigma of the matching distribution are updated as

    \mu_t = (1 - \rho)\,\mu_{t-1} + \rho\, X_t    (3)

    \sigma_t^2 = (1 - \rho)\,\sigma_{t-1}^2 + \rho\, (X_t - \mu_t)^T (X_t - \mu_t)    (4)

where

    \rho = \alpha\, \eta(X_t \mid \mu_k, \sigma_k)    (5)

The Gaussians are ordered by the ratio \omega / \sigma, which increases as a Gaussian's weight grows and its variance shrinks. The first B distributions accounting for a proportion T of the observed data are defined as background; we set T = 0.8:

    B = \arg\min_b \left( \sum_{k=1}^{b} \omega_k > T \right)    (6)

For each non-background pixel, we compute the difference between the pixel in the current image and in the background model; only pixels whose difference exceeds the threshold 10 are labeled as foreground.

Figure 1 shows an example of motion segmentation. Figure 1a is the color background image without moving objects constructed by the GMM method, Figure 1b is source frame no. 793 of the traffic video, and Figure 1c is the moving foreground image produced by the GMM matching described above, in which background pixels are set to black and foreground pixels are retained. Figure 1d is the result of motion segmentation: the moving cast shadow is extracted as moving foreground, which makes object segmentation fail, distorts the shape of the objects, and causes vehicles to become connected.

Figure 1. Results of moving foreground detection: (a) background by GMM, (b) source image, (c) moving foreground, (d) motion segmentation.
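
As a concrete illustration of Eqs. (1)-(6), the following minimal Python/NumPy sketch maintains the mixture for a single grayscale pixel, using the values stated above (K = 3, alpha = 0.002, L = 2.5, T = 0.8). The class and variable names, the initial variances, and the weight and variance assigned to a replaced distribution are illustrative assumptions, not the authors' implementation.

import numpy as np

# Per-pixel mixture-of-Gaussians sketch (grayscale), following Eqs. (1)-(6).
# K, ALPHA, L and T use the values stated in the paper; initial and
# replacement variances/weights are illustrative assumptions.
K, ALPHA, L, T = 3, 0.002, 2.5, 0.8

class PixelMixture:
    def __init__(self):
        self.w = np.full(K, 1.0 / K)     # weights   omega_k
        self.mu = np.zeros(K)            # means     mu_k
        self.var = np.full(K, 225.0)     # variances sigma_k^2

    def _gauss(self, x, k):
        # Gaussian pdf eta(x | mu_k, sigma_k), used for rho in Eq. (5)
        v = self.var[k]
        return np.exp(-0.5 * (x - self.mu[k]) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

    def update(self, x):
        """Feed one pixel value x; return True if x is classified as background."""
        matched = np.abs(x - self.mu) < L * np.sqrt(self.var)
        if not matched.any():
            # No match: replace the least probable distribution with the current
            # value, a high variance and a low prior weight.
            k = int(np.argmin(self.w / np.sqrt(self.var)))
            self.mu[k], self.var[k], self.w[k] = x, 400.0, 0.05
            self.w /= self.w.sum()
            return False                 # unmatched pixel -> moving foreground
        k = int(np.argmax(matched))      # first matching distribution
        m = np.zeros(K); m[k] = 1.0
        self.w = (1 - ALPHA) * self.w + ALPHA * m                            # Eq. (2)
        rho = ALPHA * self._gauss(x, k)                                      # Eq. (5)
        self.mu[k] = (1 - rho) * self.mu[k] + rho * x                        # Eq. (3)
        self.var[k] = (1 - rho) * self.var[k] + rho * (x - self.mu[k]) ** 2  # Eq. (4)
        self.w /= self.w.sum()
        # Order by omega/sigma; the first B components covering proportion T
        # of the weight are background: Eq. (6)
        order = np.argsort(-self.w / np.sqrt(self.var))
        B = int(np.searchsorted(np.cumsum(self.w[order]), T, side='right')) + 1
        return k in order[:B]

In practice one such state is kept per pixel (or the arrays are vectorised over the whole frame), and non-background pixels are additionally compared against the background mean so that only differences above the threshold of 10 are kept as foreground, as described above.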

2.2. Shadow Elimination Based on Gradient Feature (SEBG)

Figures 1a and 1c show that a moving shadow presents the same texture in each frame as in the corresponding background model, while the texture of a moving object differs from the background it covers [2]. To reduce computation cost, we use a gradient feature, which represents texture information well and is more robust to illumination changes than color features. Our approach first computes the gradient image of the moving foreground and of the relevant background. The gradient information of the moving foreground contains the gradients of the moving vehicles and of the moving shadows, whereas the gradient information of the relevant background contains only the background gradients. The gradient of moving vehicles differs from the gradient of the relevant background, while the gradient of moving shadows is similar to it. Consequently, the difference of the two gradient images preserves most of the gradient information in the moving vehicle areas and removes most of the shadow gradient in the shadow regions.

To reduce processing cost, we use the four simple gradient operators g1, g2, g3 and g4 shown in Figure 2; the pixel positions they cover are listed in Table 1.

Figure 2. The four gradient operators g1-g4.

Table 1. The four positions covered by the operators: (x-1, y-1), (x, y-1), (x-1, y), (x, y).

These operators consider horizontal, vertical and diagonal edges, which is simple but represents the gradient feature well. To calculate the gradient information, we take the grey image of the moving foreground result (as in Figure 1c). Using the four gradient operators, the gradient at coordinate (x, y) is

    G(x, y) = \begin{cases} 255, & \text{if } \sum_{i=1}^{4} g_i(x, y) > 255 \\ \sum_{i=1}^{4} g_i(x, y), & \text{otherwise} \end{cases}    (7)

The gradient images of the moving foreground blobs (Figure 1c) and of the relevant background parts (Figure 3a) are computed with formula (7). Their difference image, shown in Figure 3b, preserves most of the moving vehicle gradient and eliminates most of the moving shadow gradient. The result is then binarized to remove noise; after this step, most of the shadow edges are deleted and most of the object edges are preserved. Finally, connected components analysis and morphological processing are performed to remove small blobs, fill holes, and label each moving object region; the structuring element size used here is 5. Another example is described in detail in the next section.

Figure 3. Results of the gradient image difference: (a) relevant background, (b) difference of the gradient images.
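
The sketch below shows one way to realize the gradient feature of formula (7) and the gradient-difference step with NumPy. The exact 2x2 kernels of Figure 2 are not recoverable from the text, so simple horizontal, vertical and diagonal absolute differences over the 2x2 neighbourhood are assumed, and the binarization threshold is an illustrative value; the function names are ours, not the paper's.

import numpy as np

def gradient_image(gray):
    """Gradient feature in the spirit of formula (7): sum of four simple 2x2
    differences (horizontal, vertical, two diagonals) clipped at 255.
    The concrete kernels are an assumed, typical choice."""
    g = gray.astype(np.int32)
    out = np.zeros_like(g)
    h = np.abs(g[1:, 1:] - g[1:, :-1])    # horizontal difference
    v = np.abs(g[1:, 1:] - g[:-1, 1:])    # vertical difference
    d1 = np.abs(g[1:, 1:] - g[:-1, :-1])  # main-diagonal difference
    d2 = np.abs(g[1:, :-1] - g[:-1, 1:])  # anti-diagonal difference
    out[1:, 1:] = np.clip(h + v + d1 + d2, 0, 255)
    return out.astype(np.uint8)

def shadow_free_edges(fg_gray, bg_gray, fg_mask, thresh=40):
    """Difference of the foreground and background gradient images over the
    moving mask, then binarised: vehicle edges survive while shadow edges
    mostly cancel. The threshold is illustrative; the paper does not state
    the value it uses for this binarization."""
    diff = np.abs(gradient_image(fg_gray).astype(np.int32)
                  - gradient_image(bg_gray).astype(np.int32))
    return ((diff > thresh) & (fg_mask > 0)).astype(np.uint8) * 255

Here fg_gray would be the grey moving-foreground image (as in Figure 1c), bg_gray the corresponding background region, and fg_mask the binary moving-pixel mask from Section 2.1.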

3. Experiment Results

The test video sequences were taken by a camera at an urban cloverleaf junction. The video was sampled at a resolution of 320x240 and a rate of 25 frames per second. For comparison with our proposed method SEBG, we also implemented the DNM1 method [9, 10], an example of a deterministic non-model-based approach. DNM1 assumes that shadows reduce surface brightness and saturation while maintaining chromaticity, and its shadow detection works in the HSV color space: a shadow cast on a background does not change its hue significantly, and saturation information is also exploited, since it has been experimentally observed that shadows often lower the saturation of the shaded points.

Both SEBG and DNM1 must detect the moving foreground first. To compare the two methods fairly, we use our GMM-based moving foreground detection instead of the background suppression used in [9, 10], and then apply SEBG and DNM1 to detect shadow pixels. Here we give two examples, frame 313 and frame 793 of the traffic video.
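
For reference, a minimal sketch of the HSV shadow test used by DNM1-style methods [9, 10] is given below, assuming OpenCV for the color conversion. A foreground pixel is marked as shadow when its brightness ratio to the background lies in a band, its saturation does not increase, and its hue barely changes; the four threshold values here are illustrative and not those used in our experiments.

import numpy as np
import cv2  # OpenCV is assumed for the BGR -> HSV conversion

def dnm1_shadow_mask(frame_bgr, bg_bgr, fg_mask,
                     alpha=0.4, beta=0.9, tau_s=60, tau_h=50):
    """HSV shadow test in the spirit of DNM1 [9, 10]: shadow if the value
    (brightness) ratio w.r.t. the background lies in [alpha, beta], the
    saturation drops, and the hue change is small. Thresholds are illustrative."""
    hsv_f = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv_b = cv2.cvtColor(bg_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    h_f, s_f, v_f = cv2.split(hsv_f)
    h_b, s_b, v_b = cv2.split(hsv_b)
    ratio = v_f / np.maximum(v_b, 1.0)        # brightness ratio frame / background
    dh = np.abs(h_f - h_b)
    dh = np.minimum(dh, 180.0 - dh)           # hue is circular (OpenCV range 0..179)
    shadow = ((ratio >= alpha) & (ratio <= beta) &
              ((s_f - s_b) <= tau_s) &
              (dh <= tau_h) &
              (fg_mask > 0))
    return shadow.astype(np.uint8) * 255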

Figure 4 shows the results of shadow detection on frame 793 using DNM1. Figure 4a shows the binary image after shadow elimination, morphological processing and connected components analysis. Some markers on the road are misclassified as object pixels because their hue differs from that of the moving shadows, and some parts of the vehicles, such as windows, are misclassified as shadow because they are very dark and similar in color to the shadow. The motion segmentation result therefore contains some non-vehicle regions, as Figure 4b shows. Figure 5 shows the results of our SEBG approach on frame 793, which removes the shadow pixels well: after connected components analysis more vehicle pixels are preserved (Figure 5a), and the final motion segmentation result (Figure 5b) also resolves the occlusion problem visible in Figure 1d.

Figure 4. Results of shadow elimination using DNM1 on frame 793: (a) moving objects, (b) motion segmentation.

Figure 5. Results of shadow elimination using SEBG on frame 793: (a) moving objects, (b) motion segmentation.

The example for frame no. 313 is shown in Figures 6 and 7. Figure 6 gives the results of shadow elimination using DNM1, which again shows some shadow pixels misclassified as object pixels.

Figure 6. Results of shadow elimination using DNM1 on frame 313: (a) motion segmentation, (b) moving objects without shadow removal.

Figure 7 shows the results of our method SEBG in detail. Figure 7a is the moving foreground image after background subtraction, and Figure 7b is the relevant background that the moving foreground covers. Figures 7c and 7d are the gradient images of the moving foreground and of the relevant background, respectively. Figure 7e is the difference image of these two gradient images; thresholding it gives the binary image in Figure 7f, in which most of the shadow edges have been removed. Morphological processing is then used to remove noise and fill small holes, and connected components analysis labels each moving object region (Figure 7g). Figure 7h shows the final motion segmentation result, which extracts the moving vehicles well and eliminates the shadow parts.

Figure 7. Results of shadow elimination using SEBG on frame 313: (a) moving foreground, (b) relevant background, (c) gradient image of (a), (d) gradient image of (b), (e) difference image of (c) and (d), (f) binary image of (e), (g) morphological result of (f), (h) motion segmentation.
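
The post-processing stage (Figures 7f-7h) can be sketched as follows, assuming OpenCV: a 5x5 structuring element, as stated in Section 2.2, for the morphological step, followed by connected-components labelling to drop small blobs. The minimum-area value is an illustrative assumption.

import numpy as np
import cv2  # OpenCV is assumed for morphology and connected-components labelling

def clean_and_label(binary_mask, min_area=200):
    """Post-processing sketch: 5x5 morphological closing/opening (the paper uses
    a structuring-element size of 5) to fill small holes and remove noise, then
    connected-components analysis to drop small blobs and keep each vehicle
    region. binary_mask is an 8-bit 0/255 image; min_area is illustrative."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(binary_mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)          # remove specks
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    vehicles = np.zeros_like(mask)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            vehicles[labels == i] = 255
    return vehicles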

4. Conclusions

In this paper, we present a novel method for ITS applications that handles shadows to improve vehicle detection and tracking. We first describe a robust and automatic segmentation approach based on a background subtraction scheme, and then describe the shadow elimination method based on the gradient feature in detail. For comparison, we implement the DNM1 method and report experimental results. The contribution of this paper is the novel method SEBG, which removes moving cast shadows in video using a gradient feature. The gradient feature is robust to illumination changes, and the method needs no predefined parameters, so it adapts well to other video scenes. Experimental results show that SEBG is robust, effective and simple. The method has low computational cost, so it could be integrated into larger vision systems. Because it relies on gradient information, it works well for traffic video segmentation, where vehicles have strong edge features; in other types of video, if the moving objects carry little edge information, the method will not be effective even after morphological processing. Future work includes studying a possible integration of SEBG with other methods such as DNM1, as well as how to evaluate the performance of moving shadow elimination, which is an interesting direction we intend to pursue.

Acknowledgments

The research is sponsored by the National Hi-Tech Program of China (No. 2004AA114010) and the National Science Foundation of China (No. 60473043).

References

[1] Prati, A., Mikic, I., Grana, C., Trivedi, M.M.: Shadow Detection Algorithms for Traffic Flow Analysis: A Comparative Study. Proc. IEEE Int'l Conference on Intelligent Transportation Systems (2001) 340-345
[2] Leone, A., Distante, C., Buccolieri, F.: A Texture-Based Approach for Shadow Detection. Proc. IEEE Conference on Advanced Video and Signal Based Surveillance (2005) 371-376
[3] Yoneyama, A., Yeh, C.H., Kuo, C.C.J.: Moving Cast Shadow Elimination for Robust Vehicle Extraction Based on 2D Joint Vehicle/Shadow Models. Proc. IEEE Conference on Advanced Video and Signal Based Surveillance, Miami, FL, USA (2003) 229-236
[4] Tao, X., et al.: A Neural Network Approach to Elimination of Road Shadow for Outdoor Mobile Robot. IEEE Int'l Conf. Intelligent Processing Systems, Beijing (1997) 1302-1306
[5] Onoguchi, K.: Shadow Elimination Method for Moving Object Detection. Proc. Fourteenth Int'l Conf. Pattern Recognition (1998) 583-587
[6] Ivanov, Y.: Fast Lighting Independent Background Subtraction. International Journal of Computer Vision (2000) 199-207
[7] Horprasert, T., Harwood, D., Davis, L.: A Statistical Approach for Real-Time Robust Background Subtraction and Shadow Detection. Proc. 7th IEEE ICCV Frame-Rate Workshop, Corfu (1999) 1-19
[8] KaewTraKulPong, P., Bowden, R.: An Improved Adaptive Background Mixture Model for Real-Time Tracking with Shadow Detection. Proc. 2nd European Workshop on Advanced Video Based Surveillance Systems (2001)
[9] Cucchiara, R., Grana, C., Piccardi, M., Prati, A., Sirotti, S.: Improving Shadow Suppression in Moving Object Detection with HSV Color Information. Proc. 4th IEEE Conference on Intelligent Transportation Systems, USA (2001) 334-339
[10] Cucchiara, R., Grana, C., Piccardi, M., Prati, A.: Detecting Moving Objects, Ghosts and Shadows in Video Streams. IEEE Trans. PAMI 25(10) (2003) 1337-1342
[11] Stauffer, C., Grimson, W.: Adaptive Background Mixture Models for Real-Time Tracking. Proc. IEEE Conf. on Computer Vision and Pattern Recognition (1999) 246-252