Vehicle Detection with Background Subtraction and Contours.

In this tutorial, we'll learn how to leverage background subtraction and contour detection to detect moving objects such as vehicles.

Import the Libraries

Let's first start by importing the required libraries.

Car Detection using Background Subtraction

Background subtraction is a simple yet effective technique for extracting objects from an image or video. Consider a highway on which cars are moving, and you want to extract each car. One easy way is to take a picture of the highway with the cars on it (called the foreground image) and also keep a saved image in which the highway contains no cars (the background image). You then subtract the background image from the foreground image to get a segmented mask of the cars, and use that mask to extract them.

But in many cases you don't have a clean background image; examples include a highway that is always busy, or a walking street that is always crowded. In those cases, you can separate the background by other means. For example, in a video you can detect movement: the objects that move become the foreground, while the parts that remain static form the background.

Several algorithms have been invented for this purpose. OpenCV implements a few of them, and they are very easy to use. Let's look at one of them.


BackgroundSubtractorMOG2 is a background/foreground segmentation algorithm based on two papers by Z. Zivkovic: "Improved adaptive Gaussian mixture model for background subtraction" (IEEE, 2004) and "Efficient Adaptive Density Estimation per Image Pixel for the Task of Background Subtraction" (Elsevier BV, 2006). One important feature of this algorithm is that it adapts better to varying scenes, e.g., due to illumination changes.

Function Syntax:

object = cv2.createBackgroundSubtractorMOG2(history, varThreshold, detectShadows)



Creating the Vehicle Detection Application.

To perform the complete background-subtraction-based contour detection, we'll follow these simple steps:

1) First, we will load a video using the function cv2.VideoCapture() and create a background subtractor object using the function cv2.createBackgroundSubtractorMOG2().

2) Then we will read the frames one by one with the capture object's read() method and use the backgroundsubtractor.apply() method to get the segmented mask for each frame.

3) We will then apply thresholding on the mask using the function cv2.threshold() to get rid of shadows and then perform Erosion and Dilation to improve the mask further using the functions cv2.erode() and cv2.dilate().

4) Then we will use the function cv2.findContours() to detect the contours in the mask image and convert each contour's coordinates into bounding box coordinates for each car in the frame using the function cv2.boundingRect(). To make sure a contour really is a car, we will check that its area, computed with the function cv2.contourArea(), is greater than a threshold.

5) After that we will use the functions cv2.rectangle() and cv2.putText() to draw and label the bounding boxes on each frame and then we will extract the foreground part of the video with the help of the segmented mask using the function cv2.bitwise_and().

There are many other background subtraction algorithms in OpenCV (in the main video module and the bgsegm contrib module) that you can check out in the OpenCV documentation, along with their parameters and other details.