Pairing Lane Detection with Object Detection

Motivation:

Object detection plays an integral role in autonomous-vehicle safety. In this project, I developed a pipeline that can detect not only lane lines but also cars.

Lane Detection Pipeline

The overall structure of the pipeline is as follows:

  • Apply a distortion correction to the raw camera images.
  • Use colour transforms to create a thresholded binary image.
  • Apply a perspective transform to get a “bird's-eye view”.
  • Detect lane pixels and fit a curve to find the lane boundary.
  • Determine the curvature of the lane and the vehicle's position with respect to the lane centre.
  • Output a visual display of the lane boundaries and numerical estimates of the lane curvature and vehicle position.

Calibrate the camera using a chessboard to correct distortion.

Cameras are not perfect. In the real world, a camera that distorts or warps its input can be fatal: failing to detect lane lines correctly, or failing to detect other cars, is extremely dangerous. We can calibrate the camera by photographing a chessboard pattern, computing the distortion coefficients from the known corner positions, and then undistorting every subsequent frame with those coefficients.

Input image: a raw photo of a chessboard taken with the camera.
Output image: the same photo after undistortion.

Creating a binary image using advanced colour transformation techniques

Now that we’ve undistorted the camera feed, we need to transform the image into a thresholded binary image (black and white, where 0 = black and 1 = white) that isolates the likely lane pixels.

Perspective transformation — getting that bird's eye view

In this step we warp our perspective: instead of the front-facing view from the camera mounted at the front of the car, we warp the image to a bird's-eye view. This lets us get rid of most of the noise (cars, trees, etc.) and focus only on the actual lane lines themselves.

Input image
Trapezoidal image

The fun part — Lane Line Detection

Now that we’ve preprocessed our image and got the bird’s eye view, it’s time to detect lane lines!
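A common recipe (sketched below under simplifying assumptions, not the article's exact code) is to take a column histogram of the bottom half of the warped binary image to find the two lane bases, collect the pixels near each base, and fit a second-order polynomial to each line:

```python
import numpy as np

def fit_lane_lines(binary_warped):
    """Locate lane pixels via a column histogram, then fit x = a*y^2 + b*y + c."""
    h, w = binary_warped.shape
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    leftx_base = int(np.argmax(histogram[:midpoint]))
    rightx_base = int(np.argmax(histogram[midpoint:])) + midpoint

    ys, xs = binary_warped.nonzero()
    # Simplified: keep pixels within +/-50 px of each base column.
    # (The full method slides windows upward, re-centring them as it goes.)
    left = np.abs(xs - leftx_base) < 50
    right = np.abs(xs - rightx_base) < 50
    left_fit = np.polyfit(ys[left], xs[left], 2)
    right_fit = np.polyfit(ys[right], xs[right], 2)
    return left_fit, right_fit

# Synthetic bird's-eye binary image with two straight vertical lane lines.
img = np.zeros((100, 200), np.uint8)
img[:, 50] = 1
img[:, 150] = 1
left_fit, right_fit = fit_lane_lines(img)  # constant terms near 50 and 150
```

The fitted coefficients are also what the curvature and vehicle-offset numbers are computed from, after converting pixels to metres.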

Input image
We’ve gotten some really smooth, beautiful lines!!

Boom! Lane Detection

Now, we can simply grab a video and run our lane detection algorithm on it, frame by frame!

Part 2: Object Detection

Now that we’ve developed our lane detection algorithm, we’d now need to develop a pipeline for detecting cars and objects.

  • Build a machine-learning model that can classify car vs. non-car images
  • Create a sliding-window algorithm that slides across the image and makes predictions
  • Create a heatmap to expose false positives
  • Limit false positives by merging overlapping detections into one collective prediction
  • Merge it all together to get our final object detection pipeline!

Data preprocessing

The first step in our object detection case is to preprocess the data so it is ready to feed into our model. We do this by taking histograms of the colour values in each image. In the real world, cars are not always the same size; they vary by a fair amount. Template matching, by contrast, depends on the raw colour values and the spatial order of those values, which can vary substantially from car to car, so histograms make for a much more robust feature.
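The histogram feature extraction can be sketched as follows; the 32-bin count is an assumed parameter, and in the original Udacity project such colour histograms are typically concatenated with other features before training:

```python
import numpy as np

def color_hist_features(img, nbins=32, bins_range=(0, 256)):
    """Concatenate per-channel colour histograms into one feature vector.

    Unlike template matching, histograms discard the spatial order of
    pixels, so they tolerate cars of different sizes and orientations.
    """
    r = np.histogram(img[:, :, 0], bins=nbins, range=bins_range)[0]
    g = np.histogram(img[:, :, 1], bins=nbins, range=bins_range)[0]
    b = np.histogram(img[:, :, 2], bins=nbins, range=bins_range)[0]
    return np.concatenate([r, g, b]).astype(np.float64)

# A random 64x64 RGB patch stands in for a training image.
patch = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
features = color_hist_features(patch)  # 3 channels x 32 bins = 96 values
```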

Building an AI model that can detect cars

Now that we’ve preprocessed our data, it’s ready to be fed into a model. I chose a Support Vector Machine (a supervised learning algorithm) to distinguish vehicles from non-vehicles. Note that we are working with binary classes — one class for cars, the other for non-cars.
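A minimal training sketch with scikit-learn's linear SVM is below. The features here are synthetic stand-ins (two Gaussian blobs playing "car" and "non-car"); in the real pipeline they would come from the histogram extraction step, and scaling the features before fitting matters because SVMs are sensitive to feature magnitudes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# Synthetic, well-separated stand-in features for car / non-car patches.
rng = np.random.default_rng(0)
car = rng.normal(2.0, 1.0, (200, 96))
notcar = rng.normal(-2.0, 1.0, (200, 96))
X = np.vstack([car, notcar])
y = np.hstack([np.ones(200), np.zeros(200)])  # 1 = car, 0 = non-car

scaler = StandardScaler().fit(X)  # fit the scaler on training data only in practice
X_train, X_test, y_train, y_test = train_test_split(
    scaler.transform(X), y, test_size=0.2, random_state=0)

svc = LinearSVC(C=1.0)
svc.fit(X_train, y_train)
accuracy = svc.score(X_test, y_test)  # held-out accuracy of the binary classifier
```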

Creating a sliding window for object detection

Now that we’ve successfully trained our SVM model to distinguish vehicles from non-vehicles, a sliding-window algorithm has to be built in order to put the model to use on full frames.
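The window generation itself can be sketched in a few lines; the 64-pixel window and 50% overlap are assumed parameters, and in practice windows of several scales are swept over the region of the frame where cars can appear:

```python
import numpy as np

def slide_window(img_shape, window=64, overlap=0.5):
    """Return (x1, y1, x2, y2) boxes tiling the image with the given overlap."""
    h, w = img_shape[:2]
    step = int(window * (1 - overlap))
    boxes = []
    for y in range(0, h - window + 1, step):
        for x in range(0, w - window + 1, step):
            boxes.append((x, y, x + window, y + window))
    return boxes

boxes = slide_window((128, 256), window=64, overlap=0.5)
# Each box is resized to the training patch size, converted to features,
# and scored by the SVM; boxes classified as "car" become raw detections.
```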

Credit: Udacity
Examples of car detection given this input image

False positives

What you might’ve noticed in the image above is that we had quite a few false positives and overlapping predictions. Since multiple overlapping boxes per car are impractical, we want to eliminate the false positives. This can be done by accumulating the bounding boxes into a heatmap and averaging over them, so that regions many boxes agree on survive as one stable prediction while stray detections fall away.
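One common variant of this idea (a sketch, not necessarily the article's exact averaging scheme) adds one unit of "heat" per detection box and then zeroes out pixels below a threshold, so a lone false positive disappears while stacked detections merge:

```python
import numpy as np

def heatmap_filter(img_shape, boxes, threshold=2):
    """Accumulate detections into a heatmap and keep only pixels that
    several overlapping boxes agree on, suppressing lone false positives."""
    heat = np.zeros(img_shape[:2], dtype=np.float64)
    for x1, y1, x2, y2 in boxes:
        heat[y1:y2, x1:x2] += 1
    heat[heat < threshold] = 0
    return heat

# Three overlapping boxes around one car, plus one stray false positive.
boxes = [(10, 10, 50, 50), (14, 12, 54, 52), (12, 14, 52, 54), (90, 90, 110, 110)]
heat = heatmap_filter((128, 128), boxes, threshold=2)
merged = heat > 0  # the surviving hot region marks the single merged car
```

A final bounding box is then drawn around each surviving hot region (e.g. via connected-component labelling) to give one prediction per car.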

Obviously, there is room for improvement but not bad!

The final part: Putting this all together

Now that we’re able to make predictions on a single image, let’s feed in our input video from the lane detection to detect objects and cars. Here’s what a gif of the final video looks like:
