Building a real, self-driving car!

Demo!

The TL;DR of how I built this

I used ROS (Robot Operating System) to communicate between the traffic light detection node, the waypoint updater node, and the drive-by-wire (DBW) node, using data from sensor fusion (don’t know what that is? Click here). The waypoint updater builds a trajectory (a set of waypoints with target velocities) so the car can follow the path correctly while ensuring passenger safety.
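To make the “nodes talking over topics” idea concrete, here’s a minimal sketch (not the project’s actual code) of two rospy nodes: one publishes the index of the waypoint where the car should stop for a red light, and the other subscribes to it. The /traffic_waypoint topic name matches the one used in the Udacity capstone; treat the rest as illustrative.

```python
# Minimal ROS pub/sub sketch: each function runs as its own node (its own
# process), and the two only exchange data through the /traffic_waypoint topic.
import rospy
from std_msgs.msg import Int32

def detector_node():
    rospy.init_node('tl_detector_sketch')
    pub = rospy.Publisher('/traffic_waypoint', Int32, queue_size=1)
    rate = rospy.Rate(10)            # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Int32(-1))       # -1 here means "no red light ahead"
        rate.sleep()

def updater_node():
    rospy.init_node('waypoint_updater_sketch')
    rospy.Subscriber('/traffic_waypoint', Int32,
                     lambda msg: rospy.loginfo('stop-line waypoint index: %d', msg.data))
    rospy.spin()                     # hand control to ROS and process callbacks
```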

System architecture

Here’s a rough outline of how the self-driving car worked, plus the ROS nodes and topics that were used.

Source: Udacity

Traffic Detection

Getting a close-up perspective on the Traffic Detection node, this is what it looks like.

Waypoint Updater

The purpose of the waypoint updater node is to update the target velocity of each waypoint based on the traffic light and obstacle detection data (camera and sensor fusion input, respectively). It then publishes a new list of waypoints ahead of the car with the desired velocities, taking the car’s current velocity into account.
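Here’s a rough sketch of what that node looks like. The topic and message names (/current_pose, /base_waypoints, /final_waypoints, styx_msgs/Lane) are the ones from the Udacity capstone repo, so treat them as assumptions if you adapt this elsewhere.

```python
# Waypoint updater sketch: subscribe to the car's pose and the full track
# waypoints, then repeatedly publish a short list of waypoints just ahead.
import rospy
from geometry_msgs.msg import PoseStamped
from styx_msgs.msg import Lane

LOOKAHEAD_WPS = 50  # how many waypoints ahead of the car to publish

class WaypointUpdater(object):
    def __init__(self):
        rospy.init_node('waypoint_updater')
        rospy.Subscriber('/current_pose', PoseStamped, self.pose_cb)
        rospy.Subscriber('/base_waypoints', Lane, self.waypoints_cb)
        self.final_waypoints_pub = rospy.Publisher('/final_waypoints', Lane, queue_size=1)
        self.pose = None
        self.base_waypoints = None
        self.loop()

    def pose_cb(self, msg):
        self.pose = msg

    def waypoints_cb(self, lane):
        self.base_waypoints = lane.waypoints

    def closest_waypoint(self):
        # naive nearest-waypoint search; the Planning section below shows a
        # faster version that also checks the waypoint is ahead of the car
        cx = self.pose.pose.position.x
        cy = self.pose.pose.position.y
        dists = [(wp.pose.pose.position.x - cx) ** 2 + (wp.pose.pose.position.y - cy) ** 2
                 for wp in self.base_waypoints]
        return dists.index(min(dists))

    def loop(self):
        rate = rospy.Rate(30)
        while not rospy.is_shutdown():
            if self.pose is not None and self.base_waypoints is not None:
                idx = self.closest_waypoint()
                lane = Lane()
                # slice of upcoming waypoints; their target velocities get
                # lowered when a red light is detected ahead
                lane.waypoints = self.base_waypoints[idx:idx + LOOKAHEAD_WPS]
                self.final_waypoints_pub.publish(lane)
            rate.sleep()
```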

DBW

Drive-by-wire (DBW) is the type of system that CARLA (Udacity’s self-driving car) uses. It means the throttle, brake, and steering are all electronically controlled: instead of mechanical linkages, we send electronic commands to actuate these functions.
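Below is a trimmed-down sketch of the DBW node. The topics and the dbw_mkz_msgs message types are the ones the Udacity repo uses to talk to Dataspeed’s drive-by-wire kit, but double-check the exact field names against the repo; the point here is that every actuation is just a published message, gated on whether DBW is engaged.

```python
# DBW node sketch: turn a target twist into electronic throttle/brake/steering
# commands, but only while drive-by-wire is engaged.
import rospy
from std_msgs.msg import Bool
from geometry_msgs.msg import TwistStamped
from dbw_mkz_msgs.msg import ThrottleCmd, BrakeCmd, SteeringCmd

class DBWNode(object):
    def __init__(self):
        rospy.init_node('dbw_node')
        self.dbw_enabled = False
        self.twist_cmd = None
        rospy.Subscriber('/vehicle/dbw_enabled', Bool, self.dbw_cb)
        rospy.Subscriber('/twist_cmd', TwistStamped, self.twist_cb)  # target linear/angular velocity
        self.throttle_pub = rospy.Publisher('/vehicle/throttle_cmd', ThrottleCmd, queue_size=1)
        self.brake_pub = rospy.Publisher('/vehicle/brake_cmd', BrakeCmd, queue_size=1)
        self.steer_pub = rospy.Publisher('/vehicle/steering_cmd', SteeringCmd, queue_size=1)

    def dbw_cb(self, msg):
        # a safety driver can disengage DBW and take manual control at any time
        self.dbw_enabled = msg.data

    def twist_cb(self, msg):
        self.twist_cmd = msg

    def publish(self, throttle, brake, steer):
        if not self.dbw_enabled:
            return  # don't fight the human driver

        tcmd = ThrottleCmd()
        tcmd.enable = True
        tcmd.pedal_cmd_type = ThrottleCmd.CMD_PERCENT
        tcmd.pedal_cmd = throttle
        self.throttle_pub.publish(tcmd)

        bcmd = BrakeCmd()
        bcmd.enable = True
        bcmd.pedal_cmd_type = BrakeCmd.CMD_TORQUE   # brake is commanded as torque (N*m)
        bcmd.pedal_cmd = brake
        self.brake_pub.publish(bcmd)

        scmd = SteeringCmd()
        scmd.enable = True
        scmd.steering_wheel_angle_cmd = steer
        self.steer_pub.publish(scmd)
```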

Breaking all of this down

As you might’ve noticed, there are 3 main steps, corresponding to the Perception, Planning, and Control subsystems: traffic light detection, the waypoint updater, and DBW. Let’s break down how each of these works and the part it plays in building the overall self-driving car.

Perception

As seen in the System architecture diagram, the Perception subsystem has 2 nodes: the Object Detection node and the Traffic Light Detection node.

  1. From there, we use a “width multiplier,” which scales the number of input and output channels in each layer by a factor between 0 and 1.
  2. We’d then also use a “resolution multiplier” to scale down the size of the input image, again by a factor between 0 and 1 (see the sketch below).
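To make those two knobs concrete, here’s a toy calculation (my own illustration, not code from the repo): the width multiplier shrinks the number of channels in each layer, the resolution multiplier shrinks the input image, and together they cut a convolutional layer’s compute by roughly the product of their squares.

```python
# Toy illustration of the two multipliers: alpha scales channel counts,
# rho scales the input resolution, both chosen between 0 and 1.
def scale_layer(in_channels, out_channels, input_size, alpha=0.75, rho=0.5):
    scaled_in = max(1, int(in_channels * alpha))
    scaled_out = max(1, int(out_channels * alpha))
    scaled_size = max(1, int(input_size * rho))
    return scaled_in, scaled_out, scaled_size

# a 32 -> 64 channel layer on a 224x224 input becomes 24 -> 48 channels on a
# 112x112 input, so the multiply-adds drop by roughly alpha^2 * rho^2 (~14% left)
print(scale_layer(32, 64, 224))   # (24, 48, 112)
```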

Planning

Just as we mentioned before, we’d use a waypoint updater algorithm to help create the desired trajectory.

The green path is our trajectory, with the spheres being the waypoints.
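The core step is picking the first waypoint that’s ahead of the car, not just the nearest one. A common way to do this (roughly what the project walkthrough suggests, though treat the details here as my own sketch) is to query a KD-tree for the nearest waypoint and then use a dot product to check whether it already sits behind the car:

```python
# Find the index of the closest waypoint that lies ahead of the car.
import numpy as np
from scipy.spatial import KDTree

def closest_waypoint_ahead(waypoints_2d, car_x, car_y):
    """waypoints_2d is a list of [x, y] pairs along the track."""
    tree = KDTree(waypoints_2d)              # in practice, build this once, not per call
    closest_idx = tree.query([car_x, car_y], 1)[1]

    closest = np.array(waypoints_2d[closest_idx])
    prev = np.array(waypoints_2d[closest_idx - 1])   # -1 wraps around, fine for a looped track
    pos = np.array([car_x, car_y])

    # if the car has already passed the nearest waypoint, step forward to the next one
    if np.dot(closest - prev, pos - closest) > 0:
        closest_idx = (closest_idx + 1) % len(waypoints_2d)
    return closest_idx
```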

Control

For the Control subsystem, we’d use the DBW node to control the car’s throttle, brake, and steering, mainly through a PID controller.
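Here’s a minimal PID sketch in the spirit of the project’s twist controller: the error is the gap between the target velocity (from /twist_cmd) and the measured velocity (from /current_velocity), and the output is clamped to a valid throttle range. The gains below are illustrative, not tuned values from the repo.

```python
# Minimal PID controller: output = kp*error + ki*integral + kd*derivative,
# clamped to [mn, mx] so it can be used directly as a throttle command.
class PID(object):
    def __init__(self, kp, ki, kd, mn=0.0, mx=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min, self.max = mn, mx
        self.int_val = 0.0
        self.last_error = 0.0

    def reset(self):
        # called whenever DBW is disengaged, so the integral term
        # doesn't wind up while the safety driver is in control
        self.int_val = 0.0

    def step(self, error, sample_time):
        self.int_val += error * sample_time
        derivative = (error - self.last_error) / sample_time
        self.last_error = error
        val = self.kp * error + self.ki * self.int_val + self.kd * derivative
        return max(self.min, min(self.max, val))

# usage: error = target velocity - current velocity, sampled at 50 Hz
throttle_pid = PID(kp=0.3, ki=0.1, kd=0.0)
throttle = throttle_pid.step(error=2.0, sample_time=0.02)
```

Resetting the controller whenever DBW is disengaged matters on a real car: otherwise the integral term keeps accumulating while the human is driving, and the car lurches when control is handed back.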

Results

Feel free to take a look at the GitHub repo and run the code yourself!

Planning for the future

Due to COVID, I unfortunately haven’t been able to run the code on Udacity’s real, self-driving car (if you’re a Udacity employee reading this right now, send me an email please!).

Connect with me

Email
