This repo contains the code written to complete the first project of the Udacity Self-Driving Car Nanodegree. The project implements a pipeline to identify lane lines on the road in a video taken from a camera mounted at the center of a vehicle. The code is able to identify both yellow and white lane lines.
To run this project, you need Miniconda installed (follow the official instructions for a quick install). To create an environment for this project, use the following command:

```shell
conda env create -f environment.yml
```

After the environment is created, activate it with:

```shell
source activate carnd-term1
```

and open the project's notebook P1.ipynb inside Jupyter Notebook:

```shell
jupyter notebook P1.ipynb
```
The repo contains the Jupyter notebook P1.ipynb, where the processing pipeline is implemented inside the function `detect_lanes`. The pipeline consists of six steps, each implemented by its own helper function:

- Grayscale conversion: returns a grayscaled version of the input image using the `cv2.cvtColor` method.
- Blur (`gaussian_blur`): applies a Gaussian blur to the provided image using the `cv2.GaussianBlur` method.
- Canny (`canny`): uses a Canny transformation to find edges in the image using the `cv2.Canny` method.
- Masking (`region_of_interest`): keeps only the region where lane lines are expected (for now...).
- Hough transform (`hough_lines`): uses a Hough transformation to find lines in the masked image using `cv2.HoughLinesP`. The function `hough_lines` calls `draw_lines`, which extrapolates the detected line segments into a single line per lane.
- Weighted image (`weighted_img`): combines the lane-lines image with the original image.
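The extrapolation step inside `draw_lines` is the least obvious part of the pipeline. As an illustration only, here is a minimal standalone sketch of one common way to do it: split the Hough segments into left/right by slope sign, average slope and intercept per side, and extrapolate to the bottom of the image and the horizon. The function name `extrapolate_lanes`, the slope threshold, and the averaging strategy are assumptions for this sketch, not the notebook's exact code.

```python
import numpy as np

def extrapolate_lanes(segments, y_bottom, y_top):
    """Average Hough segments into one line per lane.

    segments: iterable of (x1, y1, x2, y2) tuples, like the (flattened)
    output of cv2.HoughLinesP. Returns {'left': (x1, y1, x2, y2), 'right': ...}
    with each lane extrapolated to span y_bottom..y_top, or None for a side
    with no usable segments.
    """
    sides = {"left": [], "right": []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:                 # skip vertical segments (undefined slope)
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:         # reject near-horizontal noise (assumed threshold)
            continue
        # Image y grows downward, so the left lane has negative slope.
        side = "left" if slope < 0 else "right"
        sides[side].append((slope, y1 - slope * x1))  # store (slope, intercept)

    lanes = {}
    for side, params in sides.items():
        if not params:
            lanes[side] = None
            continue
        slope, intercept = np.mean(params, axis=0)    # average the fits
        x_bottom = int((y_bottom - intercept) / slope)
        x_top = int((y_top - intercept) / slope)
        lanes[side] = (x_bottom, y_bottom, x_top, y_top)
    return lanes

# Hypothetical segments on a 960x540 frame: two left-lane segments
# (negative slope) and two right-lane segments (positive slope).
segments = [(100, 540, 300, 340), (110, 538, 290, 350),
            (600, 340, 850, 540), (620, 355, 840, 530)]
lanes = extrapolate_lanes(segments, y_bottom=540, y_top=330)
```

A single averaged line per side is enough for straight roads; curved lanes would need a higher-order fit.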