Performing Face Detection on Videos in Python Using OpenCV


Hey, wake up, nerds.

Today we are going to build a face detection script that works on video using Python and OpenCV. In the previous blog post, we discussed how to perform face detection on still images, and that script worked awesomely.

As mentioned earlier, performing face detection on a still image and on a video feed are very similar operations; the latter is just the sequential version of the former. Face detection on videos is simply face detection applied to each frame read into the program from the camera.

For a better understanding, let's break down the overall process of face detection into the tasks our program should perform.
First, it should open a camera feed, then read a single frame, examine the captured frame for faces, scan for eyes within the detected faces, and lastly draw rectangles around the faces and eyes.

Before proceeding further, make sure that you have OpenCV installed on your machine and a working webcam connected to it (you can also use the built-in camera of your laptop).

So, let’s dive straight into the code.


face_detection_video.py

Start by importing the necessary modules; in this case, OpenCV is the only requirement.

import cv2

Here we are using Haar cascades for detecting faces. This is a machine-learning-based approach in which a cascade function is trained on a set of input data. OpenCV already ships with many pre-trained classifiers for faces, eyes, smiles, etc. Today we will use both the face and the eye classifier. You need to download the trained classifier XML files (haarcascade_frontalface_default.xml and haarcascade_eye.xml), which are available in OpenCV's GitHub repository.

Load the Haar cascade files so that OpenCV can perform the detection. We declare a face_cascade variable, a CascadeClassifier object responsible for face detection, and an eye_cascade variable, a CascadeClassifier object for eyes, since we need to detect eyes within the detected faces.

face_cascade = cv2.CascadeClassifier(r"C:\Users\Cyril Tom Mathew\Desktop\haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(r"C:\Users\Cyril Tom Mathew\Desktop\haarcascade_eye.xml")
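
By the way, if you installed OpenCV through pip (the opencv-python package), the cascade XML files are already bundled with it, so as an alternative you can load them without hard-coding a path. A small sketch, assuming the cv2.data submodule is available in your install:

# cv2.data.haarcascades is the directory that contains the bundled cascade XML files
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")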


Then, we open a VideoCapture object, basically the camera. The VideoCapture constructor takes a parameter indicating which camera to use; zero indicates the first camera available (your built-in camera).

camera = cv2.VideoCapture(0)
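
This check isn't in the original script, but it can save you a confusing crash if the camera fails to open, so you may want to add something like:

if not camera.isOpened():  # the camera could not be opened (wrong index, in use, etc.)
    raise RuntimeError("Could not open the camera")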


Next up, we capture a frame. The read() method returns two values: a Boolean indicating whether the frame was read successfully, and the frame itself. We capture the frame and then convert it to grayscale, which is necessary because face detection in OpenCV happens in the grayscale color space.

while True:
    ret, frame = camera.read()  # ret is True when a frame was grabbed successfully
    gray_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # detection works on grayscale
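
One small caution, not in the original snippet: read() can fail, for example if the camera is disconnected, so it is sensible to bail out of the loop when ret is False:

    if not ret:  # no frame was grabbed; stop the loop
        break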


The next step is where the actual face detection happens.

faces = face_cascade.detectMultiScale(gray_img, 1.2, 5) 
 
The detectMultiScale function takes three arguments here: the input image, scaleFactor, and minNeighbors. scaleFactor specifies how much the image is shrunk at each scale of the detection process (1.2 means roughly a 20% reduction per step), and minNeighbors specifies how many neighboring detections a candidate rectangle needs before it is kept as a face. You may have to tweak these values slightly to get the best results.
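
For readability, the same call can also be written with keyword arguments, which is equivalent to the positional form above:

faces = face_cascade.detectMultiScale(gray_img, scaleFactor=1.2, minNeighbors=5)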


This detection operation returns the rectangular regions where faces were found, each as an (x, y, w, h) tuple giving the top-left corner plus the width and height. We use these coordinates to draw the rectangles on our frame.

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x,y), (x+w,y+h), (0,255,0), 2)  # green rectangle around the face
    crop_face = gray_img[y:y+h, x:x+w]  # grayscale face region, used for eye detection
    crop = frame[y:y+h, x:x+w]  # colour face region, used for drawing the eye rectangles


We create a region of interest corresponding to the face rectangle, and within this rectangle, we perform eye detection using the CascadeClassifier object for eyes. Again, we loop through the resulting eye tuples and draw rectangles around them with a different color.

eyes = eye_cascade.detectMultiScale(crop_face, 1.03, 5, 0, (40,40))
for (p, q, r, s) in eyes:
    cv2.rectangle(crop, (p,q), (p+r,q+s), (255,0,0), 2)
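
In the call above, the trailing 0 and (40,40) are the flags and minSize parameters passed positionally. If you find that cryptic, the same call can be written with keyword arguments (flags can simply be omitted here):

eyes = eye_cascade.detectMultiScale(crop_face, scaleFactor=1.03, minNeighbors=5, minSize=(40, 40))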


Finally, we show the resulting frame in a window. All being well, if a face is within the camera's field of view, you will see a green rectangle around it and a blue rectangle around each eye. The call to waitKey(1) keeps the window updating between frames and checks the keyboard, so the loop breaks (and the window closes) when the 'q' key is pressed.

cv2.imshow("Camera", frame) 
if cv2.waitKey(1) & 0xff == ord("q"): 
    break
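
Once the loop ends, it is good practice to release the camera and close the window. These two lines are not in the snippets above, but they are the standard OpenCV cleanup:

camera.release()  # free the camera for other applications
cv2.destroyAllWindows()  # close the display window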


That's it, guys, we are finished. Let's put it all together and run the code.
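
If you want everything in one place, here is the full script assembled from the snippets above (the cascade paths are just examples; point them at wherever you saved the XML files):

import cv2

# Load the pre-trained Haar cascade classifiers (adjust the paths for your machine)
face_cascade = cv2.CascadeClassifier(r"C:\Users\Cyril Tom Mathew\Desktop\haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(r"C:\Users\Cyril Tom Mathew\Desktop\haarcascade_eye.xml")

# Open the first available camera
camera = cv2.VideoCapture(0)

while True:
    ret, frame = camera.read()
    if not ret:  # no frame was grabbed; stop the loop
        break

    gray_img = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray_img, 1.2, 5)

    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # green face box
        crop_face = gray_img[y:y+h, x:x+w]
        crop = frame[y:y+h, x:x+w]

        eyes = eye_cascade.detectMultiScale(crop_face, 1.03, 5, 0, (40, 40))
        for (p, q, r, s) in eyes:
            cv2.rectangle(crop, (p, q), (p + r, q + s), (255, 0, 0), 2)  # blue eye box

    cv2.imshow("Camera", frame)
    if cv2.waitKey(1) & 0xff == ord("q"):
        break

camera.release()
cv2.destroyAllWindows()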

[Output: the webcam feed with green rectangles drawn around detected faces and blue rectangles around the eyes]



Wow, it works really well. I hope you found this useful, and if you have any trouble implementing it or need any help, feel free to comment below.

Thank You.



