Emotion Detection

The script loads a pre-trained Keras model for facial expression recognition and OpenCV's Haar cascade classifier for face detection. It then runs a real-time webcam feed through the pipeline: detecting faces, extracting 48x48 grayscale crops, passing them to the Keras model for classification into 7 expressions (angry, disgust, fear, happy, sad, surprise, neutral), and drawing the prediction on screen. The loop runs until the 'q' key is pressed.
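As a minimal sketch, the same pipeline can be run on a single still image instead of a webcam feed; the model and cascade paths are the ones used in the full script below, and test.jpg is a hypothetical input:

import cv2
import numpy as np
from keras.models import load_model

target = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_alt2.xml')
model = load_model('keras_model/model_5-49-0.62.hdf5')

image = cv2.imread('test.jpg')  # hypothetical input image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in faceCascade.detectMultiScale(gray, scaleFactor=1.1):
    # 48x48 grayscale crop, scaled to [0, 1], shaped (batch, channel, h, w).
    crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
    crop = crop.astype('float32') / 255
    crop = crop.reshape(1, 1, 48, 48)
    print(target[np.argmax(model.predict(crop))])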

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Sat Jun 3 15:36:49 2017

@author: adam
"""

import cv2
import numpy as np
from keras.models import load_model

# Haar cascade for face detection (shipped with OpenCV).
faceCascade = cv2.CascadeClassifier('haarcascade_frontalface_alt2.xml')

# Default webcam and the pre-trained expression classifier.
video_capture = cv2.VideoCapture(0)
model = load_model('keras_model/model_5-49-0.62.hdf5')

# The 7 expression classes, in the order of the model's output vector.
target = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral']
font = cv2.FONT_HERSHEY_SIMPLEX
while True:
    # Capture frame-by-frame; stop if the camera returns no frame.
    ret, frame = video_capture.read()
    if not ret:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect faces in the grayscale frame.
    faces = faceCascade.detectMultiScale(gray, scaleFactor=1.1)

    for (x, y, w, h) in faces:
        # Draw a rectangle around the face.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        # Crop the face, resize to 48x48, convert to grayscale,
        # and scale pixel values to [0, 1].
        face_crop = frame[y:y + h, x:x + w]
        face_crop = cv2.resize(face_crop, (48, 48))
        face_crop = cv2.cvtColor(face_crop, cv2.COLOR_BGR2GRAY)
        face_crop = face_crop.astype('float32') / 255

        # Reshape to (batch, channel, height, width) for the model.
        face_crop = face_crop.reshape(1, 1, face_crop.shape[0], face_crop.shape[1])

        # Label the face with the most probable expression.
        result = target[np.argmax(model.predict(face_crop))]
        cv2.putText(frame, result, (x, y), font, 1, (200, 0, 0), 3, cv2.LINE_AA)

    # Display the resulting frame
    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
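A note on the input shape: the reshape to (1, 1, 48, 48) assumes Keras is configured for channels-first image ordering; under the default TensorFlow ('channels_last') setting the model would instead expect (1, 48, 48, 1). A quick check, assuming the same model file is on hand:

from keras.models import load_model

model = load_model('keras_model/model_5-49-0.62.hdf5')
# (None, 1, 48, 48) for channels-first, (None, 48, 48, 1) for channels-last.
print(model.input_shape)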
