PYTHON PROJECT
AIM: Age and gender predictor using Python, OpenCV, and CNNs
THEORY:
Age and gender prediction is a computer vision application where a system detects a human
face and estimates the person’s age range and gender based on facial features. This is a
classification problem that uses Convolutional Neural Networks (CNNs) trained on labeled
datasets.
Core Concepts
1. Face Detection (OpenCV)
• The first step is detecting faces in an image or video.
• OpenCV’s dnn module can load pre-trained deep learning detectors; this project uses the
TensorFlow version of OpenCV’s SSD face detector (a minimal detection sketch follows this item).
• Models used:
o opencv_face_detector.pbtxt
o opencv_face_detector_uint8.pb
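A minimal sketch of this detection step, assuming the two detector files above sit in a local
models/ folder and that test.jpg (a placeholder name) is any image containing a face:

import cv2

# Load the face detector (TensorFlow graph + text config)
face_net = cv2.dnn.readNet("models/opencv_face_detector_uint8.pb",
                           "models/opencv_face_detector.pbtxt")

img = cv2.imread("test.jpg")   # placeholder image name
h, w = img.shape[:2]

# The detector expects a 300x300 blob with these mean values subtracted
blob = cv2.dnn.blobFromImage(img, 1.0, (300, 300), [104, 117, 123], True, False)
face_net.setInput(blob)
detections = face_net.forward()   # shape: (1, 1, N, 7)

# Each row holds [_, _, confidence, x1, y1, x2, y2]; the box coordinates are
# normalised to 0..1, so they are scaled back to pixel values here
for i in range(detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.7:
        box = detections[0, 0, i, 3:7] * [w, h, w, h]
        print("Face at", box.astype(int), "confidence", float(confidence))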
2. Age and Gender Prediction (CNN Models)
• Two separate CNN models are used: one for gender classification, the other for age
classification.
• These models are usually trained on datasets like:
o IMDB-WIKI or Adience (for age)
o Adience or custom sets (for gender)
• Model files:
o Gender:
▪ gender_net.caffemodel
▪ gender_deploy.prototxt
o Age:
▪ age_net.caffemodel
▪ age_deploy.prototxt
3. CNN Architecture
• Convolutional Neural Networks (CNNs) are designed to extract spatial features from
images.
• Layers (a minimal architecture sketch follows this list):
o Convolution + ReLU
o Pooling
o Fully Connected (Dense)
o Softmax (for classification)
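The layer pattern above can be written down directly in code. The following is an illustrative
sketch only, using tensorflow.keras (not part of this project); the actual age and gender
networks used here are pre-trained Caffe models loaded through OpenCV:

from tensorflow.keras import layers, models

# Tiny CNN with the layer pattern listed above:
# Convolution + ReLU -> Pooling -> Fully Connected -> Softmax
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(227, 227, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(8, activation='softmax'),   # 8 classes = the 8 age ranges
])
model.summary()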
WORKFLOW
1. Capture Image/Video Frame (a webcam sketch follows this list)
2. Detect Face using OpenCV DNN
3. Preprocess Face (resize to 227x227, mean subtraction)
4. Feed Face to Gender CNN → Predict Gender
5. Feed Face to Age CNN → Predict Age Range
6. Display Results on Image
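Step 1 can read either a still image (as in the source code below) or live video. A minimal
sketch of the video variant, assuming the default webcam and a hypothetical detect_and_label()
helper that wraps steps 2-6:

import cv2

def detect_and_label(frame):
    # Hypothetical helper: runs face detection and the age/gender CNNs on one
    # frame and draws the results (workflow steps 2-6)
    return frame

cap = cv2.VideoCapture(0)   # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('Age & Gender', detect_and_label(frame))
    if cv2.waitKey(1) & 0xFF == ord('q'):   # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()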
OUTPUT
• Gender: ['Male', 'Female']
• Age Ranges:
['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)', '(38-43)', '(48-53)', '(60-100)']
APPLICATIONS
• Targeted advertising
• Demographic analytics
• Human-computer interaction
• Surveillance and security
SOURCE CODE:
import cv2
import sys

# Read Image
image = cv2.imread('boy_2.png')
image = cv2.resize(image, (650, 675))

# Define Models
face_pbtxt = "models/opencv_face_detector.pbtxt"
face_pb = "models/opencv_face_detector_uint8.pb"
age_prototxt = "models/age_deploy.prototxt"
age_model = "models/age_net.caffemodel"
gender_prototxt = "models/gender_deploy.prototxt"
gender_model = "models/gender_net.caffemodel"
MODEL_MEAN_VALUES = [104, 117, 123]

# Load Models
face_net = cv2.dnn.readNet(face_pb, face_pbtxt)
age_net = cv2.dnn.readNet(age_model, age_prototxt)
gender_net = cv2.dnn.readNet(gender_model, gender_prototxt)

# Setup Classifications
age_classifications = ['(0-2)', '(4-6)', '(8-12)', '(15-20)', '(25-32)',
                       '(38-43)', '(48-53)', '(60-100)']
gender_classifications = ['Male', 'Female']

# Copy Image
img_cp = image.copy()

# Get Image Dimensions & Blob
img_h = img_cp.shape[0]
img_w = img_cp.shape[1]
blob = cv2.dnn.blobFromImage(img_cp, 1.0, (300, 300), MODEL_MEAN_VALUES, True, False)

# Detect Faces
face_net.setInput(blob)
detected_faces = face_net.forward()
face_bounds = []

# Draw Rectangle Over Faces
for i in range(detected_faces.shape[2]):
    confidence = detected_faces[0, 0, i, 2]
    if confidence > 0.99:
        x1 = int(detected_faces[0, 0, i, 3] * img_w)
        y1 = int(detected_faces[0, 0, i, 4] * img_h)
        x2 = int(detected_faces[0, 0, i, 5] * img_w)
        y2 = int(detected_faces[0, 0, i, 6] * img_h)
        cv2.rectangle(img_cp, (x1, y1), (x2, y2), (0, 255, 0), int(round(img_h / 175)), 8)
        face_bounds.append([x1, y1, x2, y2])

if not face_bounds:
    print("No faces were detected.")
    sys.exit()

# Predict Gender and Age For Each Detected Face
for face_bound in face_bounds:
    try:
        # Crop the face with a 15-pixel margin, clamped to the image borders
        face_img = img_cp[max(0, face_bound[1] - 15): min(face_bound[3] + 15, img_cp.shape[0] - 1),
                          max(0, face_bound[0] - 15): min(face_bound[2] + 15, img_cp.shape[1] - 1)]
        blob = cv2.dnn.blobFromImage(face_img, 1.0, (227, 227), MODEL_MEAN_VALUES, True)

        gender_net.setInput(blob)
        gender_prediction = gender_net.forward()
        gender = gender_classifications[gender_prediction[0].argmax()]

        age_net.setInput(blob)
        age_prediction = age_net.forward()
        age = age_classifications[age_prediction[0].argmax()]

        cv2.putText(img_cp, f'{gender},{age}', (face_bound[0], face_bound[1] + 10),
                    cv2.FONT_HERSHEY_COMPLEX, 1, (0, 0, 255), 2, cv2.LINE_AA)
    except Exception as e:
        print(e)
        continue

cv2.imshow('Result', img_cp)
cv2.waitKey(0)
cv2.destroyAllWindows()
OUTPUT: