Militant and Weapon Detection Final Report

Abstract
Security is always a main concern in every domain due to the rising crime rate in crowded events and suspicious, isolated areas. Anomaly detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security and personal property, surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent monitoring. This project performs weapon and militant detection based on a convolutional neural network (CNN). The proposed implementation uses two types of datasets: one dataset with pre-labelled images, and another set of images that were labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy.
Crime is defined as an act harmful not only to the person involved but also to the community as a whole. The aim is to predict crime using an image dataset and finally to calculate the accurate performance of the detector. The proposed algorithms are able to alert the human operator when a weapon or militant is visible in the image. The work is mainly focused on limiting the number of false alarms in order to allow real-life application of the system. For future work, it is planned to use the system in a live application and to improve detection so as to reduce crime.
Table of Contents

Abstract
Acknowledgements
Table of Contents
Table of Figures
List of Tables
List of Snapshots

Chapter 1 Introduction
1.1. Introduction
1.2. Motivation
1.3. Problem Statement
1.4. Scope of the project
1.5. Objectives
1.6. Literature review
1.7. Organization of the report
1.8. Summary
Chapter 3 High Level Design
3.4. Specification using use case diagram
3.5. Data flow diagram
3.5.1. DFD for Preprocessing
3.5.2. DFD for Identification
3.5.3. DFD for feature extraction
3.5.4. DFD for classification and detection
3.6. Summary

Chapter 5 Implementation
5.1. Implementation requirements
5.2. Programming language used
5.2.1. Key features of Python
5.2.2. Python GUI
5.3. Pseudocode for each module
5.3.1. Pseudocode for dataset
5.3.2. Pseudocode for image extraction
5.3.3. Pseudocode for analyzing the image
5.3.4. Pseudocode for detection
5.4. Summary

References
Appendix
Table of Figures

List of Tables

List of Snapshots
CHAPTER 1
INTRODUCTION
Introduction
Nowadays, many crimes are reported in public places and homes involving different types of weapons, such as firearms, swords and cutters. To monitor and minimize such crimes, CCTV cameras are installed in public places. Generally, the video footage recorded through these cameras is monitored by security staff. Success or failure in detecting crime depends on the attention of the operator, and it is not always possible for a person to pay attention to all the video feeds from multiple cameras displayed on a single screen. The nature and extent of a crime depend on the type of weapon used and the militant involved.
If a video surveillance system can generate a prior alert, then timely reaction can reduce losses to the maximum extent. Weapon and militant classification can also be added to such a surveillance system. Weapons and militants may be classified either using standard techniques with a machine learning classifier or by using deep learning-based techniques; the trained model is then used for classifying any new input image. The accuracy of such approaches depends on the robustness and diversity of the extracted features. To overcome these limitations, a deep Convolutional Neural Network is preferable, as it does not require any explicit feature engineering on the input image. Deep Convolutional Neural Networks consist of a number of convolutional layers, pooling layers and fully connected layers.
Convolutional layers extract various features from the input image with a high degree of invariance to scaling and other forms of deformation. Thereafter, fully connected layers learn from these features. Due to this property, deep CNNs are applied in many applications and achieve better accuracy than standard machine learning based approaches. The CNN architecture is inspired by the organization and functionality of the visual cortex and is designed to mimic the connectivity pattern of neurons within the human brain. The neurons within a CNN are arranged in a three-dimensional structure, with each set of neurons analyzing a small region or feature of the image. One more benefit of such an architecture is that it can be partially or fully reused for related applications. This is done through transfer learning, which reduces model development time and the requirement for a large dataset. A minimal sketch of this idea follows.
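As a hedged illustration of transfer learning, the following minimal Keras sketch reuses a pretrained backbone for a two-class weapon/no-weapon classifier. The choice of MobileNetV2, the input size and the layer sizes are assumptions for illustration, not the exact model used in this report.

# Minimal transfer-learning sketch (backbone and sizes are illustrative assumptions)
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # reuse the pretrained convolutional features unchanged

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(2, activation="softmax"),  # weapon / no-weapon
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Freezing the backbone means only the small classification head is trained, which is why transfer learning needs far less data and development time.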
Motivation
Public safety is a major concern in today's modern society. Modern weaponry and firearms pose serious threats to the safety and security of everyday people, and recent events and media coverage have only further publicized the inherent dangers that one may face in even the most public of places. It is hoped that, through the vigilance of the common citizen and the swift response of the authorities, violent perpetrators who threaten others with dangerous weaponry can be quickly and reliably apprehended and fatalities prevented. But oftentimes, when threatened with the very real danger of a live firearm, people panic, and their justified self-preservation may prevent the proper authorities from being notified, causing small but noticeable delays in police response time at best and resulting in the loss of lives from failure to respond at worst.
Problem Statement
To design and implement an efficient system to detect weapons and militants in the surrounding area.
Process: The processing consists of identifying the individual component parts of the weapon and the militant using the CNN algorithm. After identification, any weapon or militant that is found is detected.
Output: Display the weapon type when a weapon or militant is detected.
Objectives
The main objectives of the project are:
To detect the segment containing weaponry within the detected image segment containing a human.
To detect the militant and send an intimation through Telegram (a minimal sketch of this step follows the list).
To track multiple weapons at a time and to identify the type of each weapon.
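The sketch below shows one plausible way to implement the Telegram intimation using the public Telegram Bot API; the token, chat id and function name are hypothetical placeholders, not values from this project.

# Hedged sketch of the Telegram intimation step; BOT_TOKEN and CHAT_ID are placeholders.
import requests

BOT_TOKEN = "<bot-token-from-BotFather>"  # hypothetical placeholder
CHAT_ID = "<operator-chat-id>"            # hypothetical placeholder

def send_alert(weapon_name, image_path=None):
    # notify the operator that a weapon/militant has been detected
    url = "https://api.telegram.org/bot{}/sendMessage".format(BOT_TOKEN)
    text = "ALERT: {} detected by the surveillance system.".format(weapon_name)
    requests.post(url, data={"chat_id": CHAT_ID, "text": text}, timeout=10)
    if image_path:  # optionally attach the frame that triggered the alert
        photo_url = "https://api.telegram.org/bot{}/sendPhoto".format(BOT_TOKEN)
        with open(image_path, "rb") as f:
            requests.post(photo_url, data={"chat_id": CHAT_ID},
                          files={"photo": f}, timeout=10)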
Literature Review
1. Weapon detection using artificial intelligence and deep learning for security applications; Harsha Jain, ICESC 2020. [1]
Security is always a main concern in every domain due to the rising crime rate in crowded events and suspicious, isolated areas. Anomaly detection and monitoring are major applications of computer vision for tackling such problems. Owing to the growing demand for the protection of safety, security and personal property, video surveillance systems that can recognize and interpret scenes and anomalous events play a vital role in intelligent monitoring. This paper implements automatic gun (weapon) detection using the convolutional neural network (CNN) based SSD and Faster RCNN algorithms. The implementation uses two types of datasets: one with pre-labelled images, and another set of images labelled manually. Results are tabulated; both algorithms achieve good accuracy, but their application in real situations depends on the trade-off between speed and accuracy. SSD and Faster RCNN are simulated on the pre-labelled and self-created image datasets for weapon (gun) detection. Both algorithms are efficient and give good results, but their real-time application depends on a trade-off between speed and accuracy. In terms of speed, SSD gives better speed with 0.736 s/frame, whereas Faster RCNN gives 1.606 s/frame, which is poor compared to SSD. With respect to accuracy, Faster RCNN gives better accuracy of 84.6%, whereas SSD gives 73.8%, which is poor compared to Faster RCNN. SSD provided real-time detection due to its faster speed, but Faster RCNN provided superior accuracy.
2. Automatic handgun and knife detection algorithms; Arif Warsi IEEE Conference
2019. [2]
Nowadays, the surveillance of criminal activities requires constant human monitoring. Most of these activities involve handheld weapons, mainly pistols and guns. Object detection algorithms have been used for detecting weapons such as knives and handguns. Handgun and knife detection is one of the most challenging tasks due to occlusion, variation in viewpoint and background clutter, which occur frequently in a scene. This paper reviewed and categorized various algorithms that have been used in the detection of handguns and knives, with their strengths and weaknesses. The algorithms are classified into two major categories, namely non-deep-learning and deep learning algorithms. Non-deep-learning algorithms depend heavily on image quality; noise and occlusion impact the algorithms used for edge detection and colour segmentation. Hence, they are suitable for images such as X-ray and terahertz images. One of the major problems with all non-deep-learning algorithms, and with some deep learning algorithms for handgun and knife detection, is the use of different custom datasets, which makes the comparison of results unreliable since they do not share the same dataset. Some deep learning algorithms use the ImageNet dataset and provide accuracy and performance reports. Taking the ImageNet dataset as a benchmark and the available results, Faster RCNN has the best speed of 0.2 frames per second, whereas the OverFeat algorithm has shown a more accurate result of 89% mAP.
4. Handheld Gun detection using Faster R-CNN Deep Learning; Gyanendra Kumar
Verma IEEE Conference 2019. [4]
In this paper, an automatic handheld gun detection system using deep learning, particularly a CNN model, is presented. Gun detection is a very challenging problem because of the various subtleties associated with it. One of the most important challenges is occlusion of the gun, which arises frequently. There are two types of gun occlusion, namely gun-to-object and gun-to-site/scene occlusion. Normally, occlusions in gun detection arise under three conditions: self-occlusion, inter-object occlusion, or occlusion by the background site/scene structure. Self-occlusion arises when one portion of the gun is occluded by another. Inter-object occlusion occurs when some other object, such as a hand or clothing, occludes the gun. Occlusion due to the background occurs when the gun is occluded by structures in the background. These occlusions can be either partial or full; partial or full occlusion takes place, for example, when a gun is carried in the hand or in a holster. There are many methods to handle occlusion, based on depth analysis of the object from the camera, fusion of colour and shape features of the object, and optimal positioning of the camera. Inter-class variation in guns occurs due to variation in the colour and structure of different models of guns. Guns are widely available in various colours, such as silver and black, which makes detection in images a challenging task. View variation in gun detection arises from the different 2-D views of a gun from different viewpoints or orientations. Rotation variation arises due to rotation of the object in its plane, whereas scale variation appears because of shifts in the distance of the gun from the CCTV camera while it records the video.
Chapter 2 - System requirement specification: As the name suggests, the second chapter consists of the specific requirements and the software and hardware requirements used in this project. The chapter is summarized at the end.
Chapter 3 - High level design: This chapter contains the design considerations that were made, the architecture of the proposed system, and the use case diagrams used for the specification of the system. It also includes data flow diagrams for each module, state charts for the proposed method, and the module specification.
Chapter 4 - Detailed design: Chapter 4 explains the detailed functionalities, the description of each module and the structural chart diagram.
Summary
The first chapter gives a brief introduction to the weapon and militant detection system and the theory of Convolutional Neural Networks. The motivation of the project is discussed in section 1.2. The problem statement of the project is explained in section 1.3, and the scope and objectives of the project are described in sections 1.4 and 1.5 respectively. Finally, section 1.6 details the reviewed reference papers.
CHAPTER 2
Specific Requirement
Access to a client session of Python with the Keras toolbox is required for job submission.
A shared file system between user desktops and the cluster.
A maximum of one Python worker per physical CPU core.
Hardware Requirement
Processor: Intel Core
Processor Speed: 1.86 GHz
RAM: 4 GB or more
Hard Disk Space: 500 GB or more
Monitor: 15" VGA colour
Software Requirement
Operating system: Windows 10
Coding Language: Python
Software Tool: Keras
Toolbox: Image Processing Toolbox
Interfaces
The Jupyter Notebook
The Jupyter Notebook is an open-source web application that allows you to create and
share documents that contain live code, equations, visualizations and narrative text. Uses
include: data cleaning and transformation, numerical simulation, statistical modeling, data
visualization, machine learning, and much more.
Image Processing Toolbox
Key Features:
Image enhancement, including filtering, filter design, deblurring and contrast enhancement.
Image analysis, including feature detection, morphology, segmentation and measurement.
Spatial transformations and image registration.
Support for multidimensional image processing.
Summary
Chapter 2 covers all the system requirements needed to develop the proposed system. Section 2.1 gives the specific requirements, section 2.2 the hardware requirements, and section 2.3 the software requirements, such as the programming language. Section 2.4 describes the interfaces used in this project; sections 2.4.1 and 2.4.2 cover the Jupyter Notebook and the Image Processing Toolbox respectively.
CHAPTER 3
Design Consideration
The design consideration describes how the system behaves in boundary environments and what action should be taken if an abnormal case occurs. Some of the design considerations are data collection, pre-processing methods, and classification and prediction.
The design considerations are formulated to bring universal accessibility design principles and requirements to the attention of the designers. They can also be used to identify barriers in existing systems.
The proposed system has the following steps for weapon and militant detection:
1. Image Pre-processing
2. Identification
3. Feature Extraction
4. Weapon Recognition
5. Militant Detection
6. Intimation
Image Pre-processing
Image processing is a mechanism that manipulates images in different ways in order to enhance image quality. Images are both the input and the output of image processing techniques; it is an image-to-image transformation used for enhancement. Firstly, the RGB image is converted to a grayscale image, which helps reduce the complexity of the image and also makes the work easier. Then, using the min-max scaler method, the grayscale values are converted into binary values, which are taken as the input for the further process. In the obtained binary matrix, regions with value one are considered black and regions with value zero are considered white. Using these values, the region of interest can be identified, so they are useful for feature extraction and identification of the region of interest. A minimal sketch of this step is given below.
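The following sketch implements the pre-processing step just described, assuming OpenCV and NumPy are available; the 0.5 threshold and the input filename are assumptions, since the report does not state the exact values.

# Sketch of the pre-processing step: RGB -> grayscale -> min-max scaling -> binary matrix
import cv2
import numpy as np

img = cv2.imread("input.jpg")                 # BGR image from disk (path is illustrative)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # reduce three channels to one

# Min-max scaling: map pixel values into [0, 1]
scaled = (gray - gray.min()) / (gray.max() - gray.min() + 1e-8)

# Binarise: value 1 marks dark (region-of-interest) pixels, per the convention above
binary = (scaled < 0.5).astype(np.uint8)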
Identification
This stage identifies the region that needs to proceed to further processing; it identifies the particular region of the image that is used for subsequent steps such as feature extraction and classification. The output of the pre-processing step is given as the input to the identification process, which is based on the binary values obtained in pre-processing. The black regions are considered the region of interest, obtained from the pre-processing of the images. That region is treated as the part of the image from which the weapon and militant will be identified, and the identified weapon and militant images are passed to the feature extraction process.
Feature Extraction
This stage extracts the required features from the identified region obtained in the previous step. That region is compressed into a reduced-size matrix to control overfitting; reducing the matrix size helps reduce the memory footprint of the images. Then the flattening process is applied to the reduced matrix, converting it to a one-dimensional array, which is used for the final detection.
Militant Detection:
For detecting militants, the data is trained using the YOLO model. YOLO, in a single glance, takes the entire image and predicts bounding box coordinates and class probabilities for a set of boxes. YOLO's greatest advantage is its outstanding pace: it is extremely fast and can handle 45 frames per second. Among the versions of YOLO, v3 and v5 are the fastest and more accurate in terms of detecting small objects. The algorithm used here, YOLO, consists of 106 layers in total. The architecture is made up of three distinct layer types. First is the residual layer, which is formed when an activation is forwarded directly to a deeper layer in the network; in a residual setup, the outputs of layer 1 are added to the outputs of layer 2. Second is the detection layer, which performs detection at three different scales or stages; the size of the grids is increased for detection. Third is the up-sampling layer, which increases the spatial resolution of an image; here the image is up-sampled before it is scaled. A concatenation operation is used to join the outputs of a previous layer to the present layer, and an addition operation is used to add previous layers. In Fig. 3, the pink blocks are the residual layers, the orange blocks are the detection layers and the green blocks are the up-sampling layers. Detection at three different scales is shown in the figure.
YOLO makes predictions at three different scales. The detection layers detect on feature maps of three different sizes, with strides 32, 16 and 8 respectively. This means that, for a 416 x 416 input, detections are made on grids of 13 x 13, 26 x 26 and 52 x 52.
The working of YOLO is outlined below.
Darknet: The algorithm is implemented using an open-source neural network framework, Darknet, which was developed in C and CUDA to render the speedy calculations on a GPU necessary for real-time predictions.
• DNModel.py: The Darknet model file; computer vision code that builds the model from the configuration file and appends each layer.
• Util.py: Contains all the formulas used.
• imageprocees.py: Performs the image processing tasks. It takes the input images, resizes them, performs up-sampling, and applies the transpose function.
• detect.py: The main code that is run to perform object detection. It uses all the above-mentioned files and performs all the functions according to the YOLO concept. An equivalent sketch using OpenCV's DNN module follows.
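The report's Darknet files are not reproduced here, so the hedged sketch below shows an equivalent detection loop using OpenCV's DNN module instead; the cfg/weights/names file paths and the 0.5 confidence threshold are assumptions.

# Hedged sketch of detect.py's role via OpenCV DNN (file paths are assumptions)
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
classes = open("coco.names").read().strip().split("\n")  # COCO class labels

img = cv2.imread("frame.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
# forward pass returns the 13x13, 26x26 and 52x52 detection scales
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:            # det = [cx, cy, w, h, objectness, class scores...]
        scores = det[5:]
        cls = int(np.argmax(scores))
        if scores[cls] > 0.5:  # assumed confidence threshold
            print("detected:", classes[cls])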
System Architecture
Figure 3.1 shows the system architecture of the proposed system. The input image is pre-processed and converted to a grayscale image to obtain a clear view, and is then converted into binary values. The next step identifies the part that needs to be processed further. The required features are then extracted in the CNN convolution layer. By passing those features through the different layers of the CNN, we obtain a compressed representation, which is used for the detection of the weapon and militant using the SoftMax activation function.
Module Specification
Module specification improves the structural design by breaking the system down into modules and solving each as an independent task. By doing so, the complexity is reduced and the modules can be tested independently. The model has four modules, namely pre-processing, identification, feature extraction and detection.
This project has four phases in the weapon and militant detection system, as shown in the figure below; each phase signifies functionality provided by the proposed system. In the data pre-processing phase, the RGB image is converted to grayscale, and the grayscale values are converted to binary values for efficient calculation of features in the next phase. The second phase identifies the required part of the image from which the weapon needs to be detected. In the binary values, zero is considered white and one is considered black.
The black region is identified as the required region from which to extract features in the next phase. For the next process, the convolutional neural network algorithm is applied. The third phase extracts the features from the identified region in the convolution layer of the CNN. This includes the part of the image that is considered the required part and that is used for the detection of the weapon and militant. All the required information of the image is converted into pixels and stored in image form.
In the final phase, each feature from the previous phase is considered; these features are extracted from the convolution layer of the CNN and sent to the fully connected layer. An artificial neural network is applied to those features through continuous iteration. In the hidden layers of the ANN, each feature is efficiently identified, and finally a predictive value for the output is obtained using the SoftMax activation function. Based on that value, the weapon and militant are detected. A minimal Keras sketch of this pipeline follows.
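A minimal Keras sketch of the four-phase pipeline just described, under the assumption of 64x64 single-channel (binary/grayscale) inputs and five weapon classes; both are illustrative, not the report's exact configuration.

# Sketch of the CNN pipeline: convolution -> pooling -> flatten -> ANN -> softmax
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),           # shrink the feature maps
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                      # one-dimensional feature vector
    layers.Dense(128, activation="relu"),  # hidden layer of the ANN
    layers.Dense(5, activation="softmax")  # predictive value per weapon class
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])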
Figure 3.2 shows the use case diagram; in the Unified Modelling Language (UML), this is a type of behavioral diagram defined by and created from a use-case analysis. Here the user can collect the data and load it into the system. The system stores the data for training and testing the model; here the system is taken as an actor. The training and testing data are given to the CNN for further classification.
Classification of the data is done by the different layers of the CNN. Feature extraction is done by the convolution layer, and then, using the artificial neural network in the fully connected layer, the weapon and militant can be identified. Detection is based on the prediction value calculated using the softmax activation function; based on this prediction value, the weapon and militant are detected.
A data flow diagram (DFD) is a graphic representation of the "flow" of data through an information system. A data flow diagram can also be used for the visualization of data processing (structured design). It is common practice for a designer to first draw a context-level DFD, which shows the interaction between the system and outside entities.
Data flow diagrams show the flow of data from external entities into the system, how the data moves from one process to another, and its logical storage. There are only four symbols:
1. Squares represent external entities, which are the sources and destinations of information entering and leaving the system.
2. Rounded rectangles represent processes (called 'activities', 'actions', 'procedures', 'subsystems', etc. in other methodologies), which take data as input, process it, and output it.
3. Arrows represent the data flows, which can be either electronic data or physical items. Data cannot flow from data store to data store except via a process, and external entities are not allowed to access data stores directly.
4. Flat three-sided rectangles represent data stores, which both receive information for storage and provide it for further processing.
A DFD is also used to analyze a particular problem and its solution in steps. A user loads the data and the system reads the data provided by the user; based on feature extraction and the classifier, the model is trained and tested.
Data Flow Diagram for Pre-processing
Figure 3.3 shows that the image is given as input. Since a colour image is given, the RGB image is converted into grayscale values to reduce the complexity of the image. For efficient feature extraction, the grayscale values are converted into binary values. The image with reduced complexity is then sent to the next process.
Data Flow Diagram for Identification
Figure 3.4 shows that the image with reduced complexity is taken as input. Here the region with the value one is considered black, and that region is taken forward to the next process.
Data Flow Diagram for Feature Extraction
Figure 3.5 shows that the region of interest from the identification step is taken as input. The region of interest is obtained after converting the RGB colour image to grayscale and applying the min-max scaler method. The CNN algorithm is applied to that region. A CNN consists of an input layer and an output layer, as well as multiple hidden layers between them; the hidden layers basically consist of convolution layers, pooling layers, ReLU layers and fully connected layers.
The binary-valued image is given as input to the convolution layer, where the binary matrix is multiplied with a filter to extract features from the region. Using binary-valued images reduces the complexity. The result is then sent to the max pooling layer, where the matrix is reduced in size. The output of the pooling layer is given as input to the flattening layer, where the reduced matrix is converted to a one-dimensional array. The one-dimensional array is given to further classification to obtain an accurate result; the image with the highest accuracy value is considered the output. Finally, the weapon and militant are detected using the above four layers.
Data Flow Diagram for Classification and Detection
Summary
In the third chapter, the high-level design of the proposed method is discussed. Section 3.1 presents the design considerations for the project. Section 3.2 discusses the system architecture of the proposed system, giving the basic working of the system. Section 3.3 describes the module specification for all the modules. Section 3.4 gives the specification using use case diagrams, and section 3.5 gives the data flow of the weapon and militant detection system.
CHAPTER 4
DETAILED DESIGN
Structural Design
Figure 4.1 shows that the proposed system involves the following steps. The first step involves pre-processing of the captured images. The pre-processed image undergoes feature extraction, where various features of the weapon and militant types are extracted and certain algorithms are applied. The stored data is compared with the pre-processed image and an approximate result is generated.
A convolutional neural network is a special type of feed-forward artificial neural network in which the connectivity between the layers is inspired by the visual cortex. A Convolutional Neural Network (CNN) is a class of deep neural networks applied to analyzing visual imagery. CNNs have applications in image and video recognition, image classification, natural language processing, etc. Convolution is the first layer used to extract features from an input image. Convolution preserves the relationship between pixels by learning image features using small squares of input data. It is a mathematical operation that takes two inputs, an image matrix and a filter or kernel. Each input image is passed through a series of convolution layers with filters (kernels) to produce output feature maps. Here is how exactly the CNN works.
Basically, the convolutional neural network has four layers: the convolutional layer, the ReLU layer, the pooling layer and the fully connected layer.
Convolutional Layer
In the convolution layer, after the computer reads an image in the form of pixels, small patches of the image are taken with the help of convolution filters. These patches are called features or filters. By matching these rough features at approximately the same positions in two images, the convolutional layer gets much better at seeing similarity than whole-image matching does. The filters are compared to the new input image; if they match, the image is classified correctly. Here we line up the feature and the image patch, multiply each image pixel by the corresponding feature pixel, add the products up, and divide by the total number of pixels in the feature. We create a map and put the value of the filter response at the corresponding place. Similarly, we move the feature to every other position of the image and see how well the feature matches that area. Finally, we get a matrix as output.
ReLU Layer
The ReLU layer is the rectified linear unit; in this layer we remove every negative value from the filtered images and replace it with zero. This is done to prevent the values from summing to zero. It is a transform function that activates a node only if the input value is above a certain threshold; while the input is below zero, the output is zero. All negative values are thus removed from the matrix.
Pooling Layer
In this layer we reduce or shrink the size of the image. First we pick a window size, then a stride, and then we walk the window across the filtered image, taking the maximum value from each window. This pools the layers and shrinks the size of the image as well as the matrix. The reduced-size matrix is given as input to the fully connected layer. A toy NumPy illustration of these three operations follows.
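The following toy NumPy sketch makes the three operations above concrete; the 2x2 filter and the 8x8 random image are arbitrary illustrations, not values from the report.

# Toy illustration: convolution (feature matching), ReLU, and 2x2 max-pooling
import numpy as np

def conv2d(img, kern):
    # slide the filter over the image; each output value is the sum of
    # element-wise products between the filter and the image patch
    h, w = kern.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * kern)
    return out

def relu(x):
    return np.maximum(x, 0)  # replace every negative value with zero

def max_pool(x, size=2):
    # take the maximum from each non-overlapping size x size window
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.random.rand(8, 8)
feat = max_pool(relu(conv2d(img, np.array([[1, 0], [0, -1]]))))  # 3x3 feature map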
Figure 4.2 Flowchart for weapon and militant detection using deep learning
Figure 4.2 shows the flowchart of weapon and militant detection using deep learning. The image dataset containing weapons is loaded; then pre-processing of the image is done by RGB-to-grayscale conversion, and feature extraction is done by passing the image to the CNN layers. If a weapon is detected, a message gives the weapon type present in the image.
The flowchart for collecting data is depicted in figure 4.3. The dataset is collected from a source and a complete analysis is carried out. An image is selected for training/testing purposes only if it matches our requirements and is not repeated.
Figure 4.4 shows the flowchart for pre-processing the images received from the previous step. This involves converting the image from RGB to grayscale to ease processing, using an averaging filter to filter out noise, applying global basic thresholding to remove the background and consider only the object, and applying a high-pass filter to sharpen the image by amplifying the finer details.
The first step in pre-processing is converting the image from RGB to grayscale. This can be obtained by applying the standard luminosity formula commonly used for this conversion, Y = 0.299 R + 0.587 G + 0.114 B, to the RGB image. Figure 4.5 depicts the conversion from RGB to grayscale.
Only 8 bits are required to store a single pixel of a grayscale image, so a grayscale image needs roughly one-third of the memory required for a 24-bit RGB image.
Grayscale images are much easier to work with in a variety of tasks: in many morphological operations and image segmentation problems, it is easier to work with a single-layered image (grayscale) than with a three-layered image (RGB colour).
It is also easier to distinguish the features of an image when dealing with a single-layered image.
Noise Removal
A noise removal algorithm is the process of removing or reducing noise from an image. Noise removal algorithms reduce or remove the visibility of noise by smoothing the entire image while leaving areas near contrast boundaries intact. Noise removal is the second step in image pre-processing. Here the grayscale image obtained in the previous step is given as input, and we make use of the median filter, a noise removal technique.
Median Filtering
The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal.
Here 0s are appended at the edges and corners of the matrix representing the grayscale image. Then, for every 3x3 window, the elements are arranged in ascending order, the median (middle) of those 9 elements is found, and that median value is written to the corresponding pixel position. Figure 4.6 depicts noise filtering using the median filter; a short sketch of the same procedure follows.
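A short pure-NumPy sketch of exactly this procedure: zero padding, a sliding 3x3 window, and the median of the 9 values written back to the centre pixel.

# Median filtering as described above (3x3 window, zero-padded borders)
import numpy as np

def median_filter(gray):
    padded = np.pad(gray, 1, mode="constant")  # append 0s at edges and corners
    out = np.empty_like(gray)
    for i in range(gray.shape[0]):
        for j in range(gray.shape[1]):
            window = padded[i:i+3, j:j+3].ravel()
            out[i, j] = np.median(window)      # middle of the 9 sorted values
    return out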
Image Sharpening
Image sharpening refers to any enhancement technique that highlights the edges and fine details in an image; strengthening the high-frequency components yields a more sharpened image.
High-Pass Filtering
A high-pass filter can be used to make an image appear sharper; such filters emphasize fine details in the image. Here the output from the thresholding step is given as input. We make use of a filter matrix; first, the nearest values are appended to the pixels at the boundary. Figure 4.8 depicts image sharpening using a high-pass filter.
We multiply the elements of the 3x3 input window with the filter matrix, which can be represented as A(1,1)*B(1,1) and so on; all the elements of the 3x3 window are multiplied in this way, and their sum is divided by 9, which gives the value for that particular pixel position. The values of all the pixel positions are calculated in the same way. Negative values are treated as zero, since there is no such thing as negative illumination. An illustrative sketch with a common high-pass kernel follows.
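Since the report's exact filter matrix appears only in a figure, the sketch below uses a commonly seen 3x3 high-pass kernel as an assumption to illustrate the idea with OpenCV; the input filename is also illustrative.

# High-pass sharpening sketch (kernel choice is an assumption, not the report's filter)
import cv2
import numpy as np

kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=np.float32)

gray = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.filter2D(gray, cv2.CV_32F, kernel)   # emphasise fine details
edges = np.clip(edges, 0, None)                  # negative values treated as zero
sharp = np.clip(gray + edges, 0, 255).astype(np.uint8)  # add details back in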
Here we use a method called the Histogram of Oriented Gradients (HOG) to extract features from the pre-processed image received as input. It involves multiple steps, such as finding Gx and Gy, the gradients about each pixel along the x and y axes. These gradients are then substituted into the relevant formulas to obtain the magnitude and the orientation angle of each pixel. The angles and their respective frequencies are then plotted to form a histogram, which is the output of this module. The flowchart for the feature extraction model is shown in figure 4.9.
Feature Extraction
Feature extraction is a process of dimensionality reduction by which an initial set of raw data is reduced to more manageable groups for processing.
Here 0s are appended at the edges and corners of the matrix. Then Gx and Gy are calculated: Gx is the value on the right minus the value on the left, and Gy is the value below minus the value above. Figure 4.10 shows Gx and Gy in HOG.
Then, using the formulas given in figure 4.11, the magnitude and the orientation are calculated; feature extraction using HOG is shown in figure 4.11. The magnitude is the illumination, and the orientation is the angle of orientation in degrees.
After the angle of orientation is calculated, the frequencies of the angles falling in particular intervals are noted and given as input to the classifier. Zeros are not considered when finding the frequency. For example, for the interval from 40 to 59 there are 2 occurrences, so the frequency is written as 2.
magnitude = √(Gx² + Gy²),  θ = tan⁻¹(Gy / Gx)
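A numerical version of these formulas is sketched below in NumPy; the zero padding and the 9-bin unsigned-angle histogram are standard HOG choices assumed here, not details taken from the report.

# Per-pixel gradients, magnitude and orientation for the HOG histogram
import numpy as np

def hog_gradients(gray):
    padded = np.pad(gray.astype(np.float64), 1, mode="constant")
    gx = padded[1:-1, 2:] - padded[1:-1, :-2]   # value on right - value on left
    gy = padded[2:, 1:-1] - padded[:-2, 1:-1]   # value below - value above
    magnitude = np.sqrt(gx**2 + gy**2)
    orientation = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned angles in [0, 180)
    return magnitude, orientation

mag, ang = hog_gradients(np.random.randint(0, 256, (8, 8)))
# 9-bin orientation histogram, weighted by gradient magnitude
hist, _ = np.histogram(ang, bins=9, range=(0, 180), weights=mag)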
In the CNN, we take the output from the high-pass filter as input, leaving out separate feature extraction, since the CNN is a classifier that has a feature extracting process of its own, using convolution, rectification and pooling as three sub-modules, which work in iterations to give a final comparison matrix that is then classified by an algorithm such as Softmax.
Considering all the features, the output layer gives the result with a predictive value. These values are calculated using the SoftMax activation function, which provides the predictive values; based on the predictive value, the final result is identified as a weapon and militant.
In other words, each group of neurons specializes in identifying one part of the image. CNNs use the predictions from the layers to produce a final output: a vector of probability scores representing the likelihood that a specific feature belongs to a certain class. Figure 4.13 shows a typical CNN architecture.
Convolutional layer - Creates a feature map that predicts the class probabilities for each feature by applying a filter that scans the whole image, a few pixels at a time (the mechanics of this layer were described above).
Fully connected layer - "Flattens" the outputs generated by previous layers to turn them into a single vector that can be used as input for the next layer, and applies weights over the input generated by the feature analysis to predict an accurate label.
Output layer - Generates the final probabilities to determine a class for the image.
Convolutional Layer
The convolutional layer is the first step in the CNN. Here a 3x3 part of the matrix obtained from the high-pass filter is given as input. That 3x3 window is multiplied element-wise with the filter matrix, and the sum is written to the corresponding position, as shown in the figure below. This output is given to the pooling layer, where the matrix is further reduced. Figure 4.15 shows the convolutional layer.
The output of the pooling layer is flattened, and this flattened matrix is fed into the fully connected layer, which comprises an input layer, hidden layers and an output layer. The output is then fed into the classifier; in this case the SoftMax activation function is used to classify the image as weapon present or not. Figure 4.17 shows the fully connected layer and output layer.
Activity Diagram
Figure 4.18 shows the activity diagram of weapon and militant detection. A single circle indicates the start of the process and a double circle indicates the end. The pre-processing of the image is done by converting RGB to grayscale, feature extraction is done by the first layer (the convolution layer) of the neural network, and detection is done using the fully connected layers of the convolutional neural network.
Sequence Diagram
A sequence diagram depicts the interactions between objects in sequential order, i.e. the order in which these interactions take place. The terms event diagram or event scenario may also be used to refer to a sequence diagram. Sequence diagrams describe how, and in what order, the objects in a system function.
Figure 4.19 describes the sequence diagram of weapon and militant detection. The image dataset is given and pre-processing of the image is done. The processed data is passed to feature extraction, where the comparison and testing of the image is done, and by applying the CNN algorithm the detection is performed.
Summary
The fourth chapter gives the detailed design of the weapon and militant detection system. Section 4.1 shows the structural chart for the system. Section 4.2 gives the flowchart for the proposed system. Section 4.3 gives the activity diagram, and section 4.4 gives the sequence diagram of weapon and militant detection.
CHAPTER 5
IMPLEMENTATION
The implementation phase of project development is the most important phase, as it yields the final solution that solves the problem at hand. The implementation phase involves the actual materialization of the ideas expressed in the analysis document and developed in the design phase. Implementation should be a perfect mapping of the design document into a suitable programming language in order to achieve the necessary final product. A product is often ruined by an incorrect choice of programming language or an unsuitable programming method, so it is better for the coding phase to be directly linked to the design phase, in the sense that the design is expressed in an object-oriented way. The factors concerning the programming language and platform chosen are described in the next couple of sections.
The implementation of any software is always preceded by important decisions regarding the selection of the platform, the language used, etc. These decisions are often influenced by several factors, such as the real environment in which the system works, the performance required, security concerns, and other implementation-specific details. Two major implementation decisions were made before the implementation of this project. They are as follows:
Selection of the platform (operating system).
Selection of the programming language for development of the application.
IMPLEMENTATION REQUIREMENTS
The software used to execute the training data is as follows:
Python with OpenCV.
Python is used as the programming language.
Windows 10 is the operating system.
The weapon and militant image dataset (pre-labelled and manually labelled images).
PROGRAMMING LANGUAGE USED
Python is an interpreted, high-level, general-purpose programming language, first released in 1991. Python has a design philosophy that emphasizes code readability, notably using significant whitespace. It provides constructs that enable clear programming on both small and large scales.
Python features a dynamic type system and automatic memory management. Python interpreters are available for many operating systems. CPython, the reference implementation of Python, is open-source software and has a community-based development model, as do nearly all of Python's other implementations.
Python works on different platforms (Windows, macOS, Linux, Raspberry Pi, etc.) and has a simple syntax that allows developers to write programs with fewer lines than in some other programming languages. Python uses new lines to complete a command, as opposed to other programming languages, which often use semicolons or parentheses.
PYTHON GUI
Python provides graphical user interfaces consisting of one or more windows of controls, known as components, which let the user accomplish interactive tasks. The user does not need to write scripts or type commands at a command prompt, and need not know the details of how the tasks are performed. GUI components include radio buttons, toolbars, sliders, axes, etc. Tools also help users read and write data and communicate with other GUIs. Data is displayed in the form of tables or plots in the GUI.
Step 3: Load the COCO class names, which label the outputs of the YOLO model.

# unpack the (image, index) pair produced by the data loader
img_num = data[1]   # frame index
img_data = data[0]  # the image array itself
orig = img_data     # keep a copy of the original frame

Step 3: If no frame is returned, then we have reached the end of the stream.

# display the selected image in the Tkinter GUI
# (requires: from PIL import Image, ImageTk)
load = Image.open(fileName)          # open the chosen file with PIL
render = ImageTk.PhotoImage(load)    # convert it into a Tkinter-compatible photo
title.destroy()                      # remove the previous widget before redrawing
Summary
This chapter describes the implementation of weapon and militant detection using a CNN. The implementation requirements are deliberated in section 5.1, section 5.2 briefs about the programming language selected, and section 5.3 describes the pseudocode for the pre-processing techniques.
CHAPTER 6
SYSTEM TESTING
Testing is the process of evaluating a system or its component(s) with the intent of finding whether it satisfies the specified requirements or not. Testing means executing a system in order to identify any gaps, errors, or missing requirements with respect to the actual requirements.
Testing Principle
Before applying methods to design effective test cases, a software engineer must understand the basic principles that guide software testing. All tests should be traceable to customer requirements.
Testing Methods
There are different methods that can be used for software testing. They are:
Black-Box Testing
The technique of testing without having any knowledge of the interior workings of the
application is called black-box testing. The tester is oblivious to the system architecture and
does not have access to the source code. Typically, while performing a black-box test, a tester
will interact with the system's user interface by providing inputs and examining outputs
without knowing how and where the inputs are worked upon.
White-Box Testing
White-box testing is the detailed investigation of the internal logic and structure of the code. White-box testing is also called glass-box or open-box testing. In order to perform white-box testing on an application, a tester needs to know the internal workings of the code. The tester needs to look inside the source code and find out which unit or chunk of the code is behaving inappropriately.
Levels of Testing
There are different levels during the process of testing. Levels of testing include different
methodologies that can be used while conducting software testing. The main levels of
software testing are:
Functional Testing:
This is a type of black-box testing based on the specifications of the software to be tested. The application is tested by providing input, and the results are examined to check that they conform to the intended functionality. Functional testing is conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Five steps are involved when testing an application for functionality, including:
The determination of the functionality that the intended application is meant to perform.
The output based on the test data and the specifications of the application.
The comparison of actual and expected results based on the executed test cases.
Non-functional Testing
This section is based on testing an application for its non-functional attributes. Non-functional testing involves testing software against requirements that are non-functional in nature but important, such as performance, security and user interface. Testing can be done at different levels of the SDLC.
UNIT TESTING
Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually and independently scrutinized for proper operation.
Unit testing is often automated but it can also be done manually. The goal of unit testing is to
isolate each part of the program and show that individual parts are correct in terms of
requirements and functionality. Test cases and results are shown in the Tables.
Table 6.1 below shows the successful test case for loading the weapon and militant image that is selected by the user for processing. Before pre-processing, the images should match in dimensions. The resolution of the captured images also plays a vital role, as low resolution can convey false notions to the classifiers and thus affect accuracy. In this unit, the images input to the system are checked for whether they are loaded or not; if they are loaded, the expected output is reached.

Test Feature: Checks whether the selected image is loaded or not.
Output Expected: The weapon image is loaded from the dataset.
Output Obtained: The loaded weapon image used for processing, shown in Snapshot 7.3.
Result: Successful
The original image is given as input. In the pre-processing step, a noise reduction method is used, in which the salt-and-pepper noise present in the original image is removed and a clear image is obtained. In table 6.2 below, the noise reduction unit test passes if the resulting image is clear.
Result: Successful

The RGB image is converted to a grayscale image; the successful test case for the grayscale image is shown in table 6.3. The gray level represents the brightness of a pixel. The RGB image is converted to grayscale because this reduces complexity, from a 3-D pixel value (R, G, B) to a 1-D value. Grayscale images can be the result of measuring the intensity of light at each pixel according to a particular weighted combination of frequencies.

Result: Successful
Table 6.4 shows the successful test case for training on the weapon detection images. During training, the image is loaded and checked for whether it has already been loaded and trained; if not, it goes through the training process. The image selected for training undergoes pre-processing and then feature extraction, and finally the images are trained. These processes are done in the training GUI. After training on the dataset, which contains the set of weapon detection images, one can test and calculate the accuracy of the result obtained.

Result: Successful
Summary
This chapter presents system testing. Section 6.1 covers the testing principle for the various modules of the weapon and militant detection system. Section 6.2 gives a complete view of the testing methods, which include the test name, test feature, expected output, obtained output and result.
CHAPTER 7
RESULTS AND DISCUSSION
In this chapter the results are discussed and the experimental results of the CNN are reviewed.
Experimental Results
Results of Home Page
Snapshot 7.1 shows the home page of the proposed system. When the system starts, this page is displayed. It shows that the proposed system runs using Python 3.6 and displays the path where all the information needed to run this system (training, testing and other model data) is stored. It also shows the options for proceeding to the next steps.
Snapshot 7.3 above shows the uploading of the image. Here we select the image of our choice for which the name of the weapon needs to be detected. Since the model was trained with many weapon images, we select a weapon image to find out its name and to test whether there is a weapon in the image or not. Any of the images may be selected and uploaded for the testing process; clicking on "get photo" in the previous step moves to this page.
Snapshot 7.4 above shows how the screen is displayed when we upload the image to detect. An image of size 224x224 is given as the input image to detect whether there is a weapon in the given image or not. There is one button for analyzing whether a weapon is present in the image; along with the analyze button, a pre-processing button is also provided to pre-process the image.
Results of Pre-processing
Snapshot 7.8 shows an image containing an AutoMag. After uploading the image to detect, there is a button for analyzing; clicking it produces the final output, which displays the name of the weapon with an indicating sound. A status field in the side corner of the display page shows, when the analyze button is clicked, the name of the weapon along with its mass, length and firing range, so the weapon is detected together with some of its specifications.
If there is no weapon in the given image, then clicking the analyze button displays "no weapon", as shown in Snapshot 7.10, which means that no weapon was found in the given image. So the system also identifies and informs the user when there is no weapon in the given image.
Summary
This chapter presents the experimental results in section 7.1, which consists of snapshots of the home page, selecting the image, uploading the given image, and analyzing it to give the final output of whether a weapon is present in the given image; if a weapon is present, its name is displayed with the indicating sound.
CHAPTER 8
Conclusion
The results of the project have been satisfactory, since the accuracy and sensitivity in the identification and classification of weapons and militants are high. The pre-processing steps used to reduce the amount of input data during classification have proven useful, as they preserve the information contained in the weapon images. The development of these subsystems has accomplished the objective of using only the significant data: the pixels around the regions that might be weapons, based on pixel intensity. The data augmentation process has been useful and has worked properly; the amount of input data alone was not big enough to train the CNN, and the rotation and translation of images have made it possible to obtain many more weapon examples for the CNN to learn from.
Among all the classification methods included in this work, the CNN has shown the best performance, although it has two major drawbacks: the time spent and the memory necessary to train it. The biggest issue in the results of the project is the significantly high number of false positives that the system produces, which means that the precision is quite low: there are many connected components whose intensity values are close to those of the weapons. The use of a CNN on input images with three depth dimensions has been developed and performs successfully, even if it is not commonly used; its major drawback is the increased amount of input data, which increases the number of parameters that need to be trained in the CNN.
Future Enhancement
Since this model detects only the name of the weapon, in the future it can be improved by adding other features, such as a count of the weapons when more than one weapon appears in the image. It can be further improved by labelling the name of the weapon inside the image itself with a bounding box. It can also be extended to detect weapons in live images, so that it can be used with CCTV for weapon detection. The proposed system detects weapons of a single type in one image; in the future it can be improved to detect the names of multiple types of weapons in one image.
REFERENCES
[1] Harsha Jain et al., "Weapon detection using artificial intelligence and deep learning for security applications," ICESC, 2020.
[2] Arif Warsi et al., "Automatic handgun and knife detection algorithms," IEEE Conference, 2019.
[3] Neelam Dwivedi et al., "Weapon classification using deep convolutional neural networks," IEEE Conference CICT, 2020.
[4] Gyanendra Kumar Verma et al., "Handheld gun detection using Faster R-CNN deep learning," IEEE Conference, 2019.
[5] Abhiraj Biswas et al., "Classification of objects in video records using neural network framework," International Conference on Smart Systems and Inventive Technology, 2018.
[6] Pallavi Raj et al., "Simulation and performance analysis of feature extraction and matching algorithms for image processing applications," IEEE International Conference on Intelligent Sustainable Systems, 2019.
[7] Mohana et al., "Simulation of object detection algorithms for video surveillance applications," International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud), 2018.
[8] Glowacz et al., "Visual detection of knives in security applications using Active Appearance Models," Multimedia Tools and Applications, 2015.
[9] Mohana et al., "Performance evaluation of background modeling methods for object detection and tracking," International Conference on Inventive Systems and Control, 2020.
[10] Rojith Vajihalla et al., "Inventive system and control for performance evaluation of background modeling method of object detection and tracking," International Conference, 2020.
Appendix
Backpropagation in CNNs
The purpose of backpropagation is to compute the error at the output and to optimize the weights and biases so as to minimize that error. In order to make the network learn, we must compute the derivatives used by the gradient descent algorithm. The propagation runs from the output of the network back to the input, so the first thing is to propagate the error back through the fully connected network. Applying the chain rule, the derivative of the error function \(E\) with respect to the weights in the fully connected layer is:
\[
\frac{\partial E}{\partial w^{\ell}_{ij}} = \frac{\partial E}{\partial x^{\ell+1}_{j}}\,\frac{\partial x^{\ell+1}_{j}}{\partial w^{\ell}_{ij}}
\]
Note that we only get contributions from the input of the next layer, since the weights are used nowhere else. From the forward-propagation equation, \(x^{\ell+1}_{j} = \sum_{i} w^{\ell}_{ij}\, y^{\ell}_{i}\), we see that the partial derivative with respect to any weight is the activation from its origin neuron, so:
\[
\frac{\partial E}{\partial w^{\ell}_{ij}} = y^{\ell}_{i}\,\frac{\partial E}{\partial x^{\ell+1}_{j}}
\]
We already know all the values of \(y\), so we need to compute the partial derivative with respect to the input \(x_{j}\). Since \(y^{\ell}_{j} = \sigma(x^{\ell}_{j}) + I^{\ell}_{j}\), the chain rule gives:
\[
\frac{\partial E}{\partial x^{\ell}_{j}} = \frac{\partial E}{\partial y^{\ell}_{j}}\,\frac{\partial\bigl(\sigma(x^{\ell}_{j}) + I^{\ell}_{j}\bigr)}{\partial x^{\ell}_{j}} = \frac{\partial E}{\partial y^{\ell}_{j}}\,\sigma'(x^{\ell}_{j})
\]
If we are in the output layer, then the partial derivative is just the derivative of the error function:
\[
\frac{\partial E}{\partial y^{\ell}_{j}} = \frac{\partial E(y^{\ell})}{\partial y^{\ell}_{j}}
\]
While if we are not in the output layer, the partial derivative is:
\[
\frac{\partial E}{\partial y^{\ell}_{i}} = \sum_{j}\frac{\partial E}{\partial x^{\ell+1}_{j}}\,\frac{\partial x^{\ell+1}_{j}}{\partial y^{\ell}_{i}} = \sum_{j} w^{\ell}_{ij}\,\frac{\partial E}{\partial x^{\ell+1}_{j}}
\]
As we can see, the error in a particular layer \(\ell\) is a weighted combination of the errors in the next layer. In summary, to compute the gradient of the error with respect to the weights we must:
Compute the errors at the output layer:
\[
\frac{\partial E}{\partial y^{\ell}_{j}} = \frac{\partial E(y^{\ell})}{\partial y^{\ell}_{j}}
\]
Compute the partial derivative of the error with respect to the input of the neurons:
\[
\frac{\partial E}{\partial x^{\ell}_{j}} = \frac{\partial E}{\partial y^{\ell}_{j}}\,\sigma'(x^{\ell}_{j})
\]
The pooling layer does not learn, since there are no parameters in it; the error is simply back-propagated to the previous layer.
We have the gradient of the error function \(E\) at the output of the convolutional neural network, so we can compute the gradient with respect to the output of the previous layer, \(\partial E / \partial y^{\ell-1}\). We can compute the gradient with respect to the weights of the filters by applying the chain rule and summing the contributions of all the expressions in which each variable is used:
\[
\frac{\partial E}{\partial w_{ab}} = \sum_{i=0}^{N-m}\sum_{j=0}^{N-m}\frac{\partial E}{\partial x^{\ell}_{ij}}\,\frac{\partial x^{\ell}_{ij}}{\partial w_{ab}} = \sum_{i=0}^{N-m}\sum_{j=0}^{N-m}\frac{\partial E}{\partial x^{\ell}_{ij}}\, y^{\ell-1}_{(i+a)(j+b)}
\]
In order to compute \(\partial E / \partial x^{\ell}_{ij}\) we can use the chain rule again. Looking at the forward-propagation equations, we obtain the gradient with respect to the previous layer's outputs:
\[
\frac{\partial E}{\partial y^{\ell-1}_{ij}} = \sum_{a}\sum_{b}\frac{\partial E}{\partial x^{\ell}_{(i-a)(j-b)}}\, w_{ab}
\]
This looks like a convolution, but instead of applying the filter to \(x_{(i+a)(j+b)}\) we apply it to \(x_{(i-a)(j-b)}\); the expression only makes sense for points that are at least \(m\) away from the top and left edges, and when computed this way it is a simple convolution using \(w\) flipped along both axes. A small numerical check of the weight-gradient formula follows.
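As a sketch to make the weight-gradient formula concrete, the following NumPy fragment compares the analytic gradient against a numerical finite-difference estimate for a 2x2 filter on a 4x4 input, assuming the simple error E = ½ Σ x² (the error function and the sizes are illustrative assumptions).

# Numerical check of dE/dw_ab = sum_ij (dE/dx_ij) * y_(i+a)(j+b)
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal((4, 4))   # previous-layer output
w = rng.standard_normal((2, 2))   # filter weights (m = 2)

def forward(y, w):
    # valid convolution producing the (N-m+1) x (N-m+1) layer input x
    m = w.shape[0]
    n = y.shape[0] - m + 1
    x = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x[i, j] = np.sum(y[i:i+m, j:j+m] * w)
    return x

x = forward(y, w)
dE_dx = x                          # for E = 0.5 * sum(x**2), dE/dx = x

# Analytic gradient from the formula above (n = 3 here)
dE_dw = np.zeros_like(w)
for a in range(2):
    for b in range(2):
        dE_dw[a, b] = np.sum(dE_dx * y[a:a+3, b:b+3])

# Finite-difference gradient for comparison
eps = 1e-6
num = np.zeros_like(w)
for a in range(2):
    for b in range(2):
        wp = w.copy(); wp[a, b] += eps
        num[a, b] = (0.5*np.sum(forward(y, wp)**2) - 0.5*np.sum(x**2)) / eps

print(np.allclose(dE_dw, num, atol=1e-4))  # True: the derivation checks out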