
IGNIS VISION

- GPS Enhanced Surveillance

A project report submitted to

Rajiv Gandhi University of Knowledge Technologies


SRIKAKULAM

In partial fulfillment of the requirements for the


Award of the degree of

BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING

Submitted by (3rd Year)


K. Revathi (S190273)
V. Shafiya Uzma (S190233)
M. Hima Lalitha (S190231)
B. Anitha (S190294)

Under the Esteemed Guidance of


Asst. Prof. Lakshmi Bala

CERTIFICATE

This is to certify that the thesis work titled "IGNIS VISION - GPS Enhanced
Surveillance" was successfully completed by K. Revathi (S190273),
V. Shafiya Uzma (S190233), M. Hima Lalitha (S190231), and B. Anitha (S190294) in
partial fulfillment of the requirements for the major project in Computer Science
and Engineering of Rajiv Gandhi University of Knowledge Technologies, under
my guidance. The output of the work carried out is deemed satisfactory.

Prof. LAKSHMI BALA, M.Tech          Prof. LAKSHMI BALA, M.Tech
Project Guide                       Head of the Department (CSE)

DECLARATION

We declare that this thesis work titled "IGNIS VISION - GPS Enhanced
Surveillance" was carried out by us during the year 2023-24 in partial
fulfillment of the requirements for the Major Project in Computer
Science and Engineering.

We further declare that the matter embodied in this dissertation has
not been submitted elsewhere for any other degree. Furthermore, the
technical details furnished in the various chapters of this thesis are
purely relevant to the above project, and there is no deviation from the
theoretical point of view in its design, development, and implementation.

K. Revathi (S190273)
V. Shafiya Uzma (S190233)
M. Hima Lalitha (S190231)
B. Anitha (S190294)

CONTENTS

ACKNOWLEDGMENT

ABSTRACT

Chapter 1
1.1 Introduction
1.2 Hazards of Fire
1.3 Fire Detection and Control
1.4 Applications
1.5 Problem Statement

Chapter 2
2.1 Study
2.2 Analysis
2.3 Benefits
2.4 Identify
2.5 Summary

Chapter 3
3.1 Introduction
3.2 Computer Vision
3.3 Fire Properties
3.4 Technical Analysis
3.5 Fire Detection System
3.6 RGB to HSV Conversion
3.7 Gaussian Blur
3.8 OpenCV

Chapter 4
4.1 Introduction
4.2 What is a Sensor
4.3 Types
4.4 Use of Smoke Sensor
4.5 Use of Temperature Sensor
4.6 Sensor-based Fire Detection
4.7 Use Case Diagram

Chapter 5
5.1 Introduction
5.2 Proposed System
5.3 GPS
5.4 Project Prototype
5.5 Block Diagram
5.6 Use Case Diagram

Chapter 6
6.1 Import Packages
6.2 Email Function
6.3 SMS and Call Function
6.4 Video Capture and Parameter Initialisation
6.5 Fire Detection

7. Results

8. Conclusion

Acknowledgment
I would like to express my profound gratitude and indebtedness to my
project guide, Asst. Prof. Lakshmi Bala, who has been a constant source
of motivation and guidance throughout the project. It has been a great
pleasure for me to get the opportunity to work under her guidance and
complete the thesis work successfully.

I am also grateful to the other members of the department, without whose
support my work could not have been carried out so successfully. I thank
one and all who have rendered help to me, directly or indirectly, in the
completion of my thesis work.

I wish to extend my sincere thanks to Prof. Lakshmi Bala, M.Tech,
Head of the Computer Science and Engineering Department, for her
constant encouragement throughout the project.

K. Revathi (S190273)
V. Shafiya Uzma (S190233)
M. Hima Lalitha (S190231)
B. Anitha (S190294)

ABSTRACT

Fire is the exothermic oxidation of a material during combustion, releasing heat and
light as by-products. Most available fire detection systems use temperature or smoke
sensors, which take time to respond. Moreover, these systems are costly and ineffective
when a fire is far away from the detectors. This motivates alternatives such as computer
vision and image processing techniques. One cost-effective method is to use surveillance
cameras to detect fires and inform the relevant parties. The light intensity and the
color of the flame help in detecting fire.

The proposed work suggests a method that uses surveillance cameras to monitor
occurrences of fire anywhere within the camera range. To enhance the performance of
fire flame detection on a live video stream, a method has been used that finds the
boundary of the moving region in the color-segmented image and calculates the number
of fire pixels in this area. A fire detection system is then developed based on this
method to detect fire efficiently, generate an immediate alarm, and send an alert
notification along with the GPS location to the fire station, to save life and property
from fire hazards.

Chapter 1

1.1 Introduction
Computer Vision based fire detection has the potential to be useful in conditions in
which conventional methods cannot be adopted. Visual characteristics of fires such as
brightness, color, spectral texture, spectral flicker, and edge trembling are used to
discriminate them from other visible stimuli. These characteristics are utilized
commonly in many algorithms for fire detection. Most of the conventional fire
detection techniques are based on particle sampling, temperature sampling, relative
humidity sampling, air transparency testing and smoke analysis, in addition to the
traditional ultraviolet and infrared sampling. These methods require close proximity to
the fire. In addition, they are not always reliable: most are designed to detect not the
fire itself but its by-products, such as smoke, which can be produced in other ways. In
most of these systems, the presence of certain particles generated by smoke and fire is
detected, and no alarm is issued until those particles reach the sensors and activate
them. The commonly used infrared and ultraviolet sensors also produce many false
alarms. With the help of computer vision techniques, it is possible to obtain better
results than with conventional systems, because images can provide more reliable
information.

1.2 Hazards of Fire


Fire is one of the few disasters whose damage can be prevented or reduced, unlike
other natural disasters such as earthquakes and hurricanes. However, to control a fire
it is important to detect it early, so that it can be put out before it grows too large;
a small fire can be extinguished with far less effort than a big one.
Fire is one of the worst hazards. Being too close to a fire can cause burns, while
the smoke emitted by a fire may cause asphyxiation. Certain materials release
poisonous gases when set on fire, while a large amount of combustible material may
cause a huge explosion. Some common fire hazards include overheating electrical
appliances, candles, smoking, cooking appliances, and heating appliances; the places
where these are used should be monitored more often, as they are common locations
where fires start. Among such places is the outdoors, where people may have a
barbecue party or smoke. As these are activities that can lead to fires, they should
be done responsibly, and any fire should be put out afterwards.

An example of a large fire that occurred due to carelessness is the Nimtali fire of
2010. More than 100 people, mostly women and children, were burned alive and
around 150 were wounded in the accident. Some of the families living in the
buildings lost most of their members; one family lost 11 members, and another lost
seven. This fire, the biggest since the country's independence in 1971, gutted eight
buildings and more than 20 shops at Nawab-Katra.

1.3 Fire Detection and Control


For the purpose of reducing fire damage, as mentioned above, a fire should be put
out while it is small. However, a fire naturally grows larger as long as there is fuel,
so it is important to put it out before it spreads. Small fires, however, are more
difficult to see, and therefore many fires are spotted only when it is too late.

This can be prevented by the use of fire and smoke detectors, which can detect the
fire itself. These fire detection systems can be placed where fire hazards may occur,
in order to respond quickly in case of any untoward accident.

Currently there are a number of different smoke detectors on the market. The main
ones are: optical smoke detectors, where a light beam travels from one point to
another and is dispersed by smoke; ionization smoke detectors, where smoke
particles prevent current from flowing inside a circuit; and air-sampling smoke
detectors, where the air is sampled at given intervals to detect trace amounts of
smoke. These detectors are quite expensive to set up and are usually suitable only
for enclosed spaces. Another problem is their slow response time, since the smoke
and heat must first reach the detector. Outdoors, smoke can go undetected because
particle-based smoke detectors cannot cover such a large area, as the particles
scatter before reaching them; the detectors are therefore not useful in these
situations. Response time also matters, as the Nimtali fire shows: the incident would
not have been so severe if the fires had been suppressed early, before they converged.

Therefore, this project aims to come up with a detection system that can be used
anywhere, especially open spaces, as well as can enable a quick response to a fire.

1.4 Applications
• Chemical industries

• Forest fire accidents

Fire detection is a field of applied computer vision research. Employing
computer vision technology in a fire detection system enables it to offer
efficient, cost-saving, faster, and better performance. It can work in places
such as tunnels and indoor spaces.

The applications of this project are many, including monitoring forest fires
and keeping a close watch on places where fire would be disastrous, such as
roads, tunnels, and coal mines, where a fire would slow down traffic
drastically and endanger nearby people.

It can also be used like conventional indoor fire detection systems, which
detect fires in buildings and trigger alarms in emergencies so that no one
is injured or caught unawares.

1.5 Problem Statement

Fire accidents are a major problem in today's world; the most common fire
accident spots are chemical industries and forests.

Recent fire accident spots:

■ A major fire broke out in Ahmedabad's Danilimda area (June 2024).

■ A massive fire broke out at a Carnival farmhouse in Delhi's Alipur
area, and fire tenders were rushed to the spot (May 2024).

❖ Many sensor-based fire detection systems exist to protect lives from
fire accidents, but they are not always effective.

❖ Our proposed system uses surveillance cameras to detect fire, generate
an alarm, and send alert notifications containing an alert message along
with the GPS location to the fire station.

Chapter 2

Literature Survey
Collect Information: A number of existing models were studied and their
effectiveness was compared.

Fire detection methods:

Heat Detectors: The most common units are fixed-temperature devices that
operate when the room reaches a predetermined temperature (usually in the
range 135°–165°F / 57°–74°C).

Smoke Detectors: As the name implies, these devices are designed to
identify a fire in its smoldering or early flame stages, replicating the
human sense of smell.

Flame Detectors: The fire itself is a source of radiation, and it can be
detected by recognizing the radiation produced in the burning zone.

2.1 Study
Progress on fire detection technologies has been substantial over the
last decade due to advances in sensors, microelectronics, and information
technologies, as well as a greater understanding of fire physics. The
surveyed literature reviews this progress, including various emerging
sensor technologies (e.g., computer vision systems, distributed fiber-optic
temperature sensors, and intelligent multi-sensor units), signal processing
and monitoring technology (e.g., real-time control via the Internet), and
integrated fire detection systems. Some problems and future research
efforts related to current fire detection technologies are also discussed.

2.2 Analysis
The UV/IR flame detector addresses problems associated with the older
UV-only detectors. Because it detects both UV and IR radiation emitted
from the flames, it avoids false alarms from common sources of ultraviolet
radiation that, unlike a flame, do not also emit infrared radiation
at 4.3 µm.

2.3 Benefits
● CCTV (Closed-Circuit Television) technology has great advantages for
sensing and monitoring a fire. Compared with other types of fire
detectors, video cameras are not easily fooled by emissions from common
background sources, reducing false-alarm problems.

● The original CCTV technology was intended to transfer or record a
video signal and present it to the human eye; attempts have since been
made to develop automatic fire detection on top of it.

● One such automatic CCTV detection technique is the machine vision
fire detection system (MVFDS), which combines video cameras, computers,
and artificial intelligence techniques.

2.4 Identify
● Early detection can enable you to avoid serious damage or destruction,
so it is of extreme importance. In addition to providing security to a
household, a fire alarm can quickly alert firefighters so they can help
minimize the damage.

2.5 Summary
There are several options for a building's fire detection and alarm
system. The ultimate system type, and selected components, will be
dependent upon the building construction and value, its use or uses,
the type of occupants, mandated standards, content value, and
mission sensitivity. Contacting a fire engineer or other appropriate
professional who understands fire problems and the different alarm
and detection options is usually a preferred first step to find the best
system.

Chapter 3

3.1 Introduction
In this chapter, the background knowledge related to the project work is
discussed. First, the basic concepts of computer vision are elaborated.
Then the theory and concepts related to the detection system, color analysis,
motion tracking, and morphological operations are discussed, along with
related work in this field. As the proposed system uses the OpenCV library,
basic knowledge of this library is also covered.

3.2 Computer Vision

Computer vision is the science and technology of machines that see. The
field of computer vision develops the theory and techniques for extracting
information from images. The image may take a variety of forms, such as a
sequence of frames that constitutes a video, views from multiple cameras,
or static images captured by a camera. The applications of computer vision
include controlling processes, detecting events, organizing information,
modeling objects, and interaction.

3.3 Fire Properties

Typically, fire comes from a chemical reaction between oxygen in the atmosphere
and some sort of fuel. For the combustion reaction to happen, fuel must be heated to
its ignition temperature. In a typical wood fire, first something heats the wood to a very
high temperature. When the wood reaches about 150 degrees Celsius, the heat
decomposes some of the cellulose material that makes up the wood.

These gases are known as smoke. Smoke consists of compounds of hydrogen,
carbon, and oxygen. The rest of the material forms char, which is nearly pure
carbon, and ash, which is made up of all the unburnable minerals in the wood
(calcium, potassium, and so on). The char is also called charcoal.

3.4 Technical Analysis


Fires not only produce flames but also tend to emit smoke, and a defining trait of fire is
the emission of heat.

Infrared light can be detected by certain cameras; however, it is not visible to the human
eye. This makes it impractical to use in a system meant for cheap cameras, which are
easily obtainable and, therefore, are not likely to capture the infrared spectrum. Heat
cannot be detected visually, which leaves only smoke and flame as easily visible
components of a fire.
Aside from clean-burning fires, most fires emit smoke, which consists of solids
and liquids that are remnants of fuel that has not burnt cleanly. Smoke is
usually visible.

The fire detection system would preferably have a fire detection program, which uses an
algorithm to detect fire through visual analysis of the area where fire detection is
desired. The program would provide feedback once a fire has been determined.

3.5 Fire Detection System


Instead of sensors, we use cameras to detect the fire. Our proposed system
provides fire detection using a simple algorithm. First, an image frame is
acquired from the live video feed in the RGB color model. The RGB frame is
then converted to an HSV frame, which is passed through Gaussian blur and
median blurring to remove noise. A suitable response is displayed on the
monitor window, an alarm buzzer is sounded, and a notification is sent.
Alert notifications contain an alert message along with the GPS location,
and are sent to the fire station.
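The steps above can be sketched in plain Python. This is a minimal illustration, not the deployed implementation: it uses the standard-library colorsys module instead of OpenCV, and the HSV thresholds (hue ≤ 0.17, saturation ≥ 0.4, value ≥ 0.6) are assumed values chosen only to demonstrate the idea of counting fire-colored pixels.

```python
import colorsys

# Illustrative HSV thresholds for fire-like colors (red/orange/yellow,
# strongly saturated, bright). These exact numbers are assumptions for
# demonstration, not tuned values from the deployed system.
FIRE_HUE_MAX = 0.17   # roughly 0-60 degrees on the hue circle
FIRE_SAT_MIN = 0.40
FIRE_VAL_MIN = 0.60

def is_fire_pixel(r, g, b):
    """Classify one RGB pixel (0-255 channels) as fire-colored or not."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h <= FIRE_HUE_MAX and s >= FIRE_SAT_MIN and v >= FIRE_VAL_MIN

def count_fire_pixels(frame):
    """frame: list of rows, each row a list of (r, g, b) tuples."""
    return sum(is_fire_pixel(*px) for row in frame for px in row)

def detect_fire(frame, threshold=4):
    """Signal an alert when enough fire-colored pixels appear in the frame."""
    return count_fire_pixels(frame) >= threshold

# A tiny 3x3 synthetic "frame": a bright orange patch on a dark background.
orange, dark = (255, 120, 0), (20, 20, 40)
frame = [[orange, orange, dark],
         [orange, orange, dark],
         [dark,   dark,   dark]]
print(count_fire_pixels(frame))   # 4 fire-colored pixels
print(detect_fire(frame))         # True
```

A real system would apply the same per-pixel test, vectorized (for example with OpenCV's cv.inRange), to each blurred video frame, and trigger the alarm and GPS alert when the fire-pixel count crosses the threshold.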

3.6 RGB to HSV Conversion

The RGB color model is an additive color model in which the red, green, and blue
primary colors of light are added together in various ways to reproduce a broad array
of colors. The name of the model comes from the initials of the three additive primary
colors: red, green, and blue. HSV is a cylindrical color model that remaps the RGB
primary colors into dimensions that are easier for humans to understand. Like the
Munsell Color System, these dimensions are hue, saturation, and value. Hue specifies
the angle of the color on the RGB color circle. R, G, and B in RGB are all correlated
with the color's luminance (what we loosely call intensity), i.e., we cannot separate
color information from luminance. HSV (Hue, Saturation, Value) is used to separate
image luminance from color information, which makes things easier when we are working
on, or need, the luminance of an image/frame.

The following example shows how to adjust the saturation of a color image by converting
the image to the HSV color space. The example then displays the separate HSV color
planes (hue, saturation, and value) of a synthetic image.
Convert RGB Image to HSV Image
Read an RGB image into the workspace. Display the image.
RGB = imread('peppers.png');
imshow(RGB)

Convert the image to the HSV color space.


HSV = rgb2hsv(RGB);

Process the HSV image. This example increases the saturation of the image by
multiplying the S channel by a scale factor.

[h,s,v] = imsplit(HSV);
saturationFactor = 2;
s_sat = s*saturationFactor;
HSV_sat = cat(3,h,s_sat,v);

Convert the processed HSV image back to the RGB color space. Display the new RGB
image. Colors in the processed image are more vibrant.

RGB_sat = hsv2rgb(HSV_sat);
imshow(RGB_sat)
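The same hue/saturation/value mapping can be cross-checked numerically with Python's standard colorsys module. The pixel values below are arbitrary; this is only a single-pixel sanity check of the MATLAB example's `s * saturationFactor` step.

```python
import colorsys

# Pure red maps to hue 0, full saturation, full value.
print(colorsys.rgb_to_hsv(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0)

# Doubling saturation in HSV and converting back mirrors the MATLAB
# example's per-pixel saturation boost.
h, s, v = colorsys.rgb_to_hsv(0.8, 0.6, 0.4)   # a desaturated orange
s_sat = min(s * 2, 1.0)                        # clip to the valid range
r, g, b = colorsys.hsv_to_rgb(h, s_sat, v)
print(round(h, 3), round(s, 3), round(v, 3))   # 0.083 0.5 0.8
```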

Closer Look at the HSV Color Space


For closer inspection of the HSV color space, create a synthetic RGB image.
RGB = reshape(ones(64,1)*reshape(jet(64),1,192),[64,64,3]);

Convert the synthetic RGB image to the HSV colorspace.

HSV = rgb2hsv(RGB);

Split the HSV version of the synthetic image into its component planes: hue, saturation,
and value.
[h,s,v] = imsplit(HSV);

Display the individual HSV color planes with the original image.

montage({h,s,v,RGB},"BorderSize",10,"BackgroundColor",'w');

As the hue plane image in the preceding figure illustrates, hue values make a linear
transition from high to low.

If you compare the hue plane image against the original image, you can see that shades
of deep blue have the highest values, and shades of deep red have the lowest values.
(As stated previously, there are values of red on both ends of the hue scale.

To avoid confusion, the sample image uses only the red values from the beginning of the
hue range.)

Saturation can be thought of as the purity of a color. As the saturation plane image
shows, the colors with the highest saturation have the highest values and are
represented as white.

In the center of the saturation image, notice the various shades of gray.

These correspond to a mixture of colors; the cyans, greens, and yellow shades are
mixtures of true colors.

Value is roughly equivalent to brightness, and you will notice that the brightest areas of
the value plane correspond to the brightest colors in the original image.

3.7 Gaussian Blur
In image processing, a Gaussian blur is the result of blurring an image by a Gaussian
function. It is a widely used effect in graphics software, typically to reduce image noise
and reduce detail.

If you take a photo in low light, and the resulting image has a lot of noise, Gaussian blur
can mute that noise. If you want to lay text over an image, a Gaussian blur can soften
the image so the text stands out more clearly.

In the Gaussian blur operation, the image is convolved with a Gaussian filter instead
of a box filter. The Gaussian filter is a low-pass filter that attenuates high-frequency
components. Using OpenCV, we can blur images with various low-pass filters and apply
custom-made filters to images (2D convolution).

2D Convolution (Image Filtering): As with one-dimensional signals, images can be
filtered with various low-pass filters (LPF), high-pass filters (HPF), etc. An LPF
helps in removing noise and blurring images; HPF filters help in finding edges in
images. OpenCV provides a function cv.filter2D() to convolve a kernel with an image.

As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel
will look like the below:

K = (1/25) *
    [ 1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1
      1 1 1 1 1 ]
The operation works like this: center the kernel on a pixel, add up all 25 pixels
under the kernel, take the average, and replace the central pixel with this average
value. This operation is repeated for every pixel in the image.
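What cv.filter2D() computes with an averaging kernel can be sketched in plain Python. This is a deliberately slow, loop-based illustration with replicated borders, not how OpenCV implements it:

```python
def box_blur(img, k=5):
    """Convolve a 2-D list-of-lists image with a k x k normalized box
    filter, replicating edge pixels at the border."""
    h, w = len(img), len(img[0])
    pad = k // 2
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            total = 0.0
            for di in range(-pad, pad + 1):
                for dj in range(-pad, pad + 1):
                    # clamp indices: replicate the border pixels
                    y = min(max(i + di, 0), h - 1)
                    x = min(max(j + dj, 0), w - 1)
                    total += img[y][x]
            out[i][j] = total / (k * k)
    return out

# A single bright impulse in a dark image is spread out and dimmed.
img = [[0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 25, 0, 0],
       [0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0]]
blurred = box_blur(img, k=3)
print(blurred[2][2])   # 25/9: the impulse is averaged over 9 pixels
```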

Image Blurring (Image Smoothing)

Image blurring is achieved by convolving the image with a low-pass filter kernel. It is
useful for removing noise. It actually removes high frequency content (eg: noise, edges)
from the image. So edges are blurred a little bit in this operation (there are also blurring
techniques which don't blur the edges). OpenCV provides four main types of blurring
techniques.

1. Averaging

This is done by convolving an image with a normalized box filter. It simply takes the
average of all the pixels under the kernel area and replaces the central element.

This is done by the function cv.blur() or cv.boxFilter(). Check the docs for more details
about the kernel.

We should specify the width and height of the kernel. A 3x3 normalized box filter would
look like the below:

K = (1/9) *
    [ 1 1 1
      1 1 1
      1 1 1 ]

Gaussian Blurring

In this method, instead of a box filter, a Gaussian kernel is used. It is done with the
function, cv.GaussianBlur().

We should specify the width and height of the kernel which should be positive and odd.

We also should specify the standard deviation in the X and Y directions, sigmaX and
sigmaY respectively.

If only sigmaX is specified, sigmaY is taken as the same as sigmaX. If both are given as
zeros, they are calculated from the kernel size.

Gaussian blurring is highly effective in removing Gaussian noise from an image.

If you want, you can create a Gaussian kernel with the function, cv.getGaussianKernel().

The above code can be modified for Gaussian blurring:

blur = cv.GaussianBlur(img,(5,5),0)
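For an explicitly given sigma, the coefficients of a 1-D Gaussian kernel like the one cv.getGaussianKernel() builds can be reproduced by sampling and normalizing the Gaussian function. This is a sketch of the formula only, ignoring OpenCV's special-cased small kernels and fixed-point variants:

```python
import math

def gaussian_kernel(ksize, sigma):
    """1-D Gaussian kernel of odd length ksize, normalized to sum to 1."""
    center = (ksize - 1) / 2.0
    weights = [math.exp(-((i - center) ** 2) / (2.0 * sigma ** 2))
               for i in range(ksize)]
    total = sum(weights)
    return [w / total for w in weights]

k = gaussian_kernel(5, 1.0)
print([round(w, 4) for w in k])   # symmetric, peaked at the center
print(sum(k))                     # 1.0 up to floating-point rounding
```

Separable Gaussian blurring then amounts to convolving each row with this kernel and each column with the same kernel, which is why only sigmaX and sigmaY need to be specified.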

Median Blurring
Here, the function cv.medianBlur() takes the median of all the pixels under the kernel
area and the central element is replaced with this median value.

This is highly effective against salt-and-pepper noise in an image. Interestingly, in the


above filters, the central element is a newly calculated value which may be a pixel value
in the image or a new value.

But in median blurring, the central element is always replaced by some pixel value in the
image. It reduces the noise effectively. Its kernel size should be a positive odd integer.

In this demo, 50% noise was added to the original image and median blurring was
applied. Check the result:

median = cv.medianBlur(img,5)
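The effect of cv.medianBlur() on salt-and-pepper noise can be illustrated in one dimension with the standard library. This is a sketch of the sliding-window median, not OpenCV's implementation:

```python
from statistics import median

def median_blur_1d(signal, k=3):
    """Median filter over a sliding window of odd size k; edges are
    replicated so the output has the same length as the input."""
    pad = k // 2
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [median(padded[i:i + k]) for i in range(len(signal))]

# Salt-and-pepper impulses are removed outright, not merely dimmed,
# because the output is always an actual value from the neighborhood.
noisy = [10, 10, 255, 10, 10, 0, 10, 10]
print(median_blur_1d(noisy))   # [10, 10, 10, 10, 10, 10, 10, 10]
```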

Bilateral Filtering
cv.bilateralFilter() is highly effective in noise removal while keeping edges sharp. But the
operation is slower compared to other filters. We already saw that a Gaussian filter takes
the neighbourhood around the pixel and finds its Gaussian weighted average.

This Gaussian filter is a function of space alone, that is, nearby pixels are considered
while filtering.

It doesn't consider whether pixels have almost the same intensity. It doesn't consider
whether a pixel is an edge pixel or not. So it blurs the edges also, which we don't want to
do.

Bilateral filtering also takes a Gaussian filter in space, but one more Gaussian filter which
is a function of pixel difference.

The Gaussian function of space makes sure that only nearby pixels are considered for
blurring, while the Gaussian function of intensity difference makes sure that only those
pixels with similar intensities to the central pixel are considered for blurring.

So it preserves the edges since pixels at edges will have large intensity variation.

The below sample shows use of a bilateral filter (For details on arguments, visit docs).

blur = cv.bilateralFilter(img,9,75,75)
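The difference between the two filters can be sketched in one dimension: the Gaussian filter weights neighbors by distance only, while the bilateral filter multiplies in a second Gaussian on the intensity difference. This is an illustrative 1-D sketch with arbitrary sigma values, not OpenCV's 2-D implementation:

```python
import math

def gaussian_blur_1d(sig, radius=2, sigma=2.0):
    """Plain spatial Gaussian: every neighbor is weighted by distance only."""
    out = []
    for i in range(len(sig)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(sig), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma ** 2))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

def bilateral_1d(sig, radius=2, sigma_space=2.0, sigma_range=10.0):
    """Bilateral filter: the spatial weight is multiplied by a second
    Gaussian on the intensity difference, so pixels across an edge
    contribute almost nothing."""
    out = []
    for i in range(len(sig)):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(sig), i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_space ** 2)) *
                 math.exp(-((sig[j] - sig[i]) ** 2) / (2 * sigma_range ** 2)))
            num += w * sig[j]
            den += w
        out.append(num / den)
    return out

# A sharp step edge: Gaussian blurring smears it, bilateral keeps it.
step = [0, 0, 0, 0, 100, 100, 100, 100]
print([round(v, 1) for v in gaussian_blur_1d(step)])
print([round(v, 1) for v in bilateral_1d(step)])
```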

3.8 Open Computer Vision
OpenCV is an open-source computer vision library written in C and C++, compatible with
Linux, Windows, and Mac OS X. It supports interfaces for Python, Ruby, MATLAB, and
other languages. Designed for computational efficiency and real-time applications,
OpenCV leverages multi-core processors and can utilize Intel's Integrated Performance
Primitives (IPP) for further optimization. The library contains over 500 functions for
various vision applications, such as factory product inspection, medical imaging,
security, and robotics.

3.9 Why We Need OpenCV
• Efficiency and Real-Time Processing: OpenCV is designed for computational
efficiency and real-time applications, making it ideal for tasks that require
quick processing.

• Cross-Platform Support: OpenCV runs on multiple platforms, including Linux,


Windows, and Mac OS X, making it versatile for various development
environments.

• Extensive Functionality: With over 500 functions, OpenCV covers a wide range
of computer vision tasks such as image processing, object detection, camera
calibration, and more.

• Community and Support: Being open-source, OpenCV has a large community of


developers and users, providing extensive documentation, tutorials, and forums for
support.

• Multi-Language Interfaces: OpenCV supports multiple programming languages,


including C++, Python, and MATLAB, allowing developers to choose the language
they are most comfortable with.

About this Library

OpenCV (Open Source Computer Vision Library) is a comprehensive open-source library designed for computer vision and image processing. Written in C and C++, it supports multiple platforms including Linux, Windows, and Mac OS X, and provides interfaces for languages such as Python, Java, and MATLAB. Key features include:

• Efficiency: Optimized for real-time applications, leveraging multi-core processors.
• Extensive Functionality: Over 500 functions for tasks like camera calibration, feature detection, object recognition, and 3D reconstruction.
• Community and Support: A large, active community with extensive documentation and tutorials.
• Integration: Compatible with Intel's Integrated Performance Primitives (IPP) for further optimization.

OpenCV is widely used in various fields such as robotics, medical imaging, security, and autonomous vehicles.

Relationship Between OpenCV and Other Libraries

• Deep Learning Frameworks: Integrates with TensorFlow, PyTorch, and Keras for combining traditional computer vision with deep learning techniques.

• Hardware Acceleration: Uses Intel IPP and CUDA for enhanced performance on CPUs and NVIDIA GPUs.

• Camera and Sensor Integration: Interfaces with cameras and sensors for real-time video processing and data acquisition.

• Edge and Cloud Computing: Deployable on edge devices and integrates with cloud platforms for scalable processing.

• Complementary Libraries: Works alongside SciPy, NumPy, and scikit-learn, forming a robust ecosystem for computer vision and machine learning applications.

Supported Data Types
OpenCV operates on fundamental array-like types such as IplImage (IPL image) and
CvMat (matrix), which handle image data and matrix operations respectively. It also
utilizes growable collections like CvSeq (deque), CvSet, and CvGraph for managing
dynamic arrays and graph structures. Additionally, mixed types such as CvHistogram
(multi-dimensional histogram) are used for analyzing color distributions. Helper data
types include CvPoint (2D point), CvSize (width and height), CvTermCriteria (termination
criteria for iterative processes), IplConvKernel (convolution kernel), and CvMoments
(spatial moments), which simplify and standardize the OpenCV API, enhancing its
usability across various computer vision tasks.

Error Handling
Error handling in OpenCV follows a mechanism similar to IPL (Intel Image Processing
Library). Instead of returning error codes, OpenCV uses a global error status that can be
manipulated using functions like cvError to set an error and cvGetErrStatus to retrieve
it. This global error status allows for flexible error management: developers can
configure whether errors trigger immediate termination of program execution with error
messages or simply set an error code and allow execution to continue. This approach
provides adaptability in handling errors across different scenarios in OpenCV
applications.

Software and Hardware

The OpenCV software runs on personal computers based on Intel architecture processors running Microsoft Windows (originally Windows 95, Windows 98, and later versions).

Platforms Supported
The OpenCV software runs on Windows platforms. The code and syntax used for function and variable declarations in this manual are written in the ANSI C style. However, versions of OpenCV for different processors or operating systems may, of necessity, vary slightly.

Summary
In this chapter, the background study and related papers useful for the project were briefly discussed, along with a short discussion of the graphics library (OpenCV) needed to complete the project.

Chapter 4

Existing System

4.1 Introduction
In this chapter we discuss the existing system, namely sensor-based fire detection, and its usage.

4.2 What is a Sensor

A sensor is a device that detects and responds to some type of input from the physical environment. The specific input could be light, heat, motion, moisture, pressure, or any one of a great number of other environmental phenomena.

There are numerous definitions of what a sensor is, but we can define a sensor as an input device which provides an output (signal) with respect to a specific physical quantity (input).

The term "input device" in this definition means that the sensor is part of a bigger system and provides input to a main control system (such as a processor or a microcontroller).

Another definition of a sensor is: a device that converts signals from one energy domain to the electrical domain. The definition can be better understood if we take an example into consideration.

4.3 Different types of sensors

The following is a list of different types of sensors that are commonly used in various
applications. All these sensors are used for measuring one of the physical properties like
Temperature, Resistance, Capacitance, Conduction, Heat Transfer etc.

Here is the list of various types of sensors:

1. Temperature Sensor
2. Proximity Sensor
3. Accelerometer
4. IR Sensor (Infrared Sensor)
5. Pressure Sensor
6. Light Sensor
7. Ultrasonic Sensor
8. Smoke, Gas, and Alcohol Sensor
9. Touch Sensor
10. Color Sensor
11. Humidity Sensor
12. Position Sensor
13. Magnetic Sensor (Hall Effect Sensor)
14. Microphone (Sound Sensor)
15. Tilt Sensor
16. Flow and Level Sensor
17. PIR Sensor
18. Strain and Weight Sensor

These sensors are used in various applications to detect and measure different physical
phenomena such as temperature, proximity, motion, light, sound, and more.

4.4 Use of Smoke Sensors

Smoke alarms detect fires by sensing small particles in the air using a couple of different kinds of technologies. Once they detect those particles above a certain threshold, they sound the alarm so that occupants can get to safety and call emergency services.

A smoke detector is a device used to warn occupants of a building of the presence of a fire before it reaches a rapidly spreading stage that inhibits escape or attempts to extinguish it. Smoke alarms save lives: smoke alarms that are properly installed and maintained play a vital role in reducing fire deaths and injuries. If there is a fire in your home, smoke spreads fast, and you need smoke alarms to give you time to get out.

There are three types of smoke alarms: ionization, photoelectric, and a combination of the two, commonly called a "dual" detector. Look for the UL stamp on any smoke alarm. Research has shown that ionization smoke alarms detect flaming fires marginally earlier than photoelectric smoke alarms.

4.5 Use of Temperature Sensors

One of the most common and popular sensors is the temperature sensor. A temperature sensor, as the name suggests, senses temperature, i.e., it measures changes in temperature.

There are different types of temperature sensors, such as temperature sensor ICs (like the LM35 and DS18B20), thermistors, thermocouples, RTDs (resistive temperature devices), etc.

Temperature sensors can be analog or digital. In an analog temperature sensor, changes in temperature correspond to a change in a physical property such as resistance or voltage; the LM35 is a classic analog temperature sensor. In a digital temperature sensor, the output is a discrete digital value (usually numerical data obtained by converting an analog value to a digital one); the DS18B20 is a simple digital temperature sensor.

Temperature sensors are used everywhere: computers, mobile phones, automobiles, air-conditioning systems, industry, etc. A simple application of the LM35 (a Celsius-scale temperature sensor) is a temperature-controlled system.

4.6 Sensor-Based Fire Detection

A survey found that 80% of losses caused by fire could have been avoided if the fire had been detected immediately. The Arduino fire detector from Microtronics Technologies is one solution to this problem.

In this kind of system, a fire alarm is built using an Arduino Uno interfaced with a temperature sensor, a smoke sensor, and a buzzer. The temperature sensor senses heat, and the smoke sensor senses any smoke generated by burning. The buzzer connected to the Arduino gives an alarm indication. Whenever a fire starts, it burns nearby objects and produces smoke. The alarm can also be triggered by small amounts of smoke from candles or oil lamps used in a household, or whenever the heat intensity is high. The buzzer is turned off when the temperature returns to normal room temperature and the smoke level reduces. An LCD display and a relay are also interfaced with the Arduino board.

An Arduino fire alarm system is important for industrial as well as household purposes. Whenever it detects fire or smoke, it instantly alerts the user through a Bluetooth module, and the LCD display shows the status of the system, i.e., whether smoke or overheating has been detected. This is especially useful whenever the user is not inside the premises: when a fire occurs, the system automatically senses it and alerts the user by sending a notification to an app installed on the user's Android mobile.

Existing systems include fire and hazard detection systems that employ heat sensors, temperature sensors, smoke sensors, or a combination of these.

A smoke sensor detects a fire only after smoke has been produced, and it may fail to detect a fire if the air flows in the opposite direction. It can also produce false alarms when someone smokes near the sensor. These sensors have a limited range, which makes covering large industrial spaces difficult, and they are expensive.

Temperature sensors do not sense the particles of combustion and are designed to alarm only when the heat at the sensor reaches a predetermined level. In addition, ionization-based sensors contain a small radioactive source, which raises handling and disposal concerns.


Chapter 5
5.1 Proposed System

This chapter introduces the proposed camera-based fire detection system and its
application. Traditional fire detection methods are often costly and limited in scope.
Hence, alternative methods such as computer vision and image processing techniques
have been explored. Using surveillance cameras proves to be a cost-effective solution for
detecting fires by analyzing light parameters and flame colors.

The proposed system aims to utilize surveillance cameras to monitor fire occurrences
within their range. To enhance fire detection accuracy in real-time video streams, the
system employs color segmentation to identify the boundary of moving regions and
calculates the number of fire pixels within these areas. This method forms the basis of an
efficient fire detection system that promptly generates alarms and notifies fire stations
with GPS coordinates, aiming to protect lives and properties from fire hazards.

The camera-based fire monitoring system operates by continuously monitoring designated areas through video processing. Upon detecting a fire, the system captures an alarm image and forwards it to the administrator. The administrator verifies the fire based on the submitted alarm image before taking further action.

5.2 Working of the Proposed System

Instead of sensors, we use cameras to detect fire. Our proposed system provides fire detection using a simple algorithm. First, an image frame is acquired from the live video feed. The frame, represented in the RGB color model, is converted to an HSV frame, which is then passed through Gaussian blur and median blurring to remove noise.

A suitable response is displayed on the monitor window, an alarm buzzer is sounded, and a notification is sent. Alert notifications contain an alert message along with the GPS location and are sent to the fire station.
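The color test underlying this pipeline can be illustrated with Python's standard-library colorsys module (a simplified stand-in for OpenCV's cv2.cvtColor; the thresholds below are illustrative, and note that OpenCV scales hue to 0-179 rather than 0-360):

```python
import colorsys

def looks_like_fire(r, g, b):
    """Rough fire test: hue in the red-orange-yellow band with enough
    saturation and brightness. Thresholds are illustrative only."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360 <= 70 and s >= 0.2 and v >= 0.2

print(looks_like_fire(255, 140, 0))   # flame orange -> True
print(looks_like_fire(30, 90, 200))   # sky blue -> False
```

In the real system the same test is applied per pixel via cv2.inRange on the HSV frame, producing a binary mask of candidate fire pixels.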

The threat to people’s lives and property posed by fires has become increasingly serious.
To address the problem of a high false alarm rate in traditional fire detection, an
innovative detection method based on multifeature fusion of flame is proposed.

First, we combined the motion detection and color detection of the flame as the fire
preprocessing stage. This method saves a lot of computation time in screening the fire
candidate pixels.

Second, although a flame is irregular, it shows a certain similarity across the image sequence. Based on this feature, a novel algorithm for flame centroid stabilization based on spatiotemporal relations is proposed: we calculate the centroid of the flame region in each frame of the image and add the temporal information to obtain the spatiotemporal information of the flame centroid.
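The per-frame centroid computation can be sketched with spatial image moments (pure NumPy here; OpenCV's cv2.moments yields the same quantities):

```python
import numpy as np

def centroid(mask):
    """Centroid of a binary flame mask via spatial moments:
    cx = m10 / m00, cy = m01 / m00."""
    ys, xs = np.nonzero(mask)   # rows (y) and columns (x) of fire pixels
    m00 = len(xs)               # zeroth moment: number of fire pixels
    if m00 == 0:
        return None
    return xs.sum() / m00, ys.sum() / m00

mask = np.zeros((10, 10), dtype=np.uint8)
mask[4:7, 3:6] = 1              # a 3x3 "flame" blob
cx, cy = centroid(mask)
print(cx, cy)                    # -> 4.0 5.0
```

Averaging this centroid over consecutive frames supplies the temporal component of the spatiotemporal information described above.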

5.3 Global Positioning System (GPS)

Finding GPS Coordinates on Google Maps

To find the exact GPS latitude and longitude coordinates of a point on Google Maps, along with the altitude/elevation above sea level, simply drag the marker on the map to the point you require. Alternatively, enter the location name in the search bar, then drag the resulting marker to the precise position.

The GPS coordinates, including latitude and longitude along with the altitude/elevation, automatically update in the Google Maps pop-up. Use the map zoom controls to get a closer view of the point you require.

Alternatively, to show the coordinates of your current location, use the Find My Coordinates button. Your coordinates will update on the map.

You can also share your location using the Send This Location button below your GPS coordinates in the map info window. This creates an email with a link to your location on Google Maps, which can be shared with another person.

Coordinates
GPS coordinates are formed by two components: a latitude, giving the north-south position, and a longitude, giving the east-west position. A map service of this kind can convert any address into its GPS coordinates, find the location of any GPS coordinates, and geocode the address if available.

Latitude definition
The latitude of a point is the measurement of the angle formed between the equatorial plane and the line joining the point to the center of the Earth. By construction, it lies between -90° and 90°. Negative values are for southern-hemisphere locations, and latitude is 0° at the equator.

Longitude definition
The principle is the same for longitude, with the difference that there is no natural reference like the equator. The longitude reference has been arbitrarily set at the Greenwich Meridian (it passes through the Royal Greenwich Observatory in Greenwich, in the suburbs of London). The longitude of a point is the measurement of the angle between the half-plane formed by the Earth's axis and the Greenwich meridian, and the half-plane formed by the Earth's axis and the point.

A third component
A third element is required to fully locate a point: its altitude. In the most typical use cases, GPS coordinates are needed for locations on the surface of the Earth, making this third parameter less prominent. However, it is as necessary as the latitude and longitude to define a complete and accurate GPS location.

What3words
What3words divides the world into 57 trillion squares, each measuring 3 m by 3 m (10 ft by 10 ft), and each having a unique, randomly assigned three-word address. Coordinate converters can translate coordinates into what3words addresses and what3words addresses back into coordinates.
Multiple geodetic systems for geographical coordinates
As we saw, the above definitions take into account several parameters that must be fixed or identified for future reference:
- the equatorial plane and the model chosen for the shape of the Earth,
- a set of reference points,
- the position of the center of the Earth,
- the Earth's axis,
- the reference meridian.

These five criteria are the basis of the different geodetic systems used through history. Currently, the most commonly used geodetic system is WGS 84 (used notably for GPS coordinates).
GPS coordinate measurement units
The two main units of measurement are decimal and sexagesimal coordinates.

Decimal coordinates
The latitude and longitude are decimal numbers, with the following characteristics:
- latitude between 0° and 90°: Northern Hemisphere,
- latitude between 0° and -90°: Southern Hemisphere,
- longitude between 0° and 180°: east of the Greenwich meridian,
- longitude between 0° and -180°: west of the Greenwich meridian.

Sexagesimal coordinates
Sexagesimal coordinates have three components: degrees, minutes, and seconds. Each of these components is usually an integer, but the seconds can be a decimal number when greater precision is needed. One angle degree comprises 60 angle minutes, and one angle minute comprises 60 angle seconds of arc.

Unlike decimal coordinates, sexagesimal coordinates cannot be negative. Instead, the letter E or W is added to the longitude to specify the position east or west of the Greenwich meridian, and the letter N or S is added to the latitude to designate the hemisphere (North or South).
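The conversion between the two units can be sketched in a few lines of Python (a simple illustration; the rounding precision is our own choice):

```python
def to_sexagesimal(decimal_deg, is_latitude):
    """Convert a decimal coordinate to (degrees, minutes, seconds, hemisphere)."""
    hemispheres = ("N", "S") if is_latitude else ("E", "W")
    hemi = hemispheres[0] if decimal_deg >= 0 else hemispheres[1]
    value = abs(decimal_deg)
    degrees = int(value)
    minutes = int((value - degrees) * 60)          # 1 degree = 60 minutes
    seconds = round((value - degrees - minutes / 60) * 3600, 2)  # 1 minute = 60 seconds
    return degrees, minutes, seconds, hemi

# The Nuzvid-area latitude used later in the implementation chapter.
print(to_sexagesimal(16.7880, True))    # -> (16, 47, 16.8, 'N')
print(to_sexagesimal(-0.1278, False))   # -> (0, 7, 40.08, 'W')
```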

Correlation Table

Latitude and longitude finder
When coordinates are entered in one of the formats, they are automatically converted to the other format. Likewise, when an address is visualized on the map, or after clicking on a point on the map, its coordinates in the two units are displayed.

5.4 Project Prototype

5.5 Block Diagram

Then, we extracted features including spatial variability, shape variability, and area
variability of the flame to improve the accuracy of recognition.

Finally, we used support vector machine for training, completed the analysis of
candidate fire images, and achieved automatic fire monitoring.

Experimental results showed that the proposed method could improve the accuracy and
reduce the false alarm rate compared with a state-of-the-art technique.

The method can be applied to real-time camera monitoring systems, such as home security, forest fire alarms, and commercial monitoring.

5.6 Use Case Diagram

Chapter 6
Implementation and Results

6.1 Import Packages


import cv2
import numpy as np
from playsound import playsound
import smtplib, ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from twilio.rest import Client

6.2 Email Function

def send_email(sender_email, receiver_email, password, subject, body):
    try:
        message = MIMEMultipart()
        message["From"] = sender_email
        message["To"] = receiver_email
        message["Subject"] = subject

        message.attach(MIMEText(body, "plain"))

        context = ssl.create_default_context()
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(sender_email, password)
            server.sendmail(sender_email, receiver_email, message.as_string())
        print("Email sent successfully")
    except Exception as e:
        print(f"Failed to send email: {e}")

6.3 SMS and Call Function (using Twilio)

# Function to send SMS and make a phone call using Twilio
def send_sms_and_call(account_sid, auth_token, twilio_phone_number,
                      receiver_phone_number, body):
    try:
        client = Client(account_sid, auth_token)

        # Send SMS
        message = client.messages.create(
            body=body,
            from_=twilio_phone_number,
            to=receiver_phone_number
        )
        print(f"SMS sent successfully: {message.sid}")

        # Make a phone call that reads the alert message aloud
        call = client.calls.create(
            twiml='<Response><Say>' + body + '</Say></Response>',
            from_=twilio_phone_number,
            to=receiver_phone_number
        )
        print(f"Phone call made successfully: {call.sid}")

    except Exception as e:
        print(f"Failed to send SMS or make phone call: {e}")

6.4 Video Capture and Parameter Initialization

# Initialize video capture
cap = cv2.VideoCapture(0)

# Define color ranges for fire detection in HSV space
lower_red = np.array([0, 50, 50])
upper_red = np.array([10, 255, 255])
lower_orange = np.array([10, 50, 50])
upper_orange = np.array([25, 255, 255])
lower_yellow = np.array([25, 50, 50])
upper_yellow = np.array([35, 255, 255])

alert_sound = "/home/rgukt/Desktop/mini project/firesound.mp3"

alert_flag = False
cooldown_frames = 100
current_cooldown = cooldown_frames
min_contour_area = 1000

# Background subtractor for motion detection
back_sub = cv2.createBackgroundSubtractorMOG2()

# Initialize optical flow parameters
prev_frame = None
flow_threshold = 2.0

# Specific latitude and longitude for Nuzvidu, Vijayawada
latitude = 16.7880
longitude = 80.8460

Example Email and Twilio Credentials

# Example email parameters (placeholders shown; real credentials should never
# be hardcoded in source code)
sender_email = "[email protected]"
receiver_email = "[email protected]"
password = "<app-password>"
subject = "Fire Detected Alert"

# Twilio credentials (placeholders)
twilio_account_sid = '<twilio-account-sid>'
twilio_auth_token = '<twilio-auth-token>'
twilio_phone_number = '<twilio-phone-number>'
receiver_phone_number = '<receiver-phone-number>'
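Rather than hardcoding secrets, they can be read from environment variables at startup (a minimal standard-library sketch; the variable names here are our own choice, not part of the project):

```python
import os

def load_credentials():
    """Read alerting credentials from the environment, failing fast if any
    required variable is missing."""
    required = ["ALERT_EMAIL_PASSWORD", "TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN"]
    missing = [name for name in required if name not in os.environ]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in required}
```

Failing fast at startup surfaces a misconfiguration immediately, instead of discovering it only when the first alert fails to send.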

6.5 Fire Detection

while True:
    ret, frame = cap.read()
    if not ret:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Create masks for the different fire colors
    mask_red = cv2.inRange(hsv, lower_red, upper_red)
    mask_orange = cv2.inRange(hsv, lower_orange, upper_orange)
    mask_yellow = cv2.inRange(hsv, lower_yellow, upper_yellow)

    mask = mask_red | mask_orange | mask_yellow

    # Apply morphological operations to reduce noise
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=1)
    mask = cv2.erode(mask, kernel, iterations=1)
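The original listing is truncated at this point. The remainder of the loop counts the fire pixels in the mask and, above a threshold, triggers the alarm sound and the email/SMS notifications defined earlier. The decision step can be sketched as follows (a NumPy-only illustration; the threshold value is an assumption, not the project's tuned setting):

```python
import numpy as np

def fire_detected(mask, min_fire_pixels=1000):
    """Decision rule: flag the frame when the number of fire-colored pixels
    in the binary mask exceeds a threshold (illustrative value)."""
    return int(np.count_nonzero(mask)) >= min_fire_pixels

# Synthetic masks standing in for the output of the color segmentation above.
clear_frame = np.zeros((480, 640), dtype=np.uint8)
fire_frame = clear_frame.copy()
fire_frame[100:150, 200:250] = 255      # a 50x50 fire-colored region

print(fire_detected(clear_frame))        # -> False
print(fire_detected(fire_frame))         # -> True (2500 pixels >= 1000)
```

In the full system this boolean, combined with the cooldown counter, gates the calls to playsound, send_email, and send_sms_and_call so that a single fire does not fire alerts on every frame.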

Results

Conclusion
This project introduces a sensor-free fire detection algorithm that operates solely
from live video feeds, eliminating the need for traditional temperature and heat
sensors. The system's primary objective is to detect fires in their early stages,
when they are small and manageable, thereby minimizing potential damage and
enhancing safety.

Using computer vision techniques, the algorithm analyzes color and motion patterns in
real-time video streams to identify fire incidents. This approach not only reduces the cost
associated with conventional sensor-based systems but also simplifies hardware
requirements.

Key outcomes include:

• Early Detection: The system effectively identifies fires before they escalate,
enabling timely intervention.
• Cost Efficiency: By utilizing existing surveillance cameras and computational
algorithms, the system reduces hardware costs and maintenance.
• Functionality: Upon detecting a fire, the system triggers alerts, including phone
calls, to notify relevant authorities or stakeholders.

In summary, this fire detection system offers proactive monitoring and rapid response capabilities, making it a practical solution for enhancing fire safety in diverse environments.
