Miniproject Documentation
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
CERTIFICATE
This is to certify that the thesis work titled 'IGNIS VISION - GPS Enhanced
Surveillance' was successfully completed by K. Revathi (S190273),
V. Shafiya Uzma (S190233), M. Hima Lalitha (S190231), and B. Anitha (S190294) in
partial fulfillment of the requirements for the major project in Computer Science
and Engineering of Rajiv Gandhi University of Knowledge Technologies, under
my guidance. The output of the work carried out is deemed satisfactory.
DECLARATION
K. Revathi (S190273)
V. Shafiya Uzma (S190233)
M. Hima Lalitha (S190231)
B. Anitha (S190294)
CONTENTS
ACKNOWLEDGMENT
ABSTRACT
Chapter 1
1.1 Introduction
1.2 Hazards of Fire
1.3 Fire Detection and Control
1.4 Applications
1.5 Problem Statement
Chapter 2: Literature Survey
2.1 Study
2.2 Analysis
2.3 Benefits
2.4 Identify
2.5 Summary
Chapter 3
3.1 Introduction
3.2 Computer Vision
3.3 Fire Properties
3.4 Technical Analysis
3.5 Fire Detection System
3.6 RGB to HSV Conversion
3.7 Gaussian Blur
3.8 OpenCV
3.9 Why OpenCV is Needed
Chapter 4: Existing System
4.1 Introduction
4.2 What is a Sensor
4.3 Types of Sensors
4.4 Use of Smoke Sensor
4.5 Use of Temperature Sensor
4.6 Sensor-based Fire Detection
4.7 Use Case Diagram
Chapter 5
5.1 Proposed System
5.2 GPS Coordinates
5.3 Project Prototype
5.4 Block Diagram
5.5 Use Case Diagram
Chapter 6: Implementation and Results
6.1 Import Packages
6.2 Email Function
6.3 SMS and Call Function
6.4 VideoCapture and Parameter Initialisation
6.5 Fire Detection
7. Results
8. Conclusion
Acknowledgment
I would like to express my profound gratitude and indebtedness to my
project guide, Lakshmi Bala Mam, Assistant Professor, who has been a
constant source of motivation and guidance throughout the project. It has
been a great pleasure for me to get the opportunity to work under her
guidance and complete the thesis work successfully.
I am also grateful to the other members of the department, without whose
support my work could not have been carried out so successfully. I thank
one and all who have helped me, directly or indirectly, in the completion
of my thesis work.
I wish to extend my sincere thanks to Prof. Lakshmi Bala Mam, M.Tech,
Head of the Computer Science and Engineering Department, for her
constant encouragement throughout the project.
K. Revathi (S190273)
V. Shafiya Uzma (S190233)
M. Hima Lalitha (S190231)
B. Anitha (S190294)
ABSTRACT
A rapid chemical reaction between oxygen in the atmosphere and a fuel,
releasing heat and light as by-products, is called fire. Most available fire detection
systems use temperature or smoke sensors, which take time to respond. Moreover, these
systems are costly and ineffective when a fire is far from the detectors. A more
cost-effective method is to use surveillance cameras to detect fires and inform the
relevant parties. The light parameters and the color of the flame help in detecting fire.
The proposed work suggests a method that uses surveillance cameras to monitor
occurrences of fire anywhere within the camera range. To enhance the performance of
flame detection based on a live video stream, a method is used that finds the boundary
of the moving region in the color-segmented image and calculates the number of fire
pixels in this area. A fire detection system is then developed based on this method to
detect fire efficiently, generate an immediate alarm, and send an alert notification along
with the GPS location to the fire stations.
Chapter 1
1.1 Introduction
Computer Vision based fire detection has the potential to be useful in conditions in
which conventional methods cannot be adopted. Visual characteristics of fires such as
brightness, color, spectral texture, spectral flicker, and edge trembling are used to
discriminate them from other visible stimuli. These characteristics are utilized
commonly in many algorithms for fire detection. Most of the conventional fire
detection techniques are based on particle sampling, temperature sampling, relative
humidity sampling, air transparency testing and smoke analysis, in addition to the
traditional ultraviolet and infrared sampling. These methods require close proximity to
the fire. In addition, these methods are not always reliable, as they do not always
detect the fire itself. Most of them are designed to detect not the fire itself but it’s by-
products like smoke, which could be produced in other ways. Presence of certain
particles generated by smoke and fire is detected in most of these systems. Alarm is
not issued unless particles reach the sensors to activate them. Also commonly used
infrared and ultraviolet sensors produce many false alarms. By the help of computer
vision techniques, it is possible to get better results than conventional systems because
images can provide more reliable information.
An example of a large fire that occurred due to carelessness is the Nimtali fires of
2010. More than 100 people, mostly women and children, were burned alive and
around 150 were wounded in the accident. Some of the families living in the
buildings lost most of their members: one family lost 11 members, and
another lost seven. This fire, the biggest since the country's independence in
1971, gutted eight buildings and more than 20 shops at Nawab-Katra.
Such accidents can be prevented by the use of fire and smoke detectors, which can
detect the fire itself. These fire detection systems can be placed where fire hazards
may occur, in order to respond quickly in case of any untoward accident.
Currently there are a number of different smoke detectors already on the market. The
main ones are: optical smoke detectors, where light travels from one point to another
and is dispersed by smoke; ionization smoke detectors, where smoke particles prevent
current from flowing inside a circuit; and air sampling smoke detectors, where the
air is sampled over a given time period to detect trace amounts of smoke. These
detectors are quite expensive to set up and are usually suitable only for closed spaces.
Another problem is their slow response time, since they require the smoke and heat to
reach them. Fire hazards also occur outdoors, where particle smoke detectors are not
useful: smoke disperses over such a large area that the particles scatter before reaching
a detector. Response time is also important, as the example of the Nimtali fires shows;
the incident would not have been so severe if the fires had been suppressed early,
before they converged. Therefore, this project aims to develop a detection system that
can be used anywhere, especially in open spaces, and that enables a quick response
to a fire.
1.4 Applications
• Chemical Industries
1.5 Problem Statement
Fire accidents are a major problem in this challenging world; the major
fire accident spots are chemical industries and forests.
❖ Many sensor-based fire detection systems exist to protect our lives
from fire accidents, but they are not effective.
Chapter 2
Literature Survey
Collect Information: A number of existing models were studied and their
effectiveness was compared.
2.1 Study
Progress on fire detection technologies has been substantial
over the last decade, due to advances in sensors, microelectronics, and
information technologies, as well as a greater understanding of fire
physics. One reviewed paper surveys this progress, covering various
emerging sensor technologies (e.g., computer vision systems, distributed
fiber-optic temperature sensors, and intelligent multi-sensor systems),
signal processing and monitoring technology (e.g., real-time control via
the Internet), and integrated fire detection systems. It also discusses
problems and future research efforts related to current fire detection
technologies.
2.2 Analysis
The UV/IR flame detector addresses problems associated with the old
UV-only detectors. Because the UV/IR flame detector detects both the UV
and IR radiation emitted by flames, it prevents false alarms from
common sources of ultraviolet radiation that do not also emit infrared
radiation at 4.3 µm.
2.3 Benefits
● CCTV (Closed Circuit Television) technology has great advantages for
sensing and monitoring a fire. Compared with other types of
fire detectors, video cameras cannot be fooled by emissions from
common background sources, eliminating false alarm problems.
2.4 Identify
● [Figure: advantages of a CCTV-camera-based fire detection system]
2.5 Summary
There are several options for a building's fire detection and alarm
system. The final system type, and the selected components, will depend
upon the building's construction and value, its use or uses, the type of
occupants, mandated standards, content value, and mission sensitivity.
Contacting a fire engineer or another appropriate professional who
understands fire problems and the different alarm and detection options
is usually the preferred first step toward finding the best system.
Chapter 3
3.1 Introduction
In this chapter, the background knowledge related to the project work will
be discussed. First, the basic concepts of computer vision are elaborated.
Then the theory and concepts related to the detection system, color analysis,
motion tracking, and morphological operations are discussed, along with
related work in this field. As the proposed system uses the OpenCV library,
basic knowledge of this library is also covered.
3.3 Fire Properties
Typically, fire comes from a chemical reaction between oxygen in the atmosphere
and some sort of fuel. For the combustion reaction to happen, fuel must be heated to
its ignition temperature. In a typical wood fire, first something heats the wood to a very
high temperature. When the wood reaches about 150 degrees Celsius, the heat
decomposes some of the cellulose material that makes up the wood.
Infrared light can be detected by certain cameras; however, it is not visible to the human
eye. This makes it impractical to use in a system meant for cheap cameras, which are
easily obtainable and, therefore, are not likely to capture the infrared spectrum. Heat
cannot be detected visually, which leaves only smoke and flame as easily visible
components of a fire.
Aside from clean fires, most fires will emit smoke, which consists of solids and liquids
that are remainders of the fuel that is not cleanly burnt. Smoke is usually visible.
The fire detection system would preferably have a fire detection program, which uses an
algorithm to detect fire through visual analysis of the area where fire detection is
desired. The program would provide feedback once a fire has been determined.
3.6 RGB to HSV Conversion
The RGB color model is an additive color model in which the red, green, and blue primary
colors of light are added together in various ways to reproduce a broad array of colors.
The name of the model comes from the initials of the three additive primary colors: red,
green, and blue. HSV is a cylindrical color model that remaps the RGB primary colors into
dimensions that are easier for humans to understand. Like the Munsell Color System,
these dimensions are hue, saturation, and value. Hue specifies the angle of the color on
the RGB color circle. R, G, and B in RGB are all correlated to the color luminance (what we
loosely call intensity); i.e., we cannot separate color information from luminance. HSV, or
Hue Saturation Value, is used to separate image luminance from color information. This
makes it easier when we are working on, or need, the luminance of the image/frame.
The following example shows how to adjust the saturation of a color image by converting
the image to the HSV color space, and then displays the separate HSV color planes
(hue, saturation, and value) of the image.
Convert RGB Image to HSV Image
Read an RGB image into the workspace and display it.
RGB = imread('peppers.png');
imshow(RGB)
Convert the image to the HSV color space.
HSV = rgb2hsv(RGB);
Process the HSV image. This example increases the saturation of the image by
multiplying the S channel by a scale factor.
[h,s,v] = imsplit(HSV);
saturationFactor = 2;
s_sat = s*saturationFactor;
HSV_sat = cat(3,h,s_sat,v);
Convert the processed HSV image back to the RGB color space and display the new RGB
image. Colors in the processed image are more vibrant.
RGB_sat = hsv2rgb(HSV_sat);
imshow(RGB_sat)
Split the HSV version of the image into its component planes: hue, saturation,
and value.
[h,s,v] = imsplit(HSV);
Display the individual HSV color planes with the original image.
montage({h,s,v,RGB},"BorderSize",10,"BackgroundColor","w");
As the hue plane image in the preceding figure illustrates, hue values make a linear
transition from high to low.
If you compare the hue plane image against the original image, you can see that shades
of deep blue have the highest values, and shades of deep red have the lowest values.
(As stated previously, there are values of red on both ends of the hue scale.
To avoid confusion, the sample image uses only the red values from the beginning of the
hue range.)
Saturation can be thought of as the purity of a color. As the saturation plane image
shows, the colors with the highest saturation have the highest values and are
represented as white.
In the center of the saturation image, notice the various shades of gray.
These correspond to a mixture of colors; the cyans, greens, and yellow shades are
mixtures of true colors.
Value is roughly equivalent to brightness, and you will notice that the brightest areas of
the value plane correspond to the brightest colors in the original image.
3.7 Gaussian Blur
In image processing, a Gaussian blur is the result of blurring an image by a Gaussian
function. It is a widely used effect in graphics software, typically to reduce image noise
and reduce detail.
If you take a photo in low light, and the resulting image has a lot of noise, Gaussian blur
can mute that noise. If you want to lay text over an image, a Gaussian blur can soften
the image so the text stands out more clearly.
In the Gaussian blur operation, the image is convolved with a Gaussian filter instead of a
box filter. The Gaussian filter is a low-pass filter that attenuates high-frequency
components. OpenCV lets you blur images with various low-pass filters and apply
custom-made filters to images (2D convolution).
2D Convolution (Image Filtering). As with one-dimensional signals, images can also be
filtered with various low-pass filters (LPF), high-pass filters (HPF), etc. An LPF helps in
removing noise and blurring images, while an HPF helps in finding edges in images.
OpenCV provides a function cv.filter2D() to convolve a kernel with an image.
As an example, we will try an averaging filter on an image. A 5x5 averaging filter kernel
will look like the below:
K = (1/25) ×
[ 1 1 1 1 1
  1 1 1 1 1
  1 1 1 1 1
  1 1 1 1 1
  1 1 1 1 1 ]
The operation works like this: keep the kernel above a pixel, add all the 25 pixels below
the kernel, take the average, and replace the central pixel with the new average value.
This operation is continued for all the pixels in the image.
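As a quick illustration of cv.filter2D() with the 5x5 averaging kernel above, here is a minimal Python sketch; the input file name is an arbitrary assumption.

import cv2 as cv
import numpy as np

img = cv.imread('test.png')               # any test image (name assumed)
# 5x5 averaging kernel: all ones, normalized by 25
kernel = np.ones((5, 5), np.float32) / 25
# ddepth = -1 keeps the output depth the same as the source image
dst = cv.filter2D(img, -1, kernel)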
Image blurring is achieved by convolving the image with a low-pass filter kernel. It is
useful for removing noise. It actually removes high frequency content (eg: noise, edges)
from the image. So edges are blurred a little bit in this operation (there are also blurring
techniques which don't blur the edges). OpenCV provides four main types of blurring
techniques.
1. Averaging
This is done by convolving an image with a normalized box filter. It simply takes the
average of all the pixels under the kernel area and replaces the central element.
This is done by the function cv.blur() or cv.boxFilter(). Check the docs for more details
about the kernel.
We should specify the width and height of the kernel. A 3x3 normalized box filter would
look like the below:
K = (1/9) ×
[ 1 1 1
  1 1 1
  1 1 1 ]
2. Gaussian Blurring
In this method, instead of a box filter, a Gaussian kernel is used. It is done with the
function, cv.GaussianBlur().
We should specify the width and height of the kernel which should be positive and odd.
We also should specify the standard deviation in the X and Y directions, sigmaX and
sigmaY respectively.
If only sigmaX is specified, sigmaY is taken as the same as sigmaX. If both are given as
zeros, they are calculated from the kernel size.
If you want, you can create a Gaussian kernel with the function, cv.getGaussianKernel().
blur = cv.GaussianBlur(img,(5,5),0)
3. Median Blurring
Here, the function cv.medianBlur() takes the median of all the pixels under the kernel
area and the central element is replaced with this median value.
Unlike Gaussian blurring, where the central element may take a new value that did not
exist in the original image, in median blurring the central element is always replaced by
some pixel value from the image. This reduces noise effectively. The kernel size should
be a positive odd integer.
In this demo, I added a 50% noise to our original image and applied median blurring.
Check the result:
median = cv.medianBlur(img,5)
4. Bilateral Filtering
cv.bilateralFilter() is highly effective in noise removal while keeping edges sharp. But the
operation is slower compared to other filters. We already saw that a Gaussian filter takes
the neighbourhood around the pixel and finds its Gaussian weighted average.
This Gaussian filter is a function of space alone, that is, nearby pixels are considered
while filtering.
It doesn't consider whether pixels have almost the same intensity. It doesn't consider
whether a pixel is an edge pixel or not. So it blurs the edges also, which we don't want to
do.
Bilateral filtering also takes a Gaussian filter in space, but one more Gaussian filter which
is a function of pixel difference.
The Gaussian function of space makes sure that only nearby pixels are considered for
blurring, while the Gaussian function of intensity difference makes sure that only those
pixels with similar intensities to the central pixel are considered for blurring.
So it preserves the edges since pixels at edges will have large intensity variation.
The below sample shows use of a bilateral filter (For details on arguments, visit docs).
blur = cv.bilateralFilter(img,9,75,75)
3.8 OpenCV
OpenCV (Open Source Computer Vision) is an open-source computer vision library written
in C and C++, compatible with Linux, Windows, and Mac OS X. It supports interfaces for
Python, Ruby, MATLAB, and other languages. Designed for computational efficiency and
real-time applications, OpenCV leverages multi-core processors and can utilize Intel's
Integrated Performance Primitives (IPP) for further optimization. The library contains over
500 functions for various vision applications, such as factory product inspection and
medical imaging.
3.9 Why OpenCV is Needed
• Efficiency and Real-Time Processing: OpenCV is designed for computational
efficiency and real-time applications, making it ideal for tasks that require quick
processing.
• Extensive Functionality: With over 500 functions, OpenCV covers a wide range
of computer vision tasks such as image processing, object detection, camera
calibration, and more.
OpenCV is widely used in various fields such as robotics, medical imaging, security, and
autonomous vehicles.
Relationship Between OpenCV and Other Libraries
• Deep Learning Frameworks: Integrates with TensorFlow, PyTorch, and Keras for
combining traditional computer vision with deep learning techniques.
• Hardware Acceleration: Uses Intel IPP and CUDA for enhanced performance on
CPUs and NVIDIA GPUs.
• Camera and Sensor Integration: Interfaces with cameras and sensors for real-time
video processing and data acquisition.
• Edge and Cloud Computing: Deployable on edge devices and integrates with
cloud platforms for scalable processing.
Error Handling
Error handling in OpenCV follows a mechanism similar to IPL (Intel Image Processing
Library). Instead of returning error codes, OpenCV uses a global error status that can be
manipulated using functions like cvError to set an error and cvGetErrStatus to retrieve
it. This global error status allows for flexible error management: developers can
configure whether errors trigger immediate termination of program execution with error
messages or simply set an error code and allow execution to continue. This approach
provides adaptability in handling errors across different scenarios in OpenCV
applications.
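In the modern Python bindings, this surfaces more simply: failed OpenCV calls raise cv2.error, which can be handled with ordinary exception handling. A minimal sketch, with the file name as an illustrative assumption:

import cv2 as cv

try:
    img = cv.imread('missing.png')
    if img is None:
        # imread signals failure by returning None instead of raising
        raise FileNotFoundError("missing.png could not be read")
    gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
except cv.error as e:
    # cv.error is raised on invalid arguments or internal OpenCV failures
    print(f"OpenCV error: {e}")
except FileNotFoundError as e:
    print(e)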
Platforms Supported
OpenCV runs on Windows, Linux, and Mac OS X platforms. The code and syntax used for
function and variable declarations in this manual are written in the ANSI C style.
However, versions of OpenCV for different processors or operating systems may, of
necessity, vary slightly.
Summary
In this chapter, the background study and related papers which are useful for the project
were briefly discussed, along with a short overview of the graphics library (OpenCV)
needed to complete the project.
Chapter 4
Existing System
4.1 Introduction
In this chapter, we discuss the existing system, i.e., sensor-based fire detection,
and its usage.

4.2 What is a Sensor
There are numerous definitions as to what a sensor is, but a sensor can be defined as
an input device which provides an output (signal) with respect to a specific physical
quantity (input). The term "input device" in this definition means that a sensor is part
of a bigger system which provides input to a main control system (such as a processor
or a microcontroller).
4.3 Types of Sensors
The following is a list of different types of sensors that are commonly used in various
applications. All these sensors measure one of the physical properties like temperature,
resistance, capacitance, conduction, heat transfer, etc.
1. Temperature Sensor
2. Proximity Sensor
3. Accelerometer
4. IR Sensor (Infrared Sensor)
5. Pressure Sensor
6. Light Sensor
7. Ultrasonic Sensor
8. Smoke, Gas, and Alcohol Sensor
9. Touch Sensor
10. Color Sensor
11. Humidity Sensor
12. Position Sensor
13. Magnetic Sensor (Hall Effect Sensor)
14. Microphone (Sound Sensor)
15. Tilt Sensor
16. Flow and Level Sensor
17. PIR Sensor
18. Strain and Weight Sensor
These sensors are used in various applications to detect and measure different physical
phenomena such as temperature, proximity, motion, light, sound, and more.

4.4 Use of Smoke Sensor
A smoke detector is a device used to warn occupants of a building of the presence of a
fire before it reaches a rapidly spreading stage that inhibits escape or attempts to
extinguish it. Smoke alarms save lives. Smoke alarms that are properly installed and
maintained play a vital role in reducing fire deaths and injuries. If there is a fire in your
home, smoke spreads fast and you need smoke alarms to give you time to get out.
There are three types of smoke alarms: ionization, photoelectric, and a combination of
the two, commonly called a "dual" detector. Look for the UL stamp on any smoke
alarm. Research has shown that ionization smoke alarms detect flaming fires marginally
earlier than photoelectric smoke alarms.
4.5 Use of Temperature Sensor
There are different types of temperature sensors, such as temperature sensor ICs (like
the LM35 and DS18B20), thermistors, thermocouples, RTDs (Resistive Temperature
Devices), etc. Temperature sensors are used everywhere: in computers, mobile phones,
automobiles, air conditioning systems, industries, etc. A simple project using the LM35
(a Celsius-scale temperature sensor) is a temperature-controlled system.
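As a rough numeric illustration of reading such a sensor: the LM35 outputs 10 mV per degree Celsius (a datasheet fact), so a raw reading can be converted to a temperature. The 10-bit ADC and 5 V reference below are assumptions for illustration.

def lm35_celsius(adc_value, v_ref=5.0, adc_max=1023):
    """Convert a raw ADC reading from an LM35 to degrees Celsius.
    The LM35 outputs 10 mV per degree Celsius."""
    voltage = adc_value * (v_ref / adc_max)   # ADC counts -> volts
    return voltage * 100.0                    # 10 mV/degC -> degC = V * 100

# Example: a reading of 62 counts is about 0.303 V, i.e. about 30.3 degC
print(lm35_celsius(62))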
4.6 Sensor-based Fire Detection
Existing systems include fire and hazard detection systems which employ
heat sensors, temperature sensors, smoke sensors, or a combination
of these.
A smoke sensor detects a fire only after smoke is detected, and it
sometimes fails to detect the fire if the air flows in the opposite
direction. It can also produce false alarms when someone smokes near
the sensor. These sensors have a limited range, which makes coverage
difficult for large industries, and they are very expensive.
4.7 Use Case Diagram
Chapter 5
5.1 Proposed System
This chapter introduces the proposed camera-based fire detection system and its
application. Traditional fire detection methods are often costly and limited in scope.
Hence, alternative methods such as computer vision and image processing techniques
have been explored. Using surveillance cameras proves to be a cost-effective solution for
detecting fires by analyzing light parameters and flame colors.
The proposed system aims to utilize surveillance cameras to monitor fire occurrences
within their range. To enhance fire detection accuracy in real-time video streams, the
system employs color segmentation to identify the boundary of moving regions and
calculates the number of fire pixels within these areas. This method forms the basis of an
efficient fire detection system that promptly generates alarms and notifies fire stations
with GPS coordinates, aiming to protect lives and properties from fire hazards.
The threat to people’s lives and property posed by fires has become increasingly serious.
To address the problem of a high false alarm rate in traditional fire detection, an
innovative detection method based on multifeature fusion of flame is proposed.
First, we combined the motion detection and color detection of the flame as the fire
preprocessing stage. This method saves a lot of computation time in screening the fire
candidate pixels.
Second, although the flame is irregular, it has a certain similarity across the image
sequence. Based on this feature, a novel algorithm of flame centroid stabilization based
on spatiotemporal relations is proposed: we calculated the centroid of the flame region
in each frame and added the temporal information to obtain the spatiotemporal
information of the flame centroid.
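A minimal sketch of the per-frame centroid step, assuming the flame region has already been segmented into a binary mask; cv.moments is the standard OpenCV way to obtain a region's centroid, and pairing centroids with frame indices yields the spatiotemporal information described above.

import cv2 as cv

def flame_centroid(mask):
    """Return the (x, y) centroid of a binary fire mask, or None if empty."""
    m = cv.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Pairing each frame index with its centroid gives the spatiotemporal track:
# track = [(i, flame_centroid(m)) for i, m in enumerate(fire_masks)]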
5.2 GPS Coordinates
GPS coordinates are formed by two components: a latitude, giving the
north-south position, and a longitude, giving the east-west position.
Latitude definition
The latitude of a point is the measurement of the angle formed by the
equatorial plane and the line joining the point to the centre of the Earth.
By construction, it is comprised between -90° and 90°. Negative values
are used for southern hemisphere locations, and the latitude is 0° at the
equator.
Longitude definition
The principle is the same for longitude, with the difference that there
is no natural reference like the equator. The longitude reference has
been arbitrarily set at the Greenwich meridian (it passes through the
Royal Greenwich Observatory in Greenwich, in the suburbs of London).
The longitude of a point is the measurement of the angle between the
half-plane formed by the Earth's axis and the Greenwich meridian and
the half-plane formed by the Earth's axis and the point.
A third component
Careful readers will have noticed that a third element is required to
locate a point: its altitude. In the most typical use cases, GPS
coordinates are needed for locations on the surface of the Earth, making
this third parameter less important. However, it is as necessary as the
latitude and longitude to define a complete and accurate GPS location.
What3words
What3words divided the world into 57 trillion squares, each measuring
3 m by 3 m (10 ft by 10 ft) and each having a unique, randomly assigned
three-word address. Coordinate converters can translate coordinates to
what3words addresses and what3words addresses back to coordinates.
Multiple geodetic systems for geographical coordinates
As we saw, the above definitions take into account several parameters
that must be fixed or identified for future reference:
- the equatorial plane,
- the model chosen for the shape of the Earth,
- a set of reference points,
- the position of the centre of the Earth,
- the Earth's axis.
These five criteria are the basis of the different geodetic systems used
through history. Currently, the most commonly used geodetic system is
WGS 84 (used notably for GPS coordinates).
GPS coordinates measurement units
The two main units of measurement are decimal and sexagesimal
coordinates.
Decimal coordinates
The latitude and longitude are each a decimal number, with the
following conventions:
- latitude between 0° and 90°: northern hemisphere,
- latitude between 0° and -90°: southern hemisphere,
- longitude between 0° and 180°: east of the Greenwich meridian,
- longitude between 0° and -180°: west of the Greenwich meridian.
37 | P a g e
Sexagesimal coordinates
Sexagesimal coordinates have three components: degrees, minutes, and
seconds. Each of these components is usually an integer, but the seconds
can be a decimal number when greater precision is needed. One angle
degree comprises 60 angle minutes, and one angle minute comprises 60
angle seconds of arc.
Unlike decimal coordinates, sexagesimal coordinates cannot be negative.
Instead, the letter E or W is added to the longitude to specify the
position east or west of the Greenwich meridian, and the letter N or S
is added to the latitude to designate the hemisphere (north or south).
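A small Python sketch of the decimal-to-sexagesimal conversion described above, purely for illustration:

def decimal_to_dms(value, is_latitude=True):
    """Convert a decimal coordinate to (degrees, minutes, seconds, hemisphere)."""
    hemispheres = ("N", "S") if is_latitude else ("E", "W")
    hemi = hemispheres[0] if value >= 0 else hemispheres[1]
    value = abs(value)
    degrees = int(value)
    minutes = int((value - degrees) * 60)
    seconds = (value - degrees - minutes / 60) * 3600
    return degrees, minutes, round(seconds, 2), hemi

# Example: 17.6868 N -> (17, 41, 12.48, 'N')
print(decimal_to_dms(17.6868))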
[Correlation table between decimal and sexagesimal coordinate formats]
Latitude and longitude finder
Online coordinate finders work both ways: when you enter coordinates
in one of the two formats, they are automatically converted to the
other. Likewise, when you visualize an address on a map, or after
clicking on a point on the map, its coordinates in the two units are
displayed.
5.3 Project Prototype
5.4 Block Diagram
Then, we extracted features including the spatial variability, shape variability, and area
variability of the flame to improve recognition accuracy.
Finally, we used a support vector machine for training, completed the analysis of
candidate fire images, and achieved automatic fire monitoring.
Experimental results showed that the proposed method improves accuracy and reduces
the false alarm rate compared with a state-of-the-art technique.
The method can be applied to real-time camera monitoring systems, such as home
security, forest fire alarms, and commercial monitoring.
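A hedged sketch of the final training step, assuming the flame features (spatial, shape, and area variability) have already been extracted into a numeric matrix. The report does not name its SVM implementation; scikit-learn's SVC is used here purely for illustration, and the feature matrix and labels are random placeholders.

import numpy as np
from sklearn.svm import SVC

# X: one row per candidate fire region; columns hold the extracted features
# (spatial, shape, and area variability). y: 1 = fire, 0 = non-fire.
X = np.random.rand(100, 3)            # placeholder feature matrix
y = np.random.randint(0, 2, 100)      # placeholder labels from annotated frames

clf = SVC(kernel="rbf")               # RBF kernel is a common default choice
clf.fit(X, y)

# At run time, each candidate region's feature vector is classified
prediction = clf.predict(X[:1])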
5.5 Use Case Diagram
Chapter 6
Implementation and Results
6.1 Import Packages

import smtplib, ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

6.2 Email Function

# Function to send an alert email over Gmail's SMTP server with SSL
# (the signature and message setup are reconstructed; the original
# excerpt began at message.attach)
def send_email(sender_email, password, receiver_email, body):
    try:
        message = MIMEMultipart()
        message.attach(MIMEText(body, "plain"))
        context = ssl.create_default_context()
        with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
            server.login(sender_email, password)
            server.sendmail(sender_email, receiver_email, message.as_string())
        print("Email sent successfully")
    except Exception as e:
        print(f"Failed to send email: {e}")
6.3 SMS and Call Function (using Twilio)

from twilio.rest import Client

# Function to send SMS and make a phone call using Twilio
def send_sms_and_call(account_sid, auth_token, twilio_phone_number,
                      receiver_phone_number, body):
    try:
        client = Client(account_sid, auth_token)
        # Send SMS
        message = client.messages.create(
            body=body,
            from_=twilio_phone_number,
            to=receiver_phone_number
        )
        print(f"SMS sent successfully: {message.sid}")
        # Make a phone call (this step was missing from the excerpt;
        # the standard Twilio calling pattern is shown)
        call = client.calls.create(
            twiml="<Response><Say>Fire detected. Please respond immediately.</Say></Response>",
            from_=twilio_phone_number,
            to=receiver_phone_number
        )
        print(f"Call placed successfully: {call.sid}")
    except Exception as e:
        print(f"Failed to send SMS or make phone call: {e}")

# Twilio credentials (in practice these should be kept out of source code)
twilio_account_sid = 'AC19b63d7fe164560509bca854c30a6a9a'
twilio_auth_token = '741e41fc842bc6189e9b0d3e22892f06'
twilio_phone_number = '+17406406219'
receiver_phone_number = '+919885767044'
6.4 VideoCapture and Parameter Initialisation / 6.5 Fire Detection

mask = mask_red | mask_orange | mask_yellow
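The line above combines the three color masks produced in the detection step. Below is a minimal sketch of a capture-and-detect loop built around it; the HSV ranges, alert threshold, and alert hooks are illustrative assumptions, not the project's exact values.

import cv2 as cv
import numpy as np

cap = cv.VideoCapture(0)                  # 0 = default camera
FIRE_PIXEL_THRESHOLD = 5000               # illustrative alert threshold

while True:
    ret, frame = cap.read()
    if not ret:
        break
    blurred = cv.GaussianBlur(frame, (5, 5), 0)
    hsv = cv.cvtColor(blurred, cv.COLOR_BGR2HSV)

    # Illustrative HSV ranges for red, orange, and yellow flame colors
    mask_red = cv.inRange(hsv, np.array([0, 120, 150]), np.array([10, 255, 255]))
    mask_orange = cv.inRange(hsv, np.array([11, 120, 150]), np.array([25, 255, 255]))
    mask_yellow = cv.inRange(hsv, np.array([26, 120, 150]), np.array([35, 255, 255]))
    mask = mask_red | mask_orange | mask_yellow

    if cv.countNonZero(mask) > FIRE_PIXEL_THRESHOLD:
        print("Fire detected!")
        # here the email / SMS / call functions above would be triggered,
        # together with the GPS coordinates of the camera
    if cv.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv.destroyAllWindows()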
Results
Conclusion
This project introduces a sensor-free fire detection algorithm that operates solely
from live video feeds, eliminating the need for traditional temperature and heat
sensors. The system's primary objective is to detect fires in their early stages,
when they are small and manageable, thereby minimizing potential damage and
enhancing safety.
Using computer vision techniques, the algorithm analyzes color and motion patterns in
real-time video streams to identify fire incidents. This approach not only reduces the cost
associated with conventional sensor-based systems but also simplifies hardware
requirements.
• Early Detection: The system effectively identifies fires before they escalate,
enabling timely intervention.
• Cost Efficiency: By utilizing existing surveillance cameras and computational
algorithms, the system reduces hardware costs and maintenance.
• Functionality: Upon detecting a fire, the system triggers alerts, including phone
calls, to notify relevant authorities or stakeholders.
In summary, this fire detection system offers proactive monitoring and rapid response
capabilities, making it a practical solution for enhancing fire safety in diverse
environments.