
PROJECT REPORT

ON
FACE DETECTION SECURITY SYSTEM

A Major Project Submitted in Partial Fulfilment of the Requirements for the Degree
of
Bachelor of Technology
In
Computer Science & Engineering

By

Mehrussama Khanum (2101326082)


Soni Suhani Behera (2101326120)
Varsha Kumari (2101326138)
Vikash Kumar (2101326140)
Biswajit Roul (2101326052)

Under the Guidance of


Prof. Dibyasha Das

Department of Computer Science & Engineering


GANDHI INSTITUTE FOR EDUCATION & TECHNOLOGY

AY 2024-25
CERTIFICATE

This is to certify that the Project entitled “FACE DETECTION SECURITY SYSTEM” submitted by
Biswajit Roul (2101326052), Mehrussama Khanum (2101326082), Soni Suhani Behera
(2101326120), Varsha Kumari (2101326138), Vikash Kumar (2101326140) to the Biju Pattnaik
University of Technology, Odisha, in partial fulfilment of the requirements for the award of the degree of Bachelor of Technology in
Computer Science & Engineering, is a bonafide Project work carried out by them under my
supervision. The results presented in this project have not been submitted elsewhere for the award of
any other degree.

In my opinion, this work has reached the standard fulfilling the requirements for the award of the degree
of Bachelor of Technology in accordance with the regulations of the University.

Signature of Guide Signature of HoD

Department of Computer Sc. & Engineering Department of Computer Sc. & Engineering
GIET, Baniatangi
GIET, Baniatangi

Signature of External Examiner

II
DECLARATION
We declare that this written submission represents our ideas in our own words and, where others' ideas
or words have been included, we have adequately cited and referenced the original sources. We also
declare that we have adhered to all principles of academic honesty and integrity and have not
misrepresented or fabricated or falsified any idea/data/fact/source in our submission. We understand
that any violation of the above will be cause for disciplinary action by the institute and can also evoke
penal action from the sources which have thus not been properly cited or from whom proper permission
has not been taken when needed.

Biswajit Roul (2101326052)


Mehrussama Khanum (2101326082)
Soni Suhani Behera (2101326120)
Varsha Kumari (2101326138)
Vikash Kumar (2101326140)

DATE :-

III
ACKNOWLEDGEMENT
We are very grateful and wish to record our indebtedness to Sidhanta Kumar Balabantaray,
H.O.D. of Computer Science & Engineering, Gandhi Institute for Education and
Technology, Baniatangi, for his active guidance and interest in this Project work.

We would also like to thank our guide Prof. Dibyasha Das of the Computer Science & Engineering
Department for her continued drive for better quality in everything, which allowed us to carry out our project
work.

Lastly, words fall short to express our gratitude to our parents and all the Professors, Lecturers, Technical and
official staff and friends for their co-operation, constructive criticism and valuable suggestions during
the preparation of this project report.

Biswajit Roul (2101326052)


Mehrussama Khanum (2101326082)
Soni Suhani Behera (2101326120)
Varsha Kumari (2101326138)
Vikash Kumar (2101326140)
B.Tech in Computer Sc. & Engg.

IV
FACE DETECTION SECURITY SYSTEM
================================

ABSTRACT
This project presents a Face Detection Security System designed to enhance access control and
surveillance through biometric authentication. Leveraging computer vision and deep learning
techniques, the system automatically detects and recognizes human faces in real-time, providing a
secure and contactless method for identity verification. The system employs Haar Cascade
classifiers and convolutional neural networks (CNNs) to ensure high accuracy in various lighting
and environmental conditions. Integrated with a database of authorized personnel, the system grants
or denies access based on facial recognition results. Applications include secure entry points in
offices, homes, and restricted areas. This approach improves security, reduces the risk of
unauthorized access, and offers a scalable solution adaptable to multiple domains.

This report presents the development of a Face Detection Security System aimed at improving
access control through advanced biometric verification. The system utilizes computer vision
techniques and machine learning algorithms to detect and recognize human faces in real-time,
enabling secure, contactless authentication. A combination of Haar Cascade classifiers for face
detection and convolutional neural networks (CNNs) for face recognition is employed to ensure
robustness across varied lighting and environmental conditions. The system is integrated with a
local database to match live facial inputs against stored authorized profiles. Access is granted only
to verified individuals, enhancing security and minimizing the risk of unauthorized entry. Designed
for implementation in environments such as offices, homes, and secure facilities, this system
demonstrates a scalable and efficient approach to modern security challenges.

The system grants access only to recognized individuals, ensuring reliable security in
environments such as homes, offices, and restricted areas. The project demonstrates a scalable,
cost-effective, and efficient approach to modern security.

v
CONTENTS
CHAPTER NO TOPIC PAGE NO
TITLE I
CERTIFICATE II
DECLARATION III
ACKNOWLEDGEMENT IV
ABSTRACT V

CHAPTER 1 INTRODUCTION 01-06


1.1 Background and Motivation

1.2 Objectives of the System

1.3 Scope of the Project

1.4 Significance and Applications

CHAPTER 2 LITERATURE REVIEW 07-09


2.1 Overview of Biometric Security Systems
2.2 Evolution of Face Detection and Recognition
2.3 Comparison with Other Authentication Methods
2.4 Existing Systems and Technologies

CHAPTER 3 SYSTEM ANALYSIS 10-14


3.1 Block Diagram of the System
3.2 Hardware Components (if applicable)
3.3 Software Components and Tools Used
3.4 Data Flow and Process Overview
CHAPTER 4 TECHNOLOGY USED

CHAPTER 5 SYSTEM DESIGN 24-30
5.1 Development Environment
5.2 User Interface Design
5.3 Integration of Modules
5.4 Real-Time Performance

CHAPTER 6
TRAINING AND TESTING
 Function
 Code
 Training
 Facerecognize class
 Applications

CHAPTER 7 CONCLUSION AND FUTURE SCOPE


 Future Scope
 Conclusion
LIST OF FIGURES
FIGURE NO. FIGURE DESCRIPTION PAGE NO

1 Limitations Design 13

2 Limitations Diagram 14

3 Limitations Diagram 15

4 Limitations Diagram of FDSS 16

5 Database Design of FDSS 19

6 Use Case Diagram of Staff Registration 20

7 Use Case Diagram of Course Management Module 21

8 Use Case Diagram of Face Detection 25


CHAPTER 1
INTRODUCTION
1.1 Background
In today's world, security has become a major concern in both public and private sectors.
Traditional security systems, such as manual identification and password-based authentication, are
increasingly being replaced or supplemented by advanced biometric technologies. Among these
technologies, face detection stands out due to its non-intrusive nature, user convenience, and
growing accuracy.

Face detection is the first step in many computer vision tasks involving facial recognition or
analysis. It involves identifying and locating human faces in images or video streams. With the
rapid development of artificial intelligence (AI) and machine learning, face detection systems have
become more reliable, accurate, and efficient, making them ideal for security applications such as
access control, surveillance, and attendance tracking.

1.2 Problem Statement
Traditional security systems often face limitations in terms of user identification, convenience,
and vulnerability to breaches. Passwords can be forgotten or stolen, ID cards can be lost or
duplicated, and manual monitoring can be inefficient and error-prone. There is a clear need for a
more robust, automated, and intelligent security solution that ensures both accuracy and ease of
use. A face detection-based security system addresses these limitations by using biometric
verification to enhance security protocols.

1.3 Objectives
The main objective of this project is to design and develop a face detection security system
that can identify individuals through real-time video feed and grant or deny access based on
facial recognition. Specific objectives include:

 Implementing a face detection algorithm to identify human faces in images or video.


 Integrating a face recognition module to compare detected faces with a stored database.
 Building a user-friendly interface for system operation and monitoring.
 Ensuring accuracy, speed, and security of the overall system.

1.4 Scope of the Project


This project focuses on developing a prototype security system using face detection and
recognition techniques. It will be designed primarily for indoor environments such as offices,
labs, and restricted areas where secure access control is essential. The system will include
features such as:

 Real-time face detection and recognition.


 Secure access authorization.
 Data logging of access attempts.

1.5 Methodology Overview


The system will be developed using Python with libraries like OpenCV for image processing and face
detection, and machine learning frameworks such as TensorFlow or dlib for facial recognition. The
development process includes requirement analysis, algorithm implementation, system integration,
testing, and evaluation.
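As a minimal, illustrative sketch of this methodology (not the project's final implementation), the following Python snippet uses OpenCV's bundled Haar Cascade to detect faces in a live webcam feed and draw bounding boxes around them. The cascade file (haarcascade_frontalface_default.xml, located via cv2.data.haarcascades) and camera index 0 are assumptions.

import cv2

# Load OpenCV's pre-trained frontal-face Haar Cascade (path is an assumption;
# cv2.data.haarcascades points to the cascades shipped with opencv-python).
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # default webcam; the index may differ on other machines

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Haar detection works on grayscale images
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw a bounding box around every detected face
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Face Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()

In the full system, a recognition step would run on each detected region before any access decision is made, as described in the later chapters.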

institutional goals. It also supports strategic decision-making by providing relevant data and analytics.
By centralizing authority and streamlining oversight, the HOD Control Panel enhances accountability,
ensures quality education delivery, and facilitates effective departmental administration.

1. Role-Based Authentication: Role-Based Authentication is a security framework that ensures


users access only those features and data relevant to their assigned role. Whether it's a student, staff
member, or HOD, the system restricts access based on predefined permissions, protecting sensitive
information and maintaining system integrity. This layered approach to access control prevents
unauthorized activities and enhances user accountability. It also simplifies navigation by presenting
only the relevant tools and modules for each role, reducing confusion and potential misuse. Role-
Based Authentication ensures compliance with data protection standards while promoting a secure
and organized digital environment for all institutional stakeholders.

2. Open-Source Flexibility: One of the key advantages of this EDU-CONNECT-based system


is its open-source flexibility, allowing educational institutions to tailor the software according to
their specific needs. Whether it's integrating new modules, customizing user interfaces, or adapting
workflows, institutions have complete control over the platform’s structure and functionality. This
adaptability ensures that the system can grow with the organization, accommodating future
expansions and technological upgrades. Open-source flexibility also fosters innovation and cost-
effectiveness by eliminating vendor lock-in and enabling collaborative development. Institutions
can enhance user experiences, improve efficiency, and ensure long-term sustainability by
customizing the system to align perfectly with their educational goals.

2. IMPORTANCE

Managing academic activities manually can be time-consuming and prone to errors. Edu-Connect digitizes
and automates the entire academic workflow, ensuring efficient, transparent, and accessible management.
Key advantages include:

1. Enhanced Efficiency: Edu-Connect significantly enhances operational efficiency within


educational institutions by reducing the reliance on paperwork and manual data entry. Traditional
methods are often time-consuming and prone to errors.



records. Only authorized users can access specific data based on their roles, reducing the risk of data
breaches or misuse. This level of control is essential in maintaining trust and compliance with data
protection policies. Through encryption, role-based permissions, and audit trails, Edu-
Connect ensures that academic records remain secure, reliable, and tamper-proof within the
institution’s system.

2. Real-Time Updates: With Edu-Connect, students and staff benefit from real-time updates
regarding academic activities such as attendance, assignments, internal marks, and
schedules. This feature ensures that everyone stays informed instantly, without delays.
Teachers can upload and update data which becomes immediately visible to
students and higher authorities. Such instant access to current information helps in
timely decision-making, improves communication, and eliminates confusion. It also
enhances accountability, as students can track their performance and
responsibilities regularly, fostering better academic engagement and planning.

3. Role-Based Control: Edu-Connect incorporates role-based access control to maintain


organized and secure access to features and data. Different users such as students, teachers,
assistant teachers, and HODs are granted access based on their specific responsibilities. For
example, students can only view their own data, while teachers can manage student records,
and HODs can oversee departmental operations. This segregation prevents unauthorized access,
ensures data privacy, and simplifies the user experience by presenting only relevant
functionalities to each role, making academic management structured and efficient.
It also promotes transparency, reduces manual errors, and helps students stay aware of their participation
throughout the academic session.

4. Assignment Submission & Evaluation: Create a streamlined system that allows students to
submit assignments online and enables teachers to review, provide feedback, and assign grades.
This digital process reduces paperwork, improves record-keeping, and enhances
communication between students and faculty regarding academic tasks and performance.

5. Internal Marks Management: Provide a module where teachers can input internal marks and
students can view them instantly. This promotes transparency, timely academic tracking,
and helps students understand their progress, encouraging better performance and
engagement in academic activities throughout the semester.
6. HOD Registration Approval: Enable a feature where HODs have the authority to approve or reject
new student and staff registrations, maintaining system integrity and preventing unauthorized entry
into the academic management system.

7. Paper Management System: Facilitate a paper management feature where staff can edit
subject-related information, like syllabus or schedule. HODs retain full administrative control,
including adding, deleting, or reassigning subjects. This ensures structured content management and better
academic coordination across the department.

8. Open-Source Customization: Offer the system as an open-source solution, allowing


institutions to freely implement, modify, and enhance the Edu-Connect platform. This
eliminates financial barriers, encourages innovation, and gives colleges complete control to
customize the system according to their academic structure and specific requirements.

3. SCOPE:

Edu-Connect covers a wide range of functionalities designed to improve academic


administration. The platform includes

1. Student Registration System: Students can register through the portal, but access is granted
only after approval from the HOD. This ensures that only verified and eligible users are
allowed into the system, maintaining data security and academic integrity.

2. Role-Based Dashboard: Each user—student, staff, or HOD—gets a personalized dashboard


tailored to their responsibilities. This role-based interface provides access only to relevant features.

3. Scalability: The platform allows educational institutions to expand and customize the system as their
needs evolve. As the number of students, staff, or courses grows, Edu-Connect can accommodate the
additional load without performance issues. Furthermore, institutions can integrate new modules—such
as online examination systems, parent portals, or alumni networks—without overhauling the
existing framework. This adaptability ensures long-term usability, supports institutional
growth, and protects investments in digital infrastructure by allowing continuous
improvement and upgrades.

4. OBJECTIVES:

The primary objectives of Edu-Connect are:

1. Role-Based Login System: Develop a secure role-based login system that assigns
specific access privileges to students, staff, and HODs. Each user can access features relevant
to their role, ensuring streamlined navigation, data privacy, and system integrity.

3. Attendance Tracking: Teachers can update daily attendance records, which students can view in
real-time. This system promotes transparency, allows students to monitor their
academic participation, and reduces manual errors or delays in attendance reporting.

4. Internal Marks System: Teachers can upload internal marks quickly and efficiently,
which are instantly visible to students. This fosters academic transparency, timely
performance tracking, and encourages students to stay engaged with their progress throughout
the semester.

5. Paper Management: Teachers are allowed to edit and update subject details like syllabus
or codes. HODs have the authority to add new subjects, ensuring structured curriculum
management and better departmental coordination.

6. Admin Control Panel: HODs have full administrative control to approve or reject new
student and staff registrations, manage subjects, and oversee academic activities. This
centralized system ensures efficient governance and smooth academic operations.

5. METHODOLOGY:-

1.5.1 Backend – Node.js with JWT Authentication: The backend is developed using
Node.js, incorporating JSON Web Token (JWT) authentication for secure login and role-based
access. This ensures that users only access permitted features, enhancing security and maintaining
data integrity across different roles within the academic management system.

1.5.2 Frontend – HTML, CSS, JavaScript, React & Tailwind CSS: The frontend leverages
modern technologies like React, HTML, CSS, JavaScript, and Tailwind CSS to deliver a
clean, dynamic, and responsive interface. These tools ensure that the user experience remains
smooth across devices, improving usability for students, staff, and HODs.

1.5.3 Database – MongoDB: MongoDB powers the database layer, efficiently handling large
volumes of academic data. Its schema-less design supports flexibility and scalability, allowing
seamless data updates and performance optimization, which is ideal for growing institutional
needs and a dynamic academic environment.

1.5.4 Hosting – Vercel & Render: The application is hosted using Vercel for the frontend and
Render for the backend, ensuring fast deployment and stable performance. These modern
platforms support CI/CD and deliver consistent uptime, making the system reliable and accessible
to users at all times.



1.5.5 Design – Figma for UI/UX: Figma is used to craft intuitive and user-centric UI/UX
designs.
Wireframes and prototypes ensure every page layout and interaction is optimized for clarity and
ease of use, creating a seamless experience for all users navigating the Edu-Connect platform.

The development of Edu-Connect follows an iterative model, enabling continuous improvements


based on user feedback and institutional needs. This ensures scalability, adaptability, and long-
term usability. As a result, Edu-Connect delivers an integrated academic management system
that is efficient, secure, and tailored to the evolving demands of educational institutions.

CHAPTER 2
LITERATURE REVIEW

2.1 EXISTING PORTALS:

This chapter explores the foundational technologies, methodologies, and existing systems related
to face detection and recognition, especially in the context of security. It provides a
comprehensive overview of relevant research, algorithms, and applications to justify the direction
of the proposed project.

Face detection is a critical first step in face recognition systems. It identifies the presence and
location of human faces in images or video frames. Over the years, several approaches have
been developed:
2.2 COMPARATIVE STUDY:

Facial recognition security systems use biometric technology to identify individuals by analyzing their
facial features. These systems compare facial characteristics against a database of known faces to
verify or identify a person. They are used in various applications, including access control, law
enforcement, and surveillance.

2.3 TECHNOLOGICAL EVOLUTION:

Academic portals have significantly evolved over the years, transforming from simple record-
keeping tools into sophisticated platforms that streamline and automate various academic
functions. In the past, these systems primarily focused on storing student data such as grades
and attendance. However, modern academic portals have expanded their scope to include
assignment submission, timetable management, internal marks handling, paper management, and more.
A key feature of today’s platforms is the integration of role-based access control, which ensures that
students, staff, and Heads of Departments (HODs)
can only access functionalities relevant to their roles. Additionally, real-time data
synchronization and user-centric
design have become essential for improving communication and transparency within institutions.

Security is another cornerstone of modern portals, with emphasis on protecting academic


records and ensuring controlled access to sensitive information. Despite these advancements,
most robust and feature-rich platforms are either proprietary or require a paid subscription, which
can be a financial burden for many institutions. This is where the Student Academic Portal (Edu-
Connect) stands out. Edu-Connect offers a free, open-source alternative that includes all essential
academic management features while allowing institutions to customize and scale the system
according to their unique needs. This makes Edu-Connect a cost-effective, efficient, and accessible
solution for academic administration.

2.4 CHALLENGES:

Data Security and Role-Based Access Enforcement: Edu-Connect ensures robust data
protection by implementing secure authentication and role-based access controls. Each user—
whether student, staff, or HOD—has limited access tailored to their role, reducing
unauthorized access and maintaining the confidentiality and integrity of sensitive academic
data.

Scalability to Accommodate Growing Student Numbers: The platform is designed with


scalability in mind, allowing institutions to easily manage increasing numbers of students and staff
without compromising performance. As enrolment grows, Edu-Connect can adapt to larger data
loads and expanded user activity with minimal resource adjustments.

Efficient Integration with Institutional Policies: Edu-Connect can be customized to align


seamlessly with an institution’s academic rules and administrative policies. Whether it's attendance
criteria, grading systems, or registration workflows, the platform is flexible enough to support
institutional requirements while ensuring consistency and compliance.

Ensuring an Intuitive and Responsive User Experience: Built with modern UI/UX
principles, Edu- Connect delivers a user-friendly interface that is easy to navigate for all roles. Its
responsive design ensures accessibility across devices, enhancing engagement and reducing the
learning curve for users interacting with academic tools.
2.5 NEED FOR ENHANCEMENT:

Many existing academic management platforms fall short when it comes to offering a fully
customized and balanced system that effectively combines administrative control with ease of
access for users. These systems often have rigid structures, limited flexibility, or require costly
licenses that make them inaccessible to smaller institutions. This is where the Edu-Connect provides
a significant advantage. Edu-Connect bridges this gap by delivering a comprehensive,
customizable, and secure academic management solution tailored to institutional needs.

The platform features secure login with multi-tiered role-based access, ensuring that students,
faculty, and Heads of Departments (HODs) interact with only the functionalities relevant to their
roles. This enhances security while simplifying the user experience. Key academic features
such as attendance tracking, assignment management, internal mark entries, and paper
management are integrated into a single, user- friendly interface—making Edu-Connect a one-
stop platform for academic workflows.

Moreover, Edu-Connect’s open-source nature allows educational institutions to freely adopt,


modify, and expand the system without licensing costs. This not only reduces financial barriers
but also encourages continuous improvement and innovation. As a result, Edu-Connect
empowers institutions to streamline operations, maintain data integrity, and adapt quickly to
evolving academic needs while remaining accessible and cost-effective.

CHAPTER 3
SYSTEM ANALYSIS
access, and minimal loading times will ensure that faculty can perform their academic
responsibilities efficiently without needing extensive technical training.

3.1.1. Administrator Requirements : Administrators play a central role in maintaining the portal
and need full control over all user accounts and academic data. The system should allow them to
add, edit, or remove student and faculty profiles, manage course structures, configure academic
sessions, and assign user roles. Administrators should be able to monitor portal usage, generate
institution-wide reports, and push official announcements. Security settings, data backups, and
error logs must be accessible to ensure smooth operations. The system should also support
analytical dashboards that help in decision-making and tracking the institution's academic
performance over time.

3.3.4. Non-Functional Requirements : In addition to core functionalities, the system must


meet several non-functional requirements. It should ensure high-level data security and privacy,
especially when handling sensitive academic records. The portal must be scalable to support
hundreds or thousands of concurrent users without crashing or slowing down. It should be
responsive, accessible on desktops, tablets, and smartphones. The user interface should be simple
and engaging to minimize the learning curve. Regular maintenance, error handling, and data
backup features must be incorporated to prevent data loss and ensure the system is always reliable
and up-to-date.

2. STAKEHOLDERS

1. Students as Stakeholders : Students are the primary stakeholders of the Student


Academic Portal. They interact with the system regularly for academic and administrative
purposes. Through the portal, students can securely log in to access their class schedules,
attendance records, assignment deadlines, internal marks, and final results. It also allows them to
submit assignments, download study materials, and receive important announcements and
notifications. The portal helps students stay organized and informed about their academic
progress, reduces the need for physical paperwork, and offers a centralized platform for
managing their academic life. A user-friendly interface is essential to ensure effective
student engagement.
Faculty and Administrators as Stakeholders: Faculty members play a critical role as stakeholders
by using the portal to manage and facilitate the academic process. They can upload lecture notes and
give assignments.

3.2.2 Technical Staff and Institutional Heads as Stakeholders : The technical team, including
developers and IT staff, are essential stakeholders responsible for maintaining, updating, and
securing the portal. They ensure the platform is responsive, bug-free, and capable of handling
multiple users at once. Regular system backups, data encryption, and troubleshooting fall under
their domain. Additionally, institutional heads such as principals, deans, and department heads
use the portal to monitor overall academic performance, view analytical dashboards, and make
data-driven decisions. Their involvement helps align the system with institutional goals.
Together, technical experts and leadership ensure that the portal evolves to meet changing academic
and administrative needs efficiently.

3. FUNCTIONAL VS NON-FUNCTIONAL REQUIREMENTS

1. Functional Requirements

User Authentication : The system must allow students, faculty, and admins to securely log in
using unique credentials.

Student Dashboard : Students should be able to view attendance, grades, class schedules,
and course materials.

Assignment Submission: Students must be able to upload and submit assignments before
deadlines.

Faculty Panel : Faculty should be able to mark attendance, upload study materials, and evaluate
students.

Admin Management : Admins should have the ability to add/edit/delete users, manage
courses, and monitor overall activity.

Notification System : The system must notify users of updates, announcements, and important
dates.

Data Entry & Reports : Faculty and admin should be able to generate academic reports and
update student records.

2. Non-Functional Requirements

Performance : The portal should load all pages within 2-3 seconds, even during peak usage.

Scalability : It must support multiple users simultaneously without performance issues.


Maintainability : The portal should be easy to update, debug, and maintain over time.

4. ARCHITECTURE

The architecture of the Student Academic Portal follows a modular and scalable design, primarily
based on the Three-Tier Architecture Model—comprising the Presentation Layer, Application
Layer, and Data Layer. This layered approach enhances system performance, makes the portal
easier to maintain, and ensures a smooth flow of information between users and the system. The
design ensures that each layer has its own responsibility and can be independently updated or
scaled without affecting the entire system. The architecture is built to serve students, faculty,
and administrators efficiently, allowing them to perform various academic and administrative
tasks online. With the increasing demand for digital education management, this architecture
ensures the system is accessible from anywhere and anytime, supporting mobile and desktop
platforms. The architectural design also emphasizes security, usability, and performance, ensuring
data privacy, intuitive interfaces, and responsive operations. Middleware components like
authentication systems, session managers, and mail services further enhance the system’s
capabilities. Whether it's a student checking attendance, a faculty uploading marks, or an admin
generating reports, the architecture supports all with seamless interaction. This robust structure
allows for flexibility in adding new modules such as chatbots, analytics, or integration with other
educational tools in the future.

5. BREAKDOWN OF LAYERS

The Presentation Layer is the user interface through which all users interact with the portal. It is
built using HTML, CSS, and JavaScript, designed to be user-friendly and responsive. This layer
collects input from users—students, faculty, or admins—and presents the required output in
an understandable form. The Application Layer acts as the central processing unit, developed
using technologies like Java and Spring Boot. It handles all the business logic of the portal,
such as login validation, attendance calculation, assignment handling, and role-based data
access. This layer ensures that the correct functions are executed based on user requests. It also
ensures proper security protocols like token-based authentication and user session handling. The
Data Layer, typically backed by MySQL or MongoDB, stores all critical academic and
administrative information such as user credentials, marks, class schedules, and attendance
records. It communicates only with the Application Layer to prevent unauthorized direct access.
Together, these layers ensure the portal functions efficiently. The separation of concerns across layers
simplifies debugging, updating, and adding new features. It also ensures scalability.

LIMITATIONS

Despite the Student Academic Portal being a powerful and useful platform, it does come
with certain limitations that need to be acknowledged for future improvements. One of the primary
limitations is internet dependency—the portal cannot function without a stable internet connection,
which can be an issue in rural or low-network areas. Another limitation is device compatibility
and responsiveness; although designed to be responsive, the interface may not always display
perfectly across all mobile devices or older browsers. Additionally, the system may experience
performance issues under heavy load, especially during peak times like result announcements or
registration periods, unless hosted on a high-performance server. Data accuracy heavily relies on
manual entries by faculty or admins, which may lead to human error. Limited
personalization is another drawback—users might not be able to fully customize dashboards or
notification preferences according to their specific needs. Furthermore, while security
measures are in place, data breaches or cyber threats remain potential risks if not monitored
continuously. Finally, the learning curve for new users, especially those not tech-savvy, can
hinder immediate adoption without proper training. Recognizing these limitations is important to
continuously enhance the portal’s usability, performance, and reliability for all stakeholders
involved.

interactivity and responsive behavior. The built-in design system includes a predefined set of spacing, color
palette, font sizes, and more — ensuring consistency across your UI. Tailwind is also configurable via the
tailwind.config.js file, where you can define custom themes, extend color palettes, or override defaults to
better match your branding.

Another powerful feature is the JIT (Just-In-Time) compiler, which compiles only the classes you use,
making your final CSS file extremely small and optimized. Tailwind also integrates seamlessly with
component-based frameworks like React, Vue, Angular, or even traditional HTML projects. In React, for
example, you can apply Tailwind classes directly inside JSX elements, making styling fast and intuitive.

Tailwind promotes better development practices by reducing context-switching between CSS and HTML,
encouraging consistent class naming, and making components more self-contained. It’s ideal for teams
and solo developers who want full control over their UI without relying heavily on custom stylesheets or
bloated CSS files. It also works well with tools like DaisyUI, Headless UI, and Heroicons, which provide
pre-built UI components styled with Tailwind.

To sum up, Tailwind CSS is a modern, flexible, and efficient way to style your frontend applications. It
empowers developers to build sleek and responsive UIs with ease, reduces the need for writing custom CSS,
and fits perfectly with modern JavaScript frameworks like React.

1. DATABASE

1. MongoDB: MongoDB is a powerful, flexible, and scalable NoSQL database designed for modern
applications that require high performance, large-scale data handling, and real-time analytics. Unlike
traditional relational databases (like MySQL or PostgreSQL), which store data in tables and rows, MongoDB
stores data in a document-oriented format using BSON (Binary JSON) — a binary representation of JSON-
like documents. This schema-less structure allows for more flexibility, as each document in a collection
can have a different structure, making MongoDB ideal for handling unstructured or semi-structured data.

In MongoDB, data is organized into databases, which contain collections, and these collections consist of
documents. Each document is a JSON-like object with key-value pairs, making it very readable and intuitive
to work with, especially for JavaScript developers. This format closely mirrors the structure of objects in
modern programming languages, allowing seamless integration with backend frameworks like Node.js,
Express.js, and Spring Boot.

MongoDB offers powerful CRUD operations (Create, Read, Update, Delete) and supports advanced features
like indexing, aggregation, geospatial queries, and full-text search, which enable efficient querying and
data analysis. One of the key strengths of MongoDB is its horizontal scalability using sharding, which
allows the database to distribute data across multiple machines, making it suitable for handling massive
amounts of data with high availability and fault tolerance.

MongoDB also supports replication through replica sets, ensuring that data is always available
even in the event of hardware failure. This replication mechanism allows automatic failover
and recovery, which is crucial for mission-critical applications. Additionally, MongoDB provides
robust security features such as authentication, authorization, role-based access control, and
encryption, making it a secure choice for enterprise applications.

One of the most popular tools in the MongoDB ecosystem is MongoDB Atlas, a fully
managed cloud database service that simplifies deployment, monitoring, and scaling. It
integrates seamlessly with cloud platforms like AWS, Azure, and Google Cloud, enabling
developers to focus on application development without worrying about database maintenance.
MongoDB is a highly efficient and developer-friendly database solution that empowers modern
applications to manage complex and evolving datasets. Its flexibility, performance, and scalability
make it a top choice for real-time web applications, IoT systems, mobile apps, and data-driven
platforms. Whether you're building a simple blog or a large-scale enterprise system, MongoDB
adapts to your needs.

CHAPTER 4
TECHNOLOGY USED

4.1 Introduction
The implementation of a Face Detection Security System involves several hardware and software
technologies working together to ensure accurate, real-time detection and recognition of human
faces. This chapter provides a comprehensive overview of the key technologies used in the
development of the system, including the programming languages, frameworks, libraries, and tools.

4.2 Python Programming Language


Python is a high-level, interpreted programming language known for its readability, simplicity, and
extensive library support. It has become one of the most popular languages in the field of artificial
intelligence, machine learning, and computer vision due to its easy syntax and powerful capabilities.
Python is used in this project as the primary programming language due to the following advantages:

 Rich ecosystem of libraries (e.g., OpenCV, Dlib, TensorFlow)


 Cross-platform compatibility
 Rapid development and prototyping capabilities
 Strong support from the developer and academic communities

4.3 OpenCV (Open Source Computer Vision Library)


OpenCV is an open-source computer vision and machine learning software library. It provides a
vast collection of algorithms for real-time image and video processing. In the context of this project,
OpenCV is used for:

 Capturing video from a camera


 Performing real-time face detection
 Handling image pre-processing tasks such as resizing, grayscale conversion, and
histogram equalization

OpenCV supports multiple face detection methods including Haar Cascades and deep
learning-based detectors. Its performance and wide adoption make it an essential tool for face
detection applications.

4.4 Dlib Library
Dlib is a modern C++ toolkit with Python bindings that provides robust machine learning
algorithms and tools for creating complex software. Dlib is particularly known for its accurate and
efficient facial recognition implementation. Key features used in this project include:

 Facial landmark detection


 Face encoding (128-dimension feature vectors)
 Face comparison and matching

Dlib’s face recognition is based on ResNet (Residual Neural Network) architecture, which
enables it to generate highly discriminative features from facial images.
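A minimal sketch of how Dlib's 128-dimension encodings might be computed and compared is given below. The detector, shape predictor, and ResNet encoder calls are standard Dlib APIs; the local paths to the pre-trained model files (shape_predictor_68_face_landmarks.dat and dlib_face_recognition_resnet_model_v1.dat) and the 0.6 distance threshold are assumptions for illustration.

import dlib
import numpy as np

# Dlib's publicly distributed models; the local file paths are assumptions
detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("dlib_face_recognition_resnet_model_v1.dat")

def encode_faces(image_rgb):
    """Return one 128-d encoding (NumPy array) per face found in an RGB image."""
    encodings = []
    for box in detector(image_rgb, 1):
        shape = shape_predictor(image_rgb, box)
        descriptor = face_encoder.compute_face_descriptor(image_rgb, shape)
        encodings.append(np.array(descriptor))
    return encodings

def is_match(known_encoding, candidate_encoding, threshold=0.6):
    """Treat two faces as the same person when the Euclidean distance
    between their encodings is below the threshold (0.6 is a commonly
    used default for Dlib's ResNet model)."""
    return np.linalg.norm(known_encoding - candidate_encoding) < threshold

In the security system, the stored encodings of authorized users would be loaded from the database and each live encoding compared against them using a function like is_match above.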


4.5 Haar Cascade Classifier


Haar Cascades are machine learning-based classifiers introduced by Viola and Jones that detect
objects in images. In this system, they are used for detecting human faces from video frames in real
time. The classifier uses a cascade function trained from a large number of positive and negative
images to efficiently detect faces.

Although more modern methods like CNNs offer better accuracy, Haar Cascades are still widely
used for their speed and low computational cost, making them ideal for real-time applications on
limited hardware.

4.6 Local Binary Patterns Histograms (LBPH)


Local Binary Patterns Histograms (LBPH) is a simple yet powerful algorithm used for face
recognition. It works by the following steps (a short training and prediction sketch follows the list):

 Dividing the face image into small regions


 Calculating the LBP value for each pixel by comparing it to neighboring pixels
 Creating a histogram of all regions
 Concatenating histograms into a single feature vector
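A short, hedged sketch of LBPH training and prediction with OpenCV is shown below. It assumes the opencv-contrib-python package (which provides the cv2.face module); the image file names and label values are purely illustrative, and all face crops are assumed to be grayscale images of the same size.

import cv2
import numpy as np

# The LBPH recognizer is provided by the opencv-contrib-python package (cv2.face)
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Training data: grayscale face crops and their integer identity labels.
# File names and labels here are illustrative placeholders.
training_files = ["user1_01.png", "user1_02.png", "user2_01.png"]
faces = [cv2.imread(path, cv2.IMREAD_GRAYSCALE) for path in training_files]
labels = np.array([1, 1, 2])

recognizer.train(faces, labels)

# Predict the identity of a new face crop; a lower confidence value means a closer match
test_face = cv2.imread("unknown.png", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(test_face)
print(f"Predicted label: {label}, confidence (distance): {confidence:.2f}")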

4.7 FRONTEND TECH

4.7.1 React – Frontend Library: React is an open-source JavaScript library developed by


Facebook for building fast, interactive, and dynamic user interfaces, especially for single-page
applications (SPAs). At its core, React follows a component-based architecture, where the UI is
broken into reusable and isolated components, each managing its own state and logic. This
makes the code more modular, maintainable, and easy to debug. One of the standout features
of React is the Virtual DOM, a lightweight in-memory representation of the real DOM. When
a component’s state changes, React first updates the virtual DOM, then compares it to the
previous version using a process called diffing, and finally updates only the changed elements in
the real DOM. This ensures optimal performance and a smooth user experience.

React uses JSX (JavaScript XML), a syntax extension that allows developers to write HTML-
like code within JavaScript, making the code more readable and easier to understand. React is
declarative in nature, meaning you simply describe what you want the UI to look like, and React
takes care of the how. This differs from traditional imperative programming and results in
cleaner, more concise code. State management is handled through hooks like useState,
useEffect, and context APIs, or external libraries such as Redux, Zustand, or Recoil for more
complex state needs.

React also supports routing through libraries like React Router, allowing developers to build multi-page
apps without reloading the entire page. Another major advantage is its vast ecosystem and community
support. Thousands of third-party libraries, tools, and resources exist to speed up development, and
the React community is active and helpful.

React is a powerful tool for frontend developers looking to build modern, scalable, and
maintainable web applications. Whether you're creating a small interactive form or a full-fledged
enterprise dashboard, React provides the structure, speed, and flexibility to bring your UI ideas to
life efficiently.

4.7.2 Tailwind CSS – Utility-First CSS Framework: Tailwind CSS is a highly popular utility-first
CSS framework that allows developers to rapidly build modern, responsive, and
customizable user interfaces directly in their HTML or JSX code. Unlike traditional CSS
frameworks like Bootstrap that come with pre-styled components, Tailwind focuses on providing
low-level utility classes such as p-4, text-center, bg-blue-500, or rounded-lg, which can be
combined to create unique and flexible designs without writing custom CSS. This approach
encourages reusability and speeds up development by allowing developers to build custom UI
designs quickly and consistently.

One of Tailwind’s strongest features is mobile-first responsive design, where classes like sm:, md:, lg:
and xl: allow developers to apply different styles at different screen sizes. Tailwind also supports hover,
focus, dark mode, and other states using simple class naming conventions, which makes it easy to
implement, keeping development fast and smooth.


4.8 CLOUD HOSTING
4.8.1 Vercel – Frontend Deployment Platform: Vercel is a modern cloud platform
optimized for deploying frontend web applications with exceptional speed, scalability, and ease. It
was created by the team behind Next.js, and is widely used for deploying projects built with
frameworks like React, Vue, Angular, Svelte, and even static HTML/CSS/JS sites. One of the
standout features of Vercel is its zero-configuration deployment — developers can simply push their code to GitHub, GitLab, or Bitbucket, and Vercel
automatically builds and deploys it. This makes it perfect for solo developers, startups, and even large teams
looking for a fast and efficient CI/CD pipeline.

Vercel specializes in delivering content through a global Content Delivery Network (CDN), ensuring that your
frontend app loads quickly no matter where users are located. It supports features like instant rollbacks, custom
domains, preview deployments (for every pull request), serverless functions, and environment variables, all
from a clean, intuitive dashboard. Another benefit is that you can collaborate with your team in real-time by
sharing preview URLs, making it easier to test and iterate quickly.

The platform is especially powerful for apps built using Next.js, as it includes built-in support for features like
ISR (Incremental Static Regeneration), SSR (Server-Side Rendering), and API routes. However, it works
just as well with other static site generators or SPA frameworks. With Vercel, developers don’t need to worry
about managing servers, SSL certificates, or scaling traffic — it’s all taken care of automatically.

Vercel is an ideal solution for frontend developers who want fast deployments, easy integration with Git, and
performance that scales with their users. Whether it’s a portfolio, a blog, or a complex SPA, Vercel
simplifies the process and ensures blazing-fast load times.

4.8.2 Render – Backend Hosting Platform: Render is a powerful and easy-to-use cloud platform for
deploying backend applications, APIs, databases, static sites, and more. It provides a full-stack infrastructure with
minimal configuration, making it a go-to choice for developers who want to focus on code rather than
managing servers. Render supports a wide range of backend technologies including Node.js, Python
(Django/Flask), Ruby, Go, Rust, and Java (Spring Boot) — making it highly versatile for various project
needs.

One of the key features of Render is its automatic deployment from Git repositories. Just like Vercel, you can
connect your GitHub or GitLab repo, and Render will detect your backend language, install
dependencies, and deploy it in a live environment. It supports background workers, cron jobs, and web
services, which are essential for running backend operations and background tasks. Render also offers
managed PostgreSQL databases, so you can host your backend and database together with minimal setup.

Security and scalability are also a big plus — every service on Render gets a free HTTPS certificate,
automatic scaling, and built-in DDoS protection. For Spring Boot or Node.js applications, Render is
especially developer-friendly: just specify your start command and environment variables, and you're good to
go. You can even host your API backend on a custom domain with zero effort.

Another great feature is free tier hosting for small projects, which is perfect for student portfolios, prototypes, or

MVPs. Render offers real-time logs, metrics, and a clean dashboard to monitor service health. Compared
to traditional cloud providers like AWS or GCP, Render abstracts away most of the complexity
while still
offering the flexibility that developers need.

Render is an excellent backend hosting solution for developers who want to deploy APIs or full-
stack apps without the hassle of infrastructure management. It combines power and simplicity,
making it perfect for hosting anything from personal projects to production-grade services.

4.9 SECURITY MEASURES IN WEB APPLICATIONS

Security is one of the most critical aspects of any web application, whether it's frontend or
backend. Implementing strong security measures protects the application from unauthorized
access, data breaches, and various cyberattacks like XSS, SQL Injection, and CSRF. A secure
system not only protects sensitive user data but also builds trust and credibility among users.

Starting with the frontend, one major security measure is input validation. All forms and user
inputs should be validated both on the client side and server side to ensure data integrity and
prevent malicious input. Cross-Site Scripting (XSS) is a common attack vector where attackers
inject scripts into web pages viewed by others. To mitigate XSS, it's crucial to sanitize and
escape HTML, use frameworks like React, which automatically escapes content, and implement
Content Security Policy (CSP) headers.

HTTPS (SSL/TLS encryption) is another vital security step to protect data transmission between
the user and the server. It ensures that sensitive information like login credentials, emails, and
payment details are securely transmitted and not intercepted by attackers. Most hosting providers
like Vercel and Render provide free HTTPS certificates by default.

In the backend, authentication and authorization are key. Tools like JWT (JSON Web
Tokens) help implement secure user sessions. When a user logs in, a token is issued and must be
passed with every request to verify the user's identity. Tokens should be securely stored on the
client side (like in HTTP-only cookies or localStorage) and should be signed and encrypted to
prevent tampering.

Another important practice is password hashing. Instead of storing plain text passwords, use
strong hashing algorithms like bcrypt or argon2 to store hashed passwords. This ensures that
even if the database is compromised, user passwords remain unreadable. Additionally,
implementing role-based access control (RBAC) ensures that users only access features or data
relevant to their permissions.
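As an illustration of password hashing (shown here in Python with the bcrypt package, although the report's web stack is JavaScript-based), the sketch below hashes a password with a random salt and then verifies a candidate password against the stored hash; the example password string is arbitrary.

import bcrypt

def hash_password(plain_password: str) -> bytes:
    # gensalt() embeds a random salt and a work factor into the resulting hash
    return bcrypt.hashpw(plain_password.encode("utf-8"), bcrypt.gensalt())

def verify_password(plain_password: str, stored_hash: bytes) -> bool:
    # checkpw re-hashes the candidate with the stored salt and compares safely
    return bcrypt.checkpw(plain_password.encode("utf-8"), stored_hash)

# Example usage
hashed = hash_password("s3cret-pass")
print(verify_password("s3cret-pass", hashed))  # True
print(verify_password("wrong-pass", hashed))   # False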

Rate limiting and CAPTCHA are used to prevent brute force attacks and abuse of APIs.
Lastly, keeping dependencies up to date, using security headers like X-Frame-Options, X-
Content-Type-
Options, and Strict-Transport-Security, and regularly performing security audits are vital in
maintaining a secure web application.

CHAPTER 5
SYSTEM DESIGN

5.1 Introduction
The System Design chapter provides a detailed blueprint for implementing the Face Detection
Security System. It translates requirements and analysis into concrete architectural components,
interfaces, and data structures. The design ensures that all functional and non-functional
requirements are met, while considering scalability, maintainability, and security.

5.2 Overall Architecture


The proposed architecture follows a modular, layered approach consisting of four main layers:

1. Presentation Layer: Handles user interaction via GUI or web interface.


2. Application Layer: Coordinates core functionality including face detection, recognition,
and access control logic.
3. Data Layer: Manages persistent storage of face templates, user records, and logs.
4. Hardware Abstraction Layer: Interfaces with camera modules, actuators, and external
devices.
[ Presentation Layer ]

[ Application Layer ]
┌───────────────┐
│ Face          │
│ Detection     │
├───────────────┤
│ Face          │
│ Recognition   │
├───────────────┤
│ Access        │
│ Control       │
└───────────────┘

[ Data Layer ]
┌───────────────┐
│ SQLite DB     │
│ Face Encodings│
│ Access Logs   │
└───────────────┘

[ Hardware Abstraction Layer ]
┌───────────────┐
│ Camera        │
│ Actuator      │
└───────────────┘

5.3 Module Design


1. Face Detection Module

 Responsibility: Capture frames, detect face bounding boxes.


 Input: Video frames from camera.
 Process: Use OpenCV’s Haar Cascade or MTCNN for detection.
 Output: List of face region coordinates.

Class Diagram (simplified; a Python sketch follows below):


FaceDetector
├── load_model()
├── detect_faces(frame) → List<BoundingBox>
└── preprocess(frame)
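A possible Python sketch of this FaceDetector interface is given below; the Haar Cascade file and the detectMultiScale parameters are assumptions rather than fixed project settings.

import cv2

class FaceDetector:
    """Minimal sketch of the Face Detection Module using a Haar Cascade."""

    def __init__(self):
        self.model = None

    def load_model(self):
        # Cascade shipped with opencv-python; the choice of cascade is an assumption
        path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        self.model = cv2.CascadeClassifier(path)

    def preprocess(self, frame):
        # Haar detection expects a grayscale image
        return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    def detect_faces(self, frame):
        # Returns a list of (x, y, w, h) bounding boxes for detected faces
        gray = self.preprocess(frame)
        boxes = self.model.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return list(boxes)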

2. Face Recognition Module

 Responsibility: Encode detected faces, compare with database templates.


 Input: Cropped face images.
 Process: Compute 128-d face embedding via Dlib’s ResNet, compute Euclidean distance
to stored embeddings.
 Output: UserID or “Unknown”.

Sequence:

User → Camera → FaceDetector → FaceRecognition → Decision Engine

3. Access Control Module

 Responsibility: Grant/deny access based on recognition results.


 Integration: Sends signal to actuator, updates UI, logs event.

4. Database Module

 Responsibility: CRUD operations on users, encodings, and logs.


 Schema (a SQLite sketch follows this list):
o Users(UserID, Name, Encoding, Role)
o AccessLogs(LogID, UserID, Timestamp, Status, Reason)
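A minimal sketch of this schema using Python's built-in sqlite3 module is shown below; the database file name (fdss.db) and the decision to store face encodings as BLOBs are assumptions.

import sqlite3

conn = sqlite3.connect("fdss.db")  # database file name is an assumption
cur = conn.cursor()

# Users table: one row per enrolled person; the 128-d encoding is stored as a BLOB
cur.execute("""
CREATE TABLE IF NOT EXISTS Users (
    UserID   INTEGER PRIMARY KEY AUTOINCREMENT,
    Name     TEXT NOT NULL,
    Encoding BLOB NOT NULL,
    Role     TEXT
)
""")

# AccessLogs table: one row per access attempt, linked to Users by UserID
cur.execute("""
CREATE TABLE IF NOT EXISTS AccessLogs (
    LogID     INTEGER PRIMARY KEY AUTOINCREMENT,
    UserID    INTEGER,
    Timestamp TEXT DEFAULT CURRENT_TIMESTAMP,
    Status    TEXT,
    Reason    TEXT,
    FOREIGN KEY (UserID) REFERENCES Users(UserID)
)
""")

conn.commit()
conn.close()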

5. UI Module

 Responsibility: Provide screens for:


o Real-time monitoring
o User management (add/delete)
o Log viewing
 Technology: Tkinter (desktop) or Flask for web-based interface.

5.4 Data Flow Diagrams

5.4.1 Level 0: Context Diagram

[User] → (Face Detection System) → [Feedback: Access Granted/Denied]

5.4.2 Level 1: Detailed DFD


[Camera] → [D1: Capture & Preprocess] → [D2: Detect Faces] → [D3: Encode & Recognize]
→ [D4: Decision & Log] → [Database]


[Actuator]
5.5 Database Schema
Table: Users
    Columns: UserID (PK), Name, Encoding, Role
    Description: Stores registered user information

Table: AccessLogs
    Columns: LogID (PK), UserID (FK), Timestamp, Status, Reason
    Description: Records each access attempt

Entity-Relationship Diagram (Textual):

Users (1) ←── (N) AccessLogs

5.6 Interface Design

1. User Interfaces

 Login/Registration Screen: Capture user details, enroll new face.


 Dashboard: Display live video feed, detection overlays, status indicators.
 Log Viewer: Filterable list of access events.

2. System Interfaces

 Camera API: OpenCV VideoCapture wrapper.


 Actuator API: GPIO control logic for locking mechanism.

3. External Interfaces

 Network Interface (Optional): REST API endpoints for remote user management and log
retrieval (Flask-based); a minimal sketch follows below.
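A minimal sketch of one such Flask endpoint is given below; the route path, database file name, and result limit are assumptions.

from flask import Flask, jsonify
import sqlite3

app = Flask(__name__)

@app.route("/api/logs")
def get_access_logs():
    # Read recent access attempts from the AccessLogs table (fdss.db is an assumption)
    conn = sqlite3.connect("fdss.db")
    cur = conn.cursor()
    cur.execute(
        "SELECT LogID, UserID, Timestamp, Status, Reason "
        "FROM AccessLogs ORDER BY Timestamp DESC LIMIT 100"
    )
    rows = cur.fetchall()
    conn.close()
    logs = [
        {"LogID": r[0], "UserID": r[1], "Timestamp": r[2], "Status": r[3], "Reason": r[4]}
        for r in rows
    ]
    return jsonify(logs)

if __name__ == "__main__":
    app.run(port=5000)  # in deployment this would sit behind HTTPS, as noted in 5.7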

5.7 Security Design Considerations

 Data Encryption: Encrypt face encodings and logs at rest using AES (a sketch follows this list).
 Secure Communication: HTTPS for any network APIs.
 Anti-Spoofing: Liveness detection via eye-blink or challenge-response.
 Authentication & Authorization: Admin login to manage users.
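As one possible realization of the encryption-at-rest point above, the sketch below uses the cryptography package's Fernet recipe (AES-128 in CBC mode with an HMAC) to encrypt a serialized face encoding before storage; generating the key inline is for illustration only, and a real deployment would keep the key outside the code base.

from cryptography.fernet import Fernet
import numpy as np

# In practice the key would be generated once and stored securely, not in code
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_encoding(encoding: np.ndarray) -> bytes:
    # Serialize the 128-d float array to bytes, then encrypt it for storage
    return cipher.encrypt(encoding.astype(np.float64).tobytes())

def decrypt_encoding(token: bytes) -> np.ndarray:
    # Decrypt and restore the original NumPy array
    return np.frombuffer(cipher.decrypt(token), dtype=np.float64)

# Example usage
original = np.random.rand(128)
restored = decrypt_encoding(encrypt_encoding(original))
assert np.allclose(original, restored)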

5.8 Scalability and Performance

 Load Balancing: For multi-camera setups, distribute recognition tasks across threads or
separate devices.
 Caching: Cache recent face embeddings in memory to reduce database reads.
 Optimization: Use GPU-accelerated DNN modules in OpenCV or TensorFlow for faster
inference.


5.9 Summary
Chapter 5 outlined the detailed design of the Face Detection Security System, covering architectural
layers, module breakdowns, data flows, database schemas, interface definitions, and security
measures. This design lays the groundwork for implementation and validation in the subsequent
chapters.

Additionally, this architecture supports scalability and modular development, allowing different developers to work independently on the user interface and the recognition backend without conflicts. This decoupling enhances development speed, simplifies debugging, and allows future integration of features such as admin dashboards, real-time alerts, or notification services without disturbing the existing system. It also enables smooth integration with third-party services or a microservices architecture if the project grows.


5.2 System Architecture Overview


The architecture of the Face Detection Security System follows a modular layered design, where
each layer performs a specific role. The system is divided into four primary layers:

1. Presentation Layer: This is the user-facing interface. It allows users or administrators to interact with the system through a Graphical User Interface (GUI) or a web interface. It handles user input, displays camera feeds, provides feedback on access control (granted or denied), and enables administrative controls such as user registration and log viewing.
2. Application Logic Layer: This is the core processing layer that performs the main functions of the system, including:
o Capturing real-time video
o Detecting faces
o Extracting facial features
o Comparing detected features with stored data
o Making access decisions
3. Data Layer: This layer manages the storage and retrieval of data. It includes a database (SQLite) that stores user information, face encodings, and access logs. It ensures data consistency and supports CRUD (Create, Read, Update, Delete) operations.
4. Hardware Abstraction Layer: This layer interfaces with hardware components such as the camera (used for capturing real-time video) and the door actuator or electric lock (used for granting or denying physical access).

5.3 Module Design
Each system function is encapsulated into distinct modules to ensure modularity and ease of
development.

1. Face Detection Module

Purpose: This module captures video frames and identifies whether any human faces are present.

Technology Used: OpenCV with Haar Cascade Classifier or MTCNN (Multi-task Cascaded
Convolutional Networks).

Functionality:

 Converts images to grayscale to improve detection speed


 Scans for features resembling human faces
 Returns bounding boxes for detected face regions

Benefits:

 Fast detection in real-time environments


 Works well in controlled lighting

2. Face Recognition Module

Purpose: Compares a newly detected face with the registered faces in the system and
identifies the individual.

Technology Used: Dlib or LBPH (Local Binary Pattern Histogram) using OpenCV.

Functionality:

 Crops and resizes the face region


 Extracts a unique facial embedding or encoding
 Compares the encoding to known encodings using Euclidean distance
 Determines a match if the distance is below a defined threshold

Outcome: If a match is found, access is considered for approval. If not, access is denied.
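
A compact sketch of this matching step, assuming the face_recognition package (a Python wrapper around Dlib's ResNet encoder) and the commonly used 0.6 Euclidean-distance threshold; function and variable names are illustrative:

import numpy as np
import face_recognition

THRESHOLD = 0.6  # typical Dlib distance threshold; tune per deployment

def identify(face_image, known_encodings, known_ids):
    # Compute the 128-d embedding of the cropped face region.
    encodings = face_recognition.face_encodings(face_image)
    if not encodings:
        return "Unknown"
    # Euclidean distance to every enrolled encoding.
    distances = np.linalg.norm(np.array(known_encodings) - encodings[0], axis=1)
    best = int(np.argmin(distances))
    return known_ids[best] if distances[best] < THRESHOLD else "Unknown"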

3. Access Control Module

Purpose: Takes action based on the recognition result.

Functionality:

 If recognition is successful, triggers a relay to unlock the door


 If recognition fails, maintains the lock and optionally raises an alert
 Logs the event in the database for auditing
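
A hedged sketch of this behaviour on a Raspberry Pi, assuming a relay wired to GPIO pin 18 and the RPi.GPIO library; the pin number, unlock duration, and the log_event callback are illustrative assumptions:

import time
import RPi.GPIO as GPIO

RELAY_PIN = 18  # assumed wiring

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def handle_result(user_id, log_event):
    if user_id != "Unknown":
        GPIO.output(RELAY_PIN, GPIO.HIGH)  # energize relay: unlock the door
        time.sleep(5)                      # keep the door unlocked briefly
        GPIO.output(RELAY_PIN, GPIO.LOW)   # lock again
        log_event(user_id, "Granted", "")
    else:
        # Lock stays engaged; an alert could be raised here.
        log_event(None, "Denied", "Unknown user")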

4. Database Module

Technology Used: SQLite

Purpose: Stores the following:

 User data (user ID, name, encoded facial features)


 Access logs (timestamp, user ID, access result)

Structure:

 Users Table: Contains personal details and face encodings


 AccessLogs Table: Records each attempt to access the system

5. Interface Module (GUI/Web Interface)

Purpose: To allow administrators to interact with the system.

Functionality:

 Add or delete users


 View access logs
 Monitor live camera feed
 Update system settings


5.4 Data Flow Design


Data flows from hardware to software and back in the following pattern:

1. Camera captures an image →
2. Image is passed to the Detection Module →
3. If a face is detected, it is cropped and passed to the Recognition Module →
4. Recognition Module compares the face to stored encodings in the database →
5. Access Control Module decides whether to grant or deny access →
6. Log is updated and feedback is given via the actuator (door unlock or alert)

Note: If the face is not recognized or multiple faces are detected simultaneously, access is denied.
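
The flow above can be orchestrated in a single capture loop. This is only a structural sketch: detector, recognizer, access_control and logger stand for the modules described earlier, and their method names are assumed for illustration:

import cv2

def run(detector, recognizer, access_control, logger):
    cam = cv2.VideoCapture(0)                      # step 1: camera capture
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        faces = detector.detect_faces(frame)       # step 2: detection
        if len(faces) != 1:
            # Policy from the note above: deny when no face or multiple faces are present.
            logger.log(None, "Denied", "No face or multiple faces")
            continue
        x, y, w, h = faces[0]
        user_id = recognizer.identify(frame[y:y+h, x:x+w])   # steps 3-4: crop and match
        access_control.handle(user_id)                        # step 5: grant or deny
        logger.log(user_id, "Granted" if user_id != "Unknown" else "Denied", "")  # step 6
    cam.release()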

5.5 Database Design


Tables Used:

1. Users
o UserID (Primary Key)
o Name
o FaceEncoding (stored as BLOB or
serialized array)
o Role (admin/user)
2. AccessLogs
o LogID (Primary Key)
o UserID (Foreign Key)
o Timestamp
o AccessStatus (Granted/Denied)
o Reason (e.g., Unknown user, face not
detected)

Relationships: One-to-many between Users and AccessLogs

5.6 Interface Design


User Interfaces

 Login/Registration Interface: For enrolling new users


 Monitoring Dashboard: Shows live feed, recognized faces, and access
status
 Access Log Viewer: Table of logs that can be filtered by
date/user/status

System Interfaces

 Camera Interface: Captures and streams video frames



 Hardware Control Interface: Triggers relay or motor for access control
 Network Interface (optional): API endpoints for remote access, data sync
5.7 Security Design
To ensure the system is secure:

 Face data encryption: Encodings are encrypted before being stored in the database
 Secure access: The admin dashboard is password-protected
 Liveness Detection (optional): Prevent spoofing attempts that use photos or videos
 Data Integrity: Logs are timestamped and access-controlled
 Fail-safe: If recognition fails multiple times, the system locks access temporarily (a sketch of this lockout follows)
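
A minimal sketch of the fail-safe lockout just described; the attempt limit and lockout window are assumptions:

import time

MAX_FAILURES = 3        # assumed limit before lockout
LOCKOUT_SECONDS = 60    # assumed cool-down window

failures = 0
locked_until = 0.0

def attempt_allowed() -> bool:
    # Deny all recognition attempts while the lockout window is active.
    return time.time() >= locked_until

def record_result(success: bool):
    global failures, locked_until
    if success:
        failures = 0
        return
    failures += 1
    if failures >= MAX_FAILURES:
        locked_until = time.time() + LOCKOUT_SECONDS   # temporary lockout
        failures = 0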

5.8 Scalability Considerations


Although designed for small-scale deployments (e.g., homes, offices), the system can
scale:

 Horizontal scaling: Multiple cameras and processors for different areas
 Cloud integration: Store and compare face encodings via a cloud-based API
 Load balancing: For large user bases, recognition can be distributed to GPUs

5.9 Summary
This chapter elaborated on the detailed design aspects of the Face Detection Security System. The
design is modular and scalable, and emphasizes performance, reliability, and security. By clearly
defining modules, data flows, and interfaces, the system becomes easier to implement, test, and
maintain. The design sets a solid foundation for the next stages of system development.


The implementation of a Face Detection Security System involves several hardware and
software technologies working together to ensure accurate, real-time detection and recognition
of human faces. This chapter provides a comprehensive overview of the key technologies used
in the development of the system, including the programming languages, frameworks, libraries,
and tools.

2. Python Programming Language


Python is a high-level, interpreted programming language known for its readability,
simplicity, and extensive library support. It has become one of the most popular languages in
the field of artificial intelligence, machine learning, and computer vision due to its easy
syntax and powerful capabilities. Python is used in this project as the primary programming
language due to the following advantages:

 Rich ecosystem of libraries (e.g., OpenCV, Dlib, TensorFlow)


 Cross-platform compatibility
 Rapid development and prototyping capabilities
 Strong support from the developer and academic communities

3. OpenCV (Open Source Computer Vision Library)


OpenCV is an open-source computer vision and machine learning software library. It provides a
vast collection of algorithms for real-time image and video processing. In the context of this
project, OpenCV is used for:

 Capturing video from a camera


 Performing real-time face detection
 Handling image pre-processing tasks such as resizing, grayscale conversion, and
histogram equalization

OpenCV supports multiple face detection methods including Haar Cascades and deep
learning-based detectors. Its performance and wide adoption make it an essential tool for
face detection applications.

4. Dlib Library
Dlib is a modern C++ toolkit with Python bindings that provides robust machine learning
algorithms and tools for creating complex software. Dlib is particularly known for its accurate
and efficient facial recognition implementation. Key features used in this project include:

 Facial landmark detection


 Face encoding (128-dimension feature vectors)
 Face comparison and matching

Dlib’s face recognition is based on ResNet (Residual Neural Network) architecture, which
enables it to generate highly discriminative features from facial images.

5. Haar Cascade Classifier



Haar Cascades are machine learning-based classifiers introduced by Viola and Jones that
detect objects in images. In this system, they are used for detecting human faces from video
frames in real time. The classifier uses a cascade function trained from a large number of
positive and negative images to efficiently detect faces.

Although more modern methods like CNNs offer better accuracy, Haar Cascades are still widely
used for their speed and low computational cost, making them ideal for real-time applications on
limited hardware.

6. Local Binary Patterns Histograms (LBPH)


Local Binary Patterns Histograms (LBPH) is a simple yet powerful algorithm used
for face recognition. It works by:

 Dividing the face image into small regions


 Calculating the LBP value for each pixel by comparing it to neighboring pixels
 Creating a histogram of all regions
 Concatenating histograms into a single feature vector

LBPH is resistant to lighting variations and performs well in controlled environments. It is often used in systems where deep learning models are not feasible due to hardware constraints.


CHAPTER 6:

TRAINING AND TESTING


4.2 TRAINING IN OPENCV
In OpenCV, training refers to providing a recognizer algorithm with training data to learn from. The trainer uses the same algorithm (LBPH) to convert the image cells to histograms, computes the values of all cells, and obtains feature vectors by concatenating the histograms. Images are processed with an ID attached so that they can be classified.
Input images are classified using the same process and compared with the dataset, and a distance is obtained. By setting a threshold, it can be identified whether the face is known or unknown. Eigenface and Fisherface compute the dominant features of the whole training set, while LBPH analyses them individually.
To do so, a dataset is created first. You can either create your own dataset or start with one of the available face databases:
• Yale Face Database
• AT & T Face Database
The .xml or .yml configuration file is built from the features extracted from your dataset with the help of the FaceRecognizer class and is stored in the form of feature vectors.
4.3 TRAINING THE CLASSIFIERS
OpenCV enables the creation of XML files to store features extracted from datasets using the FaceRecognizer class. The stored images are imported, converted to grayscale and saved, together with their IDs, in two lists with matching indexes. FaceRecognizer objects are created using the FaceRecognizer class. Each recognizer can take in the parameters described below.

createEigenFaceRecognizer()

1. Takes in the number of components for the PCA for creating Eigenfaces. The OpenCV documentation mentions that 80 can provide satisfactory reconstruction capabilities.


2. Takes in the threshold for recognising faces. If the distance to the likeliest Eigenface is above this threshold, the function will return -1, which can be used to state that the face is unrecognisable.

createFisherFaceRecognizer()

1. The first argument is the number of components for the LDA for the creation of Fisherfaces. The OpenCV documentation suggests keeping it at 0 if uncertain.
2. Similar to the Eigenface threshold: -1 is returned if the threshold is passed.

createLBPHFaceRecognizer()

1. The radius from the centre pixel to build the local binary pattern.
2. The number of sample points to build the pattern. Having a considerable number will slow down the computation.
3. The number of cells to be created in the X axis.
4. The number of cells to be created in the Y axis.
5. A threshold value similar to Eigenface and Fisherface. If the threshold is passed, the object returns -1.

Recognizer objects are created, and the images are imported, resized, converted into NumPy arrays and stored in a vector. The ID of each image is obtained by splitting the file name and is stored in another vector. Using the train(NumpyImages, IDs) call, all three objects are trained. It must be noted that resizing the images is required only for Eigenface and Fisherface, not for LBPH. The configuration model is saved as XML using the save(FileName) function. The creation of these three recognizer objects with their parameters is sketched below.
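
Recent OpenCV versions (with the opencv-contrib-python package) expose these recognizers under cv2.face, whereas OpenCV 2.4 used cv2.createEigenFaceRecognizer() and friends; the threshold values below are illustrative:

import cv2

# Requires the opencv-contrib-python package (cv2.face module).
eigen  = cv2.face.EigenFaceRecognizer_create(num_components=80, threshold=5000.0)
fisher = cv2.face.FisherFaceRecognizer_create(num_components=0, threshold=3500.0)
lbph   = cv2.face.LBPHFaceRecognizer_create(radius=1, neighbors=8,
                                            grid_x=8, grid_y=8, threshold=100.0)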


4.4 .train() FUNCTION


Trains a FaceRecognizer with the given data and associated labels.

Parameters:
 src: The training images, that is, the faces you want to learn. The data has to be given as a vector<Mat>.
 labels: The labels corresponding to the images, given as a vector<int> (a NumPy integer array in Python).

4.5 CODE
Given below is the code for creating a .yml file, that is, the configuration model that stores the features extracted from the dataset using the FaceRecognizer class. It is stored as a training data file inside a folder named 'recognizer'.

DATASET:
This is the code that is used to create a dataset. It turns on the camera and takes a number of pictures over a few seconds. Given below is the code for face_dataset.py.

Figure 4.1: Code snippet for the dataset


Figure 4.2: Face_dataset.py
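
The project's actual script is shown in Figures 4.1 and 4.2; the following is a minimal sketch of what such a face_dataset.py typically looks like. The user-ID prompt, the file-naming pattern and the sample count of 20 are assumptions:

import os
import cv2

face_id = input("Enter user ID: ")
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cam = cv2.VideoCapture(0)
os.makedirs("dataset", exist_ok=True)

count = 0
while count < 20:                                  # assumed number of samples per user
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        count += 1
        # Save the cropped grayscale face; the user ID is embedded in the file name.
        cv2.imwrite(f"dataset/User.{face_id}.{count}.jpg", gray[y:y+h, x:x+w])
    cv2.imshow("Capturing", frame)
    if cv2.waitKey(100) & 0xFF == 27:              # Esc stops capture early
        break

cam.release()
cv2.destroyAllWindows()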


OUTPUT:
After running the dataset code, we get a number of pictures in a folder named dataset. These photos will then be used for training; the more pictures, the greater the accuracy of the trainer.

Figure 4.3: Example of the script storing the dataset

Figure 4.4: Another example


TRAINING:
This is the code that is used to train the recognizer and produce the .yml training file.

Figure 4.5: Training the dataset

Figure 4.6: Training the dataset after importing
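
The real training code appears in Figures 4.5 and 4.6; below is a hedged sketch of the same idea. The dataset folder, the file-naming pattern and the output file name are assumptions carried over from the dataset sketch above:

import os
import cv2
import numpy as np

recognizer = cv2.face.LBPHFaceRecognizer_create()

faces, ids = [], []
for name in os.listdir("dataset"):
    if not name.endswith(".jpg"):
        continue
    # File names look like User.<ID>.<count>.jpg, so the ID is the second field.
    user_id = int(name.split(".")[1])
    img = cv2.imread(os.path.join("dataset", name), cv2.IMREAD_GRAYSCALE)
    faces.append(img)
    ids.append(user_id)

recognizer.train(faces, np.array(ids))
os.makedirs("recognizer", exist_ok=True)
recognizer.write("recognizer/trainingData.yml")   # write() saves the trained model to a .yml file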


OUTPUT:
This is the file that gets created after running the training code. It takes all the images from the dataset created previously and produces the trained .yml configuration file, which is then used for recognition.

Figure 4.7: The trainer xml file


4.6 FACERECOGNIZER CLASS


All face recognition models in OpenCV are derived from the abstract base class FaceRecognizer, which provides unified access to all face recognition algorithms in OpenCV.

Figure 4.8: Flowchart Representation

It does not look like a powerful interface at first sight, but every FaceRecognizer is an Algorithm, so you can easily get/set all model internals (if allowed by the implementation). Algorithm is a relatively new OpenCV concept, which has been available since the 2.4 release.

Algorithm provides the following features for all derived classes:

• So-called "virtual constructor": each Algorithm derivative is registered at program start, so you can get the list of registered algorithms and create an instance of a particular algorithm by its name (see Algorithm::create). If you plan to add your own algorithms, it is good practice to add a unique prefix to your algorithms to distinguish them from other algorithms.
• Setting/retrieving algorithm parameters by name. If you have used the video capturing functionality from the OpenCV highgui module, you are probably familiar with cv::cvSetCaptureProperty, cv::cvGetCaptureProperty, VideoCapture::set and VideoCapture::get. Algorithm provides similar methods where, instead of integer ids, you specify the parameter names as text strings. See Algorithm::set and Algorithm::get for details.
• Reading and writing parameters from/to XML or YAML files. Every Algorithm derivative can store all its parameters and then read them back. There is no need to re-implement this each time.



Moreover, every FaceRecognizer supports:

• Training of a FaceRecognizer with FaceRecognizer::train on a given set of images (your face database!).
• Prediction of a given sample image, that means a face. The image is given as a Mat.
• Loading/saving the model state from/to a given XML or YAML file.
• Setting/getting labels info, stored as a string. String labels info is useful for keeping the names of the recognized people.
4.7 LBPH RECOGNIZER
The approach used in this project is the LBPH approach, which uses the following algorithm to compute the feature vectors of the provided images in the dataset.
Local Binary Patterns (LBP) is a type of visual descriptor used for classification in computer vision. LBP was first described in 1994 and has since been found to be a powerful feature for texture classification. It has further been determined that when LBP is combined with the Histogram of Oriented Gradients (HOG) descriptor, it improves the detection performance considerably on some datasets. As LBP is a visual descriptor, it can also be used for face recognition tasks, as can be seen in the following step-by-step explanation.
In this section, a step-by-step explanation of the LBPH algorithm is given:
1. First of all, we need to define the parameters (radius, neighbours, grid x and grid y) using the Parameters structure from the LBPH package. Then we need to call the Init function, passing the structure with the parameters. If we do not set the parameters, the default parameters are used, as explained in the Parameters section.
2. Secondly, we need to train the algorithm. To do that, we just need to call the Train function, passing a slice of images and a slice of labels by parameter. All images must have the same size. The labels are used as IDs for the images, so if you have more than one image of the same texture/subject, the labels should be the same.
3. The Train function will first check whether all images have the same size. If at least one image does not have the same size, the Train function returns an error and the algorithm is not trained.
4. Then, the Train function applies the basic LBP operation by changing each pixel based on its neighbours, using a default radius defined by the user. The basic LBP operation can be seen in the following image (using 8 neighbours and radius equal to 1).

Figure 4.9: referencing and assigning pixel value

5. After applying the LBP operation we extract the histograms of each image
based on the number of grids (X and Y) passed by parameter. After extracting
the histogram of each region, we concatenate all histograms and create a new
one which will be used to represent the image.

Figure 4.10: LBP result


The images, labels, and histograms are stored in a data structure so we can
compare all of it to a new image in the Predict function.
Now, the algorithm is already trained and we can Predict a new image.
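
To make steps 4 and 5 concrete, the sketch below computes the basic 3x3 LBP code for each pixel and the per-cell histograms with NumPy. It is an illustrative implementation of the operation described above, not the project's code:

import numpy as np

def lbp_image(gray):
    # 8 neighbours, radius 1: compare each pixel with its 3x3 neighbourhood.
    padded = np.pad(gray.astype(np.int32), 1, mode="edge")
    center = padded[1:-1, 1:-1]
    code = np.zeros_like(center)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:padded.shape[0] - 1 + dy, 1 + dx:padded.shape[1] - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit   # 1 if neighbour >= centre pixel
    return code

def lbph_descriptor(gray, grid_x=8, grid_y=8):
    # Split the LBP image into grid_x * grid_y cells and concatenate the 256-bin histograms.
    code = lbp_image(gray)
    h, w = code.shape
    hists = []
    for gy in range(grid_y):
        for gx in range(grid_x):
            cell = code[gy * h // grid_y:(gy + 1) * h // grid_y,
                        gx * w // grid_x:(gx + 1) * w // grid_x]
            hists.append(np.histogram(cell, bins=256, range=(0, 256))[0])
    return np.concatenate(hists)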

To predict a new image, we just need to call the Predict function, passing the image as a parameter. The Predict function will extract the histogram from the new image, compare it to the histograms stored in the data structure, and return the label and distance corresponding to the closest histogram if no error has occurred. Note: it uses the Euclidean distance as the default metric to compare the histograms. The closer the distance is to zero, the greater the confidence.

The LBPH package provides the following metrics to compare two histograms H1 and H2 with n bins each (given here in their standard forms):

Chi-Square: D = Σ_i (H1(i) - H2(i))^2 / H1(i)   (Equation 1)

Euclidean Distance: D = sqrt( Σ_i (H1(i) - H2(i))^2 )   (Equation 2)

Normalized Euclidean Distance: D = sqrt( Σ_i (H1(i) - H2(i))^2 / n )   (Equation 3)

Absolute Value: D = Σ_i |H1(i) - H2(i)|   (Equation 4)

Parameters:

 Radius: The radius used for building the Circular Local Binary Pattern. Default
value is 1.

 Neighbours: The number of sample points to build a Circular Local Binary Pattern
from. Keep in mind: the more sample points you include, the higher the
computational cost. Default value is 8.

 GridX: The number of cells in the horizontal direction. The more cells, the
finer the grid, the higher the dimensionality of the resulting feature vector.
Default value is 8.

 GridY: The number of cells in the vertical direction. The more cells, the finer the
grid, the higher the dimensionality of the resulting feature vector. Default
value is 8.

4.8 predict() Function


Predicts a label and associated confidence (e.g. distance) for a given input image.

Parameters:
 src: Sample image to get a prediction from.
 label: The predicted label for the given image.
 confidence: Associated confidence (e.g. distance) for the predicted label.

The suffix const means that prediction does not affect the internal model state, so the method can be safely called from within different threads.


4.9 Code
Given below is the face recognition script that reads the data from the .yml training file mentioned before and uses the .predict() function to obtain a label and confidence value, recognize the face of the individual and display their name alongside their face. The script uses data that has been trained with images of the students working on this project.

Figure 4.11: Functioning


Figure 4.12: Other snippets

Figure 4.13: Face_recognition.py
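
The full script appears in Figures 4.11 to 4.13. The sketch below shows the typical shape of such a face_recognition.py: load the trained LBPH model, detect faces with the Haar cascade, predict an ID and confidence, and overlay the name. The model path, the names dictionary and the confidence cut-off are assumptions:

import cv2

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read("recognizer/trainingData.yml")     # assumed path from the training sketch
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
names = {1: "User One"}                            # illustrative ID-to-name mapping

cam = cv2.VideoCapture(0)
while True:
    ok, frame = cam.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        label, confidence = recognizer.predict(gray[y:y+h, x:x+w])
        # Lower distance means a better match; 70 is an assumed cut-off.
        text = names.get(label, "Unknown") if confidence < 70 else "Unknown"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, text, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Face Recognition", frame)
    if cv2.waitKey(1) & 0xFF == 27:                # Esc to quit
        break

cam.release()
cv2.destroyAllWindows()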


OUTPUT

Figure 4.14


4.10 APPLICATIONS

• Security: Face Recognition can help in developing security measures, that is


unlocking of a safe using facial recognition.

• Attendance Systems: Face Recognition can be used to train a set of users in


order to create and implement an automatic attendance system that recognizes the face
of the individual and marks their attendance.

• Access: Face Detection can be used to access sensitive information like your bank
account and it can also be used to authorize payments.

• Mobile Unlocking: This feature has taken the mobile phone industry by a storm and
almost every smart phone manufacturing company has their flagship smartphones
being unlocked using face recognition. Apple’s FaceID is an excellent example.

• Law Enforcement: Face detection and recognition can assist law enforcement agencies by matching the face of a suspect against databases of known offenders and by locating and tracking persons of interest across camera footage.

• Healthcare: Face recognition and detection can support the healthcare sector, for example by identifying patients and helping monitor visible indicators such as pain or fatigue from facial expressions.

CHAPTER 7
FUTURE SCOPE AND CHALLENGES

7.1 Future Scope

7.1.1 Government/ Identity Management: Governments all around the world are using face recognition
systems to identify civilians. America has one of the largest face databases in the world, containing
data of about 117 million people.

7.1.2 Emotion & Sentiment Analysis: Face detection and recognition have brought us closer to automated psyche evaluation, as systems nowadays can judge emotions frame by frame in order to evaluate a person's state of mind.

7.1.3 Authentication Systems: Various devices such as mobile phones and even ATMs work using facial recognition, making access and verification quicker and hassle-free.

7.1.4 Full Automation: This technology helps us become fully automated as there is very little to zero
amount of effort required for verification using facial recognition.

7.1.5 High Accuracy: Face detection and recognition systems have now reached very high accuracy, can be trained using very small datasets, and their false acceptance rates have dropped significantly.

7.2 Limitations
7.2.1 Data Storage: Extensive data storage is required for creating, training and maintaining big
face databases which is not always feasible.

7.2.2 Computational Power: The requirement of computational power also increases with increase in
the size of the database. This becomes financially out of bounds for smaller organizations.

7.2.3 Camera Angle: The relative angle of the target’s face with the camera impacts the recognition rate
drastically. These conditions may not always be suitable, therefore creating a major drawback.


7.3 Challenges
7.3.1 Privacy Concerns: One of the biggest challenges is addressing privacy concerns. People may be
uncomfortable with their facial data being stored and used, and there are legal considerations around data
protection and consent.

7.3.2 Accuracy: While face detection technology has improved significantly, it's not 100% accurate. Factors
like lighting, angle, facial expression, and changes in appearance can affect accuracy. False positives or
negatives can have serious implications in a security context.

7.3.3 Bias: Studies have shown that some face detection systems have bias, performing less accurately for
certain demographic groups. This is a significant issue that needs to be addressed.

7.3.4 Cost: Implementing a face detection system can be expensive. It requires advanced technology and
potentially significant infrastructure changes, which may not be feasible for all organizations.

7.3.5 Integration with Existing Systems: Integrating the face detection system with existing security
systems can be complex and may require significant time and resources.

7.3.6 Reliance on Technology: Like any technology-based system, face detection systems are vulnerable to
technical issues, system failures, and cyber attacks.

Challenge: Face detection on a live feed is an expensive task and needs a considerable amount of processing
resources. Apart from face detection, our system also needs to communicate with the cameras and receive
and process the live feed. The final solution will have multiple cameras, hence the challenge becomes multi-
fold.
Solution: In order to process the live video feed from multiple cameras, we would need to design a multithreaded system with built-in redundancy. Since video analysis is an expensive task, we would need a high-end server (and a backup server) to complete it. The processor, buffer size, RAM and other parameters of the server will be defined once we know the number of cameras that we would be using. We would be designing a test plan for functional testing, load testing and benchmarking of response times in order to finalize the ideal hardware and software requirements. Alternatively, since the solution does not need to provide live match results, we could store the video on disk and process it offline; this decision will be taken once we get the results from our benchmarking tests. A sketch of the multithreaded capture approach is given below.
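
A minimal sketch of the per-camera threading idea, assuming one capture thread per camera pushing frames into a shared queue that recognition workers consume; camera indices, queue size and worker count are illustrative:

import queue
import threading
import cv2

frames = queue.Queue(maxsize=100)   # shared buffer between capture and recognition

def capture(camera_index):
    cam = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cam.read()
        if ok:
            frames.put((camera_index, frame))

def recognize_worker(process_frame):
    while True:
        camera_index, frame = frames.get()
        process_frame(camera_index, frame)   # placeholder for the detection/recognition pipeline

for idx in (0, 1):                           # two cameras, as an example
    threading.Thread(target=capture, args=(idx,), daemon=True).start()
for _ in range(2):                           # two recognition workers
    threading.Thread(target=recognize_worker, args=(print,), daemon=True).start()

threading.Event().wait()                     # keep the main thread alive while workers run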


CHAPTER 8
CONCLUSION

Facial Detection and Recognition systems are gaining a lot of popularity these days. Most of the
flagship smartphones of major mobile phone manufacturing companies use face recognition as the
means to provide access to the user.
This project report explains the implementation of face detection and face recognition using OpenCV with Python and also lays out the basic information that is needed to develop a face detection and face recognition system. The goal of increasing the accuracy of this project will always remain constant, and new configurations and different algorithms will be tested to obtain better results. In this project, the approach we used was that of Local Binary Pattern Histograms, which are part of the FaceRecognizer class of OpenCV.
This project is very innovative and novel as something like this has not been successfully implemented as
yet. Many attempts have been made in countries like Russia and China on surveillance of people in public
places but none have been focused on or successfully been able to capture people within a vehicle.

Most of the successful face recognition projects have been carried out in a controlled environment like at an
airport terminal/counter or retail outlets. None of these have the inherent challenges that this project has.

At the end of this Phase we would have a system that consists of a High End IP Camera that sends live feed
securely via TCP/IP using SSL to the Server. The Server will be able to store this feed in video format (eg.,
MPEG 4 format). We would also be testing different face recognition algorithms with datasets that contain
high distractors and YouTube videos and would have a report on the performance of each of these
algorithms. By the end of this phase, we would also have defined the top 3 face detection algorithms that
would meet our project requirements.

The computational models, which were implemented in this project, were chosen after extensive research,
and the successful testing results confirm that the choices made by the researcher were reliable. The system
with manual face detection and automatic face recognition did not have a recognition accuracy over 90%,
due to the limited number of eigenfaces that were used for the PCA transform. This system was tested under
very robust conditions in this experimental study, and it is envisaged that real-world performance will be far more accurate. The fully automated frontal view face detection system displayed virtually perfect accuracy
and in the researcher's opinion further work need not be conducted in this area. The fully automated face
detection and recognition system was not robust enough to achieve a high recognition accuracy. The only
reason for this was the face recognition subsystem did not display even a slight degree of invariance to scale,
rotation or shift errors of the segmented face image. This was one of the system requirements identified in an earlier section. However, if some sort of further processing, such as an eye detection technique, was implemented
to further normalise the segmented face image, performance will increase to levels comparable to the
manual face detection and recognition system. Implementing an eye detection technique would be a minor
extension to the implemented system and would not require a great deal of additional research. All other
implemented systems displayed commendable results and reflect well on the deformable template and
Principal Component Analysis strategies. The most suitable real-world applications for face detection and
recognition systems are for mugshot matching and surveillance. There are better techniques such as iris or
retina recognition and face recognition using the thermal spectrum for user access and user verification
applications since these need a very high degree of [Link] real-time automated pose invariant face
detection and recognition system proposed in chapter seven would be ideal for crowd surveillance
applications. If such a system were widely implemented its potential for locating and tracking suspects for
law enforcement agencies is immense. The implemented fully automated face detection and recognition
system (with an eye detection system) could be used for simple surveillance applications such as ATM user
security, while the implemented manual face detection and automated recognition system is ideal for
mugshot matching. Since controlled conditions are present when mugshots are gathered, the frontal view
face recognition scheme should display a recognition accuracy far better than the results, which were
obtained in this study, which was conducted under adverse conditions. Furthermore, many of the test
subjects did not present an expressionless, frontal view to the system. They would probably be more
compliant when a 6'5'' policeman is taking their mugshot! In mugshot matching applications, perfect
recognition accuracy or an exact match is not a requirement. If a face recognition system can reduce the
number of images that a human operator has to search through for a match from 10000 to even a 100, it
would be of incredible practical use in law enforcement. The automated vision systems implemented in this
thesis did not even approach the performance, nor were they as robust as a human's innate face recognition
system. However, they give an insight into what the future may hold in computer vision.

