
NAVIGATING THE SCREEN THROUGH EYEBALL

Major Project Report submitted in partial fulfillment of the requirements for the award of
the Degree of B.E in Computer Science and Engineering
By
Asna Muskan 160620733066
Samiya Ashraf Khan 160620733068
Fiza Abdul Aziz 160620733076
Under the Guidance of
Mrs. M. Thejaswee
Assistant Professor
Department of Computer Science & Engineering

Department of Computer Science and Engineering


Stanley College of Engineering & Technology for Women
(Autonomous)
Chapel Road, Abids, Hyderabad – 500001
(Affiliated to Osmania University, Hyderabad, Approved by AICTE, Accredited by NBA
& NAAC with A Grade)

2024

Stanley College of Engineering & Technology for
Women
(Autonomous)
Chapel Road, Abids, Hyderabad – 500001
(Affiliated to Osmania University, Hyderabad, Approved by AICTE, Accredited by
NBA & NAAC with A Grade)

CERTIFICATE
This is to certify that the major project report entitled "Navigating the Screen through Eyeball", being submitted by
Asna Muskan 160620733066
Samiya Ashraf Khan 160620733068
Fiza Abdul Aziz 160620733076
in partial fulfillment for the award of the Degree of Bachelor of Engineering in Computer
Science & Engineering to the Osmania University, Hyderabad is a record of bonafide work
carried out under my guidance and supervision. The results embodied in this project report
have not been submitted to any other University or Institute for the award of any Degree or
Diploma.

Guide: Mrs. M. Thejaswee, Assistant Professor, Dept. of CSE

Head of the Department: Dr. Y V S S Pragathi, Prof. & HoD, Dept. of CSE

Project Coordinator: Dr. D Radhika, Assistant Professor, Dept. of CSE

External Examiner
DECLARATION

We hereby declare that the major project work entitled Navigating the Screen through Eyeball, submitted to the Osmania University, Hyderabad, is a record of original work done by us. This project work is submitted in partial fulfilment of the requirements for the award of the degree of B.E. in Computer Science and Engineering.

Asna Muskan 160620733066

Samiya Ashraf Khan 160620733068

Fiza Abdul Aziz 160620733076

ACKNOWLEDGEMENT

Firstly, we are grateful to The Almighty God for enabling us to complete this Major Project. We pay our respects and love to our parents and all our family members and friends for their love and encouragement throughout our career.

We wish to express our sincere thanks to Sri. Kodali Krishna Rao, Correspondent and Secretary, Stanley College of Engineering & Technology for Women, for providing us with all the necessary facilities.

We place on record, our sincere gratitude to Prof. Satya Prasad Lanka, Principal,
for his constant encouragement.

We deeply express our sincere thanks to our Head of the Department, Prof. Y V S S Pragathi,
for encouraging and allowing us to present the Major Project on the topic Navigating the
screen through eyeball at our department premises for the partial fulfillment of the
requirements leading to the award of the B.E. degree.

It is our privilege to express sincere regards to our project guide Mrs. M. Thejaswee for the
valuable inputs, able guidance, encouragement, whole-hearted co-operation and constructive
criticism throughout the duration of our project.

We take this opportunity to thank all our faculty, who have directly or indirectly helped our
project. Last but not least, we express our thanks to our friends for their co-operation and
support.

ABSTRACT

Introducing an innovative Human-Computer Interaction system designed to empower individuals facing physical challenges in using a traditional computer mouse. This system leverages eye gaze tracking, incorporating features like blink, gestures, and gaze to facilitate seamless computer control. By utilizing a webcam and Python, it ensures a reliable, mobile, and user-friendly eye control solution.

The implementation focuses on simplicity and convenience, aligning with natural human habits. The system enables users to control the cursor on the screen by tracking the movement of their eyes, specifically the center of the pupil. Through the use of OpenCV for pupil detection, the system enhances both the accuracy and the accessibility of everyday computer control.

Keywords: Human-Computer Interaction, Physical challenges, Eye gaze tracking, Webcam, Python, User-friendly, Eye control, OpenCV, Pupil detection

Table of Contents

1. Introduction 1
1.1 About Project 2
1.2 Objectives of the Project 3
1.3 Scope of the Project 3
1.4 Advantages 4
1.5 Disadvantages 4
1.6 Applications 5
1.7 Hardware and Software Requirements 5
2. Literature Survey 7
2.1 Existing System 8
2.2 Proposed System 10
3. Proposed Architecture 12
4. Implementation 24
4.1 Algorithm 25
4.2 Code Implementation 26
5. Results 32
6. Conclusion 40
7. Future Scope 42
8. References 44
List of Figures

1. Fig 3.1: Block Diagram. 13


2. Fig 3.2: Flowchart Diagram. 14
3. Fig 3.3: Class Diagram. 15
4. Fig 3.4: Object Diagram. 16
5. Fig 3.5: Use-Case Diagram. 17
6. Fig 3.6: Component Diagram. 18
7. Fig 3.7: Deployment Diagram. 19
8. Fig 3.8: Sequence Diagram. 20
9. Fig 3.9: State Diagram. 21
10. Fig 3.10: Activity Diagram. 22
11. Fig 3.11: System Architecture Diagram. 23
12. Fig 5.1: Opening of the camera using OpenCV. 33
13. Fig 5.2: Detection of landmark. 34
14. Fig 5.3: Mapping of landmarks. 35
15. Fig 5.4: Detection of the first iris. 36
16. Fig 5.5: Cursor movement through Eyeball. 37
17. Fig 5.6: Navigating through Eyeball. 38
18. Fig 5.7: Before Zooming Operation. 38
19. Fig 5.8: After Zooming Operation. 39

CHAPTER 1

INTRODUCTION

1.1 About Project

Nowadays, personal computers play a large part in our everyday lives, in areas such as work, education and entertainment. Across all of these applications, the use of personal computers is still based predominantly on input through the keyboard and mouse. To enable an alternative input method, a system was built that follows a low-cost approach to controlling the mouse cursor on a computer. The eye tracker is based on images recorded by a modified webcam to acquire the eye movements. These eye movements are then mapped to the computer screen to position the mouse cursor accordingly, so the cursor moves automatically as the user's point of gaze changes. A camera is used to capture the images of eye movement. In general, any digital image processing algorithm consists of three stages: input, processing and output. In the input stage an image is captured by the camera; it is then sent to the processing stage, which operates on the image pixels and produces a processed image as its output.

As computer technologies grow rapidly, the importance of human-computer interaction becomes highly notable. Some people with disabilities are unable to use computers in the usual way. Eyeball movement control is mainly intended for such users: incorporating this eye-controlling system into the computer allows them to work without the help of another individual. A Human-Computer Interface (HCI) is focused on the use of computer technology to provide an interface between the computer and the human, and there is a need to find a suitable technology that enables effective communication between the two. Human-computer interaction therefore plays an important role.

Therefore, there is a need to find a method that provides an alternative way of communication between human and computer for individuals who have impairments, and gives them an equivalent opportunity to be part of the Information Society. In recent years, human-computer interfaces have been attracting the attention of researchers across the globe.

1.2 Objectives

The objective of the project "Navigating the Screen through Eyeball" is to develop a user-friendly eye tracker system based on images recorded by a modified webcam to acquire the eye movements. These eye movements are then mapped to the computer screen to position a mouse cursor accordingly.

The mouse moves by automatically following the position of the user's gaze. A camera is used to capture the image of eye movement. In general, any digital image processing algorithm consists of three stages: input, processing and output. In the input stage an image is captured by the camera; it is then sent to the processing stage, which operates on the image pixels and produces a processed image as its output.

There are two components to the human visual line of sight: the pose of the human head and the orientation of the eyes within their sockets. Both aspects have been investigated, but this work concentrates on eye gaze estimation.

1.3 Scope of the Project

The scope of the eyeball-based cursor movement project encompasses several key aspects,
including technical implementation, usability considerations, and potential applications.
Here's an outline of the project scope:

Technical Implementation: Design and develop software for real-time eye tracking using
computer vision techniques. Integrate eye tracking algorithms with cursor control
mechanisms to enable precise and responsive cursor movement.

Usability and User Experience: Design an intuitive and user-friendly interface for
interacting with the system, considering factors such as simplicity, clarity, and ease of use.
Test the system with a diverse group of users to evaluate usability, comfort, and overall user
experience.

Reliability: Develop techniques to enhance the accuracy and reliability of eye tracking,
even in challenging conditions such as varying lighting environments and user movements.
Accessibility and Inclusivity: Ensure that the system is accessible to users with diverse
abilities and needs, including those with motor impairments or conditions affecting
hand-eye coordination.

1.4 Advantages

• Hands-Free Operation: Eye-tracking enables hands-free operation, which can be beneficial in situations where manual dexterity is limited or when hands are occupied with other tasks.

• Accessibility: Provides a hands-free interface, benefiting individuals with mobility impairments.

• Efficiency: Allows for faster and more intuitive navigation, eliminating the need for physical input devices.

• Personalization: Enhances user experiences by adapting interfaces based on gaze patterns in real time.

• Precision: Offers high accuracy in selecting items on screens, particularly useful for fine motor control tasks.

• Assistive Functionality: Serves as a valuable tool for communication and control for individuals with disabilities.

1.5 Disadvantages

• Calibration Requirement: Requires frequent calibration for accurate operation, which can be time-consuming.

• Eye Strain: Prolonged use may lead to eye strain and fatigue, particularly in environments requiring constant focus.

• Accuracy Limitations: Factors like lighting conditions and user eyewear may affect the accuracy of tracking.

• Privacy Concerns: Raises privacy issues due to the collection of sensitive gaze data without proper consent or security measures.

• Limited Compatibility: Not all software applications or interfaces may support eye-tracking technology, limiting its usability.

1.6 Applications

The main goal of the project is to develop the system for people with disabilities.

• Assistive Technology for Patients with Disabilities: Navigating the screen through the eyeball can serve as an assistive technology tool for patients with limited or no motor control.

• Computer-Aided Surgery and Medical Imaging: Surgeons and medical professionals can use eyeball-based cursor movement for hands-free control of medical imaging systems during surgeries.

1.7 Hardware and Software Requirements

HARDWARE REQUIREMENTS:
• Processor: Pentium IV
• Speed: 2.4 GHz
• RAM: 256 MB
• Hard Disk: 40 GB
• Laptop

SOFTWARE REQUIREMENTS:
• Operating System: Windows XP Professional
• PyCharm Community Edition 2022.1.4

Programming Language:
• Python
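As a quick sanity check of this software environment, the following minimal sketch (assuming the three libraries have already been installed, for example with pip) prints the detected versions and confirms that the webcam can be opened:

import sys
import cv2
import mediapipe
import pyautogui

print("Python   :", sys.version.split()[0])
print("OpenCV   :", cv2.__version__)
print("MediaPipe:", mediapipe.__version__)
print("PyAutoGUI:", pyautogui.__version__)

cam = cv2.VideoCapture(0)                  # index 0 = the default (laptop) webcam
print("Webcam available:", cam.isOpened())
cam.release()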

CHAPTER 2

LITERATURE SURVEY

2.1 Existing System

In the existing approach, MATLAB is used to detect the iris and control the cursor. An eyeball-controlled wheelchair also exists, which steers the wheelchair by monitoring eye movement. Because it is difficult to estimate the centroid of the eye in MATLAB, OpenCV is used instead.

Navigating the screen through the eyeball is mainly intended for people with disabilities: incorporating this eye-controlling system into the computer allows them to work without the help of another individual. A Human-Computer Interface (HCI) is built on the use of computer technology to provide an interface between the computer and the human, and there is a need for a suitable technology that enables effective communication between human and computer; human-computer interaction therefore plays an important role. Thus there is a need to find a method that provides an alternative way of communication between human and computer for individuals who have impairments and gives them an equivalent opportunity to be part of the Information Society. The camera mouse is used to replace all the roles of traditional mouse actions, and the proposed system can generate all mouse click events.

In this method, the camera mouse system maps eye actions to left-click events and blinking to right-click events. A real-time eye-gaze estimation system is used as an eye-controlled mouse for assisting the disabled. This technology can be enhanced in the future by inventing more techniques for clicking events, for performing all the mouse movements, and for human interface systems using eye blinks. The technology can also be extended to combine eyeball movement and eye blinking to obtain efficient and accurate movement.

This study mainly concentrates on predicting eyeball movements. Before detecting the eyeball movements, we need to identify the facial landmarks. A lot can be attained using these landmarks: we can detect eyeball movements and eye blinks in a video, and also predict emotions. dlib's facial landmark detector not only performs fast face detection but also allows us to predict the 68 2D facial landmarks accurately.
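For illustration, a minimal sketch of dlib's 68-point landmark detection is given below; it assumes dlib is installed and that the pre-trained shape_predictor_68_face_landmarks.dat model file has been downloaded separately. It is only a sketch of the technique described above, not the project's own code (the project itself uses MediaPipe).

import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-landmark model, downloaded separately from dlib's model files.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cam = cv2.VideoCapture(0)
_, frame = cam.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # In the 68-point scheme, indices 36-41 and 42-47 outline the two eyes.
    for i in range(36, 48):
        p = shape.part(i)
        cv2.circle(frame, (p.x, p.y), 2, (0, 255, 0), -1)

cv2.imshow('68 facial landmarks', frame)
cv2.waitKey(0)
cam.release()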

2.1.1 Literature Survey:

• Many research papers have been published related to eyeball-movement-based cursor control.
• A paper was published by Chilukuri Sai Durga Narenda [1], which proposed a system for pupil identification that uses a Raspberry Pi board to control the cursor.
• Another research paper related to eyeball cursor movement was published by G. Smitha, P. Venkateshwar and M. Srivastava [2], in which eye movements are classified using a support vector machine classifier.
• Sharanyaa S. [3] published a paper on eyeball cursor movement detection using deep learning, describing a system that replaces the conventional way of moving the cursor with a mouse by using one's eyeball movement.
• Khoushik Roy and Dibaloke Chanda [4] proposed a robust webcam-based eye gaze estimation system for human-computer interaction. Their non-wearable, webcam-based eye-gaze detection method offers multiple benefits in terms of accuracy, robustness, and reliability over existing solutions.
• "Eye-movement Analysis and Prediction using Deep Learning Techniques and Kalman Filter" was proposed by Zaid Yemeni, Sameer Rafee, Xu Yun and Zhang Jian Xin [5], using the Kalman filter to estimate and analyze eye position.
• A. Sivasankari, D. Deepa, T. Anandhi and Anitha Ponraj [6] proposed a paper on eyeball-based cursor movement control based on detecting the center point of the pupil.
• Mohamed Nasir, Mujeeb Rahman K., Maryam Mohamed Zubair, Haya Ansari and Farida Mohamed [7] proposed a system that captures the face using the MATLAB vfm tool. Eye detection is done by dividing the face into three equal regions. The title of the proposed system is "Eye-Controlled Mouse Cursor for Physically Disabled Individual".
• "Design and Development of Hand Gesture Based Virtual Mouse" was proposed by Kabid Hassan Shibly, Samrat Kumar Dey, Md. Aminul Islam and Shahriar Iftekhar Showrav [8]. This paper proposes a virtual mouse system based on HCI using computer vision and hand gestures; gestures are captured with a webcam and processed with a color segmentation and detection technique.
• "Real-time virtual mouse system using RGB-D images and fingertip detection" was proposed by Dinh-Son Tran and Ngoc-Huynh Ho [9]. In this work, a novel virtual-mouse method using RGB-D images and fingertip detection is proposed. The hand region of interest and the center of the palm are first extracted using in-depth skeleton-joint information images from a Microsoft Kinect Sensor version 2, and then converted into a binary image. The contours of the hands are then extracted and described by a border-tracing algorithm.
• "Eye Tracking with Event Camera for Extended Reality Application" was proposed by Nealson Li, Ashwin Bhat and Arijit Raychowdhury [10]. In this paper they present an event-based eye-tracking system that extracts pupil features and trains a convolutional neural network on 24 subjects to classify the events representing the pupil.

2.2 Proposed System

In the proposed system,

User Interaction Flow:

1. Initialization: The user starts the system, which activates the camera module.
2. Face Detection: The camera captures the image and performs face detection using
OpenCV. This ensures that the system focuses on the user's face.
3. Eye Localization: Once the face is detected, the system locates the eyes within the
facial region. This is achieved by identifying the darker eye region compared to the
nose bridge region.
4. Pupil Detection: The system identifies the pupil within each eye to determine the gaze
direction.
5. Interpretation of Eye Movements: Based on the position of the pupil, the system
interprets the user's eye movements, such as left or right.
6. Cursor Control: The interpreted eye movements are used to control the mouse cursor
on the screen. Moving the eyes left or right corresponds to moving the cursor in the
respective direction.
7. Click Event: The system detects eye blinks or prolonged eye closure as a mouse click event. For example, if the user closes their eye for at least five seconds, it triggers a left click event.
8. Error Handling: If no eye movement is detected for a prolonged period or if the eye remains closed for more than a minute, the system displays an error message indicating no activity. (A timing sketch of steps 7 and 8 follows this list.)
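The click and inactivity timing of steps 7 and 8 can be sketched as follows; eye_is_closed() is a hypothetical placeholder for the blink/closure detection described above, and the 5-second and 60-second thresholds are the ones stated in the flow:

import time
import pyautogui

CLICK_HOLD_SECONDS = 5       # eye closed at least this long -> left click (step 7)
INACTIVITY_SECONDS = 60      # eye closed longer than this -> error message (step 8)

def eye_is_closed():
    # Hypothetical placeholder: in the real system this comes from the
    # eye-landmark distance check performed during pupil/blink detection.
    return False

closed_since = None
while True:
    if eye_is_closed():
        if closed_since is None:
            closed_since = time.monotonic()
        elif time.monotonic() - closed_since > INACTIVITY_SECONDS:
            print("No activity detected")            # error handling (step 8)
            closed_since = None
    else:
        if closed_since is not None:
            held = time.monotonic() - closed_since
            if held >= CLICK_HOLD_SECONDS:
                pyautogui.click()                    # left click event (step 7)
            closed_since = None
    time.sleep(0.1)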

Admin Access:

Admin access to the system may involve additional functionalities such as calibration,
settings adjustment, and monitoring. The admin interface could include options for
adjusting sensitivity, setting up user profiles, and managing system preferences.

System Architecture:
The system architecture comprises the following components (a module skeleton sketch follows the list):

1. Camera Module: Captures the video feed of the user's face.


2. Face Detection Module: Utilizes OpenCV for detecting and tracking the user's face
in the video feed.
3. Eye Localization Module: Identifies the region of the eyes within the detected face
region.
4. Pupil Detection Module: Determines the position of the pupils within each eye.
5. Interpretation Module: Analyzes the position of the pupils to interpret the user's eye
movements.
6. Cursor Control Module: Translates the interpreted eye movements into cursor
movements on the screen.
7. Click Event Detection Module: Detects eye blinks or prolonged eye closures to
trigger mouse click events.
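A minimal module skeleton mirroring these components is sketched below; the class and method names are illustrative assumptions, not the project's actual code, and the blink threshold and landmark usage are taken from the implementation chapter:

import cv2
import mediapipe as mp
import pyautogui

class CameraModule:
    def __init__(self, index=0):
        self.cam = cv2.VideoCapture(index)     # captures the video feed of the user's face

    def read_frame(self):
        _, frame = self.cam.read()
        return frame

class FaceLandmarkModule:
    # Covers face detection, eye localization and pupil detection via Face Mesh.
    def __init__(self):
        self.face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)

    def landmarks(self, frame):
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        result = self.face_mesh.process(rgb)
        faces = result.multi_face_landmarks
        return faces[0].landmark if faces else None

class CursorControlModule:
    def __init__(self):
        self.screen_w, self.screen_h = pyautogui.size()

    def move(self, iris_landmark):
        # Interpretation + cursor control: normalized iris position -> screen position.
        pyautogui.moveTo(iris_landmark.x * self.screen_w, iris_landmark.y * self.screen_h)

class ClickDetectionModule:
    def click_if_blinking(self, lower_lid, upper_lid, threshold=0.004):
        # An eye blink (lids close together) triggers a mouse click event.
        if abs(lower_lid.y - upper_lid.y) < threshold:
            pyautogui.click()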

CHAPTER 3

PROPOSED ARCHITECTURE

The project employs a combination of OpenCV, the MediaPipe Face Mesh model, and PyAutoGUI to develop an innovative human-computer interface system tailored for individuals with physical disabilities. Here's a breakdown of the methodology (a minimal end-to-end sketch follows the list):
▪ Initialize System: Install OpenCV, mediapipe, pyautogui and Python.
▪ Capture Webcam Feed: Continuously capture frames.
▪ Face Detection: Identify user's face in the webcam.
▪ Eye Region Extraction: Extract eyes region for analysis.
▪ Pupil Detection: Use OpenCV for real-time pupil tracking.
▪ Blink Detection: Identify eye blinks as user input.
▪ Gaze Detection: Determine cursor movement based on iris position.
▪ Cursor Movement: Map gaze direction to cursor movement.
▪ User Interface Interaction: Translate eye actions into mouse inputs.
▪ Usability Optimization: Fine-tune parameters for optimal performance.
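A minimal end-to-end skeleton of this pipeline is sketched below; it is only a sketch, with the iris index, blink threshold, and window title taken as assumptions from the implementation chapter rather than a definitive listing:

import cv2
import mediapipe as mp
import pyautogui

face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)   # iris landmarks enabled
screen_w, screen_h = pyautogui.size()
cam = cv2.VideoCapture(0)

while True:
    ok, frame = cam.read()                          # capture webcam feed
    if not ok:
        break
    frame = cv2.flip(frame, 1)                      # mirror so the cursor follows the gaze naturally
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    result = face_mesh.process(rgb)                 # face detection and landmark extraction
    if result.multi_face_landmarks:
        landmarks = result.multi_face_landmarks[0].landmark
        iris = landmarks[475]                       # one iris landmark drives the cursor
        pyautogui.moveTo(iris.x * screen_w, iris.y * screen_h)
        lower, upper = landmarks[145], landmarks[159]   # left-eye lid landmarks for blink detection
        if abs(lower.y - upper.y) < 0.004:          # blink -> click
            pyautogui.click()
            pyautogui.sleep(1)
    cv2.imshow('Eye Controlled Mouse', frame)
    if cv2.waitKey(1) & 0xFF == 27:                 # press Esc to quit
        break

cam.release()
cv2.destroyAllWindows()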

3.1 BLOCK DIAGRAM

Fig 3.1: Block Diagram
3.2 FLOWCHART

Fig 3.2: Flowchart Diagram

The flowchart outlines the process of capturing a video feed, detecting faces and eyes, locating pupils, interpreting eye movements, controlling cursor movement, detecting click events, and triggering mouse click events based on eye movements. It includes capturing the video feed, checking whether a face is detected, locating the eyes within the detected face region, detecting the pupils, and interpreting the eye movements. The flowchart concludes by indicating the end of the process.

3.3 CLASS DIAGRAM

Fig 3.3: Class Diagram

EXPLANATION:

• This class diagram represents how the classes, with their attributes and methods, are linked together to carry out the system's functionality. The above diagram shows the various classes involved in our project.

3.4 OBJECT DIAGRAM

Fig 3.4: Object Diagram


EXPLANATION:

• The above diagram shows the flow of objects between the classes.

• It is a diagram that shows a complete or partial view of the structure of a modeled system.

• This object diagram represents how the classes, with their attributes and methods, are linked together to carry out the system's functionality.

3.5 USE-CASE DIAGRAM

Fig 3.5: Use-Case Diagram

EXPLANATION:

• The main purpose of a use case diagram is to show what system functions are
performed for which actor.

• Roles of the actors in the system can be depicted.

• The actor is represented as "User".

• The system includes various use cases like capturing video feed, detecting faces,
locating eyes, detecting pupils, interpreting eye movements, controlling cursor
movement, detecting click events, and triggering mouse click events.

3.6 COMPONENT DIAGRAM

Fig 3.6: Component Diagram

EXPLANATION:

• In the Unified Modeling Language, a component diagram depicts how components are wired together to form larger components and/or software systems.

• Component diagrams are used to illustrate the structure of arbitrarily complex systems.

• The user gives the main query, which is converted into sub-queries and sent through data dissemination to the data aggregators.

• Results are then shown to the user by the data aggregators.

• All boxes are components and the arrows indicate dependencies.

3.7 DEPLOYMENT DIAGRAM

Fig 3.7: Deployment Diagram


EXPLANATION:

• A deployment diagram is a type of diagram that specifies the physical hardware on which the software system will execute.

• Each box represents a module or component in the system.

• Arrows represent the flow of communication or data between components.

• You can label each box with the name of the module/component it represents.

• You can also include the nodes where these components are deployed, such as physical servers or devices.

3.8 SEQUENCE DIAGRAM

Fig 3.8: Sequence Diagram

EXPLANATION:

• A sequence diagram in Unified Modeling Language (UML) is a kind of interaction diagram that shows how processes operate with one another and in what order.

• It is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence.

• It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario.

3.9 STATE DIAGRAM

Fig 3.9: State Diagram


EXPLANATION:
• This state diagram depicts the relationship between the user, the webcam, and cursor movement/clicks based on eye movements.

• The user communicates with the system.

• The video stream is recorded by the webcam.

• Until the eyeballs are detected, the system remains idle.

• The system recognizes eye movement as soon as it sees the eyes.

• The cursor movement is determined by the system by interpreting the eye movement.

• The cursor moves in line with that interpretation.

• When a click event occurs, the system recognizes eye blinks or prolonged closure.

• This feedback loop continues while the user interacts with the system, transforming the user's eye movements into cursor movements or clicks on the computer screen.

3.10 ACTIVITY DIAGRAM

Fig 3.10: Activity Diagram

EXPLANATION:

• Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency.

• In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system.

• An activity diagram shows the overall flow control.


3.11 SYSTEM ARCHITECTURE DIAGRAM

Fig 3.11: System Architecture Diagram

EXPLANATION:

• An architectural diagram is a visual representation that maps out the physical implementation of the components of a software system.

• It shows the general structure of the software system and the associations, limitations and boundaries between each element.

• Software environments are complex and they aren't static.
CHAPTER 4
IMPLEMENTATION

4.1 Algorithm

MediaPipe's Face Mesh

The project utilizes Media Pipe’s Face Mesh algorithm, which is a component of the
MediaPipe library, an open-source framework developed by Google for building
multimodal machine learning pipelines. MediaPipe is designed to facilitate the
development of various computer vision and machine learning applications, providing
pre-built components for tasks such as face detection, pose estimation, hand tracking,
and more.

The Face Mesh module within MediaPipe is specifically tailored for face landmark
detection and tracking in real-time video streams. It employs deep learning techniques
to accurately locate and track key points on a person's face, such as the eyes, nose,
mouth, and facial contours.

In this project, the Face Mesh algorithm is employed to detect and track facial
landmarks, with a particular focus on the landmarks around the eyes. These landmarks
serve as reference points for understanding the user's eye movements and expressions.
By continuously analyzing the positions of these landmarks in successive video
frames, the system can infer the direction of the user's gaze and movements of their
eyes.

Once the facial landmarks, especially those around the eyes, are detected and tracked
in real-time from the webcam feed using Media Pipe’s Face Mesh, they are used to
control the movement of the cursor on the screen. The coordinates of the detected
landmarks are mapped to corresponding positions on the screen, allowing the cursor
to move accordingly.
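Concretely, the Face Mesh landmark coordinates are normalized to the range 0-1, so the mapping to a screen position reduces to a scaling by the screen resolution (a minimal sketch; the function name is illustrative):

import pyautogui

screen_w, screen_h = pyautogui.size()

def landmark_to_screen(norm_x, norm_y):
    # Face Mesh landmarks are normalized to [0, 1]; scaling by the screen
    # resolution gives the corresponding cursor position in pixels.
    return norm_x * screen_w, norm_y * screen_h

# Example: a landmark at the center of the frame maps to the center of the screen.
x, y = landmark_to_screen(0.5, 0.5)
pyautogui.moveTo(x, y)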

By leveraging the capabilities of MediaPipe's Face Mesh algorithm, the project enables intuitive and natural interaction with the computer system based on the user's facial expressions and eye movements. This approach replaces traditional input devices like a mouse or keyboard, offering a more accessible and hands-free computing experience, which can be particularly beneficial for individuals with physical disabilities or limitations.

To summarize the algorithm:

• The algorithm used in this project is MediaPipe's Face Mesh, which is part of the MediaPipe library.
• MediaPipe is an open-source framework developed by Google for building multimodal machine learning pipelines.
• The Face Mesh module in MediaPipe is specifically designed for face landmark detection and tracking in real-time video streams.
• In this project, we use MediaPipe's Face Mesh to detect facial landmarks in real time from a webcam feed.
• These landmarks represent key points on a person's face, with a particular focus on the landmarks around the eyes.
• The landmarks are then used to control cursor movement on the screen.

4.2 Code Implementation

# Step 1: Open the camera using OpenCV.
import cv2

cam = cv2.VideoCapture(0)

# The video runs continuously, frame after frame, so the capture is wrapped in a while loop.
while True:
    # Read every frame of the video from the camera.
    _, frame = cam.read()
    cv2.imshow('Eye Controlled Mouse', frame)
    cv2.waitKey(1)
# Run: the camera window opens. Stop.

# Step 2: Detect the facial landmarks with MediaPipe Face Mesh.
import cv2
import mediapipe as mp

cam = cv2.VideoCapture(0)
face_mesh = mp.solutions.face_mesh.FaceMesh()

while True:
    _, frame = cam.read()
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    output = face_mesh.process(rgb_frame)
    landmark_points = output.multi_face_landmarks
    print(landmark_points)

    if landmark_points:
        landmarks = landmark_points[0].landmark
        # The landmark coordinates are normalized; print them first.
        for landmark in landmarks:
            x = landmark.x
            y = landmark.y
            print(x, y)

        # To know the width and height of the video frame:
        frame_h, frame_w, _ = frame.shape

        # Run: the scaled output is in the form of floats, but integer pixel values are needed. Stop.
        # Scale the normalized landmarks by the frame size and draw them.
        for landmark in landmarks:
            x = int(landmark.x * frame_w)
            y = int(landmark.y * frame_h)
            cv2.circle(frame, (x, y), 3, (0, 255, 0))

    cv2.imshow('Eye Controlled Mouse', frame)
    cv2.waitKey(1)

# Step 3: Move the cursor with the eye.
import cv2
import mediapipe as mp
import pyautogui

cam = cv2.VideoCapture(0)
# refine_landmarks=True reduces the landmarks of interest by adding dedicated iris landmarks.
face_mesh = mp.solutions.face_mesh.FaceMesh(refine_landmarks=True)
# For enlarging to the full screen, the screen size is needed.
screen_w, screen_h = pyautogui.size()

while True:
    _, frame = cam.read()
    # Flip (mirror) the image so the cursor follows the eye naturally.
    frame = cv2.flip(frame, 1)
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    output = face_mesh.process(rgb_frame)
    landmark_points = output.multi_face_landmarks
    frame_h, frame_w, _ = frame.shape

    if landmark_points:
        landmarks = landmark_points[0].landmark

        # Landmarks 474 to 477 outline one iris; enumerate to pick a single id (index 1).
        for id, landmark in enumerate(landmarks[474:478]):
            x = int(landmark.x * frame_w)
            y = int(landmark.y * frame_h)
            cv2.circle(frame, (x, y), 3, (0, 255, 0))
            if id == 1:
                screen_x = screen_w / frame_w * x
                screen_y = screen_h / frame_h * y
                pyautogui.moveTo(screen_x, screen_y)
        # Run: the cursor now moves with the eye. Stop.

        # Step 4: Detect blinking to perform the click operation.
        # Landmarks 145 and 159 are the lower and upper landmarks of the left eye.
        left = [landmarks[145], landmarks[159]]
        for landmark in left:
            x = int(landmark.x * frame_w)
            y = int(landmark.y * frame_h)
            cv2.circle(frame, (x, y), 3, (0, 255, 255))
        # To check the y-axis positions, print outside the drawing loop.
        print(left[0].y, left[1].y)
        # Run: the two floating values come close together when the eye blinks. Stop.

        # When the vertical distance falls below a small threshold, treat it as a blink
        # and replace the print with an actual click.
        if (left[0].y - left[1].y) < 0.004:
            pyautogui.click()
            pyautogui.sleep(1)
        # Run to check whether the blinking operation is working. Stop.

    cv2.imshow('Eye Controlled Mouse', frame)
    cv2.waitKey(1)

CHAPTER 5

RESULTS

Results:

Step 1: Opening of the camera using OpenCV

Fig 5.1: Opening of the camera using OpenCV

Step 2: Detection of landmark

Fig 5.2: Detection of landmark

Step 3: Mapping of landmarks

Fig 5.3: Mapping of landmarks

Step 4: Detection of the first iris

Fig 5.4: Detection of the first iris

Step 5: Cursor movement through Eyeball

Fig 5.5: Cursor movement through Eyeball

Step 6: Navigating the screen and performing the clicking operation

Fig 5.6: Navigating through Eyeball

Step 7: Before Zooming Operation

Fig 5.7: Before Zooming Operation

Step 8: After Zooming Operation

Fig 5.8: After Zooming Operation

CHAPTER 6

CONCLUSION

Conclusion:

This project introduces a system designed to empower individuals with disabilities, enabling them to actively participate in society's digital advancements. The proposed Navigating the Screen through Eyeball system aims to facilitate easy mouse accessibility for those with motor impairments. By harnessing eye movements, users can seamlessly interact with computers,
eliminating the need for traditional input devices. This system not only levels the playing
field for disabled individuals, allowing them to navigate computers like their able-bodied
counterparts but also presents a novel option for all users. In browsing tests, the system
demonstrates improved efficiency and user experience, simplifying multimedia interaction
with minimal effort.
Furthermore, this system fosters user confidence and independence, reducing reliance on
external assistance. By addressing limitations inherent in traditional methods, the system
signifies a significant breakthrough for motor-impaired individuals. The project provides an
overview of various technologies explored in related studies, highlighting their respective
advantages and drawbacks, thus offering insights into potential advancements.

CHAPTER 7

FUTURE SCOPE

Future Scope:

The Navigating the Screen through Eyeball system represents a significant leap forward in
accessibility technology, particularly for individuals affected by conditions like
Symbrachydactyly. By enabling users to operate a computer mouse and execute its full range
of functions through eye movements, this system eliminates many of the barriers faced by
physically challenged individuals. Tasks such as left and right clicking, text selection,
scrolling, and zooming become effortlessly achievable, empowering users to navigate digital
interfaces with newfound ease.

The success of this project holds the potential to inspire developers to create more innovative
solutions tailored to the needs of the physically challenged community. As awareness grows
and technology advances, there is immense scope for further refinement and expansion of
this architecture. Future iterations may incorporate cutting-edge advancements in eye-
tracking technology, enhancing accuracy and responsiveness. Additionally, ongoing
research and development efforts could focus on refining user interfaces to maximize
intuitiveness and efficiency.

CHAPTER 8

REFERENCES

References:

1. Payel Miah, Mirza Raina Gulshan, Nusrat Jahan, "Mouse Cursor Movement and Control using Eye Gaze - A Human Computer Interaction," International Conference on Artificial Intelligence of Things, IEEE (2023).

2. Elahi, Hossain Mahbub, et al., "Webcam-based accurate eye-central localization," Second International Conference on Robot, Vision and Signal Processing (RVSP), IEEE (2013).

3. Meng, Chunning, and Xuepeng Zhao, "Webcam-Based Eye Movement Analysis Using CNN," IEEE Access 5 (2017): 19581-19587.

4. Caceres, Enrique, Miguel Carrasco, and Sebastián Rios, "Evaluation of an eye-pointer interaction device for human-computer interaction," Heliyon 4.3 (2018).

5. Hegde, Veena N., Ramya S. Ullagaddimath, and S. Kumuda, "Low-cost eye-based human computer interface system (eye-controlled mouse)," India Conference (INDICON), 2016 IEEE Annual, IEEE.

6. Kanchan Pradhan, Sahil Sayyed, Abhishek Karhade, Abhijeet Dhumal, Sohail Shaikh, "Eye movement-based cursor using machine learning and Haar cascade algorithm," International Research Journal of Modernization in Engineering Technology and Science (April 2023).

7. Klaudia Solska, Tomasz Kocejko, "Eye-tracking everywhere - software supporting disabled people in interaction with computers," 15th International Conference on Human System Interaction (HSI), IEEE (2022).

8. Valenti, Roberto, Nicu Sebe, and Theo Gevers, "Combining head pose and eye location information for gaze estimation," IEEE Transactions on Image Processing 21.2 (2012).

9. Tsai, Jie-Shiou, and Chang-Hong Lin, "Gaze direction estimation using only a depth camera," 2018 3rd International Conference on Intelligent Green Building and Smart Grid (IGBSG), IEEE (2018).

10. G. Smitha, P. Venkateshwar, M. Srivastava, "Eye movements are classified using a Support Vector Machine Classifier," International Journal of Innovative Science and Research Technology (January 2023).
