SHS Capstone Format and Guidelines 2024
Sample title page layout (logo must be 1.5" x 1.5"; left margin is 1.5 inches; bottom margin is 1 inch):

A MOBILE ASL/FSL TRANSLATOR FOR
MUTE TRAVELERS
(Title must be bold, all caps, inverted triangle, Arial 12, double space)

3 single spaces

In Partial Fulfillment
Of the Requirements for the
Practical Research 1 Course

3 single spaces

Khosa, Gurshan S.
Ochave, Kyle Louise Marie C.
Regunda, James Timothy R.

5 single spaces

Month 2022
(Bottom of the page)
TABLE OF CONTENTS
TITLE PAGE i
APPROVAL SHEET ii
ACKNOWLEDGMENT iii
ABSTRACT iv
TABLE OF CONTENTS v
CHAPTER
1 INTRODUCTION (page no.)
Introduction (page no.)
Project Objectives (page no.)
Significance of the Study (page no.)
Scope and Delimitation of the Study (page no.)
Related Literature (page no.)
Related Systems (page no.)
Theoretical Framework (page no.)
Conceptual Framework (page no.)
Operational Definition of terms (page no.)
2 METHODOLOGY (page no.)
Research Design and Process Model (page no.)
Research Locale (page no.)
Research Participants (page no.)
Research Assessment Tool (page no.)
System Development Tools (page no.)
Data Gathering Procedures (page no.)
Data Analysis (page no.)
Trustworthiness of the Study (page no.)
Ethical Considerations (page no.)
3 PROJECT DESIGN (page no.)
Project Model (page no.)
Project Mechanism/Source Code (page no.)
Project Manual (page no.)
Project Cost Analysis (page no.)
REFERENCES
APPENDICES
A. Statement of Project Member Contribution
B. Permission Letter/s
C. Memorandum of Agreement/ Letter of Expert’s Acceptance
D. Copyright Transfer Agreement
E. Letter of Informed Consent
F. Questionnaire/s
G. Project
H. Validation Sheets
I. Validation Results
J. Turnitin Result
List of Tables
Table 1 #
Table 2 #
Table 3 #
Table 4 Application Specifications #
Table 5 Application Requirements #
Table 6 #
Table 7 #
CHAPTER 1
2 single spaces
INTRODUCTION
2 single spaces
Communication barriers pose significant challenges for mute travelers, especially in areas where sign language is not widely recognized. While mobile applications have made strides in bridging this gap, most existing sign language translator apps cater primarily to English-speaking users (Masbate et al., 2020). This study, Breaking Barriers: A Mobile ASL/FSL Translator for Mute Travelers, aims to develop an Android-based mobile application that translates text into American Sign Language (ASL) and Filipino Sign Language (FSL). By utilizing the Knuth-Morris-Pratt (KMP) string-matching algorithm for text processing, the app enhances translation speed and accuracy, providing a practical tool for mute travelers to navigate and communicate in unfamiliar environments.
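The string-matching step mentioned above can be made concrete. The sketch below is an illustrative Python implementation of the Knuth-Morris-Pratt (KMP) algorithm, with assumed function names rather than the app's actual source; note that KMP and the naive algorithm are two distinct string-matching techniques, and this sketch shows KMP, which avoids re-scanning the text on a mismatch.

```python
def kmp_search(text, pattern):
    """Return the starting indices of every occurrence of pattern in text
    using the Knuth-Morris-Pratt algorithm (no backtracking over text)."""
    if not pattern:
        return []
    # Build the prefix (failure) table: for each prefix of the pattern,
    # the length of the longest proper prefix that is also a suffix.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text once, falling back through the table on mismatches.
    matches, k = [], 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - k + 1)
            k = fail[k - 1]
    return matches

# Example: locating a glossary word inside a traveler's typed sentence
print(kmp_search("where is the bus station", "station"))  # [17]
```

In the translator, a search like this could map each matched glossary word to its corresponding sign visual before display.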
Globally, technological advancements in mobile applications and IoT integration have improved accessibility for the deaf and mute community (Papatsimouli & Sarigiannidis, 2023). Nationally, Filipino Sign Language remains under-researched, highlighting the need for further development in Deaf education and communication tools (Notarte-Balanquit, 2023). Locally, the linguistic diversity of Davao City creates communication barriers for mute travelers, emphasizing the necessity of an ASL/FSL translation tool suited to this context.
Project Objectives
The purpose of the research is to address the communication problems encountered by mute travelers in areas where ASL and FSL are not well known. Limited access to means of communication can make it difficult to move around, get help, or socialize (Government of Canada, 2021). This mobile ASL/FSL translator is intended to provide users with real-time communication by converting their signs into text, enabling them to converse more independently and effectively.
This research study aims to achieve the following objectives by the end of the course:
1. To create a mobile ASL/FSL Translator.
2. To test the mobile ASL/FSL Translator in terms of its:
a. Functionality
b. Usability
c. Responsiveness
Significance of the Study
Mute and Deaf People. This study primarily benefits deaf and mute individuals, particularly travelers, by providing a reliable tool for communication with non-signers. With the mobile translator, communication problems can be addressed through real-time AI sign language translation.
Educational Institutions and Advocacy Groups. This study supports the inclusion of
individuals with disabilities by contributing to research and policy development, promoting
greater awareness and accessibility in education and public services.
Travel and Hospitality Industry. This study enhances accessibility for nonverbal
travelers by providing assistive communication technology that enables them to interact
effectively with staff, ensuring better service and inclusivity.
Scope and Delimitation of the Study
The objective of the research is to address the communication barriers
encountered by mute travelers, particularly in regions where sign language is not
widely recognized, by developing a mobile application capable of translating
American Sign Language (ASL) and Filipino Sign Language (FSL) into text. The
study will be conducted at Malayan Colleges Mindanao over the academic year
2024–2025. The respondents for this research will be the mute and deaf students
who use ASL or FSL. The research will employ an experimental design and will
utilize DALL·E as the image-generation software for the application. The system's performance will be
evaluated by simulating various conditions, such as different lighting environments
and camera angles, to determine how well the application recognizes and
translates hand gestures in real time. This process will be done in both a lab setting
and controlled real-world simulations to replicate potential user environments. The
experimental design will also include feedback from participants during alpha and
beta testing phases, which will focus on usability, functionality, and accuracy in
translating signs. Data collected from these phases will inform further refinement
and optimization of the mobile application before it is finalized for public use. The
application will focus on real-time translation of hand gestures. The study does not cover the detection of facial expressions, nor does it provide sign language lessons for mute and deaf people.
This is where you will place the introductory paragraph for related literature and systems. (Note: The highlight of the Capstone Project's review of related literature is the set of tools used in the production of the product. There is no limit to how many related literatures may be included in the research project proposal; however, the developers must ensure that all of them are related to the capstone research project. There must also be a smooth, scholarly, and understandable flow to the ideas presented. The review must not appear as a disconnected, 'copy-and-pasted' body of information; each piece of information must connect clearly to the next. It is also a wrong notion that the related literature must only include things that support your study; it must present both sides of the coin, both boon and bane, for the study.)
2 single spaces
Related Literature
2 single spaces
Android Studio
Android Studio is the primary software tool used in the development of the SITa:
Sign Talker application by Masbate et al. (2023). It is an integrated development
environment (IDE) specifically designed for Android app development. Android Studio
provides a comprehensive suite of tools for coding, debugging, and testing applications,
making it an essential tool for developers working on mobile platforms. According to
Google (2023), Android Studio offers features such as a flexible build system, an emulator
for testing, and real-time code analysis, which significantly enhances the development
process.
The Senyales app by Mindaña et al. (2021) uses Android APIs to process
video inputs and segment them into frames for real-time translation of Filipino Sign
Language (FSL) emergency signs. By integrating neural networks and object
detection, the app can accurately identify FSL signs, enabling effective
communication for the deaf and those with hearing impairments. The Android API
facilitates interaction between the app's interface and detection models, ensuring
efficient translation. This approach aligns with previous research on real-time sign
language detection systems that rely on machine learning models for quick
processing.
Mendoza et al. (2024) developed a mobile app using the YOLOv7 algorithm to
translate Filipino Sign Language (FSL) into real-time text, bridging communication gaps
between the Deaf and hearing communities. The app captures FSL hand gestures via the
phone’s camera, converting them into text with 97.9% accuracy. Evaluated using the ISO
25010 quality model, the app received an average score of 4.45, indicating high functional
suitability, usability, and reliability. This solution has the potential to enhance
communication, particularly in education, healthcare, and social settings, by providing
seamless FSL-to-text conversion.
DALL·E API
DALL·E, developed by OpenAI, is an advanced AI model that generates images from textual descriptions by
combining natural language processing (NLP) and computer vision (DALL-E | Artificial Intelligence Tools at the
U, 2024). This technology excels in creating detailed and contextually relevant visuals, making it a powerful tool
for creative, educational, and illustrative purposes (JMIR Publications, 2023). Its ability to interpret and visualize
diverse concepts, styles, and scenarios has proven valuable across industries such as education, design, and
healthcare (Kibria Ahmad, 2024). For the development of Breaking Barriers: A Mobile ASL/FSL Translator for
Mute Travelers, DALL·E’s integration is pivotal in generating dynamic and accurate sign language visuals,
addressing the limitations of static image databases and enhancing real-time communication for users (MMC
Global).
DALL·E’s architecture, based on a transformer model similar to OpenAI’s GPT-3, encodes text inputs and maps
them to image encodings, which are then decoded into visual outputs (Dallin Munger, 2023). This generative
capability allows DALL·E to create novel images even from complex or implausible prompts, making it highly
versatile (2023). For instance, educators use DALL·E to create visual aids for abstract concepts, while designers
and marketers leverage it to generate custom artwork and advertising graphics (Kibria Ahmad, 2024). In the context
of the mobile ASL/FSL translator, DALL·E’s ability to produce real-time, customizable sign language visuals
ensures precise and personalized translations, significantly improving communication for mute travelers
(Softwebsolutions).
However, DALL·E faces challenges, including difficulties in generating highly detailed images, inconsistencies in
output based on slight textual variations, and potential biases in its training data (Robin Schmidt, 2023). Despite
these limitations, its ability to produce accessible and contextually relevant visuals makes it a transformative tool
for visual communication. In educational settings, DALL·E has demonstrated its effectiveness in creating sign
language visuals for deaf students, fostering inclusivity and accommodating diverse learning needs
(Softwebsolutions). For the mobile translator, this capability is crucial in addressing the linguistic and cultural
nuances of Filipino users, particularly in regions like Davao City, where communication barriers are prevalent
(Dreisbach, 2021).
In summary, the integration of DALL·E into the mobile ASL/FSL translator app enhances its ability to provide
accurate, real-time sign language translations. By leveraging DALL·E’s generative capabilities, the app not only
improves the user experience for mute travelers but also contributes to broader efforts in making communication
more inclusive and accessible. This underscores the transformative potential of AI-driven tools in addressing real-
world challenges and promoting inclusivity for individuals with hearing impairments.
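To make the integration concrete, the sketch below shows one way an app could request a sign illustration from the OpenAI images endpoint. This is an illustrative sketch only: the prompt wording, the model choice, and both helper functions are assumptions for this paper, not the project's actual implementation.

```python
def build_sign_prompt(word, language="FSL"):
    """Compose an image prompt for a single sign.
    The wording here is an assumption for illustration only."""
    full_name = {"ASL": "American Sign Language",
                 "FSL": "Filipino Sign Language"}[language]
    return (f"A clear, front-facing illustration of a person performing the "
            f"{full_name} sign for '{word}', on a plain background, with "
            f"both hands fully visible")

def generate_sign_image(word, language="FSL"):
    """Request one image and return its URL.
    Requires the openai package and the OPENAI_API_KEY environment variable."""
    # Imported lazily so the prompt helper can run without the SDK installed.
    from openai import OpenAI
    client = OpenAI()
    response = client.images.generate(
        model="dall-e-3",  # assumed model choice
        prompt=build_sign_prompt(word, language),
        n=1,
        size="1024x1024",
    )
    return response.data[0].url

print(build_sign_prompt("thank you"))
```

Keeping the prompt builder separate from the API call also makes it easy to test and refine the prompt wording without spending API credits.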
Related Systems
Related System 1
Mi Zhang and Biyi Fang (2018), System and apparatus for non-intrusive word and sentence level
sign language translation,
This non-intrusive sign language translation system, developed by Zhang and Fang in the United States in 2018, is designed to recognize and convert sign language gestures into meaningful words and sentences by integrating computer vision and artificial intelligence. The system employs skeleton joint extraction technology that accurately tracks hand shapes and movements without the need for wearable sensors or gloves. It is intended to benefit individuals who are deaf, hard of hearing, or mute by facilitating effective communication in various sectors such as education, healthcare, travel, and customer service. By leveraging RGB cameras, depth sensors, and machine learning algorithms, the system ensures a high degree of accuracy and adaptability that allows for real-time sign language interpretation. Its portable and scalable design makes it suitable for integration into mobile devices, kiosks, and other digital platforms to promote accessibility, inclusivity, and seamless interaction between sign language users and non-signers.
Fig 5. System and apparatus for non-intrusive word and sentence level sign language translation
(Mi Zhang and Biyi Fang, 2018)
Related System 2
Elwazer et al. (2017), Automatic body movement recognition and association system,
US10628664B2
The automatic body movement recognition and association system developed by Elwazer et
al. (2017) can be applied to sign language recognition as it detects and processes human
body movements using a depth-sensing capture device. Since sign language relies on hand
gestures and body movements, the system's ability to track three-dimensional (3D) skeletal
joint data allows it to identify and interpret signs. The transition posture detector captures the
starting and ending positions of gestures, while the recording module stores movement
sequences for analysis. The preprocessing component converts these sequences into
structured data, which the classifier translates into corresponding text or speech. Additionally,
the "offline" training system continuously updates the learning model, improving accuracy in
recognizing and interpreting sign language gestures over time.
Fig 2. Elwazer et al. (2017), Automatic body movement recognition and association system,
US10628664B
Related System 3
Choi Hang-seo (2015), Sign Language Translator, System and Method, KR101777807B1
The sign language translator developed by Choi Hang-seo (2015) is a bidirectional translation system from South Korea designed to bridge communication between sign language users and non-users. The system includes a portable user terminal with a database unit for storing sign language translation data based on image input. It has a sign input unit that captures images through a camera, a gesture recognition unit that interprets hand shapes and facial expressions, and a controller that generates sentences by combining words from recognized gestures. The system also adds non-categorical expressions based on facial recognition and outputs text or voice, making communication more accessible for deaf and hard-of-hearing individuals. It uses AI-driven image processing, high-precision cameras, and a mobile display, enhancing real-time communication.
Related System 4
John Tardif (2010), Machine Based Sign Language Interpreter, US9098493B2.
The Machine-Based Sign Language Interpreter by John Tardif in 2010 was developed in the
United States to translate sign language into text or speech using a computer-implemented method.
Its key features include a capture device that detects hand motions, a gesture recognition system that
matches movements to signs, and a grammar library that ensures accuracy by comparing successive
signs within a contextual framework. The system benefits deaf and mute individuals by enhancing
communication in workplaces, education, and public spaces. It is built with high-precision motion
sensors, advanced microcontrollers, and intelligent software algorithms, ensuring accurate, real-time translation.
Fig 10. John Tardif (2010), Machine Based Sign Language Interpreter, US9098493B2
Theoretical Framework
This capstone project, titled Breaking Barriers: A Mobile ASL/FSL Translator for Mute Travelers, attempts to solve the communication difficulties that mute people face in places where sign language is not often learned. This study applies two key theories to support its objectives.
The first theory is the Communication Accommodation Theory (CAT), also known as Speech Accommodation Theory, developed by Howard Giles in 1971. This theory implies that people learn to match or adjust their behavior depending on how individuals create, maintain, or decrease social distance in interaction (Giles & Ogay, 2007). It describes how people adjust their usual communication style to accommodate both parties' ways of communicating. In relation to this study, CAT is highly relevant because a mobile ASL/FSL translator for mute travelers will act as a tool to break the barriers between mute travelers and non-sign language users, allowing both parties to use the app as a bridge to communicate with each other.
Operational Definition of Terms
Artificial Intelligence (AI). In this project, AI refers to the computational algorithms and systems designed to mimic human intelligence. These systems learn from sign language data to interpret and convert gestures into text or speech.
Sign Language Processing. This is the method by which sign language gestures are captured,
analyzed, and interpreted using digital technologies. In the context of the ASL/FSL Translator, it
involves computer vision and machine learning to accurately recognize signs.
User-Friendly Interface. The design and layout of the ASL/FSL Translator that ensures ease of use.
This includes intuitive navigation, clear visual cues, and accessible customization options that
facilitate interaction for both signers and non-signers.
Real-Time Feedback. Immediate responses provided by the system as users perform sign language
inputs. This may include visual or auditory cues confirming the recognition of signs or indicating
errors, thereby guiding users through the translation process.
Communication Accessibility. The degree to which the ASL/FSL Translator makes communication
possible and effective between signers and non-signers. This is evaluated based on translation
accuracy, system responsiveness, and overall user satisfaction.
Response Time. The duration between a user’s sign language input and the system’s delivery of the
corresponding translation. It is a critical performance metric measured in seconds.
User Satisfaction. The overall contentment of the users with the system, assessed through surveys,
interviews, and usage analytics. High user satisfaction indicates that the system meets the needs of
its users effectively.
ASL/FSL Translator. The complete system or application developed to translate American Sign
Language (ASL) and Filipino Sign Language (FSL) into text or speech. It integrates AI
technologies, sign language processing, and a user-centric interface to improve communication
accessibility.
PC Components. In this project, PC Components refer to the essential hardware parts of a personal
computer—such as the CPU, GPU, RAM, and storage devices—that process and store data,
enabling efficient execution of AI algorithms and real-time sign language translation.
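Of the terms defined above, Response Time is directly measurable in code. The sketch below times a single translation call with a monotonic clock; the translate callable is a hypothetical stand-in, not the project's actual pipeline.

```python
import time

def measure_response_time(translate, sign_input):
    """Time one translation call and return (result, elapsed_seconds).
    time.perf_counter() is monotonic, so it is safe for short intervals."""
    start = time.perf_counter()
    result = translate(sign_input)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Usage with a stand-in translator (the real pipeline would replace this):
demo_translate = lambda frames: "hello"
text, seconds = measure_response_time(demo_translate, ["frame1", "frame2"])
print(text, round(seconds, 3))
```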
CHAPTER 2
2 single spaces
METHODOLOGY
2 single spaces
This is where you will place the introductory paragraph for Chapter 2. In this paragraph, you will briefly mention what can be found inside Chapter 2.
Research Design and Process Model
Discuss the research approach here. Cite your source. In addition, please discuss here the type of Software Development Life Cycle (SDLC) process model that you have used in this research project. (For example, on the paper of the capstone project Mathuto: A Mathematics Learning Application for Grade School:) This capstone research study will use quantitative research, followed by a definition and authors to support the study.
The developers will use the Waterfall Model. It is a sequential design process in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Conception, Initiation, Analysis, Design, Construction, Testing, Implementation, and Maintenance, with each phase completed before the next begins.
Research Locale
The linguistic diversity of Davao City makes accessibility hard for mute travelers
even after the Filipino Sign Language Act of 2018 was put into effect. Those who
use sign language frequently have communication difficulties due to their use of
Tagalog and Cebuano as primary languages (Dreisbach, 2021). Furthermore, the
lack of widely accessible FSL translation tools limits mute people's capacity to
communicate with service providers, navigate public areas, and obtain crucial
information.
With Davao City as the research location, this study attempts to create a mobile
ASL/FSL translator that is suited to the distinct linguistic environment of the city.
This will support inclusivity and accessibility in a variety of social and public
contexts by assisting the Deaf community and silent tourists in overcoming
communication barriers.
Research Respondents
The respondents of this study consist of Grade 12 Senior High School
students who are currently enrolled at Mapúa Malayan Colleges Mindanao for the
academic year 2024-2025. These students will serve as testers to evaluate the
usability, efficiency, and accessibility of the developed mobile ASL/FSL translator.
Their feedback will help improve the application's functionality and user
experience.
A purposive sampling method will be used to select the respondents. Purposive
sampling is a non-probability sampling technique where participants are chosen
based on specific characteristics relevant to the study (Etikan, Musa, & Alkassim,
2016). This method ensures that the selected students have sufficient familiarity
with mobile applications and can effectively assess the features of the ASL/FSL
translator.
The criteria for selection are as follows:
1. Must be a Grade 12 Senior High School student currently enrolled at Mapúa Malayan Colleges Mindanao for the academic year 2024-2025.
2. Must have experience using mobile or smartphone applications.
3. Must be willing to participate voluntarily in testing the mobile ASL/FSL translator.
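The three criteria above can be checked mechanically when screening enrollment records. The sketch below is hypothetical; the record fields and sample values are assumptions, not an actual enrollment schema.

```python
def meets_criteria(respondent):
    """Check one respondent record against the three selection criteria."""
    return (respondent.get("grade_level") == 12
            and respondent.get("school") == "Mapúa Malayan Colleges Mindanao"
            and respondent.get("school_year") == "2024-2025"
            and respondent.get("uses_mobile_apps") is True
            and respondent.get("consented") is True)

# Hypothetical sign-up records: only the first satisfies all criteria.
candidates = [
    {"grade_level": 12, "school": "Mapúa Malayan Colleges Mindanao",
     "school_year": "2024-2025", "uses_mobile_apps": True, "consented": True},
    {"grade_level": 11, "school": "Mapúa Malayan Colleges Mindanao",
     "school_year": "2024-2025", "uses_mobile_apps": True, "consented": True},
]
selected = [c for c in candidates if meets_criteria(c)]
print(len(selected))  # 1
```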
By selecting respondents from Mapúa Malayan Colleges Mindanao, the study aims
to gather valuable feedback to refine and enhance the effectiveness of the
application before its broader implementation.
Personal Data       Category   Number   Percentage Distribution (%)
Secondary Level     Grade 9    10       33.33%
                    Grade 8    10       33.33%
                    Grade 7    10       33.33%
Table 1. Number of Respondents
Discuss here the sampling method (cite your source) and the criteria for selection.
Data Gathering Procedures
Before Collection
The developers will choose mute and deaf individuals as testers and utilize
Google Forms to collect their responses. On a set date, the forms will be sent to
the testers' Gmail accounts, and they will be notified in advance. To ensure clarity,
the developers will offer online support to assist the testers in completing the
survey questionnaires. Once data collection is complete, the developers will
prioritize the security and privacy of the gathered information, taking necessary
measures to handle and process the data correctly.
After Collection
After gathering data from the alpha and beta testing stages, the developers
will move on to the interpretation phase. This step is essential for identifying the
application’s needs and requirements, offering valuable insights for improvements.
As the final testing phase before the product is released to the public, the focus is
on detecting and resolving any bugs or usability issues. In a controlled setting, the
developers will analyze the data using methods like the weighted mean to evaluate
the application's overall effectiveness and user experience. This thorough analysis
ensures the app undergoes a comprehensive review, addressing any issues
before it is made available to a broader audience.
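The weighted mean mentioned above is the sum of each rating multiplied by its frequency, divided by the total number of responses. A short sketch, assuming a 5-point Likert scale with hypothetical counts:

```python
def weighted_mean(frequencies):
    """Weighted mean of Likert ratings from a {rating: count} mapping:
    sum(rating * count) / sum(count)."""
    total_responses = sum(frequencies.values())
    if total_responses == 0:
        raise ValueError("no responses recorded")
    weighted_sum = sum(rating * count for rating, count in frequencies.items())
    return weighted_sum / total_responses

# Hypothetical usability item: 10 testers answered 5, 20 answered 4, 10 answered 3
print(weighted_mean({5: 10, 4: 20, 3: 10}))  # 4.0
```

The resulting score can then be compared against a verbal-interpretation scale (e.g., 4.21 to 5.00 as "excellent"), which the developers would define in the assessment tool.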
Trustworthiness of the Study
Transferability. It refers to whether the results from one group or place can safely be transferred to another, which the researcher supports by giving concise descriptions (Korstjens & Moser, 2018). This will be applied in this project: the proponents will establish transferability by providing descriptions of users' attempts to communicate in ASL/FSL in such a way that others can determine whether the study can be transferred to a similar situation, for example, to other regions or other deaf traveler populations.
Discuss the extent to which the study may be replicated in another field.
Confirmability. It refers to ensuring that findings are based on data and not researcher bias, established through an open, traceable process that allows others to verify conclusions. Confirmability in this project will be ensured by keeping proper records of data collection, analysis, and user feedback to allow others to verify the validity of findings on the ASL/FSL translator.
Discuss how the study's results may be verified.
Dependability. Discuss how biases are removed and avoided from the questionnaire and the study.
Ethical Considerations
The developers will always protect each individual's privacy and, as discussed in the informed consent, will clearly explain the security measures in place. While identity and anonymity will be preserved, the handling of data will still be closely examined. In addressing individuals at the societal level, the developers will take all of the information into ethical consideration, mainly transparency, justice, privacy, confidentiality, and safety. The respondents' information will not be shared with others, and nothing will be directly attributed to the name of a tester. Respondents are entitled to change their minds and stop participating if they no longer want to be part of the study.
Transparency. Honesty and open communication are embodied by transparency. Even when it is inconvenient to do so, the developers should be open rather than withhold details. Transparency refers to a person being honest with himself about his actions. It also refers to the developers being honest and fair about the actions they take, as well as whether such actions are in line with their principles.
Justice. The ethical imperative to share the benefits and burdens of research fairly can be described as the concept of justice. Developers must not exploit the powerless, nor must they exclude, without good reason, those who stand to benefit from the project's testing.
Risk, Benefits, and Safety. During project design and ethical review, the testers' safety is expressly considered. The developers are responsible for evaluating the potential risks and estimating the probability of each risk occurring. Thus, they will ensure the safety and benefit of the testers throughout the process of the project's improvement.
CHAPTER 3
2 single spaces
PROJECT DESIGN
2 single spaces
This is where you will place the introductory paragraph for Chapter 3. In this
paragraph, you will briefly mention what can be found inside Chapter 3. This chapter will cover the overall
design of the CONBLEED device, along with the product specifications,
component details, as well as the component requirements. This chapter will also
cover the mechanics of the device set-up as well as the mechanisms that will
happen when the device is in use which will then be subdivided to give a clear
explanation on the functionalities of the product. System codes to be used in the
implementation of the product will also be discussed in this chapter as well as the
instructions for product use. Lastly, the project cost will be discussed in this chapter
which will help in cost analysis to determine the product’s cost-effectiveness when
compared against mid-range and premium products.
Project Model
This is where you will place the introductory paragraph for all the project
model including the Blueprint of the Project, Application Blueprint and Service
Description (if needed), Product Details, Application Details, Complete
Specification and Application Requirement.
Figure 9. Virtual model of the CONBLEED DEVICE
• Inflatable Cuff
Figure 12. Inflatable Cuff
Product Details
[Block diagram: the device comprises a heart rate, SpO2, and blood pressure sensor; a pressure sensor; a cuff; a microcontroller; a biometric hub; a motor driver; motors and a valve; AC power/battery; a speaker; and an LCD screen.]
Biometric Sensor
Supply Voltage 1.8V
Output Current 25mA
Operating Temperature Range -40°C to +105°C
Heart Rate SpO2, Blood Pressure Sensor
Power-Supply Voltage 1.8V
LED Supply Voltage (RED and IR LED) 3.3V
LED Supply Voltage (Green LED) 5.0V
Supply Current 700µA
LCD touch screen
Supply Voltage 4.5V to 5.5V
Operating Temperature -10°C to +70°C
Speaker Module
Dimensions 130mm x90mm x12mm
Weight G.W 12g
Microcontroller
Input Voltage 7V to 15V
Voltage Level Jumper 3.3V to 5V
Digital I/O Pins 20 (6 PWM Outputs and 6 Analog Inputs)
Flash Memory 32k
Clock Speed 16MHz
Cuff
Tube Length 100cm
Material Nylon composite cloth
Pressure Sensor
Supply Voltage 1.8V to 3.6V
Operating Temperature -40°C to +95°C
Power Consumption 1µA
Operating Pressure Range 0 to 14 bar
Pump
Supply Voltage 6V
Load Current < 420mA
Flow ≥ 2.5LPM
Maximum Pressure ≤ -55kpa
Operating Temperature Range 5°C to 50°C
Silicone Tubing
Tube Length 1 meter
Internal Diameter 3mm
External Diameter 5mm
Motor Driver IC
Motor voltage 4.5V to 36V
Maximum Peak Motor Current 1.2A
Maximum Continuous Motor Current 600mA
Supply Voltage 4.5V to 7V
Transition Time 300ns
Project Mechanism
This segment will cover the process of creating the product, which includes setting up the sensors and different modules within the device. The mechanism of the CONBLEED device will also be discussed in this segment, including the circuit, machine, and system mechanisms as well as the operational flowchart of the device.
In order to link the components of the device, the heart rate sensor, biometric hub, and the main microcontroller will be connected in the following pairing manner:
Heart Rate, SpO2, and Blood Pressure Sensor / Biometric Hub → Main Microcontroller:
RESET → PIN 4
MFIO → PIN 5
SCL → SCL
SDA → SDA
3V3 → 3V3
GND → GND

Figure 13. Complete schematics for Heart Rate, SpO2 and Blood Pressure Sensor, Biometric Hub, and Main Microcontroller
To ensure that the components will not be damaged during the set-up process, it is highly recommended to connect the heart rate sensor and biometric sensor to the 3.3V and GND pins of the microcontroller first before connecting the MFIO and RESET lines to pins 4 and 5.
(The pairing is the same as in Figure 13, with an IC hook with pigtails connected to GND and VCC.)

Figure 14. Complete schematics for Heart Rate, SpO2 and Blood Pressure Sensor, Biometric Hub, and Main Microcontroller with IC hook
A USB micro-B cable is required to hook up the microcontroller to a laptop so that source code can be uploaded to it, which will be elaborated on in the next section.
Next, connect the speaker module to the I/O ports, VCC, and GND
pins of the microcontroller. Illustrated below is a simple diagram of the
connections needed to make the speaker work with the microcontroller.
Source code is also uploaded via the USB micro-B cable, the same as for the sensors.
(Speaker module: SIG → I/O pin, VCC → 3.3V, NC left unconnected)
Figure 15. Schematic for connecting the Speaker Module to the Main
Microcontroller
For the display module, attach the adaptor shield, which comes with the package, onto the microcontroller. From there, the display module can easily be connected to the adaptor shield using a 5-way female-to-female cable.
Figure 16. Schematic for connecting the Display Module to the Main Microcontroller

The pressure sensor that is used contains seven pins but only VCC,
Mechanism of the Device
Circuit Mechanism
The heart rate, SpO2, and blood pressure sensor is used to obtain the user's heart rate, blood oxygen level, and blood pressure using light. This is possible through the process of photoplethysmography, which works by illuminating the skin and measuring the changes in light absorption. The light data will then be sent to the biometric sensor to perform the necessary calculations to determine the heart rate, blood oxygen level, and blood pressure, which will be used to define the necessary pressure to be applied.
Inside the sensor, LED drivers send the red, green, and infrared light through the skin. Afterwards, the light that
comes back will then be measured by the photodiode. Do note that apart from the
light coming from the LEDs, there will be ambient light coming in through the glass
pane which will also be read by the photodiode. So, to minimize reading errors,
there is circuitry inside the component that will cancel the ambient light before
converting the photodiode's analog signal into digital pulses. The digital data will
then be filtered, processed, and registered in the component's digital circuitry
before passing through the I2C interface connected to the microcontroller. Note
that the digital signal read by the heart rate sensor is transported to the biometric
hub for further calculations.
Figure 21. Data transmission from the Heart Rate Sensor to the Biometric
Hub to the Main Microcontroller
Raw data are transported to the biometric hub, where the hub will perform its embedded algorithm to come up with an accurate measure of the heart rate and blood oxygen saturation of the host. Again, as mentioned in the previous paragraph, the sensor has built-in ambient light rejection circuitry in order to minimize and cancel background noise so that the processed data are clean and correct.
The sensor's sampling time is derived from the 32.768 kHz real-time clock, wherein the sampling rate is not fixed and is configurable based on your preference. However, it is strongly advised to go with the default RTC value to minimize power consumption. We can go with a lower sampling rate to consume less power, but that entails a substantial increase in size and cost as well.
Once blood pressure, heart rate and blood oxygen have been computed, the
gathered data will be used to calculate the needed pressure to be supplied to the
cuffs using this formula.
AOP = (SBP + 10) / KTP
Where AOP stands for the Arterial Occlusion Pressure, SBP stands for the
Systolic Blood Pressure and KTP stands for the Tissue Padding Coefficient.
Machine Mechanism
The microcontroller will now send signals to the motor drivers and begin the
process of pressurization and depressurization.
The cuff is technically the main component of the entire machine, since its purpose is to restrict the blood flow, which is the main purpose of the machine. The cuff is meant to be pressurized and depressurized to a certain degree depending on a variety of medical factors. Nonetheless, an algorithm is embedded into the microcontroller to provide the correct pressure to be supplied to the cuff. There will also be a slot engineered onto the cuff for adding the elastic bandages if human intervention is not preferred.
As discussed in the previous section, the heart rate and blood oxygen level will determine the pressure applied to the band. The microcontroller is in charge of sending digital pulses to the motor driver IC to pressurize and depressurize the cuff. The system will pressurize the cuff to constrict the blood flow and will slowly depressurize until the heart rate reading is at the correct level.

Figure 22. Machine Mechanism of the CONBLEED device

Below is a diagram of how the entire machine operates.
(Flowchart: a finger is placed on the sensor; the system checks for excess finger motion and excess sensor motion, then reports whether a finger is detected or no object is detected.)
The heart rate sensor and biometric sensor are configurable such that they can provide the following basic information: a) heart rate, b) blood oxygen level, c) blood pressure, and d) finger detection status.
You only have to gently place your finger onto the sensor to start the data
gathering. The LCD touch screen can display the finger detection status such as
a) finger detected, b) no object detected, c) not ready, d) excessive sensor device
motion, e) excessive finger motion, and f) pressing too hard.
Lines 1 and 2 set the int variables for pins 4 and 5, which is how the sensors were wired together as explained in the previous section. Make sure that the variables are in the correct order: 4 for RESET, and 5 for MFIO.
Project Manual
1. Plug the power cord into an outlet. In the absence of an outlet, the backup battery connected to the machine can be used.
2. Once plugged in, the LCD screen will power up and display important details. If the device is not connected to an outlet, press the gray button on the bleeding control pressure monitor to activate the device.
3. Remove the tips of the elastic bandage on both sides and wrap them around the wounded area in order to secure the bandage and compress it. After connecting the tips, apply the cuff with the elastic bandages and compress it to the body part correctly.
4. Attach the silicone tubing to the cuff and make sure there are no constrictions along the whole tubing length.
5. Place a finger on the blood pressure/heart rate sensor.
6. Make sure that the finger is placed lightly and with consistent pressure.
7. Avoid moving the finger and the sensor to avoid errors and inaccuracies.
8. The LCD screen will display the detected finger's status. It will display the following depending on the state: a) finger detected, b) no object detected, c) device not ready, d) excess sensor motion, e) excess finger motion, or f) finger is pressed too hard.
9. The LCD screen of the blood pressure/heart rate monitor will display the real-time data of the measured heart rate and blood pressure. Both the systolic and diastolic blood pressure will be displayed.
10. Data collected from the blood pressure/heart rate monitor will be used to determine the optimal pressure to be exerted by the device onto the wound.
Project Cost
This segment will cover the cost of manufacturing the product. The components will be divided according to their role in the overall task load. Additionally, the cost of manufacturing the device will be compared against mid-range and high-end products that are locally available in order to determine the feasibility and cost effectiveness of the product.
Component | Product | Purpose | Qty | Unit Price | Total Price

Data Gathering Instrument
Heart Rate, SpO2 and Blood Pressure Sensor | Pulse Oximeter and Heart Rate Monitor | Measures the heart rate, blood oxygen levels and blood pressure | 1 | PHP 1,073.75 | PHP 1,073.75
Biometric Hub | | Performs the required calculations to determine heart rate and blood oxygen | | |

Main Board
Battery | DC Battery | Provides backup power | 1 | PHP 400.00 | PHP 400.00
LCD Screen | Arduino Display Module - 3.2" | Displays details (heart rate, SpO2, blood pressure) | 1 | PHP 472.00 | PHP 472.00
Microcontroller | | Sends the triggering pulse to the air pump based on the embedded code | | |

Output Device
Connecting Wires | Jumper Wires (F/F, M/M, M/F) | Connects the components | 1 | PHP 105.00 | PHP 105.00
Cuff | Pneumatic Tourniquet | Restricts blood circulation | 1 | PHP 2,500.00 | PHP 2,500.00
Elastic Bandage | Elastic Bandage | A compression bandage to wrap around the injured area | 1 | PHP 500.00 | PHP 500.00
Pressure Sensor | Pressure Sensor | Measures the pressure | 1 | PHP 3,225.00 | PHP 3,225.00
The table above is divided into three main parts: a) Data Gathering
Instrument, b) Main Board, and c) Output Device. The product description and
purpose are described in the table above with its unit price.
The second component is the main board, which is made up of the battery, LCD screen, power adapter, speaker module, and the microcontroller, which all cost Php 2,644.50.
Lastly, the third component is the output device which comprises the
connecting wires, cuff, elastic bandage, pressure sensor, motors, silicone tubing,
motor driver IC and valve. The total cost for the output device is around Php
8,147.50.
Adding the costs of all three main parts of the device, the project has a
projected cost of Php 12,939.50.
Project Cost Analysis
The project components' costs are all tallied in a table in the previous section. We will now compare our project's cost to other existing products with a similar purpose to decide whether the project is feasible and whether its benefits outweigh its costs.
Since the costs of the related products mentioned in the review of related literature are not available on the internet, we will instead compare our machine's cost to products that are already readily available for purchase.
Professional fee costs include the fees to be given to the lead outsourced
programmer to help the researchers carry out a seamless source code, as well as
consultation fees to medical experts for their valuable insights in terms of the
technical side of the process.
All components are bought from different suppliers and vendors, most likely from outside of the country, so we also have to consider their delivery time and costs.
For the CONBLEED, the delivery cost is 20% of the components cost, the
manufacturing cost is 30% of the components cost, and the professional fee is
estimated at 60% of the components cost. Note that all these percentages are
estimates and projected costs only and will most likely vary during the actual
production stage.
Notice that we haven’t included the delivery costs for products 1 and 2 so
that we can assume the best-case cost difference of the products.
After tallying all direct costs, we came up with cost differences of 48.40% and 86.29%. Finally, we can now make an informed decision that the project is feasible and costs considerably less than other existing products.
CHAPTER 4
2 single spaces
2 single spaces
This is where you will place the introductory paragraph for Chapter 4. This
is where you will place the introductory paragraph for Chapter 4. This is where you
will place the introductory paragraph for Chapter 4. This is where you will place the
introductory paragraph for Chapter 4.
3.1 Objective 1
Table X
Name of the Table (this is for descriptive objectives: frequency and percentage)
(Variable) | Frequency | Percentage
1. Retail | 4.2 | High
2. Service Industries | 4.2 | High
3. Financial Institutions | 4.2 | High
Overall | 4.2 | High
After the table, report and summarize the results from the table. Report the
highest and lowest values only, including their descriptions. Then, interpret the
results using the interpretation table you include in 2.3. This part is only one
paragraph.
After interpreting the results, use the literature in your literature review to
compare/contrast with the results of your paper. Make sure to substantiate and
contextualize your results. This should be 1-2 paragraphs. After interpreting the
results, use the literature in your literature review to compare/contrast with the
results of your paper. Make sure to substantiate and contextualize your results.
This should be 1-2 paragraphs. After interpreting the results, use the literature in
your literature review to compare/contrast with the results of your paper. Make sure
to substantiate and contextualize your results. This should be 1-2 paragraphs.
The third paragraph is intended to account for the results and their
implications. Citations are necessary if it supports this section. The third paragraph
is intended to account for the results and their implications. Citations are necessary
if it supports this section. The third paragraph is intended to account for the results
and their implications. Citations are necessary if it supports this section. The third
paragraph is intended to account for the results and their implications. Citations
are necessary if it supports this section.
3.2 Objective 2
Table XX
Name of the Table (this is for descriptive objectives: Mean and SD/Mdn and IQR)
(Variable) | Mean | SD | Verbal Interpretation/Description
1. Retail | 4.2 | | High
2. Service Industries | 4.2 | | High
3. Financial Institutions | 4.2 | | High
Overall | 4.2 | | High
After the table, report and summarize the results from the table. Report the
highest and lowest values only, including their descriptions. Then, interpret the
results using the interpretation table you include in 2.3. This part is only one
paragraph.
After interpreting the results, use the literature in your literature review to
compare/contrast with the results of your paper. Make sure to substantiate and
contextualize your results. This should be 1-2 paragraphs. After interpreting the
results, use the literature in your literature review to compare/contrast with the
results of your paper. Make sure to substantiate and contextualize your results.
This should be 1-2 paragraphs. After interpreting the results, use the literature in
your literature review to compare/contrast with the results of your paper. Make sure
to substantiate and contextualize your results. This should be 1-2 paragraphs.
The third paragraph is intended to account for the results and their
implications. Citations are necessary if it supports this section. The third paragraph
is intended to account for the results and their implications. Citations are necessary
if it supports this section. The third paragraph is intended to account for the results
and their implications. Citations are necessary if it supports this section. The third
paragraph is intended to account for the results and their implications. Citations
are necessary if it supports this section.
3.3 Objective 3
Table XXX
Name of the Table (this is for inferential objectives: t-tests)
Variable | Mean | T-value | p-value | Decision on Significance | Decision on Ho
Category 1 | | | | |
Category 2 | | | | |
Table XXX.1
Name of the Table (this is for inferential objectives: ANOVA)
Variable | Mean | F-value | p-value | Decision on Significance | Decision on Ho
Category 1 | | | | |
Category 2 | | | | |
Category 3 | | | | |
Table XXX.2
Name of the Table (this is for inferential objectives: Correlation)
Independent Variable | Dependent Variable | r | p-value | Decision on Significance | Decision on Ho
After the table, report and summarize the results from the table. Report the
highest and lowest values only, including their descriptions. Then, interpret the
results using the interpretation table you include in 2.3. This part is only one
paragraph.
CHAPTER 5
2 single spaces
2 single spaces
This is where you will place the introductory paragraph for Chapter 5. This
is where you will place the introductory paragraph for Chapter 5. This is where you
will place the introductory paragraph for Chapter 5. This is where you will place the
introductory paragraph for Chapter 5.
4.1 Conclusions
Review the research problem, the purpose of the study, and the objectives.
Also, provide the major results for each research question. This should be in one
paragraph only. Review the research problem, the purpose of the study, and the
objectives. Also, provide the major results for each research question. This should
be in one paragraph only. Review the research problem, the purpose of the study,
and the objectives. Also, provide the major results for each research question. This
should be in one paragraph only.
4.2 Limitations
Provide and briefly discuss the study's limitations: what was not addressed,
what are the cautions in interpreting the paper, etc. This is also in one paragraph
only. Provide and briefly discuss the study's limitations: what were not addressed,
the cautions in interpreting the paper, etc. This is also in one paragraph only.
Provide and briefly discuss the study's limitations: what was not addressed, the
cautions in interpreting the paper, etc. This is also in one paragraph only. Provide
and briefly discuss the study's limitations: what was not addressed, the cautions in
interpreting the paper, etc. This is also in one paragraph only.
4.3 Recommendations
REFERENCES

Malimas, M.A.P., Carreon, J.A.D., & Peña, N.W. (2018). Critical discourse analysis of Filipino women politicians' campaign speeches. Asian EFL Journal, 20, 386-403.
Sani, I., Hayati Abdullah, M., Abdullah, F. S., & Ali, A. M. (2012). Political cartoons
as a vehicle of setting social agenda: The newspaper example. Asian Social
Science, 8(6), 156–164. https://doi.org/10.5539/ass.v8n6p156
APPENDIX B
Permission Letter
(insert the sample signed letter here; make sure to observe anonymity and confidentiality of
identities)
APPENDIX D
Copyright Transfer Agreement
(insert the sample signed letter here; make sure to observe anonymity and confidentiality of
identities)
APPENDIX E
Letter of Informed Consent
(insert the sample signed letter here; make sure to observe anonymity and confidentiality of
identities)
APPENDIX H
Validation Sheets
Validation Results
Turnitin Result
PERSONAL INFORMATION
(Insert your photo here: white background, bust shot only, formal/business attire)
Name:
Birthday:
Age:
Grade and Section:
Strand:
Address:
Contact Number/s:
Email Address:
EDUCATIONAL BACKGROUND
AWARDS/ACHIEVEMENTS
Skill/s:
Talents:
Hobbies: