
International Journal of Innovative Technology and Exploring Engineering (IJITEE)

ISSN: 2278-3075 (Online), Volume-13 Issue-2, January 2024

Computer Vision Integrated Website


C S S Krishna Kaushik, Prathit Panda, P S S Asrith, M Patrick Rozario, Ayain John

Abstract: Computer vision is an integral part of artificial intelligence that empowers machines to perceive the world similarly to human vision. Despite its extensive evolution, widespread awareness of its potential remains limited. The goal of the "Computer Vision Integrated Website" paper is to enhance awareness and exhibit the capabilities of computer vision. By creating an accessible platform featuring various computer vision models, the authors aim to captivate audiences and drive growth in the field. The paper seeks to illustrate how computers interpret visual information by integrating user-friendly computer vision models into a website. Through practical demonstrations like emotion detection and pose estimation, the authors intend to showcase the potential of computer vision in everyday scenarios. Ultimately, the authors strive to narrow the knowledge gap between technical advancements in computer vision and public understanding, fostering curiosity and encouraging broader interest in the technology.

Keywords: Computer Vision, Pose Estimation, Emotion Detection.

Manuscript received on 21 December 2023 | Revised Manuscript received on 31 December 2023 | Manuscript Accepted on 15 January 2024 | Manuscript published on 30 January 2024.

*Correspondence Author(s)
C S S Krishna Kaushik, Department of Artificial Intelligence and Machine Learning, Dayananda Sagar University, Bangalore (Karnataka), India. E-mail: [Link]@[Link]
Prathit Panda, Department of Artificial Intelligence and Machine Learning, Dayananda Sagar University, Bangalore (Karnataka), India. E-mail: prathitpanda2003@[Link]
P S S Asrith, Department of Artificial Intelligence and Machine Learning, Dayananda Sagar University, Bangalore (Karnataka), India. E-mail: [Link]@[Link]
M Patrick Rozario, Department of Artificial Intelligence and Machine Learning, Dayananda Sagar University, Bangalore (Karnataka), India. E-mail: pat222rick@[Link]
Prof. Ayain John*, Department of Artificial Intelligence and Machine Learning, Dayananda Sagar University, Bangalore (Karnataka), India. E-mail: ayainjohn@[Link], ORCID ID: 0000-0002-6058-1228

I. INTRODUCTION

The paper mainly focuses on building a basic model for the sole purpose of user interaction and awareness of computer vision. It has plenty of future prospects, as the number of computer vision applications is always increasing. The first models to be integrated are the pose estimation and emotion detection models. The website, the main platform that serves as access to these models, was not the easiest part to build: integrating computer vision models requires a considerable amount of time, and running them takes even longer than expected. Even though the core priority of the paper was a success, optimization is required for smoother running.

The study by Boris Knyazev, Roman Shvetsov, Natalia Efremova, and Artem Kuharenko [8][12] exemplifies this evolution in their paper titled "Leveraging large face recognition data for emotion classification." Their work underscored the fusion of face recognition and audio features, surpassing benchmarks and advocating for the pivotal role of extensive datasets in fortifying accuracy and exploring novel emotion scales. Another significant point emerges from the evaluation of Support Vector Machine (SVM) kernels in emotion recognition, as evidenced by the study conducted by Ibrahim A. Adeyanju, Elijah O. Omidiora, and Omobolaji F. Oyedokun [10]. Their paper, "Performance Evaluation of Different Support Vector Machine Kernels for Face Emotion Recognition," identified the Quadratic function kernel as the most accurate, albeit revealing computational time trends that were inconclusive despite the accuracy improvements with larger image feature dimensions.

Facial feature extraction methodologies have played a pivotal role in shaping advancements in emotion recognition [7]. The comprehensive survey by Viha Upadhyay and Prof. Devangi Kotak [7], titled "A Review on Different Facial Feature Extraction Methods for Face Emotions Recognition System," highlighted the efficacy of geometry-based and appearance-based techniques, achieving remarkable accuracy. Conversely, the hybrid approach introduced by Maryam Imani and Gholam Ali Montazer in their paper "GLCM Features and Fuzzy Nearest Neighbor Classifier for Emotion Recognition from Face" surpassed existing models, advocating for the superiority of their combined methodology with an outstanding average recognition rate. Parallel explorations ventured into multi-view human pose estimation and activity recognition. Michael B. Holte, Cuong Tran, Mohan M. Trivedi, and Thomas B. Moeslund highlighted model-based pose estimation techniques and associated challenges in their paper titled "Human Pose Estimation and Activity Recognition from Multi-View Videos." Simultaneously, Guyue Zhang, Jun Liu, Hengduo Li, Yan Qiu Chen, and Larry S. Davis proposed an innovative fusion method for "Joint Human Detection and Head Pose Estimation via Multi-Stream Networks for RGB-D Videos," showcasing state-of-the-art performance through integrated data streams. Advancements in 3D human pose estimation were underscored by Jinbao Wang, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He, and Ling Shao [6][13] in their comprehensive review titled "Deep 3D human pose estimation." Their paper emphasized the significance of deep learning methodologies while acknowledging the challenges in real-world scenarios and multi-person cases.

© The Authors. Published by Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license [Link]

Retrieval Number: 100.1/ijitee.B978313020124 | DOI: 10.35940/ijitee.B9783.13020124 | Journal Website: [Link]
Collectively, these studies delineate diverse methodologies in emotion recognition from facial cues, emphasizing technological advancements while advocating for robust, scalable, and efficient approaches to fortify accuracy in practical applications.

Though these studies are very helpful for future prospects, this paper kickstarts the new concept of integrating multiple computer vision models. The references can be taken into account for comparing accuracies, to show how the basic model performs.

II. LITERATURE SURVEY

Emotion recognition from facial expressions has evolved significantly, shaped by innovative methodologies and comprehensive dataset exploration. The paper by Boris Knyazev, Roman Shvetsov, Natalia Efremova, and Artem Kuharenko [4][11], titled "Leveraging large face recognition data for emotion classification," showcased the fusion of face recognition and audio features, surpassing benchmarks. The study emphasized the crucial role of extensive datasets over intricate methodologies, stressing the need for enhanced data to fortify accuracy and explore novel emotion scales.

Another study, conducted by Ibrahim A. Adeyanju, Elijah O. Omidiora, and Omobolaji F. Oyedokun [6] under the title "Performance Evaluation of Different Support Vector Machine Kernels for Face Emotion Recognition," evaluated SVM kernels, pinpointing the Quadratic function kernel for superior accuracy. However, the study revealed inconclusive trends in computational time despite accuracy improvements with larger image feature dimensions.

Diverse methodologies in facial feature extraction have been pivotal in shaping advancements. Viha Upadhyay and Prof. Devangi Kotak [3] presented "A Review on Different Facial Feature Extraction Methods for Face Emotions Recognition System," extensively surveying geometry-based and appearance-based techniques, achieving an impressive 88.9% accuracy. Conversely, the paper authored by Maryam Imani and Gholam Ali Montazer [5], titled "GLCM Features and Fuzzy Nearest Neighbor Classifier for Emotion Recognition from Face," introduced a hybrid approach surpassing existing models, advocating for the superiority of their combined methodology with an outstanding average recognition rate.

Parallel explorations delved into multi-view human pose estimation and activity recognition. Michael B. Holte, Cuong Tran, Mohan M. Trivedi, and Thomas B. Moeslund [7], in their paper titled "Human Pose Estimation and Activity Recognition from Multi-View Videos: Comparative Explorations of Recent Developments," highlighted model-based pose estimation techniques and multi-level pose estimation challenges. Simultaneously, Guyue Zhang, Jun Liu, Hengduo Li, Yan Qiu Chen, and Larry S. Davis [8], through "Joint Human Detection and Head Pose Estimation via Multi-Stream Networks for RGB-D Videos," proposed an innovative fusion method, demonstrating state-of-the-art performance by integrating appearance, shape, and motion data.

Liangchen Song, Gang Yu, Junsong Yuan, and Zicheng Liu [1][9][10] surveyed various deep learning models that use top-down and bottom-up approaches. Top-down approaches are the straightforward extension of single-person pose estimation methods, since the first step is to detect and crop each person out and then apply single-person pose estimation algorithms. Bottom-up approaches predict all the body parts first and then assemble the parts to infer full body poses. Jinbao Wang, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He, and Ling Shao [2] outlined recent advancements in 3D human pose estimation in their paper "Deep 3D human pose estimation: A review," emphasizing the significance of deep learning while acknowledging challenges in real-world scenarios and multi-person cases. Collectively, these studies underscore diverse methodologies in emotion recognition from facial cues, emphasizing technological strides while advocating for robust, scalable, and efficient approaches to fortify accuracy in practical applications.

III. METHODOLOGY

The steps taken to achieve the core working model are discussed in order, to avoid confusion over implementing the computer vision layers:

Fig. 3.1: Methodology Flow

A. Model Selection
Pose Estimation and Emotion Detection were the models chosen for this paper. The selection was divided among team members for more efficient time management. Various models, including CNN, MobileNet, YOLOv8, and MediaPipe, were explored. After testing all the models, the CNN model was chosen for emotion detection, and the MediaPipe library was selected for pose estimation, as these demonstrated the best accuracy and performance for the detections.

B. Datasets Used
The Face Emotion Recognition Dataset from Kaggle was employed for emotion detection. The dataset comprises images depicting 7 different face expressions or emotions exhibited by various individuals, including anger, contempt, disgust, fear, happiness, sadness, and surprise. Each image in the dataset is labeled with the corresponding face emotion.
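Although the paper does not list code, the augmentation and normalization step it describes for this dataset is typically done with Keras' ImageDataGenerator. The sketch below uses synthetic 48x48 grayscale arrays as a stand-in for the Kaggle images (real code would read the labeled emotion folders with flow_from_directory); the shapes and augmentation parameters are illustrative assumptions, not values from the paper.

```python
# Sketch: augmentation + normalization with Keras' ImageDataGenerator,
# shown on synthetic 48x48 grayscale images standing in for the Kaggle
# face-emotion dataset.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

# Stand-in for the dataset: 32 random images, 7 emotion classes.
images = np.random.randint(0, 256, size=(32, 48, 48, 1)).astype("float32")
labels = to_categorical(np.random.randint(0, 7, size=32), num_classes=7)

datagen = ImageDataGenerator(
    rescale=1.0 / 255,     # normalize pixel values to [0, 1]
    rotation_range=10,     # light augmentation
    horizontal_flip=True,
)

# One augmented, normalized training batch.
batch_x, batch_y = next(datagen.flow(images, labels, batch_size=8))
print(batch_x.shape, batch_y.shape)
```

With real data, `datagen.flow_from_directory("path/to/train", target_size=(48, 48), color_mode="grayscale", class_mode="categorical")` would infer the seven class labels from the folder names.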

C. Pose Estimation
To detect and track the movements of different body parts in images, as well as in 2D and 3D motion videos, the pose estimation model enables accurate identification of specific actions, whether the body is in motion or stationary. By understanding the position and spatial orientation of objects and bodies in a given environment, the model proves particularly useful in the fields of robotics and augmented reality.

Fig. 3.2: Pose Estimation Design

One specific application the authors have developed is a pose estimation model designed to count repetitions of bicep curls and determine the position of the hand during the curl (either 'up' or 'down'). This model works seamlessly in both video recordings and real-time/live feeds. Utilizing the MediaPipe computer vision model, pre-trained to detect and track body movements, the authors are able to achieve precise predictions at a rate of 30 frames per second.

The MediaPipe library, created by Google, provides access to various pre-trained machine learning models for tasks like object detection and pose estimation. These models can be customized based on developers' needs. The library was used to access the pose estimation model, which assigns specific landmark labels to different body parts to estimate body posture. There are a total of 33 landmarks throughout the human body that can be used to individually separate and identify the body parts. The OpenCV library was also employed to access the webcam and relay the live feed for real-time pose estimation on a moving body. In contrast to the present model, the approach outlined by [7] involves initially focusing on methods exclusively utilizing 2D multi-view image data before delving into comprehensive 3D-based techniques.

D. Emotion Detection

Fig. 3.3: Face Emotion Detection Design

The libraries imported for image processing, deep learning, and data visualization include NumPy, Pandas, cv2 (OpenCV), Matplotlib, Seaborn, TensorFlow, Keras, PIL (Python Imaging Library), zipfile, ImageDataGenerator, ModelCheckpoint, EarlyStopping, ReduceLROnPlateau, and IPython.display. Data collection involves acquiring diverse image sets, followed by preprocessing, including augmentation and normalization techniques using generators. Training the model incorporates optimizing performance and preventing overfitting through the strategic callbacks used: ModelCheckpoint, EarlyStopping, and ReduceLROnPlateau. Evaluation of the model's performance on test data was used to measure its predictive capacity. Moreover, the model demonstrates image prediction capabilities, loading a trained model for emotion prediction from preprocessed images. It also includes live camera feed capture and face detection via OpenCV, presenting real-time image analysis. In their research, [7] utilized the standard emotion categories — smile, sad, surprise, anger, fear, disgust, and neutral — to classify emotions in the final step of their Facial Emotion Recognition (FER) system, whereas this model uses seven classes: happy, sad, angry, neutral, disgust, fear, and surprise.

E. Frontend/UI Development
The website was developed using the [Link] framework for the frontend, employing HTML, CSS, and JavaScript. For the UI, Bootstrap CSS was utilized for the different styled components. The website features a welcome page that redirects to the options page, where users are prompted to choose the computer vision model they want to try. The two options provided are pose estimation and face emotion detection. Depending on the user's choice, they are redirected to the chosen computer vision model's page, where they can try out models that perform real-time detections using the webcam feed.

F. Backend Implementation
In order to handle user interactions and routing between different pages of the website, the authors implemented Python Flask in the backend. The Flask code is responsible for hosting the website backend on a local server, enabling real-time routing.

IV. ARCHITECTURE

The website implements live feed analysis with a focus on pose estimation and emotion detection using computer vision models:

Fig. 4.1: Architectural Design
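Although the paper's own implementation is not listed, the curl-counting behaviour described in Section III-C (landmark tracking plus an 'up'/'down' hand state) can be sketched roughly as follows. MediaPipe's pose model returns 33 landmarks per frame (the left shoulder, elbow, and wrist are landmarks 11, 13, and 15); here the landmark coordinates are simulated, and the 160/30 degree thresholds and class names are illustrative assumptions rather than the paper's actual values.

```python
# Sketch of the 'up'/'down' curl-counting logic described in Section III-C.
# In practice the (x, y) inputs would come from MediaPipe pose landmarks
# 11 (left shoulder), 13 (left elbow), and 15 (left wrist).
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

class CurlCounter:
    """Counts bicep-curl reps from shoulder/elbow/wrist positions."""
    def __init__(self):
        self.stage = "down"
        self.reps = 0

    def update(self, shoulder, elbow, wrist):
        angle = joint_angle(shoulder, elbow, wrist)
        if angle > 160:                 # arm extended
            self.stage = "down"
        elif angle < 30 and self.stage == "down":
            self.stage = "up"           # arm curled: one full rep completed
            self.reps += 1
        return self.stage, self.reps

# Two simulated frames: arm fully extended, then fully curled.
counter = CurlCounter()
counter.update((0, 0), (0, 1), (0, 2))                      # straight arm
stage, reps = counter.update((0, 0), (0, 1), (0.05, 0.05))  # curled
print(stage, reps)  # -> up 1
```

The same per-frame update would run on each webcam frame at the 30 fps rate mentioned above, with the stage and rep count drawn onto the frame via OpenCV.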

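The Flask routing described in Section III-F might look like the following minimal sketch. The route paths and page bodies are illustrative assumptions, not the paper's actual code: the real site would render HTML templates (render_template) and serve them with app.run() on a local development server.

```python
# Illustrative sketch of the Flask backend routing (Section III-F).
from flask import Flask

app = Flask(__name__)

@app.route("/")
def welcome():
    # Welcome page that links on to the options page.
    return "Welcome to the Computer Vision Integrated Website"

@app.route("/options")
def options():
    # Users pick which model page to visit.
    return "Choose a model: /pose or /emotion"

@app.route("/pose")
def pose():
    # Page hosting the webcam-based pose estimation demo.
    return "Pose estimation demo"

@app.route("/emotion")
def emotion():
    # Page hosting the webcam-based face emotion detection demo.
    return "Emotion detection demo"

# In development the backend is hosted locally with app.run(debug=True);
# here the routes are exercised with Flask's built-in test client instead.
client = app.test_client()
print(client.get("/options").data.decode())  # -> Choose a model: /pose or /emotion
```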
A. Frontend/UI Layer
▪ Live Camera Feed: Develop a real-time video feed interface using web technologies (HTML5, JavaScript) to capture live video from the user's device camera.
▪ UI Elements for Results: Create sections or overlays to display the analyzed results, such as detected poses and recognized emotions.

B. Backend/API Layer
▪ Web Server and WebSocket Integration: Employ a backend (Flask) using WebSocket communication to handle the continuous live video stream from the frontend to the backend for analysis.
▪ Real-time Image Processing: Implement real-time image preprocessing on the server, optimizing the live feed frames for pose estimation and emotion detection.
▪ Pose Estimation Model Integration: Connect the backend with the pose estimation model's API to continuously analyze the live video frames, extracting and annotating human poses in real time.
▪ Emotion Detection Model Integration: Similarly, interface the backend with the emotion detection model to detect emotions portrayed by individuals within the live video stream.

C. Pose Estimation Model Layer
Model Server for Pose Estimation: Host the pose estimation model on a dedicated server or cloud platform, ensuring high-speed, real-time processing of video frames.

D. Emotion Detection Model Layer
Model Server for Emotion Detection: Host the emotion detection model on a dedicated server or cloud platform to swiftly process and recognize emotions from the video frames.

E. Feedback and Visualization
Real-time Feedback to UI: Send the analyzed results back to the frontend/UI via WebSocket for immediate visualization and user interaction.

F. Security and Authentication
Secure Data Streaming: Implement security protocols to secure the live video stream between the frontend and backend, ensuring data privacy during transmission.

V. INTEGRATION

The integration of live feed analysis for pose estimation and emotion detection within a website involved orchestrating a seamless synergy between the frontend and backend layers. Through the frontend/UI, a live camera feed interface is crafted, enabling users to stream video directly from their devices. This continuous video feed is transmitted to the backend via WebSocket communication, where real-time image preprocessing optimizes frames for efficient analysis. Here, specialized computer vision models for pose estimation, such as OpenPose, and emotion detection models, like CNN-based classifiers, are integrated. These models, hosted on dedicated servers or cloud platforms, work harmoniously to extract intricate human poses and discern emotional cues from the live video stream. The backend then promptly relays the analyzed results back to the frontend, empowering users with immediate insights into detected poses and recognized emotions, thereby providing a fluid and responsive user experience.

VI. RESULT AND ANALYSIS

A. Pose Estimation
After the user selects the Pose Detection model, the website asks permission for the camera; once permission is granted, the user can press the start button and make use of the Pose Detection model.

Fig. 6.1: Successfully Detected Body Pose

As seen in the above image, the Pose Estimation model detects the person's body landmarks and counts the reps.

B. Emotion Detection
The trained model exhibited promising performance in emotion prediction from images. During training, the model achieved a validation accuracy of approximately 80% after 30 epochs, indicating good generalization to unseen data. Evaluation on the test dataset further confirmed the model's effectiveness, yielding a test accuracy of 78%. The precision, recall, and F1-score for each emotion class were computed, showing balanced performance across the various emotions. Notably, the model demonstrated robustness in recognizing basic emotions like happiness, sadness, and anger. Moreover, the live camera feed integration successfully captured real-time images, where the model accurately predicted emotions within captured frames. Face detection capabilities using OpenCV further enhanced the model's applicability, enabling identification and prediction of emotions from detected faces. Continuous improvements, such as fine-tuning and expanding the dataset for more nuanced emotions, can enhance the model's accuracy and applicability in diverse scenarios. Overall, the model showcased promising results, offering a solid foundation for emotion prediction in images, particularly in real-time applications and human-computer interaction scenarios. In the study by [6], the analysis involved evaluating four SVM kernels for classifying seven face emotions, with the Quadratic function kernel demonstrating superior performance in accuracy compared to the other three kernels.
Additionally, they observed an increase in average accuracy correlating with higher image feature dimensions. However, it is worth noting that the current model being presented generally outperforms many existing approaches in terms of overall performance. The figures showcase the processed datasets.

Fig. 6.2: Dataset Showing the Face Emotions (Surprised and Happy)

The model was trained for thirty epochs; the accuracy score was not the best, but the basic emotions were predicted without a problem. Once the user selects the Face Emotion Detection model, the web page asks permission to use the camera. The user should grant permission and press the start button for the model to start identifying the emotions.

Fig. 6.3: Showing the Face Emotion

As seen in the above image, the Face Emotion model detects the person's emotion as happy.

VII. CONCLUSION AND FUTURE ENHANCEMENT

The paper holds the potential to expand and grow significantly, given the abundance of readily available computer vision models. The goals were achieved by implementing two models, but the paper can be expanded by integrating additional models, exploring new techniques, enhancing APIs, discovering more efficient libraries and datasets, and even switching to a cloud platform like Amazon AWS for faster and more dependable output. Optimizing the API to boost the speed of data processing would ensure that the paper performs optimally, allowing users to receive results in real time. Overall, there are countless opportunities to enhance the paper and make it even more efficient and powerful. By continuing to explore new, innovative approaches and different computer vision models, we can take the paper to a completely new level.

DECLARATION STATEMENT

Funding: No, we did not receive any funding.
Conflicts of Interest: No conflicts of interest to the best of our knowledge.
Ethical Approval and Consent to Participate: No, the article does not require ethical approval and consent to participate with evidence.
Availability of Data and Material/Data Access Statement: Not relevant.
Authors Contributions: All authors have equal participation in this article.

REFERENCES
1. Liangchen Song, Gang Yu, Junsong Yuan and Zicheng Liu / Human Pose Estimation and Its Application to Action Recognition: A Survey - Journal of Visual Communication and Image Representation - (2021)
2. Jinbao Wang, Shujie Tan, Xiantong Zhen, Shuo Xu, Feng Zheng, Zhenyu He and Ling Shao / Deep 3D human pose estimation: A review - Computer Vision and Image Understanding, Volume 210 - (September 2021) [Link]
3. Viha Upadhyay and Prof. Devangi Kotak / A Review on Different Facial Feature Extraction Methods for Face Emotions Recognition System - Proceedings of the Fourth International Conference on Inventive Systems and Control (ICISC 2020) [Link]
4. Boris Knyazev, Roman Shvetsov, Natalia Efremova and Artem Kuharenko / Leveraging large face recognition data for emotion classification - 13th IEEE International Conference on Automatic Face & Gesture Recognition - (2018)
5. Maryam Imani and Gholam Ali Montazer / GLCM Features and Fuzzy Nearest Neighbor Classifier for Emotion Recognition from Face - 7th International Conference on Computer and Knowledge Engineering (ICCKE 2017) - (October 26-27, 2017) [Link]
6. Ibrahim A. Adeyanju, Elijah O. Omidiora and Omobolaji F. Oyedokun / Performance Evaluation of Different Support Vector Machine Kernels for Face Emotion Recognition - SAI Intelligent Systems Conference - (November 10-11, 2015)
7. Michael B. Holte, Cuong Tran, Mohan M. Trivedi and Thomas B. Moeslund / Human Pose Estimation and Activity Recognition From Multi-View Videos: Comparative Explorations of Recent Developments - IEEE Journal of Selected Topics in Signal Processing, Vol. 6, No. 5 - (September 2012) [Link]
8. Guyue Zhang, Jun Liu, Hengduo Li, Yan Qiu Chen and Larry S. Davis / Joint Human Detection and Head Pose Estimation via Multi-Stream Networks for RGB-D Videos - IEEE Signal Processing Letters
9. Bhushan, Dr. U. (2022). Review of Literature on the Media Uses and Gratifications Derived by Students of Higher Education in India. In Indian Journal of Mass Communication and Journalism (Vol. 2, Issue 1, pp. 1–5). [Link]
10. Storozhenko, L., & Petkun, S. (2019). Electronic Communications as an Element of Management. In International Journal of Innovative Technology and Exploring Engineering (Vol. 8, Issue 11, pp. 459–466). [Link]
11. Dogra, A., & Dr. Taqdir. (2019). Detecting Intrusion with High Accuracy: using Hybrid K-Multi Layer Perceptron. In International Journal of Recent Technology and Engineering (IJRTE) (Vol. 8, Issue 3, pp. 4994–4999).
12. Karanje, P., & Eklarker, Dr. R. (2019). Efficient Multipath Routing to Increase QoS by Link Estimation and Minimum Interference path in MANET'S. In International Journal of Engineering and Advanced Technology (Vol. 9, Issue 2, pp. 4806–4811). [Link]

13. Proença, M. da C. (2022). On the Need of Quick Monitoring for Wildfire Response from City Halls. In Indian Journal of Image Processing and Recognition (Vol. 2, Issue 3, pp. 1–4). [Link]

AUTHORS PROFILE

Patrick, currently pursuing a specialized BTech in Artificial Intelligence and Machine Learning, is driven by a profound passion for the convergence of technology and human potential. He finds immense fascination in the eloquence of analogies when articulating intricate technological concepts, alongside nurturing a penchant for creative crafting pursuits. Engaged actively in diverse learning events, he is motivated by the belief that innovation and continuous learning serve as pivotal keys to unlocking technology's vast potential. His journey involves exploring technological frontiers and seeking innovative solutions dedicated to the betterment of society.

Pulya Satya Sri Rama Asrith, currently pursuing a Bachelor's degree in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning. Throughout my academic journey, I have engaged in numerous projects encompassing a wide array of machine learning algorithms, ranging from Linear Regression and Naïve Bayes to Convolutional Neural Networks. The hands-on experience and research involvement with these algorithms have fueled my enthusiasm for delving deeper into the realm of Computer Vision. Collaborating with my peers, I actively participated in the development of a project titled "Computer Vision Integrated Website." I played a crucial role in implementing a Facial Emotion Recognition model within this project. To ensure optimal functionality, I conducted an extensive comparison of various Computer Vision algorithms, ultimately selecting the one that demonstrated the most promising results for our specific application. This immersive experience has not only enhanced my comprehension of Computer Vision but has also intensified my passion for the field, motivating me to explore and contribute more extensively to this dynamic domain.

Kaushik, a dedicated and enthusiastic third-year student pursuing a [Link] degree in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning at Dayananda Sagar University in Bangalore. At the age of 20, I have already immersed myself in the fields of Machine Learning and Full Stack Web Development. As a Machine Learning Engineer and Full Stack Web Developer, I bring a unique blend of theoretical knowledge and practical skills to the table. My academic journey has equipped me with a strong foundation in cutting-edge technologies, enabling me to contribute meaningfully to the intersection of AI and web development. I am passionate about exploring the vast potential of artificial intelligence to solve real-world challenges. My experiences in machine learning projects and web development have honed my problem-solving abilities and fostered a keen interest in creating innovative solutions.

Prathit Panda, presently enrolled in the pursuit of a Bachelor's degree in Computer Science Engineering with a focus on Artificial Intelligence and Machine Learning. Throughout my academic journey, I have undertaken several projects involving diverse machine learning algorithms, including Linear Regression, Naïve Bayes, and Convolutional Neural Networks. Motivated by my research and practical experience with these algorithms, I developed a keen interest in specializing further in Computer Vision. In collaboration with my classmates, I actively contributed to the creation of a project titled "Computer Vision Integrated Website." Within this project, I played a pivotal role in implementing a Pose Detection model. To ensure optimal performance, I conducted a thorough comparison of various Computer Vision algorithms and selected the one that yielded the best results for our specific application. This experience has deepened my understanding and passion for the field of Computer Vision, driving my desire to explore and contribute further to this exciting domain.

Prof. Ayain John is currently working as an Assistant Professor in the Department of Computer Science & Engineering (AIML) at DSU. Prior to this, she served as an Assistant Professor in the Department of Information Science and Engineering at AMC Engineering College. She completed her undergraduate and postgraduate studies at Anna University Chennai. Ayain is a dedicated and passionate professional with one year of extensive experience in Quality Analysis and Quality Engineering, and 16 years of experience in academia. She has presented and published several papers on Machine Learning and Deep Learning in peer-reviewed journals and conferences. Ayain received the Selfless Service Award in 2023, the Teaching Excellence Award in 2019, and the Best Teacher Award in 2006. She is currently pursuing research on Cognitive Machine Learning at Amrita University, with a focus on machine learning, deep learning, and computer vision.

Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of the Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP)/journal and/or the editor(s). The Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP) and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
