An
Internship Project Report
On
BHAVI AI Technologies Ltd Internship
In fulfilment of the requirements for the award of the
[Link]. Degree
in
Information Technology
2024-25
Under the Supervision of:
Mr. Arun Kumar Takuli
Assistant Professor

Submitted by:
Sidharth Gill
2101920130173
Department of Information Technology
G. L. Bajaj Institute of Technology and Management
Plot No 2, Knowledge Park-III, Greater Noida-201306
Department of Information Technology
Certificate
This is to certify that the Project Report entitled “BHAVI AI Technologies Ltd
Internship”, submitted by Sidharth Gill in fulfillment of the requirements for the
award of the degree of B. Tech. in Information Technology of Dr. A.P.J. Abdul Kalam
Technical University, is a record of the candidate's own work carried out by him under
my/our supervision. The matter embodied in this report is original and has not been
submitted for the award of any other degree.
Date:
Mr. Arun Kumar Takuli Dr. P C Vashist
(Assistant Professor) (Head of Department)
Department of Information Technology
Declaration
I hereby declare that the project work presented in this report entitled “Bhavi AI
Technologies Ltd Internship”, submitted in fulfillment of the requirements for the award
of the degree of Bachelor of Technology in Information Technology to Dr. A.P.J.
Abdul Kalam Technical University, Uttar Pradesh, is an authentic record of my own
work carried out in the Department of Information Technology, G.L. Bajaj Institute of
Technology & Management, Greater Noida. It contains no material previously published
or written by another person except where due acknowledgement has been made within
the text. The project work reported here has not been submitted by me for the award of
any other degree or certification.
Signature:
Name: Sidharth Gill
Roll No: 2101920130173
Date:
Place: Greater Noida
Department of Information Technology
Acknowledgement
I would like to express my sincere thanks to my project supervisor, Mr. Arun
Kumar Takuli, and our Head of Department, Dr. P. C. Vashist, for their invaluable
guidance and suggestions. This internship helped me understand the concepts of
machine learning, backend development, and DevOps. It also enriched my knowledge
and gave me the experience of working in a team on a live project. I would further
like to express my gratitude to Mr. Arun Kumar Takuli for his help in the preparation
and overall review of the project.
Lastly, I would like to thank all the faculty members for providing their valuable time
whenever needed and for helping me carry on with my internship.
Abstract
During my tenure at Bhavi AI Technologies Pvt Ltd., I gained hands-on experience and
contributed to various domains including Machine Learning (ML), DevOps, cloud
infrastructure, backend development, and system automation. This report highlights my
contributions, such as designing predictive models, automating data ingestion, deploying
applications, managing cloud infrastructure, and optimizing backend systems. The work ranged
from stock market analysis to implementing efficient CI/CD pipelines, enabling smoother
deployment of applications and workflows.
I successfully integrated backend APIs with [Link] frontends, enabling seamless interaction
for data retrieval and processing. My work with databases like Azure SQL and Cosmos DB
ensured real-time data management and efficient storage solutions. Additionally, I streamlined
workflows through automation, utilizing tools like Jenkins, Terraform, and Docker to create
consistent, reproducible environments.
Through my work on key projects, such as automated trading algorithms and real-time market
data retrieval systems, I contributed to enhancing the company's decision-making capabilities
and operational efficiency. These projects required a strong understanding of both backend and
machine learning systems, and the ability to deploy solutions in a cloud environment.
My contributions involved creating and deploying predictive models, automating workflows,
and ensuring efficient and scalable infrastructure management. I also facilitated the integration
of various cloud services, streamlined backend operations, deployed applications to
production, and implemented Redis-based solutions in Docker environments for caching and
real-time data processing. This report details the technologies used, the challenges faced, and
the key projects accomplished during this period, and summarizes my key contributions and
achievements.
TABLE OF CONTENTS
1. Introduction
2. Machine Learning and Data Processing
2.1 Stock Market Analysis and Forecasting
2.2 Automation of Data Ingestion
2.3 Advanced Data Analysis Techniques
2.4 Integration of ML Models into Production
3. DevOps and Cloud Deployment
3.1 CI/CD Pipelines
3.2 AWS Cloud Infrastructure
3.3 Terraform Scripts
4. Enabling Monitoring and Logging Solutions
4.1 System and Application Monitoring
4.2 CloudWatch Agent Configuration
4.3 Alerting and Notifications
4.4 Centralized Logging with ELK Stack
5. Backend Development and API Integration
5.1 Flask API Development
5.2 API Testing and Optimization
5.3 Integration with Frontend Systems
5.4 Advanced Security Features
6. Code Snippets
6.1 Dockerfile Setup for [Link] Application
6.2 Build Code and Publish on AWS S3
6.3 Reverse Proxy Setup with Express and HTTP Proxy
7. Conclusion
8. References
1. Introduction
During my enriching tenure at Bhavi AI Technologies Pvt Ltd., I delved into a comprehensive
range of responsibilities encompassing Machine Learning (ML), DevOps, cloud infrastructure,
backend development, and system automation. This internship provided a platform to integrate
theoretical knowledge with practical expertise, enabling me to contribute meaningfully to high-
impact projects. My role was pivotal in designing scalable solutions, automating workflows,
and deploying production-ready applications that addressed complex business challenges.
Working in a dynamic and collaborative environment, I gained firsthand experience in
leveraging advanced technologies to develop innovative solutions. From creating predictive
models for stock market analysis to streamlining CI/CD pipelines and automating
infrastructure setup using Terraform, my responsibilities spanned a broad spectrum of tasks.
These experiences not only enriched my technical expertise but also fostered critical thinking
and adaptability in fast-paced development cycles.
Moreover, this internship enabled me to collaborate with multidisciplinary teams, where I
refined my communication and project management skills. I contributed to optimizing cloud
infrastructure with AWS, ensuring efficient resource allocation, and implementing robust
monitoring systems to maintain application reliability. Through rigorous testing and iterative
improvements, I ensured that the solutions delivered were efficient, secure, and aligned with
organizational goals.
By integrating cutting-edge tools such as Docker, Jenkins, and Flask, I played an instrumental
role in building and deploying production-grade applications. Additionally, my contributions
extended to backend development and API integrations, creating seamless interactions between
various system components. These projects underscored the importance of scalability and
performance in building enterprise-level solutions.
Through this immersive experience, I solidified my foundation in key technical domains while
embracing a mindset of continuous learning and innovation. My tenure at Bhavi AI
Technologies has been a transformative journey, equipping me with the skills and insights
necessary to excel in the ever-evolving landscape of technology and development.
2. Machine Learning and Data Processing
During my tenure at Bhavi AI Technologies Pvt. Ltd., I was deeply involved in the
development and optimization of machine learning models and data pipelines. My
responsibilities encompassed designing, implementing, and fine-tuning predictive algorithms
that addressed complex business challenges.
2.1 Stock Market Analysis and Forecasting
• Predictive Modelling: Designed and implemented advanced predictive models to
analyze and forecast stock market trends. Leveraged time-series data to generate
actionable insights for optimizing trading strategies.
• Feature Engineering: Conducted extensive feature engineering to extract relevant
indicators such as moving averages, Bollinger Bands, and the Relative Strength Index
(RSI), significantly improving the accuracy of predictions (see the sketch after this list).
• Technology Stack: Utilized powerful libraries and frameworks, including TensorFlow,
NumPy, and Pandas, for model creation and Matplotlib for visualizing intricate data
patterns.
• Optimization: Performed hyperparameter tuning and cross-validation to enhance
model robustness and ensure reliable performance across diverse datasets.
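A minimal sketch of the feature-engineering step referenced above is shown below. It assumes
the price history is already loaded into a pandas DataFrame with a close column; the 20-period
window and the exact indicator definitions are illustrative choices, not the production
configuration.

import pandas as pd

def add_indicators(df: pd.DataFrame, window: int = 20) -> pd.DataFrame:
    # Simple moving average and Bollinger Bands (mean +/- 2 rolling standard deviations).
    close = df["close"]
    df["sma"] = close.rolling(window).mean()
    rolling_std = close.rolling(window).std()
    df["bb_upper"] = df["sma"] + 2 * rolling_std
    df["bb_lower"] = df["sma"] - 2 * rolling_std

    # Relative Strength Index: ratio of average gains to average losses over the window.
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    df["rsi"] = 100 - (100 / (1 + gain / loss))
    return df

These derived columns can then be fed, alongside the raw prices, into the forecasting models
described above.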
2.2 Automation of Data Ingestion
• Automated Workflows: Developed and deployed automation scripts for ingesting
high-frequency stock market data using the Google Drive API, reducing manual
intervention and increasing data pipeline efficiency (see the sketch after this list).
• Data Preprocessing: Designed custom preprocessing pipelines to clean, normalize,
and transform raw data into structured formats suitable for machine learning
workflows.
• Real-time Integration: Integrated automated ingestion systems with downstream ML
pipelines to enable real-time analytics and decision-making.
• Error Handling: Implemented robust error-handling mechanisms to manage
anomalies in data ingestion and ensure system reliability.
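The automated ingestion referenced above is sketched below under stated assumptions:
authentication through a service-account key file, CSV dumps stored in a single Drive folder,
and downloads written to the local working directory. The folder ID, key path, and file format
are placeholders rather than the actual pipeline settings.

import io
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

SCOPES = ["https://www.googleapis.com/auth/drive.readonly"]
creds = service_account.Credentials.from_service_account_file("key.json", scopes=SCOPES)
drive = build("drive", "v3", credentials=creds)

def download_folder_csvs(folder_id: str, out_dir: str = ".") -> None:
    # List every CSV in the folder, then stream each one to a local file.
    query = f"'{folder_id}' in parents and mimeType='text/csv'"
    files = drive.files().list(q=query, fields="files(id, name)").execute().get("files", [])
    for f in files:
        request = drive.files().get_media(fileId=f["id"])
        with io.FileIO(f"{out_dir}/{f['name']}", "wb") as handle:
            downloader = MediaIoBaseDownload(handle, request)
            done = False
            while not done:
                _, done = downloader.next_chunk()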
2.3 Advanced Data Analysis Techniques
• Anomaly Detection: Built machine learning models to detect anomalies in stock
market data, flagging unusual patterns that could indicate market shifts or irregularities
(see the sketch after this list).
• Clustering and Classification: Applied clustering techniques to segment data into
meaningful groups and classification algorithms for predictive insights.
• Visualization: Created dashboards and interactive visualizations to present analysis
results to stakeholders, ensuring transparency and actionable insights.
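The anomaly-detection step referenced above can be illustrated with an Isolation Forest. Since
the report does not name the exact model, the estimator choice, the 1% contamination rate, and
the feature columns below are assumptions.

import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_anomalies(df: pd.DataFrame, feature_cols: list) -> pd.DataFrame:
    # fit_predict labels each row 1 (normal) or -1 (anomalous).
    model = IsolationForest(contamination=0.01, random_state=42)
    df["anomaly"] = model.fit_predict(df[feature_cols])
    return df[df["anomaly"] == -1]

# Example: flag unusual rows based on previously derived indicators.
# outliers = flag_anomalies(df, ["sma", "rsi", "bb_upper", "bb_lower"])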
2.4 Integration of ML Models into Production
• Model Deployment: Collaborated with the DevOps team to deploy machine learning
models into production environments, ensuring scalability and seamless operation.
• Performance Monitoring: Set up monitoring systems to evaluate the performance of
deployed models and retrained them periodically to adapt to new data trends.
• Cloud Integration: Leveraged AWS services, including S3 for data storage and
Lambda for triggering data processing workflows, to streamline operations.
By applying cutting-edge machine learning techniques and leveraging automation, I
contributed to building robust systems capable of deriving actionable insights from complex
datasets, thus driving key business decisions effectively.
3. DevOps and Cloud Deployment
3.1 CI/CD Pipelines
• Implemented CI/CD pipelines using Jenkins to automate the integration and
deployment of machine learning models.
• Containerized applications with Docker for platform-independent deployments,
ensuring consistency across environments.
• Configured Jenkins pipelines to automatically pull code changes from Git repositories
and deploy them to staging and production environments.
• Developed multi-stage Jenkins pipelines with steps for testing, building, and deploying,
reducing deployment errors and improving development velocity.
3.2 AWS Cloud Infrastructure
• Provisioned and managed AWS EC2 instances using Terraform to support scalable
infrastructure.
• Configured AWS services such as S3 for secure data storage, IAM roles for controlled
access, and CloudWatch for real-time monitoring of applications.
• Utilized Auto Scaling Groups to ensure high availability and performance of cloud-
hosted applications during peak usage.
• Designed secure VPCs (Virtual Private Clouds) with subnets, route tables, and internet
gateways to segregate and manage cloud resources effectively.
3.3 Terraform Scripts
• Automated the setup of EC2 instances and storage configurations using Terraform
scripts.
• Ensured infrastructure reproducibility and compliance with company standards by
maintaining version-controlled Terraform configurations.
• Wrote custom Terraform modules to standardize resource creation processes across
teams.
• Integrated Terraform with Jenkins to automate infrastructure provisioning as part of the
CI/CD pipeline, enabling seamless infrastructure updates alongside application
deployments.
4. Enabling Monitoring and Logging Solutions
4.1 System and Application Monitoring
Configured AWS CloudWatch to provide real-time monitoring of system performance and
application health. By deploying custom metrics, specific key performance indicators (KPIs)
such as API response times and error rates were tracked effectively. These metrics were
visualized using CloudWatch Dashboards, which offered actionable insights to stakeholders,
aiding in informed decision-making and operational improvements.
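As a sketch of how such custom metrics can be published, the snippet below uses boto3's
put_metric_data; the namespace and metric names are hypothetical stand-ins for the KPIs
mentioned above.

import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_api_metrics(endpoint: str, latency_ms: float, is_error: bool) -> None:
    # One latency sample and one error-count sample per request, tagged by endpoint.
    cloudwatch.put_metric_data(
        Namespace="BhaviAI/Backend",  # hypothetical namespace
        MetricData=[
            {"MetricName": "ApiResponseTime",
             "Dimensions": [{"Name": "Endpoint", "Value": endpoint}],
             "Unit": "Milliseconds", "Value": latency_ms},
            {"MetricName": "ApiErrors",
             "Dimensions": [{"Name": "Endpoint", "Value": endpoint}],
             "Unit": "Count", "Value": 1.0 if is_error else 0.0},
        ],
    )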
4.2 CloudWatch Agent Configuration
Installed and configured AWS CloudWatch agents with JSON-based configuration files to
enable detailed system metrics collection. These agents facilitated the centralization of logs
for custom applications, ensuring comprehensive troubleshooting and audit trails. Using
CloudWatch Logs Insights, queries were run on application logs to debug issues and analyze
usage patterns, streamlining maintenance efforts and enhancing system reliability.
4.3 Alerting and Notifications
Established robust alerting mechanisms in AWS CloudWatch to notify the team about system
anomalies in real time through email and SMS. Thresholds were configured for critical
metrics, such as CPU utilization and disk usage, to preemptively address potential
downtimes. Additionally, notifications were integrated with Slack channels, enabling rapid
team responses during incidents and fostering collaborative problem-solving.
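A representative alarm definition is sketched below with boto3. The instance ID, SNS topic,
and 80% CPU threshold are placeholders, since the actual thresholds and notification targets
were configured per environment.

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-utilization",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder instance
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=2,        # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical SNS topic
)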
4.4 Centralized Logging with ELK Stack
Deployed an ELK stack (Elasticsearch, Logstash, and Kibana) to centralize log management
and facilitate advanced log analysis. Logstash pipelines were configured to parse and ingest
logs from diverse sources, including application servers and AWS CloudWatch. Kibana
dashboards were developed to visualize log trends, empowering data-driven decision-making
and enabling proactive system management.
5. Backend Development and API Integration
5.1 Flask API Development
Flask-based RESTful APIs were designed and developed to enable seamless data
communication between systems. Endpoints such as /spot and /option were created to retrieve
and process live market data efficiently. Reliability was ensured through proper error handling,
validation mechanisms, and logging practices. Additionally, Flask APIs were integrated with
external data sources to support real-time updates, significantly enhancing decision-making
processes.
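A minimal sketch of these endpoints is shown below. Only the route names /spot and /option
come from the work described above; the query parameters, default symbol, and the
fetch_market_data helper are hypothetical placeholders for the live data source.

from flask import Flask, jsonify, request

app = Flask(__name__)

def fetch_market_data(kind: str, key: str) -> dict:
    # Hypothetical helper standing in for the live market-data integration.
    raise NotImplementedError

@app.route("/spot")
def spot():
    symbol = request.args.get("symbol", "NIFTY")   # assumed parameter and default
    try:
        return jsonify(fetch_market_data("spot", symbol))
    except Exception as exc:                       # basic error handling and logging
        app.logger.error("spot lookup failed: %s", exc)
        return jsonify({"error": "upstream data source unavailable"}), 502

@app.route("/option")
def option():
    symbol = request.args.get("symbol", "NIFTY")
    expiry = request.args.get("expiry")
    if not expiry:                                 # basic input validation
        return jsonify({"error": "expiry query parameter is required"}), 400
    return jsonify(fetch_market_data("option", f"{symbol}:{expiry}"))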
5.2 API Testing and Optimization
API endpoints were rigorously tested using tools like Postman to ensure functionality,
reliability, and performance. Response times were optimized by implementing caching
mechanisms and efficient database querying strategies. Load testing was conducted to identify
bottlenecks, and server configurations were fine-tuned to improve overall system scalability.
Automated test scripts were developed for regression testing, enabling faster iterations and
maintaining high code quality.
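One common caching pattern, consistent with the Redis work mentioned elsewhere in this
report, is a read-through cache placed in front of the database; the key names and 5-second
TTL below are illustrative assumptions.

import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def cached_query(key: str, ttl_seconds: int, compute):
    # Return the cached JSON value if present; otherwise compute, store, and return it.
    hit = cache.get(key)
    if hit is not None:
        return json.loads(hit)
    value = compute()
    cache.setex(key, ttl_seconds, json.dumps(value))
    return value

# Usage: repeated API calls within the TTL hit Redis instead of the database.
# quote = cached_query("spot:NIFTY", 5, lambda: run_expensive_db_query("NIFTY"))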
5.3 Integration with Frontend Systems
Flask APIs were seamlessly connected with [Link] frontends to deliver a cohesive user
experience with real-time data updates. State management techniques were employed to
effectively synchronize frontend behavior with backend operations. The integrated systems
were deployed on AWS EC2 instances, ensuring robust production environments. Close
collaboration with frontend teams ensured that intuitive APIs were designed, aligning with user
interface requirements.
5.4 Advanced Security Features
Secure API authentication mechanisms, including JWT tokens, were implemented to protect
data integrity. HTTPS was configured to ensure secure communication between clients and
servers, mitigating risks of data breaches. Rate-limiting strategies were employed to prevent
abuse and maintain consistent API performance under varying loads. API usage patterns were
continuously monitored to detect anomalies and safeguard against unauthorized access
attempts.
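As an illustration of JWT-protected endpoints, the decorator below validates a bearer token
with PyJWT. The secret key, header format, and protected route are assumptions rather than
the production configuration.

import functools
import jwt  # PyJWT
from flask import Flask, g, jsonify, request

app = Flask(__name__)
SECRET_KEY = "change-me"  # placeholder; load from configuration in practice

def require_jwt(view):
    # Reject requests whose Authorization header lacks a valid bearer token.
    @functools.wraps(view)
    def wrapper(*args, **kwargs):
        token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
        try:
            g.user = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return jsonify({"error": "invalid or missing token"}), 401
        return view(*args, **kwargs)
    return wrapper

@app.route("/spot")
@require_jwt
def protected_spot():
    return jsonify({"status": "ok"})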
6. Code Snippets
6.1 Dockerfile Setup for [Link] Application
Fig 6.1 Docker file
This Dockerfile (Fig 6.1) sets up an environment for a [Link] application by installing
necessary dependencies like [Link] and Git, copying required files into the container, setting
up executable permissions, and specifying the entry point script ([Link]) to run when the
container starts.
6.2 Build Code and Publish on AWS S3
Fig 6.2 Index file
This script automates the build process, installs dependencies, and uploads the built files to an
S3 bucket (deployflowoutput). It utilizes (Fig 6.2) Redis for publishing build logs and monitors
the progress, providing real-time feedback during the entire build and upload process.
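The script in the figure is not reproduced here. As an illustrative Python sketch of the same
flow, the snippet below runs the build, uploads the output directory to the deployflowoutput
bucket, and publishes progress messages on a Redis channel. The npm commands, dist output
folder, Redis channel name, and object-key prefix are assumptions.

import subprocess
from pathlib import Path

import boto3
import redis

BUCKET = "deployflowoutput"
s3 = boto3.client("s3")
publisher = redis.Redis(host="localhost", port=6379)

def log(message: str) -> None:
    # Publish a progress line so subscribers can stream build logs in real time.
    publisher.publish("build-logs", message)

def build_and_upload(project_dir: str, prefix: str) -> None:
    log("installing dependencies")
    subprocess.run(["npm", "install"], cwd=project_dir, check=True)
    log("building project")
    subprocess.run(["npm", "run", "build"], cwd=project_dir, check=True)

    dist = Path(project_dir) / "dist"   # assumed build output directory
    for path in dist.rglob("*"):
        if path.is_file():
            key = f"{prefix}/{path.relative_to(dist)}"
            s3.upload_file(str(path), BUCKET, key)
            log(f"uploaded {key}")
    log("build and upload complete")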
6.3 Reverse Proxy Setup with Express and HTTP Proxy
Fig 6.3 Installation of dependencies
This script (Fig 6.3) sets up an HTTP reverse proxy using Express and http-proxy. Based on
the subdomain of the incoming request, it dynamically proxies the request to the appropriate
folder on AWS S3, providing a seamless route to static assets. The proxy also ensures that
requests to the root path (/) resolve to [Link].
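The Express/http-proxy code from the figure is likewise not reproduced here. Purely as an
illustration of the same subdomain-to-S3 routing idea, the sketch below uses Python with Flask
and requests; the bucket URL, the one-folder-per-subdomain layout, and the index.html
fallback for the root path are assumptions.

import requests
from flask import Flask, Response, request

app = Flask(__name__)
S3_BASE = "https://deployflowoutput.s3.amazonaws.com"  # assumed bucket endpoint

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def proxy(path: str) -> Response:
    subdomain = request.host.split(".")[0]  # project identifier taken from the subdomain
    if path == "":
        path = "index.html"                 # assumed entry file for root requests
    upstream = requests.get(f"{S3_BASE}/{subdomain}/{path}")
    return Response(upstream.content,
                    status=upstream.status_code,
                    content_type=upstream.headers.get("Content-Type"))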
7. Conclusion
My tenure at Bhavi AI Technologies allowed me to apply and enhance my technical skills
in a real-world environment. I contributed to developing machine learning models for
stock market forecasting, automating data ingestion pipelines, and optimizing system
infrastructure. By implementing CI/CD pipelines, automating deployments, and using
cloud services like AWS and Azure, I improved both the efficiency and scalability of the
company's operations.
In addition to the aforementioned contributions, I implemented Redis in Docker for caching
and real-time data processing, significantly improving the system's performance and
response times. Redis was utilized to handle session management, store real-time data, and
cache frequently accessed information, reducing the load on the primary database and
improving overall application speed.
I successfully integrated backend APIs with [Link] frontends, enabling seamless
interaction for data retrieval and processing. My work with databases like Azure SQL and
Cosmos DB ensured real-time data management and efficient storage solutions.
Additionally, I streamlined workflows through automation, utilizing tools like Jenkins,
Terraform, and Docker to create consistent, reproducible environments.
Through my work on key projects, such as automated trading algorithms and real-time
market data retrieval systems, I contributed to enhancing the company's decision-making
capabilities and operational efficiency. These projects required a strong understanding of
both backend and machine learning systems, and the ability to deploy solutions in a cloud
environment.
Overall, my time at Bhavi AI Technologies was marked by impactful contributions across
multiple domains, helping to drive the success of the company’s projects while gaining
invaluable experience in system design, cloud computing, machine learning, and
containerized solutions with Redis.
8. References
[1] Machine Learning with TensorFlow
Source: [Link]
[2] Building CI/CD Pipelines with Jenkins
Source: [Link]
[3] AWS EC2 and S3 Tutorial
Source: [Link]
[4] Redis in Docker: A Quickstart Guide
Source: [Link]
[5] Flask APIs and Gunicorn Deployment
Source: [Link]
[6] Introduction to Terraform for Infrastructure as Code
Source: [Link]
[7] Integrating Cosmos DB with Python Applications
Source: [Link]
[8] React for Frontend Development
Source: [Link]
[9] Docker: A Comprehensive Guide
Source: [Link]
[10] Best Practices for Building REST APIs with Flask
Source: [Link]