FSD Week 3
What is DevOps?
A DevOps team includes developers and IT operations working collaboratively
throughout the product life cycle, in order to increase the speed and quality of software
deployment. It’s a new way of working, a cultural shift, that has significant implications for
teams and the organizations they work for. DevOps is an evolving philosophy and framework
that encourages faster, better application development and faster release of new or revised
software features or products to customers.
This closer relationship between “Dev” and “Ops” permeates every phase of the
DevOps lifecycle: from initial software planning to code, build, test, and release phases and
on to deployment, operations, and ongoing monitoring. This relationship propels a
continuous customer feedback loop of further improvement, development, testing, and
deployment. One result of these efforts can be the more rapid, continual release of necessary
feature changes or additions.
● Plan. This phase helps define business value and requirements. Sample tools
include Jira or Git to help track known issues and perform project management.
● Code. This phase involves software design and the creation of software code.
Sample tools include GitHub, GitLab, Bitbucket, or Stash.
● Build. In this phase, you manage software builds and versions, and use
automated tools to help compile and package code for future release to
production. You use source code repositories or package repositories that also
“package” infrastructure needed for product release. Sample tools include
Docker, Ansible, Puppet, Chef, Gradle, Maven, or JFrog Artifactory.
● Test. This phase involves continuous testing (manual or automated) to ensure
optimal code quality. Sample tools include JUnit, Codeception, Selenium, Vagrant,
TestNG, or BlazeMeter.
● Deploy. This phase can include tools that help manage, coordinate, schedule, and
automate product releases into production. Sample tools include Puppet, Chef,
Ansible, Jenkins, Kubernetes, OpenShift, OpenStack, Docker, or Jira.
● Operate. This phase manages software during production. Sample tools include
Ansible, Puppet, PowerShell, Chef, Salt, or Otter.
● Monitor. This phase involves identifying and collecting information about issues
from a specific software release in production. Sample tools include New Relic,
Datadog, Grafana, Wireshark, Splunk, Nagios, or Slack.
The rest of this section focuses on DevOps's modern application and use in agile CI/CD software
environments.
Continuous integration (CI) is the practice that requires developers to integrate code
into a shared repository often and obtain rapid feedback on its success during active
development.
This is done as developers finish a specific piece of code and it has successfully passed
unit testing. CI also means creating a build in a tool like Bamboo, Jenkins, or GitLab that runs
after every developer check-in, executes whatever tests can run against that build (unit and
integration tests, for example), and reports to the development team whether the build
succeeded or failed. The
end goal is to create small workable chunks of code that are validated and integrated back
into the centralized code repository as frequently as possible. As such, CI is the foundation
for both continuous delivery and continuous deployment DevOps practices.
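As a rough sketch, the steps a CI server such as Jenkins runs on every check-in might look like the following shell commands (the repository URL and the Gradle build are hypothetical placeholders):

git clone https://example.com/team/app.git && cd app   # fetch the latest integrated code
./gradlew build   # compile and package the application
./gradlew test    # run the unit and integration test suites
# the server then notifies the team whether the build and tests passed or failed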
Automated Testing
Test automation is the practice of automatically reviewing and validating a software
product, such as a web application, to make sure it meets predefined quality standards for
code style, functionality (business logic), and user experience. Testing practices typically
involve the following stages:
● Unit testing: validates individual units of code, such as a function, so it works as
expected
● Integration testing: ensures several pieces of code can work together without
unintended consequences
● End-to-end testing: validates that the application meets the user’s expectations
● Exploratory testing: takes an unstructured approach to reviewing numerous areas of
an application from the user perspective, to uncover functional or visual issues
The different types of testing are often visualized as a pyramid. As you climb up the pyramid,
the number of tests in each type decreases, and the cost of creating and running tests
increases.
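To make the unit-testing level of the pyramid concrete, here is a minimal sketch of an automated test written with Bats, the Bash testing tool that appears later in this material; the add function under test is hypothetical:

#!/usr/bin/env bats

# hypothetical function under test
add() { echo $(( $1 + $2 )); }

@test "add returns the sum of two numbers" {
  result="$(add 2 3)"
  [ "$result" -eq 5 ]
}

Saved as math.bats, the test runs with the command bats math.bats, and a CI server can execute the same command on every build.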
Infrastructure as Code
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure
through code instead of through manual processes. With IaC, configuration files are created
that contain your infrastructure specifications, which makes it easier to edit and distribute
configurations. It also ensures that you provision the same environment every time. By
codifying and documenting your configuration specifications, IaC aids configuration
management and helps you to avoid undocumented, ad-hoc configuration changes.
IaC is used to define code that, when executed, can stand up an entire physical or
virtual environment, including computing and networking infrastructure. It is a way of
managing IT infrastructure that operations teams can provision and manage automatically
through code rather than through a manual process. An example of using IaC would be to use
Terraform to rapidly stand up nodes in a cloud environment, and then to be able to destroy
and rebuild the environment consistently each time. Doing so gives users the ability to
version control their infrastructure and to recover more quickly from infrastructure
outages.
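As a sketch of that Terraform workflow, the usual command sequence looks like this (it assumes the current directory contains .tf configuration files describing the nodes):

terraform init      # download providers and initialize the working directory
terraform plan      # preview the changes before applying them
terraform apply     # provision the environment defined in the configuration
terraform destroy   # tear the environment down; re-running apply rebuilds it consistently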
Continuous Delivery
The practice of making every change to source code ready for a production release as
soon as automated testing validates it. This includes automatically building, testing, and
deploying. A code-approval and delivery-approval process needs to be in place to ensure that
the code can be deployed in an automated fashion, with appropriate pauses for sign-off
depending on the specific needs of a program. The same process also applies to the lower
environments, such as QA and UAT.
Continuous Deployment
Continuous Deployment is the practice that strives to automate production
deployment end to end. In order for this practice to be implemented, a team needs to have
extremely high confidence in their automated tests. The ultimate goal is that as long as the
build has passed all automated tests, the code will be deployed. However, manual steps in
the deployment process can be maintained if necessary.
For example, a team can determine what type of changes can be deployed to
production in a completely automated fashion, while other types of changes may maintain a
manual approval step. Such a hybrid approach is a good way to begin to adopt this practice.
Continuous Monitoring
DevOps monitoring entails overseeing the entire development process: planning,
development, integration and testing, deployment, and operations. It involves a complete
and real-time view of the status of applications, services, and infrastructure in the
production environment. Features such as real-time streaming, historical replay, and
visualizations are critical components of application and service monitoring. Continuous
monitoring is the practice of proactively monitoring, alerting, and taking action in key areas
to give teams visibility into the health of the application in the production environment.
Monitoring these areas makes teams aware of the impact of every deployment and reduces
the time between issue identification and resolution.
Configuration Management
What is a Version Control System?
Version control systems allow multiple developers, designers, and team members to
work together on the same project. It helps them work smarter and faster! A version control
system is critical to ensure everyone has access to the latest code and modifications are
tracked. As development becomes increasingly complex and teams grow, there's a bigger need
to manage multiple versions and components of entire products.
The responsibility of the Version control system is to keep all the team members on
the same page. It makes sure that everyone on the team is working on the latest version of
the file and, most importantly, makes sure that all these people can work simultaneously on
the same project.
Let's try to understand the process with an example:
There are three workstations, or three different developers at three different locations, and
there's one repository acting as a server. The workstations use that repository either to
commit their work or to update their working copies.
There may be a large number of workstations using a single server repository. Each
workstation will have its working copy, and all these workstations will be saving their source
codes into a particular server repository.
This makes it easy for any developer to access the task being done using the
repository. If any specific developer's system breaks down, then the work won't stop, as
there will be a copy of the source code in the central repository.
Collaboration
With many people located in different places, there may be a need to communicate
for a particular reason, or a set of people may be working on the same project from
different regions.
Storing Versions
A project goes through several versions as it is completed; in that situation, keeping all such
commits in a single place is a considerable challenge.
Fundamentals of Git
Git is the best choice for most software teams today. While every team is different and
should do their own analysis, here are the main reasons why version control with Git is
preferred over alternatives:
Git is good
Git has the functionality, performance, security, and flexibility that most teams and individual
developers need. In side-by-side comparisons with most other alternatives, many teams find
that Git compares very favorably.
Git is a de facto standard
Git is the most broadly adopted tool of its kind, and this widespread adoption itself
makes Git attractive. At Atlassian, for example, nearly all project source code is managed in Git.
Cloning an existing repository: git clone
If a project has already been set up in a central repository, the clone command is the
most common way for users to obtain a local development clone. Like git init, cloning is
generally a one-time operation. Once a developer has obtained a working copy, all version
control operations are managed through their local repository.
git clone <repo url>
git clone is used to create a copy or clone of remote repositories. You pass git clone a
repository URL. Git supports a few different network protocols and corresponding URL
formats. In this example, we'll be using the Git SSH protocol. Git SSH URLs follow a template
of: git@HOSTNAME:USERNAME/REPONAME.git
HOSTNAME: bitbucket.org
USERNAME: rhyolight
REPONAME: javascript-data-store
When executed, the latest version of the remote repo files on the main branch will be
pulled down and added to a new folder. The new folder will be named after the REPONAME,
in this case javascript-data-store. The folder will contain the full history of the remote
repository and a newly created main branch.
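Putting those pieces together, the clone command for the example repository above is:

git clone [email protected]:rhyolight/javascript-data-store.git
cd javascript-data-store   # the new folder, named after the REPONAME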
git add
The git add command adds a change in the working directory to the staging area. It
tells Git that you want to include updates to a particular file in the next commit. However, git
add doesn't really affect the repository in any significant way—changes are not actually
recorded until you run git commit.
In conjunction with these commands, you'll also need git status to view the state of
the working directory and the staging area.
How it works
The git add and git commit commands compose the fundamental Git workflow. These
are the two commands that every Git user needs to understand, regardless of their team’s
collaboration model. They are the means to record versions of a project into the repository’s
history.
Developing a project revolves around the basic edit/stage/commit pattern. First, you
edit your files in the working directory. When you’re ready to save a copy of the current state
of the project, you stage changes with git add. After you’re happy with the staged snapshot,
you commit it to the project history with git commit. The git reset command is used to undo
a commit or staged snapshot.
In addition to git add and git commit, a third command git push is essential for a
complete collaborative Git workflow. git push is utilized to send the committed changes to
remote repositories for collaboration. This enables other team members to access a set of
saved changes.
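A minimal sketch of this edit/stage/commit/push cycle follows (the file name, commit message, and remote are illustrative):

# edit a file in the working directory
echo "print('hello')" > hello.py
git add hello.py                   # stage the change
git commit -m "Add hello script"   # record the snapshot in the local repository
git push origin main               # share the commit with collaborators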
As in any revision control system, it’s important to create atomic commits so that it’s
easy to track down bugs and revert changes with minimal impact on the rest of the project.
Common options
git add <file>
Stage all changes in <file> for the next commit.
git add <directory>
Stage all changes in <directory> for the next commit.
git add -p
Begin an interactive staging session that lets you choose portions of a file to add to
the next commit. This will present you with a chunk of changes and prompt you for a
command. Use y to stage the chunk, n to ignore the chunk, s to split it into smaller chunks, e
to manually edit the chunk, and q to exit.
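An interactive session might look like the following sketch (the diff output is omitted, and the exact prompt can vary between Git versions):

git add -p hello.py
# ...Git shows a diff hunk here...
# Stage this hunk [y,n,q,a,d,s,e,?]? y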
Examples
When you’re starting a new project, git add serves the same function as svn import.
To create an initial commit of the current directory, use the following two commands:
git add .
git commit
Once you’ve got your project up-and-running, new files can be added by passing the path to
git add:
git add hello.py
git commit
Git commit
The git commit command captures a snapshot of the project's currently staged
changes. Committed snapshots can be thought of as “safe” versions of a project—Git will
never change them unless you explicitly ask it to. Prior to the execution of git commit, the
git add command is used to promote or 'stage' changes to the project that will be stored in a
commit. These two commands, git commit and git add, are two of the most frequently used.
How it works
At a high-level, Git can be thought of as a timeline management utility. Commits are
the core building block units of a Git project timeline. Commits can be thought of as snapshots
or milestones along the timeline of a Git project. Commits are created with the git commit
command to capture the state of a project at that point in time. Git Snapshots are always
committed to the local repository.
Common options
git commit
Commit the staged snapshot. This will launch a text editor prompting you for a commit
message. After you’ve entered a message, save the file and close the editor to create the actual
commit.
git commit -a
Commit a snapshot of all changes in the working directory. This only includes modifications
to tracked files (those that have been added with git add at some point in their history).
git commit -m "commit message"
A shortcut command that immediately creates a commit with a passed commit message. By
default, git commit will open up the locally configured text editor, and prompt for a commit
message to be entered. Passing the -m option will forgo the text editor prompt in-favor of an
inline message.
git commit -a -m "commit message"
A power user shortcut command that combines the -a and -m options. This combination
immediately creates a commit of all the staged changes and takes an inline commit message.
git commit --amend
This option adds another level of functionality to the commit command. Passing this option
will modify the last commit. Instead of creating a new commit, staged changes will be added
to the previous commit. This command will open up the system's configured text editor and
prompt to change the previously specified commit message.
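For example, to fold a forgotten change into the previous commit while keeping its message (the file name is hypothetical; --no-edit skips the editor prompt):

git add forgotten_file.py
git commit --amend --no-edit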
Examples
# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
# On branch main
# Changes to be committed:
# (use "git reset HEAD ..." to unstage)
#
# modified: hello.py
Git doesn't require commit messages to follow any specific formatting constraints,
but the canonical format is to summarize the entire commit on the first line in less than 50
characters, leave a blank line, then a detailed explanation of what’s been changed. For
example:
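A message in that canonical format might look like the following (the change it describes is hypothetical):

Fix broken link in installation docs

The download URL in the installation section pointed to a release
that no longer exists. Update it to the latest stable release and
add a note about version compatibility.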
Viewing the Commit History
After you have created several commits, or if you have cloned a repository with an
existing commit history, you’ll probably want to look back to see what has happened. The
most basic and powerful tool to do this is the git log command.
Associated git command: If you're running Git from the command line, the equivalent
command is git log <filename>. For example, to find history information about a
README.md file in the local directory, run the following command:
git log README.md
Git displays output similar to the following, which includes the commit time in UTC format:
commit 0e62ed6d9f39fa9bedf7efc6edd628b137fa781a
Author: Mike Jang <[email protected]>
Date: Tue Nov 26 21:44:53 2019 +0000
GIT BRANCHING
Git branches are effectively a pointer to a snapshot of your changes. When you want
to add a new feature or fix a bug—no matter how big or how small—you spawn a new branch
to encapsulate your changes. This makes it harder for unstable code to get merged into the
main code base, and it gives you the chance to clean up your feature's history before merging
it into the main branch.
Picture a repository with two isolated lines of development: one for a little feature,
and one for a longer-running feature. By developing them in branches, it's not only possible
to work on both of them in parallel, but it also keeps the main branch free from
questionable code.
How it works
A branch represents an independent line of development. Branches serve as an
abstraction for the edit/stage/commit process. You can think of them as a way to request a
brand-new working directory, staging area, and project history. New commits are recorded
in the history for the current branch, which results in a fork in the history of the project.
The git branch command lets you create, list, rename, and delete branches. It doesn’t
let you switch between branches or put a forked history back together again. For this reason,
git branch is tightly integrated with the git checkout and git merge commands.
Common Options
git branch
List all of the branches in your repository. This is synonymous with git branch --list.
git branch <branch>
Create a new branch called <branch>. Note that this only creates the new branch; it does not
check it out. To start adding commits to it, you need to select it with git checkout, and then
use the standard git add and git commit commands.
Deleting Branches
Once you’ve finished working on a branch and have merged it into the main code base, you’re
free to delete the branch without losing any history:
git branch -d crazy-experiment
Switching Branches
Switching branches is a straightforward operation. Executing the following will point
HEAD to the tip of <branchname>.
git checkout <branchname>
Git tracks a history of checkout operations in the reflog. You can execute git reflog to view
the history.
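For example (the commit hash and branch name are hypothetical):

git checkout feature-branch
git reflog
# a1b2c3d HEAD@{0}: checkout: moving from main to feature-branch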
How it works
Git merge will combine multiple sequences of commits into one unified history. In the
most frequent use cases, git merge is used to combine two branches. The following examples
in this document will focus on this branch merging pattern. In these scenarios, git merge
takes two commit pointers, usually the branch tips, and will find a common base commit
between them. Once Git finds a common base commit it will create a new "merge commit"
that combines the changes of each queued merge commit sequence.
Invoking git merge <branch> will merge the specified branch into the current
branch, which we'll assume is main. Git will determine the merge algorithm automatically
(discussed below).
Merge commits are unique among commits in that they have two parent commits.
When creating a merge commit, Git will attempt to automagically merge the separate
histories for you. If Git encounters a piece of data that is changed in both histories, it will
be unable to combine them automatically.
Merging
Once the previously discussed "preparing to merge" steps have been taken, a merge
can be initiated by executing git merge <branch>, where <branch> is the name of the branch
that will be merged into the receiving branch.
Our first example demonstrates a fast-forward merge. The code below creates a new
branch, adds two commits to it, then integrates it into the main line with a fast-forward
merge.
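A sketch of such a sequence (branch and file names are illustrative):

git checkout -b new-feature main   # create and switch to a branch off main
# ...edit some files...
git add <file>
git commit -m "Start a feature"
# ...edit some files...
git add <file>
git commit -m "Finish a feature"
git checkout main                  # switch back to the receiving branch
git merge new-feature              # fast-forwards, since main has not moved
git branch -d new-feature          # the branch pointer is no longer needed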
What is GitHub?
GitHub is a Git repository hosting service that provides a web-based graphical
interface. It is the world's largest coding community. Putting code or a project on GitHub
brings it increased, widespread exposure. Programmers can find source code in many
different languages and use the command-line interface, Git, to make and keep track of any
changes. GitHub helps every team member work together on a project from any location
while facilitating collaboration. You can also review previous versions created at an earlier
point in time.
Benefits of GitHub
GitHub can be separated as the Git and the Hub. GitHub service includes access
controls as well as collaboration features like task management, repository hosting, and
team management. The key benefits of GitHub are as follows.
● It is easy to contribute to open source projects via GitHub.
● It helps you create excellent documentation.
● You can attract recruiters by showing off your work. If you have a profile on GitHub,
you will have a higher chance of being recruited.
● It allows your work to get out there in front of the public.
● You can track changes in your code across versions.
2. Type a short, memorable name for your repository. For example, "hello-world".
3. Optionally, add a description of your repository. For example, "My first repository on
GitHub."
Push to repositories
git push -u -f origin main
The -u (or --set-upstream) flag sets the remote origin as the upstream reference. This allows
you to later perform git push and git pull commands without having to specify an origin since
we always want GitHub in this case.
The -f (or --force) flag forces the push, automatically overwriting everything in the
remote repository. We're using it here to overwrite the default README that GitHub
automatically initialized.
All together
git init
git add -A
git commit -m 'Added my project'
git remote add origin [email protected]:sammy/my-new-project.git
git push -u -f origin main
Versioning in Github
Lately I've been doing a lot of thinking around versioning in repositories. For all the
convenience and ubiquity of package.json, it does sometimes misrepresent the code that is
contained within a repository. For example, suppose I start out my project at v0.1.0 and
that's what's in my package.json file in my master branch. Then someone submits a pull
request that I merge in - the version number hasn't changed even though the repository now
no longer represents v0.1.0. The repository is actually now in an intermediate state, in
between v0.1.0 and the next official release.
To deal with that, I started changing the package.json version only long enough to
push a new release, and then I would change it to a dev version representing the next
scheduled release (such as v0.2.0-dev). That solved the problem of misrepresenting the
version number of the repository (provided people realize "dev" means "in flux day to day").
However, it introduced a yucky workflow that I really hated. When it was time for a release,
I'd have to:
1. Manually change the version in package.json.
2. Tag the version in the repo.
3. Publish to npm.
4. Manually change the version in package.json to a dev version.
5. Push to master.
There may be some way to automate this, but I couldn't figure out a really nice way to do it.
That process works well enough when you have no unplanned releases. However, what if
I'm working on v0.2.0-dev after v0.1.0 was released, and need to do a v0.1.1 release?
Then I need to:
1. Note the current dev version.
2. Manually change the version to v0.1.1.
3. Tag the version in the repo.
4. Publish to npm.
5. Change the version back to the same version from step 1.
6. Push to master.
Add on top of this trying to create an automated changelog based on tagging, and things
can get a little bit tricky. My next thought was to have a release branch where the last
published release would live. Essentially, after v0.1.0, the release branch remains at v0.1.0
while the master branch becomes v0.2.0-dev. If I need to do an intermediate release, then I
merge master onto release and change versions only in the release branch. Once again, this
is a bit messy because package.json is guaranteed to have different versions on master and
release, which always causes merge conflicts. This also means the changelog is updated only
in the release branch. This solution turned out to be more complex than I anticipated.
I'm still not sure the right way to do this, but my high-level requirements are:
1. Make sure the version in package.json is always accurate.
2. Don't require people to change the version to make a commit.
3. Don't require people to use a special build command to make a commit.
4. Distinguish between development (in progress) work vs. official releases.
5. Be able to auto-increment the version number (via npm version).
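As a sketch of how requirement 5 can work with npm's built-in tooling (the branch name and publish access are assumptions):

npm version patch                      # bumps package.json (e.g., v0.1.0 to v0.1.1), commits, and tags
git push origin master --follow-tags   # push the release commit together with its tag
npm publish                            # publish the tagged release to npm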
Collaboration
You can invite users to become collaborators on your personal repository. If you're
using GitHub Free, you can add unlimited collaborators on public and private repositories.
1. Ask for the username of the person you're inviting as a collaborator. If they don't have
a username yet, they can sign up for GitHub. For more information, see "Signing up for
a new GitHub account".
2. On GitHub.com, navigate to the main page of the repository.
3. Under your repository name, click Settings.
6. In the search field, start typing the name of the person you want to invite, then click a
name in the list of matches.
8. The user will receive an email inviting them to the repository. Once they accept your
invitation, they will have collaborator access to your repository.
Migration in Github
A migration is the process of transferring data from a source location (either a
GitHub.com organization or a GitHub Enterprise Server instance) to a target GitHub
Enterprise Server instance. Migrations can be used to transfer your data when changing
platforms or upgrading hardware on your instance.
Types of migrations
There are three types of migrations you can perform:
● A migration from a GitHub Enterprise Server instance to another GitHub Enterprise
Server instance. You can migrate any number of repositories owned by any user or
organization on the instance. Before performing a migration, you must have site
administrator access to both instances.
● A migration from a GitHub.com organization to a GitHub Enterprise Server instance.
You can migrate any number of repositories owned by the organization. Before
performing a migration, you must have administrative access to the GitHub.com
organization as well as site administrator access to the target instance.
● Trial runs are migrations that import data to a staging instance. These can be useful
to see what would happen if a migration were applied to your GitHub Enterprise
Server instance. We strongly recommend that you perform a trial run on a staging
instance before importing data to your production instance.
Why Cloud Computing Infrastructure?
Cloud infrastructure offers the same capabilities as physical infrastructure but can
provide additional benefits like a lower cost of ownership, greater flexibility, and scalability.
Cloud computing infrastructure is available for private cloud, public cloud, and hybrid cloud
systems. It's also possible to rent cloud infrastructure components from a cloud provider,
through cloud infrastructure as a service (IaaS). Cloud infrastructure systems allow for
integrated hardware and software and can provide a single management platform for multiple
clouds.
There are the following three types of cloud service models -
1. Infrastructure as a Service (IaaS)
2. Platform as a Service (PaaS)
3. Software as a Service (SaaS)
Infrastructure as a Service | IaaS
IaaS is also known as Hardware as a Service (HaaS). It is one of the layers of the
cloud computing platform. It allows customers to outsource their IT infrastructure, such as
servers, networking, processing, storage, virtual machines, and other resources. Customers
access these resources over the Internet using a pay-per-use model.
In traditional hosting services, IT infrastructure was rented out for a specific period
of time, with pre-determined hardware configuration. The client paid for the configuration
and time, regardless of the actual use. With the help of the IaaS cloud computing platform
layer, clients can dynamically scale the configuration to meet changing requirements and are
billed only for the services actually used.
IaaS providers provide the following services -
1. Compute: Computing as a Service includes virtual central processing units and
virtual main memory for the VMs that are provisioned to end users.
2. Storage: The IaaS provider provides back-end storage for storing files.
3. Network: Network as a Service (NaaS) provides networking components such as
routers, switches, and bridges for the VMs.
4. Load balancers: Load balancing capability is provided at the infrastructure layer.
Advantages of IaaS cloud computing layer
There are the following advantages of the IaaS computing layer -
1. Shared infrastructure: IaaS allows multiple users to share the same physical
infrastructure.
2. Web access to the resources: IaaS allows IT users to access resources over the internet.
3. Pay-as-per-use model: IaaS providers provide services on a pay-per-use basis. Users
are required to pay only for what they have used.
4. Focus on the core business: IaaS lets organizations focus on their core business rather
than on IT infrastructure.
5. On-demand scalability: On-demand scalability is one of the biggest advantages of IaaS.
Using IaaS, users do not need to worry about upgrading software or troubleshooting issues
related to hardware components.
Disadvantages of IaaS cloud computing layer
1. Security: Security is one of the biggest issues in IaaS. Most IaaS providers are not able
to provide 100% security.
2. Maintenance & Upgrade: Although IaaS service providers maintain the software, they
do not upgrade the software for some organizations.
3. Interoperability issues: It is difficult to migrate VMs from one IaaS provider to another,
so customers might face problems related to vendor lock-in.
Top IaaS providers offering an IaaS cloud computing platform
● Amazon Web Services - Elastic Compute Cloud (EC2), Elastic MapReduce, Route 53,
Virtual Private Cloud, etc. The cloud computing platform pioneer, Amazon offers auto
scaling, cloud monitoring, and load balancing features as part of its portfolio.
● Netmagic Solutions - Netmagic IaaS Cloud. Netmagic runs from data centers in Mumbai,
Chennai, and Bangalore, and a virtual data center in the United States. Plans are underway
to extend services to West Asia.
● Rackspace - Cloud servers, cloud files, cloud sites, etc. The cloud computing platform
vendor focuses primarily on enterprise-level hosting services.
● Reliance Communications - Reliance Internet Data Center. RIDC supports both
traditional hosting and cloud services, with data centers in Mumbai, Bangalore, Hyderabad,
and Chennai. The cloud services offered by RIDC include IaaS and SaaS.
1. Programming languages
PaaS providers provide various programming languages for the developers to
develop the applications. Some popular programming languages provided by PaaS providers
are Java, PHP, Ruby, Perl, and Go.
2. Application frameworks
PaaS providers provide application frameworks to simplify application development.
Some popular application frameworks provided by PaaS providers are
Node.js, Drupal, Joomla, WordPress, Spring, Play, Rack, and Zend.
3. Databases
PaaS providers provide various databases such as ClearDB, PostgreSQL, MongoDB,
and Redis to communicate with the applications.
4. Other tools
PaaS providers provide various other tools that are required to develop, test, and
deploy the applications.
Advantages of PaaS
There are the following advantages of PaaS -
1) Simplified Development: PaaS allows developers to focus on development and
innovation without worrying about infrastructure management.
2) Lower risk: No need for up-front investment in hardware and software. Developers only
need a PC and an internet connection to start building applications.
3) Prebuilt business functionality: Some PaaS vendors also provide predefined
business functionality so that users can avoid building everything from scratch and can
start their projects directly.
4) Instant community: PaaS vendors frequently provide online communities where
developers can get ideas, share experiences, and seek advice from others.
5) Scalability: Applications deployed can scale from one to thousands of users without any
changes to the applications.
3) Integration with the rest of the systems and applications: It may happen that some
applications are local, and some are in the cloud. So there will be a chance of increased
complexity when we want to use data in the cloud together with local data.
Providers and their services:
● Google App Engine (GAE) - App Identity, URL Fetch, Cloud Storage client library,
LogService
Social Networks - As we all know, social networking sites are used by the general public, so
social networking service providers use SaaS for their convenience and to handle the general
public's information.
Mail Services - To handle the unpredictable number of users and the load on e-mail services,
many e-mail providers offer their services using SaaS.
2. One to Many
SaaS services are offered on a one-to-many model, meaning a single instance of the
application is shared by multiple users.
5. No special software or hardware versions required
All users will have the same version of the software and typically access it through
the web browser. SaaS reduces IT support costs by outsourcing hardware and software
maintenance and support to the SaaS provider.
6. Multidevice support
SaaS services can be accessed from any device such as desktops, laptops, tablets,
phones, and thin clients.
7. API Integration
SaaS services easily integrate with other software or services through standard APIs.
8. No client-side installation
SaaS services are accessed directly from the service provider over an internet
connection, so no client-side software installation is required.
2) Latency issue: Since data and applications are stored in the cloud at a variable distance
from the end user, there is a possibility of greater latency when interacting with the
application compared to local deployment. Therefore, the SaaS model is not suitable for
applications that demand response times in milliseconds.
4) Switching between SaaS vendors is difficult: Switching SaaS vendors involves the
difficult and slow task of transferring very large data files over the internet and then
converting and importing them into the new SaaS.
Popular SaaS Providers
Public Cloud
The name says it all. It is accessible to the public. Public deployment models in the
cloud are perfect for organizations with growing and fluctuating demands. It also makes a
great choice for companies with low security concerns: you pay a cloud service provider for
networking services, compute virtualization, and storage available on the public internet. It
is also a great delivery model for development and testing teams, since its configuration and
deployment are quick and easy, making it an ideal choice for test environments.
Limitations of Public Cloud
● Data Security and Privacy Concerns - Since it is accessible to all, it does not fully
protect against cyber-attacks and could lead to vulnerabilities.
● Reliability Issues - Since the same server network is open to a wide range of users, it
can lead to malfunctions and outages.
● Service/License Limitation - While there are many resources you can exchange with
tenants, there is a usage cap.
Private Cloud
Now that you understand what the public cloud can offer, of course you are keen to
know what a private cloud can do. Companies that look for greater control over data and
resources will find the private cloud a more suitable choice.
It means that it will be integrated with your data center and managed by your IT team.
Alternatively, you can also choose to host it externally. The private cloud offers bigger
opportunities that help meet specific organizations' requirements when it comes to
customization. It's also a wise choice for mission-critical processes that may have frequently
changing requirements.
Limitations of Private Cloud
● Higher Cost - With the benefits you get, the investment will also be larger than the
public cloud. Here, you will pay for software, hardware, and resources for staff and
training.
● Fixed Scalability - Scaling is constrained by the hardware you have chosen, which
determines the direction in which you can grow.
● High Maintenance - Since it is managed in-house, the maintenance costs also increase.
Community Cloud
The community cloud operates in a way that is similar to the public cloud. There's just
one difference - it allows access to only a specific set of users who share common objectives
and use cases. This type of deployment model of cloud computing is managed and hosted
internally or by a third-party vendor. However, you can also choose a combination of public,
private, and community clouds.
Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something,
such as a server, a desktop, a storage device, an operating system or network resources". In
other words, virtualization is a technique that allows a single physical instance of a resource
or an application to be shared among multiple customers and organizations. It does so by
assigning a logical name to a physical resource and providing a pointer to that physical
resource when demanded.
The machine on which the virtual machine is created is known as the host machine,
and the virtual machine itself is referred to as the guest machine.
Types of Virtualization:
1. Hardware Virtualization.
2. Operating system Virtualization.
3. Server Virtualization.
4. Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the hardware system, it is known as hardware virtualization. The main job of
the hypervisor is to control and monitor the processor, memory, and other hardware
resources. After virtualization of the hardware system, we can install different operating
systems on it and run different applications on those OSes.
Usage:
Hardware virtualization is mainly done for server platforms, because controlling virtual
machines is much easier than controlling a physical server.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed
directly on the server system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple
servers on demand and for load balancing.
4) Storage Virtualization:
Storage virtualization is the process of grouping physical storage from multiple
network storage devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
Technical understanding - IaaS: it requires technical knowledge. PaaS: some knowledge is
required for the basic setup. SaaS: there is no requirement to understand the technicalities;
the company handles everything.
User controls - IaaS: operating system, runtime, middleware, and application data. PaaS:
data of the application. SaaS: nothing.
Applications of AWS
The most common applications of AWS are storage and backup, websites, gaming, mobile,
web, and social media applications. Some of the most crucial applications in detail are as
follows:
1. Storage and Backup
One of the reasons why many businesses use AWS is because it offers multiple types
of storage to choose from and is easily accessible as well. It can be used for storage and file
indexing as well as to run critical business applications.
2. Websites
Businesses can host their websites on the AWS cloud, similar to other web
applications.
3. Gaming
There is a lot of computing power needed to run gaming applications. AWS makes it
easier to provide the best online gaming experience to gamers across the world.
5. Big Data Management and Analytics (Application)
● Amazon Elastic MapReduce to process large amounts of data via the Hadoop
framework.
● Amazon Kinesis to analyze and process streaming data.
● AWS Glue to handle extract, transform, and load (ETL) jobs.
● Amazon Elasticsearch Service to enable a team to perform log analysis and tool
monitoring with the help of the open source tool Elasticsearch.
6. Artificial Intelligence
● Amazon Lex to offer voice and text chatbot technology.
● Amazon Polly to provide text-to-speech translation, as used in Alexa Voice Services
and Echo devices.
● Amazon Rekognition to analyze images and faces.
Companies Using AWS
Whether it’s technology giants, startups, government, food manufacturers or retail
organizations, there are so many companies across the world using AWS to develop, deploy
and host applications. According to Amazon, the number of active AWS users exceeds
1,000,000. Here is a list of companies using AWS:
● Netflix
● Intuit
● Coinbase
● Finra
● Johnson & Johnson
● Capital One
● Adobe
● Airbnb
● AOL
● Hitachi
Features of AWS
1) Flexibility
● The difference between AWS and traditional IT models is flexibility.
● Traditional models delivered IT solutions that required large investments in new
architecture, programming languages, and operating systems. Although these
investments are valuable, it takes time to adopt new technologies, which can also slow
down your business.
● The flexibility of AWS allows you to choose which programming models, languages,
and operating systems are better suited for your project, so you do not have to learn
new skills to adopt new technologies.
● Flexibility means that migrating legacy applications to the cloud is easy and cost-
effective. Instead of re-writing applications to adopt new technologies, you just
need to move the applications to the cloud and tap into advanced computing
capabilities.
● Building applications in AWS is like building applications using existing hardware
resources.
● Larger organizations run in a hybrid mode, i.e., some pieces of the application run
in their data center, and other portions of the application run in the cloud.
● The flexibility of AWS is a great asset for organizations, letting them deliver products
with updated technology on time and enhancing overall productivity.
2) Cost-effective
● Cost is one of the most important factors that needs to be considered in delivering IT
solutions.
● For example, developing and deploying an application can incur a low cost, but after
successful deployment, there is a need for hardware and bandwidth. Owning our own
infrastructure can incur considerable costs, such as power, cooling, real estate, and
staff.
● The cloud provides on-demand IT infrastructure that lets you consume only the
resources you actually need. In AWS, you are not limited to a set amount of resources
such as storage, bandwidth, or computing resources, as it is very difficult to predict the
requirements for every resource. Therefore, we can say that the cloud provides
flexibility by maintaining the right balance of resources.
● AWS requires no upfront investment, long-term commitment, or minimum spend.
● You can scale up or scale down as the demand for resources increases or decreases,
respectively.
3) Scalable and elastic
● In a traditional IT organization, scalability and elasticity were achieved through
investment in infrastructure, while in the cloud, scalability and elasticity provide
savings and improved ROI (Return On Investment).
● Scalability in AWS is the ability to scale computing resources up or down as
demand increases or decreases, respectively.
● Elasticity in AWS is the distribution of incoming application traffic across
multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda
functions.
● Elastic Load Balancing and scaling automatically adjust your AWS computing
resources to meet unexpected demand and scale down automatically when demand
decreases.
● The AWS cloud is also useful for running short-term jobs, mission-critical jobs,
and jobs repeated at regular intervals.
4) Secure
● AWS provides a scalable cloud computing platform that gives customers
end-to-end security and end-to-end privacy.
● AWS builds security into its services and provides documentation describing how to
use the security features.
● AWS maintains the confidentiality, integrity, and availability of your data, which is
of the utmost importance to AWS.
5) Experienced
● The AWS cloud provides high levels of scale, security, reliability, and privacy.
● AWS has built its infrastructure based on lessons learned from over sixteen years of
experience managing the multi-billion-dollar Amazon.com business.
● Amazon continues to benefit its customers by enhancing its infrastructure
capabilities.
● Nowadays, Amazon has become a global web platform that serves millions of
customers, and AWS has been evolving since 2006, serving hundreds of thousands of
customers worldwide.
Create a simple webapp using cloud services
In this module, you will use the AWS Amplify console to deploy the static resources for your
web application. In subsequent modules, you will add dynamic functionality to these pages
using AWS Lambda and Amazon API Gateway to call remote RESTful APIs.
Key concepts
Static website – A static website has fixed content, unlike dynamic websites. Static websites
are the most basic type of website and are the easiest to create. All that is required is creating
a few HTML pages and publishing them to a web server.
Web hosting – Provides the technologies/services needed for the website to be viewed on
the internet.
AWS Regions – Separate geographic areas that AWS uses to house its infrastructure. These
are distributed around the world so that customers can choose a Region closest to them to
host their cloud infrastructure there.
HOW TO TEST YOUR WEB APP
https://learn.microsoft.com/en-us/azure/active-directory-b2c/add-password-reset-policy?pivots=b2c-user-flow
https://learn.microsoft.com/en-us/azure/active-directory/authentication/
6. On the Create a user flow page, select the Sign up and sign in user flow.
Note
You can also create custom attributes for use in your Azure AD B2C tenant.
13. Select Create to add the user flow. A prefix of B2C_1 is automatically prepended to the
name.
14. Follow the steps to handle the flow for "Forgot your password?" within the sign-up or
sign-in policy.
What is CI/CD?
CI or Continuous Integration is the practice of automating the integration of code
changes from multiple developers into a single codebase. It is a software development
practice where developers commit their work frequently into the central code repository
(GitHub or Stash). Automated tools then build the newly committed code and perform code
review, etc., as required upon integration.
The key goals of Continuous Integration are to find and address bugs quicker, make
the process of integrating code across a team of developers easier, improve software quality
and reduce the time it takes to release new feature updates. Some popular CI tools are
Jenkins, TeamCity, and Bamboo.
How CI Works
Below is a pictorial representation of a CI pipeline- the workflow from developers
checking in their code to its automated build, test, and final notification of the build status.
Once the developer commits their code to a version control system like Git, it triggers
the CI pipeline which fetches the changes and runs automated build and unit tests. Based on
the status of the step, the server then notifies the concerned developer whether the
integration of the new code to the existing code base was a success or a failure.
This helps in finding and addressing bugs much more quickly, makes the team more
productive by freeing developers from manual tasks, and helps teams deliver updates to
their customers more frequently. It has been found that integrating the entire development
cycle can reduce the developer time involved by roughly 25-30%.
CD or Continuous Delivery
CD or Continuous Delivery is carried out after Continuous Integration to make sure
that we can release new changes to our customers quickly in an error-free way. This includes
running integration and regression tests in the staging area (similar to the production
environment) so that the final release is not broken in production. It automates the release
process so that we have a release-ready product at all times and can deploy our application
at any point in time.
Continuous Delivery automates the entire software release process. The final decision
to deploy to a live production environment can be triggered by the developer/project lead
as required. Some popular CD tools are AWS CodeDeploy, Jenkins, and GitLab.
Why CD?
Continuous delivery helps developers test their code in a production-similar
environment, hence preventing any last moment or post-production surprises. These tests
may include UI testing, load testing, integration testing, etc. It helps developers discover and
resolve bugs preemptively.
By automating the software release process, CD contributes to low-risk releases,
lower costs, better software quality, improved productivity levels, and most importantly, it
helps us deliver updates to customers faster and more frequently.
How CI and CD Work Together
The below image describes how Continuous Integration combined with Continuous
Delivery helps quicken the software delivery process with lower risks and improved quality.
CI / CD workflow
We have seen how Continuous Integration automates the process of building, testing,
and packaging the source code as soon as it is committed to the code repository by the
developers. Once the CI step is completed, the code is deployed to the staging environment
where it undergoes further automated testing (like Acceptance testing, Regression testing,
etc.). Finally, it is deployed to the production environment for the final release of the product.
GitHub Actions
GitHub Actions goes beyond just DevOps and lets you run workflows when other
events happen in your repository. For example, you can run a workflow to automatically add
the appropriate labels whenever someone creates a new issue in your repository. You only
need a GitHub repository to create and run a GitHub Actions workflow. In this guide, you'll
add a workflow that demonstrates some of the essential features of GitHub Actions.
The following example shows you how GitHub Actions jobs can be automatically
triggered, where they run, and how they can interact with the code in your repository.
Workflows
A workflow is a configurable automated process that will run one or more jobs.
Workflows are defined by a YAML file checked in to your repository and will run when
triggered by an event in your repository, or they can be triggered manually, or at a defined
schedule.
Events
An event is a specific activity in a repository that triggers a workflow run. For
example, activity can originate from GitHub when someone creates a pull request, opens an
issue, or pushes a commit to a repository. You can also trigger a workflow run on a schedule,
by posting to a REST API, or manually.
Jobs
A job is a set of steps in a workflow that execute on the same runner. Each step is
either a shell script that will be executed, or an action that will be run. Steps are executed in
order and are dependent on each other. Since each step is executed on the same runner, you
can share data from one step to another. For example, you can have a step that builds your
application followed by a step that tests the application that was built.
Actions
An action is a custom application for the GitHub Actions platform that performs a
complex but frequently repeated task. Use an action to help reduce the amount of repetitive
code that you write in your workflow files. An action can pull your git repository from
GitHub, set up the correct toolchain for your build environment, or set up the authentication
to your cloud provider.
Runners
A runner is a server that runs your workflows when they're triggered. Each runner
can run a single job at a time. GitHub provides Ubuntu Linux, Microsoft Windows, and macOS
runners to run your workflows; each workflow run executes in a fresh, newly-provisioned
virtual machine. GitHub also offers larger runners, which are available in larger
configurations.
A complete version of this workflow file (for example, .github/workflows/learn-github-actions.yml) looks like this:
name: learn-github-actions
on: [push]
jobs:
  check-bats-version:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: '14'
      - run: npm install -g bats
      - run: bats -v
3. Commit these changes and push them to your GitHub repository.
4. Under "Workflow runs", click the name of the run you want to see.
5. Under Jobs or in the visualization graph, click the job you want to see.
Committing the workflow file to a branch in your repository triggers the push event and runs
your workflow.
4. From the list of workflow runs, click the name of the run you want to see.
6. The log shows you how each of the steps was processed. Expand any of the steps to
view its details.
For example, you can see the list of files in your repository:
The example workflow you just added is triggered each time code is pushed to the branch,
and shows you how GitHub Actions can work with the contents of your repository.
EXTRA QUESTIONS FROM PREVIOUS YEAR QUESTION PAPERS
1. Identify the following cloud service types and list their characteristics and
advantages: Cisco WebEx, Google App Engine, Amazon EC2. (10 MARKS)
2. Draw the CI/CD build process flow diagram for an online application and explain
each component. (10 MARKS)
The CI/CD build process flow diagram for an online application typically involves
multiple stages and components to automate the building, testing, and deployment of the
application.
2. CI Server (Continuous Integration): The CI server monitors the version control system
for code changes. Whenever a new commit is pushed or a pull request is submitted, the CI
server is triggered. Its primary purpose is to automate the integration of code changes into a
shared repository and perform various automated tasks.
3. Automated Build (Build Server): Upon triggering, the CI server initiates an automated
build process. It compiles the source code, gathers dependencies, and generates a build
artifact (e.g., an executable, binary, or container image). This artifact represents the built
application.
4. Automated Unit Tests and Code Analysis: After the build, the CI server runs automated
unit tests to check the functionality and correctness of the application. Additionally, it may
perform static code analysis to identify potential issues, bugs, or code style violations.
6. Deployment to Staging Environment: If all the previous stages (build and automated
tests) are successful, the application is deployed to a staging environment. The staging
environment is a near-production replica where final testing is conducted before going live.
7. User Acceptance Testing (UAT): In this phase, the application is tested by actual users
(typically non-technical stakeholders) to ensure it meets business requirements and user
expectations.
8. Deployment to Production: Upon successful UAT and approval, the application
is deployed to the production environment, making it available to end users.
3. Draw the CI/CD build process flow diagram for an online footwear store application
and explain each component.
There are several ways that Ted could automate the deployment process and adjust
the CPU speed and RAM on multiple servers. One approach would be to use a configuration
management tool such as Ansible or Puppet to define the desired state of the servers,
including the CPU speed and RAM configuration.
The configuration management tool could then be used to automate the process of
deploying the desired configuration to the servers. Another option would be to use a
containerization tool such as Docker to package the application and its dependencies into a
container, which can then be easily deployed to multiple servers.
This approach can help to ensure that the application is consistently deployed across
servers, and can also make it easier to adjust the CPU and RAM resources allocated to the
application by modifying the container configuration.
Ted could also consider using a cloud provider's managed services, such as AWS
Elastic Beanstalk or Google Cloud Platform's Kubernetes Engine, which provide an
automated way to deploy and scale applications with the ability to adjust CPU and RAM
resources as needed. Ultimately, the best approach will depend on Ted's specific
requirements and the resources available to him