00:00:00
Docker is a hugely popular technology used to develop and run applications, and knowing
how to use it is an essential skill for any developer. In today's video you'll learn
the most important parts of Docker. I'll walk you through the process of
containerizing a simple Python-based application and explain fundamental
concepts like Dockerfiles, images, and containers. I'll show you how to build your
application, run it locally, and deploy it to the cloud. But before that, we need to
answer: what is
00:00:28
Docker? At its heart, Docker is a tool used to bundle your application so that it can
be deployed on any machine. It packages up the entirety of your app so that it runs
the same on your machine, my machine, the cloud, anywhere. It's hardware agnostic, so it
works the same on Windows, Mac, or Linux machines. This is the
entire point of Docker: to get rid of the "works on my machine" problem so that
apps deploy and run consistently across systems. To understand how Docker works, you
need to
00:00:56
learn three important concepts: Dockerfiles, images, and containers. A Dockerfile is
the starting point for your application; it contains a series of commands that
specify how to run your program, including things like copying directories,
installing dependencies, and running your main process. A Dockerfile is built into an
executable package called an image. Images include everything needed to run your
program, including code, runtime, libraries, environment variables, and configuration
files. Images usually
00:01:24
inherit from a base image that contains pre-installed dependencies you can
build on and use right away. Docker Hub hosts a ton of publicly available images
that you can use as a starting point for your project. A container is a runnable
instance of an image. You can deploy many containers at once, either on your own
machine or through a cloud provider like AWS or Microsoft Azure. If your
application runs a critical workload and needs constantly running containers,
you can use a
00:01:50
container orchestration system like Docker Swarm, Kubernetes, or AWS's ECS or EKS
services to manage the life cycle and scale up your application. Those are the
basics, but the best way to learn Docker is to create and deploy a project of your
own. I'm now going to show you how to dockerize a sample Python-based application
and deploy it to the AWS cloud. I'll provide the accompanying code and necessary
commands on GitHub so you can follow along. Feel free to skip around if you're
already familiar with
00:02:16
some of the concepts. So let's get started with installing Docker. To get started
using Docker, we first need to install the Docker client: head over to the Docker
website and install Docker Engine. Docker is currently supported on all major
platforms, including Linux, Mac, and Windows; select your platform to download the
installer. After running the installation wizard, you should notice a Docker icon in
your system tray. Double-clicking this icon brings up the Docker Desktop
client. The Desktop client
00:02:46
provides a user-friendly interface to visualize and manage your Docker resources
like containers, images, and volumes. You can also view past builds and use the newly
released Docker Scout feature to analyze your images for security
vulnerabilities. While you're at it, consider installing the Docker
extension for VS Code; it provides command auto-complete and makes it
easier to navigate Docker infrastructure directly within your editor. Project setup
with Docker is a breeze thanks to some new command-line
00:03:18
tools that make it easier to set up your project. First, type docker --version to
confirm everything is set up correctly on your CLI. We'll be using the docker
init command to set up our initial project with the necessary files. First we need to
select our language of choice, in this case Python. We can also specify
the version number or accept the default, and we can choose the port number
we'd like the Docker container to listen on; here I'm selecting port 8080.
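As a rough sketch, that setup looks like this in a terminal (the exact docker init prompt wording varies by Docker version, and the answers shown here are just the choices described above):

```shell
docker --version
# Docker version output confirms the CLI is installed

docker init
# ? What application platform does your project use?  Python
# ? What version of Python do you want to use?        3.10
# ? What port do you want your app to listen on?      8080
# ? What is the command you use to run your app?      python3 app.py
```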
00:03:45
Next, we provide the startup command that Docker will use when it starts our container;
here I'm just telling it to run the app.py file, which we'll create in a moment. Pressing
enter completes the initialization of our project. You'll notice a handful of files
were created as part of the setup: the .dockerignore file specifies
which files and directories should be ignored by the Docker build process, the
compose.yaml file is used to define and run multi-container Docker applications,
the Dockerfile contains
00:04:13
instructions to build our Docker image, and the README file provides some additional
documentation for learning and using Docker. For now I'm going to empty out the
compose.yaml and Dockerfile so we can start with a clean slate. We'll go ahead and
create an app.py file to write our Flask code: we'll import some dependencies,
create an instance of our app, and then create a simple API endpoint that
returns some placeholder text. This isn't a sophisticated app; it's just meant to
00:04:39
get you started so that you can add more functionality later. Finally, we use the
app.run command, providing it a localhost IP address and the port we'd like the app
to run on. Next we create a requirements.txt file and drop in the flask dependency
so that we can incorporate it into our application.
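Here's a minimal sketch of what that app.py might look like. The endpoint path and the placeholder text are assumptions, not taken from the video, and note that inside a container you typically bind to 0.0.0.0 rather than 127.0.0.1 so the port mapping can reach the app:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder response; swap in real functionality later
    return "Hello, Docker!"

if __name__ == "__main__":
    # Bind to all interfaces inside the container, on port 8080
    app.run(host="0.0.0.0", port=8080)
```

The requirements.txt alongside it needs only a single line, `flask`, so pip can install the one dependency.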
Next comes adding to our Dockerfile: on line one we use the FROM command
to pull the Python base image with a 3.10.1 version tag. This is an official image
from Docker Hub that already has Python pre-installed. Next we specify the
00:05:11
application working directory, then copy everything from the current directory into
the container's /app directory. To ensure that Flask is installed, we run
pip install with a reference to our requirements.txt file. We use the EXPOSE
command to open up port 8080, the same port our Flask app is listening on for
traffic, and then the CMD command to instruct Docker to run our app.py file
on startup. Before we can run our image in a container, we first need to build it.
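Putting those steps together, the Dockerfile might look something like this (a sketch based on the description above; the base-image tag and working directory are assumptions, so adjust them to your setup):

```dockerfile
# Official base image with Python pre-installed
FROM python:3.10

# Set the working directory inside the container
WORKDIR /app

# Copy everything from the build context into /app
COPY . .

# Install Flask and any other listed dependencies
RUN pip install -r requirements.txt

# Document the port the Flask app listens on
EXPOSE 8080

# Start the app when the container launches
CMD ["python3", "app.py"]
```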
00:05:41
For this we use the docker build command with the -t flag to give our image a name. You can
optionally add a tag to your image by separating the name and
the tag with a colon. The period at the end specifies the build context, which is our current
working directory. Pressing enter triggers the build, including downloading
the base image and its dependencies and stepping through each line in our Dockerfile.
To start up a container with this image, we use the docker run command. The -p flag
allows us to
00:06:07
control port mappings: we provide 127.0.0.1 to denote that the port is only accessible on
localhost, and then map port 8080 on the host to port 8080 on our container. Finally,
we provide the name of the image we'd like to run.
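Those two commands look roughly like this (the image name "flask-demo" is a placeholder; use whatever name you passed to -t):

```shell
# Build the image from the Dockerfile in the current directory (the ".")
docker build -t flask-demo:latest .

# Run a container, exposing port 8080 on localhost only
docker run -p 127.0.0.1:8080:8080 flask-demo:latest
```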
Now that our application is running, we can visit it on localhost port 8080 to verify
it's working. If we bring up our browser, we can see that the container started up
successfully and is serving our content. With a running container, you can start to
see the value of Docker Desktop: in this tool you're
00:06:37
able to quickly click on each running container and inspect things like logs and
other configuration. You can also stop, restart, and delete the container from this
UI if you so choose. Another important Docker concept to understand is
volumes. Volumes allow you to share data between your host machine and your Docker
containers, and they let you persist data even after your containers have been
shut down. Using volumes, you can map a folder on your host machine to a folder in
your container; with this setup you can access
00:07:08
host files from your container and vice versa. Here's a quick example: I'm
running our image with the -v flag and providing it a host path of C:\Users; the
path after the second colon indicates the folder in the container where the
contents will be present. If you open up Docker Desktop and head over to the
bind mount section, you can verify that the folder was attached successfully.
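A sketch of that bind-mount invocation (the image name "flask-demo" and the container path "/my_data" are placeholder names, and the host path syntax shown here is for Windows):

```shell
# Mount the host's C:\Users folder into the container at /my_data
docker run -v C:\Users:/my_data -p 127.0.0.1:8080:8080 flask-demo
```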
A different way to use volumes is by creating a named volume. Named volumes store data
on your host
00:07:37
machine but manage the contents for you. Here I'm using the volume create command
and providing it the name of our volume. Now, with the docker run command and the -v
flag, we can provide the name of our shared volume alongside the path where we'd
like the volume mounted in our container; here I'm calling it my_path.
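The named-volume version of those commands looks like this (the volume name "my_volume", the mount path "/my_path", and the image name "flask-demo" are placeholders):

```shell
# Create a named volume whose storage Docker manages for you
docker volume create my_volume

# Mount the named volume into the container at /my_path
docker run -v my_volume:/my_path -p 127.0.0.1:8080:8080 flask-demo
```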
Now, within Docker Desktop under the Volumes section, we can see that the shared volume
has been created and is currently in use. We can then navigate over to our Containers
section and see
00:08:06
that it is currently being used as well; as you can see by the blue indicator,
my_path in the container is referencing our volume. Docker Compose is another useful
feature that lets us run multiple containers at once on the same host machine. This
is useful when you have two distinct processes, say an API and a
database, and would like to run them both at once on a single host. To use this
feature, you need to create a special compose.yaml file that contains the
configuration for all
00:08:34
of your containers. In our case, our Docker Compose file contains three top-level
fields: services, volumes, and secrets. Services specify the containers you'd like to
deploy, volumes are the volumes you'd like to use, and secrets hold sensitive
data like passwords or access keys that you'd like to keep private. The server
service is for our Flask API; within it we define the build context, the port
mappings, and a nifty new feature called Docker watch, which allows you to
automatically rebuild your
00:09:01
application anytime you change your code. The db container uses the postgres image
and references a secrets entry, a volume we'd like to use, environment variables, our
port of choice, and a health-check configuration. Finally, the volumes section defines
a named volume, and the secrets section a reference to a secrets file that we store
on disk. Running our containers using Compose requires some slightly different syntax.
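A compose.yaml with that shape might look roughly like this (service names, paths, and the Postgres settings are assumptions meant to illustrate the three top-level fields, not the video's exact file):

```yaml
services:
  server:
    build: .
    ports:
      - "8080:8080"
    develop:
      watch:                  # Docker watch: rebuild on code changes
        - action: rebuild
          path: .
  db:
    image: postgres
    secrets:
      - db-password
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD", "pg_isready"]
      interval: 10s

volumes:
  db-data:

secrets:
  db-password:
    file: db/password.txt
```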
This time we use docker compose up with the --build flag to build and run the
containers in our compose file. Now, in
00:09:32
Docker Desktop, we're able to see both of our containers running at once. Now,
deploying your application locally is one thing, but you may be wondering: how do I
get my application onto the cloud so that customers can start using it?
For this we're going to use AWS, or Amazon Web Services. AWS has a whole bunch of
services, and there are many different ways to deploy containers onto AWS.
There's no right or wrong way, but there are certain pros and cons to
the various
00:09:57
approaches. By far the most popular choices are Elastic Kubernetes Service (EKS) and
Elastic Container Service (ECS). Both can be used to manage the life
cycle of your containers, provide infrastructure in a serverless mode, and be
scaled up to handle tons of traffic. ECS is by far the most popular if you're
already integrated into the AWS ecosystem, so we'll be going with that in this example
to deploy our Flask API. In order to get started with ECS, we first need to
upload our image into
00:10:24
Elastic Container Registry, or ECR for short. This is the equivalent of Docker Hub,
but owned by AWS. To get started, we first need to log in to ECR using the following
command, making sure to substitute your region and your account ID. If everything
worked, you should see a "login succeeded" message like we see here. Now we can tag our
image to prepare it for the upload into ECR; again, make sure to substitute your
account ID and the region you're using. We can now push our image to ECR using the push
command.
00:10:53
Notice, though, that we're getting an error that the repository does not yet
exist. To address this, we'll go into the AWS console to create the repository:
in the ECR service, under the Repositories section, click on Create repository, give
your repository a name, and scroll down to click Create repository. If we try the
push command once more, you'll see that it succeeds and the image gets pushed to ECR
without an issue. Now we can go to the ECS service and create something called a
cluster. A cluster just contains
00:11:23
infrastructure that we'll use to host our service. After selecting Create cluster,
we give it a name and then select AWS Fargate, the
serverless mode, for our infrastructure; then we can go ahead and click Create.
Next we need to create a task definition, which is essentially a blueprint for our
application. After clicking Create, we give it a name, ensure that we
still have AWS Fargate selected, then specify the CPU and memory we need before
finally
00:11:49
selecting None as the task role. Finally, we need to provide some details for the
container: a name, and the repository URI, which can be retrieved from the
ECR section of the console. The last step is to specify the port mapping, which
is 8080 for our Docker container and will be 8080 on our
host machine. Now you can scroll all the way down and click Create. With our
task definition created, we can deploy our service using the cluster we created
previously; you just need to
00:12:18
scroll down and give your service a name before clicking Create in the bottom
right. After a few brief moments, it deploys our task onto the Fargate service;
we'll know it's done when the task status changes to Running. Then we can click on
our task, grab the public IP address, and paste it into our browser on port 8080
to verify that it's working. If you have trouble, you may need to add a security
group rule like this one to enable connectivity. And just like that,
you've
00:12:42
learned everything you need to know about Docker and deployed a container into the
cloud.