Docker is a containerization platform that packages applications and dependencies into containers that can run on any infrastructure. Containers are more lightweight than virtual machines and provide operating-system-level virtualization. The key Docker components are the Docker Engine (including the daemon and client), images, containers, registries, and networks. Dockerfiles define how to build images automatically by running commands. Images act as templates for containers, which are lightweight and portable environments for applications.
Introduction to Docker
Install and Set Up Docker
Docker Architecture
How to create a Dockerfile and its commands
How to use Docker commands
Introduction to Docker Compose and Docker Swarm
Ways of Docker Networking
Dockerizing the Application
Docker Swarm vs Kubernetes
Introduction to Docker: Before starting with Docker, we should understand the
following terms:
What is Virtualization?
Virtualization is the technique of running a guest operating system on top of a
host operating system. This technique was a revelation when it was introduced,
because it allowed developers to run multiple operating systems in different
virtual machines, all running on the same host. This eliminated the need for
extra hardware resources. The advantages of Virtual Machines or Virtualization
are:
Multiple operating systems can run on the same machine
Maintenance and recovery are easy in case of failure conditions
Total cost of ownership is lower due to the reduced need for
infrastructure
In the diagram on the left, you can see
there is a host operating system on
which there are 3 guest operating
systems running, which are nothing but
the virtual machines.
As you know, nothing is perfect; Virtualization also has some
shortcomings. Running multiple virtual machines on the same host operating
system leads to performance degradation. This is because each guest OS runs
on top of the host OS with its own kernel and set of libraries and
dependencies, which takes up a large chunk of system resources,
i.e. hard disk, processor, and especially RAM.
Another problem with virtual machines is that they take almost a minute to
boot up, which is critical for real-time applications.
Following are the disadvantages of Virtualization:
Running multiple virtual machines leads to unstable performance
Hypervisors are not as efficient as the host operating system
The boot-up process is long and time-consuming
These drawbacks led to the emergence of a new technique called
Containerization. Now let me tell you about Containerization.
What is Containerization?
1. Containerization is the technique of bringing Virtualization to the operating
system level.
2. While Virtualization brings abstraction to the hardware, Containerization
brings abstraction to the operating system.
3. Containerization is also a type of Virtualization.
4. Containerization is more efficient because there is no guest OS; containers
utilize the host's operating system and share relevant libraries and resources
as and when needed, unlike virtual machines.
5. Application-specific binaries and libraries of containers run on the host
kernel, which makes processing and execution very fast.
6. Booting up a container takes only a fraction of a second, because all
containers share the host operating system and hold only the application-related
binaries and libraries.
7. They are lightweight and faster than virtual machines.
Advantages of Containerization over Virtualization:
Containers on the same OS kernel are lighter and smaller
Better resource utilization compared to VMs
The boot-up process is short and takes only a few seconds
In the diagram on the right, you can
see that there is a host operating
system which is shared by all the
containers. Containers contain only
application-specific libraries, which
are separate for each container, and
they are faster and do not waste any
resources.
All these containers are handled by the containerization layer, which is not
native to the host operating system. Hence, software is needed that enables
you to create and run containers on your host operating system.
What are virtual machines and Virtualization?
Before containers came along, the virtual machine was the technology of
choice for optimizing server capacity. Programmed to emulate the hardware of
a physical computer with a complete operating system, VMs (and hypervisors)
make it possible to run what appear to be multiple computers, with multiple
different operating systems, on the hardware of a single physical server.
What is a Hypervisor?
Virtualization is not possible without the hypervisor. A hypervisor, or virtual
machine monitor, is the software or firmware layer that enables multiple
operating systems to run side by side, all with access to the same physical
server resources. The hypervisor orchestrates and separates the available
resources (computing power, memory, storage, etc.), allotting a portion to each
virtual machine as needed.
Introduction To Docker
Docker is a software containerization platform: you can build your
application, package it along with its dependencies into a container, and
then ship that container to run easily on other machines.
Docker is one such tool that truly lives up to its promise of Build, Ship, and Run.
For example, let's consider a Linux-based application which has been written
in both Ruby and Python. This application requires specific versions of Linux,
Ruby, and Python. In order to avoid any version conflicts on the user's end, a
Linux Docker container can be created with the required versions of Ruby and
Python installed along with the application. Now the end users can use the
application easily by running this container, without worrying about
dependencies or version conflicts.
Docker is a containerization platform that packages your application and all its
dependencies together in the form of Containers to ensure that your application
works seamlessly in any environment.
As you can see in the diagram on the left, each
application will run on a separate container and
will have its own set of libraries and dependencies.
This also ensures that there is process level
isolation, meaning each application is independent
of other applications, giving developers surety that
they can build applications that will not interfere
with one another.
As a developer, I can build a container which has different applications installed
on it and give it to my QA team who will only need to run the container to
replicate the developer environment.
Benefits of Docker
Now, the QA team need not install all the dependent software and applications
to test the code and this helps them save lots of time and energy. This also
ensures that the working environment is consistent across all the individuals
involved in the process, starting from development to deployment. The number
of systems can be scaled up easily and the code can be deployed on them
effortlessly.
Virtualization vs Containerization
Virtualization and Containerization both let you run multiple isolated
environments on a single host machine.
Virtualization deals with creating many operating systems in a single host
machine. Containerization on the other hand will create multiple containers for
every type of application as required.
As we can see from the image, the major difference is that there are multiple
Guest Operating Systems in Virtualization which are absent in Containerization.
The best part of Containerization is that it is very lightweight as compared to the
heavy Virtualization.
Dockerfile, Docker Image And Docker Container:
1. A Docker Image is created from the sequence of commands written in a file
called a Dockerfile.
2. When this Dockerfile is executed using a docker command, it results in a
Docker Image with a name.
3. When this Image is executed by the "docker run" command, it will by itself
start whatever application or service it must start on its execution.
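The Dockerfile → Image → Container relationship above can be sketched as a short shell session; the image name myapp, its tag, and the container names are illustrative, and the commands assume a running Docker daemon:

```shell
# Build an image named "myapp" from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Create and start a container from that image
docker run --name myapp-container myapp:1.0

# The same image can be run any number of times,
# each run producing a separate container
docker run --name myapp-container-2 myapp:1.0
```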
Docker Hub:
Docker Hub is like GitHub for Docker Images. It is basically a cloud registry
where you can find Docker Images uploaded by different communities; you can
also develop your own image and upload it to Docker Hub, but first you need to
create an account on Docker Hub.
Dockerfile:
A Dockerfile is a simple text file with instructions on how to build your
images. The following steps explain how you should go about creating a
Dockerfile.
Step 1 − Create a file called Dockerfile and edit it using vim. Please note that
the name of the file has to be "Dockerfile" with a capital "D".
Step 2 − Build your Dockerfile using the following instructions.
#This is a sample Image
FROM ubuntu
MAINTAINER demousr@gmail.com
RUN apt-get update
RUN apt-get install -y nginx
CMD ["echo", "Image created"]
The following points need to be noted about the above file −
1. The first line "#This is a sample Image" is a comment. You can add comments
to the Dockerfile with the help of the # symbol.
2. The next line has to start with the FROM keyword. It tells Docker which
base image you want to base your image on. In our example, we are creating
an image from the ubuntu image.
3. The next line names the person who is going to maintain this image. Here
you specify the MAINTAINER keyword and just mention the email ID.
4. The RUN command is used to run instructions against the image. In our case,
we first update our Ubuntu system and then install the nginx server on our
ubuntu image.
5. The last command is used to display a message to the user.
Step 3 − Save the file.
How to Build the Docker File:
$ docker build -t ImageName:TagName dir
-t − is to mention a tag for the image.
ImageName − This is the name you want to give to your image.
TagName − This is the tag you want to give to your image.
dir − The directory where the Dockerfile is present.
Example
sudo docker build -t myimage:0.1 .
Here, myimage is the name we are giving to the image and 0.1 is the tag
we are giving to our image.
Since the Dockerfile is in the present working directory, we used "." at the end
of the command to signify the present working directory.
Output: From the output, you will first see that the Ubuntu image is
downloaded from Docker Hub, because there is no image available locally on
the machine.
Finally, when the build is complete, all the necessary commands will have run
on the image.
Docker Image:
An image is a combination of a file system and parameters. Let’s take an
example of the following command in Docker.
$ docker run hello-world
This tells the Docker program on the Operating System that something needs to
be done. The run command is used to mention that we want to create an
instance of an image, which is then called a container.
Finally, "hello-world" represents the image from which the container is made.
Now let’s look at how we can use the CentOS image available in Docker Hub to
run CentOS on our Ubuntu machine. We can do this by executing the following
command on our Ubuntu machine −
sudo docker run -it centos /bin/bash
sudo − ensures that the command runs with root access.
centos − the name of the image we want to download from Docker Hub and run on our Ubuntu
machine.
-it − used to mention that we want to run in interactive mode.
/bin/bash − used to run the bash shell once CentOS is up and running.
Displaying Docker Images
To see the list of all Docker images on the system, use the following command:
sudo docker images
From the above output, you can see that the server has three images: centos,
newcentos, and jenkins. Each image has the following attributes:
TAG − This is used to logically tag images.
Image ID − This is used to uniquely identify the image.
Created − The number of days since the image was created.
Virtual Size − The size of the image.
Downloading Docker Images
Docker Hub:
Docker Hub is a repository where all the Docker images are uploaded, and we can
download any image at any point of time by using the following command:
sudo docker run centos
This command will download the centos image, if it is not already present, and
run the OS as a container.
When we run the above command, we will get the following result −
You will now see the CentOS Docker image downloaded. Now, if we run the
docker images command to see the list of images on the system, we should be
able to see the centos image as well.
Removing Docker Images
The Docker images on the system can be removed via the docker rmi command.
Let’s look at this command in more detail.
docker rmi ImageID
sudo docker rmi 7a86f8ffcb25
Here, 7a86f8ffcb25 is the Image ID of the newcentos image.
When we run the above command, it will produce the following result −
Docker Architecture
Below is a simple diagram of the Docker architecture.
Docker Engine
It is the core part of the whole Docker system. Docker Engine is an application
which follows client-server architecture. It is installed on the host machine.
There are three components in the Docker Engine:
Server: It is the Docker daemon, called dockerd. It can create and manage
Docker images, containers, networks, etc.
REST API: It is used to instruct the Docker daemon what to do.
Command Line Interface (CLI): It is the client used to enter Docker
commands.
The Docker daemon
The Docker daemon (dockerd) listens for Docker API requests and manages
Docker objects such as images, containers, networks, and volumes. A daemon
can also communicate with other daemons to manage Docker services.
Docker Client
Docker users can interact with Docker through a client. When any docker
command runs, the client sends it to the dockerd daemon, which carries it
out. Docker commands use the Docker API. The Docker client can communicate
with more than one daemon.
Docker Registries
A Docker registry stores Docker images. Docker Hub is a public registry that
anyone can use, and Docker is configured to look for images on Docker Hub by
default. You can even run your own private registry. If you use Docker
Datacenter (DDC), it includes Docker Trusted Registry (DTR).
When you use the docker pull or docker run commands, the required images are
pulled from your configured registry. When you use the docker push command,
your image is pushed to your configured registry.
Docker Objects
When you are working with Docker, you use images, containers, volumes,
networks; all these are Docker objects.
Images
Docker images are read-only templates with instructions to create a Docker
container. A Docker image can be pulled from Docker Hub and used as it is, or
you can add additional instructions to the base image and create a new,
modified Docker image. You can also create your own Docker images using a
Dockerfile: create a Dockerfile with all the instructions needed to build the
image, build it, and you have your custom Docker image.
A Docker image consists of read-only layers, with a writable layer added on
top when a container runs. When you edit a Dockerfile and rebuild it, only the
modified instructions and those after them are rebuilt, thanks to the layer cache.
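Layer caching is why instruction order matters in a Dockerfile. A common pattern, sketched here for a hypothetical Node.js app (the file names and base image are illustrative), is to copy dependency manifests before the rest of the source, so that editing application code does not invalidate the dependency-install layer:

```dockerfile
FROM node:18
WORKDIR /app
# These two layers are rebuilt only when the package files change
COPY package.json package-lock.json ./
RUN npm install
# Editing source code invalidates only the layers from here down
COPY . .
CMD ["node", "index.js"]
```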
Containers
After you run a Docker image, it creates a Docker container. All the applications
and their environments run inside this container. You can use the Docker API or
CLI to start, stop, or delete a Docker container.
Example docker run command
The following command runs an ubuntu container, attaches interactively to your
local command-line session, and runs /bin/bash.
$ docker run -i -t ubuntu /bin/bash
When you run this command, the following happens (assuming you are using
the default registry configuration):
1. If you do not have the ubuntu image locally, Docker pulls it from your
configured registry, as though you had run docker pull ubuntu manually.
2. Docker creates a new container, as though you had run a docker container
create command manually.
3. Docker allocates a read-write filesystem to the container, as its final layer.
This allows a running container to create or modify files and directories in its
local filesystem.
4. Docker creates a network interface to connect the container to the default
network, since you did not specify any networking options. This includes
assigning an IP address to the container. By default, containers can connect to
external networks using the host machine's network connection.
5. Docker starts the container and executes /bin/bash. Because the container is
running interactively and attached to your terminal (due to the -i and -t flags),
you can provide input using your keyboard while the output is logged to your
terminal.
6. When you type exit to terminate the /bin/bash command, the container stops
but is not removed. You can start it again or remove it.
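The stopped container from step 6 can be restarted or cleaned up; the container ID placeholder below is whatever docker ps -a reports on your machine:

```shell
# List all containers, including the one that just exited
docker ps -a

# Restart it and reattach to the same interactive bash session
docker start -ai <container_id>

# Or remove it for good
docker rm <container_id>
```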
Volumes
Persistent data generated and used by Docker containers is stored in volumes.
Volumes are completely managed by Docker, through the Docker CLI or Docker
API. Volumes work with both Windows and Linux containers. Rather than
persisting data in a container's writable layer, it is always a good option to
use volumes. A volume's content exists outside the life cycle of a container,
so using a volume does not increase the size of a container.
You can use the -v or --mount flag to start a container with a volume.
docker run -d --name <container_name> -v <volume_name>:/app nginx:latest
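The same mount can also be written with the more explicit --mount syntax; the volume and container names below are illustrative, and the commands assume a running Docker daemon:

```shell
# Create a named volume explicitly (docker run would also auto-create it)
docker volume create appdata

# -v shorthand: <volume>:<path-in-container>
docker run -d --name web1 -v appdata:/app nginx:latest

# --mount long form: key=value pairs, equivalent to the -v line above
docker run -d --name web2 --mount source=appdata,target=/app nginx:latest
```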
Networks
Docker networking is a passage through which all the isolated containers
communicate. There are mainly five network drivers in Docker:
1. Bridge: It is the default network driver for a container. You use this
network when your application is running on standalone containers, i.e.
multiple containers communicating on the same Docker host.
2. Host: This driver removes the network isolation between Docker
containers and the Docker host. It is used when you don't need any network
isolation between host and container.
3. Overlay: This network enables swarm services to communicate with each
other. It is used when the containers are running on different Docker
hosts or when swarm services are formed by multiple applications.
4. None: This driver disables all networking.
5. Macvlan: This driver assigns a MAC address to containers to make them look
like physical devices. The traffic is routed between containers through
their MAC addresses. This network is used when you want the containers
to look like a physical device, for example, while migrating a VM setup.
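The drivers above are selected with the -d flag of docker network create (or the --network flag of docker run); the network names here are illustrative and the commands assume a running Docker daemon:

```shell
# Bridge network (the default driver if -d is omitted)
docker network create -d bridge my-bridge-net

# Run a container attached to it
docker run -d --name web --network my-bridge-net nginx:latest

# Host networking needs no network object; it is selected at run time
docker run -d --network host nginx:latest

# None: the container gets only a loopback interface
docker run -d --network none nginx:latest
```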
The underlying technology
Docker is written in Go and takes advantage of several features of the Linux
kernel to deliver its functionality.
Namespaces
Docker uses a technology called namespaces to provide the isolated workspace
called the container. When you run a container, Docker creates a set of
namespaces for that container.
These namespaces provide a layer of isolation. Each aspect of a container runs
in a separate namespace, and its access is limited to that namespace.
Docker Engine uses namespaces such as the following on Linux:
The pid namespace: Process isolation (PID: Process ID).
The net namespace: Managing network interfaces (NET: Networking).
The ipc namespace: Managing access to IPC resources (IPC: Inter-Process
Communication).
The mnt namespace: Managing filesystem mount points (MNT: Mount).
The uts namespace: Isolating kernel and version identifiers (UTS: Unix
Timesharing System).
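On a Linux host these namespaces can be seen directly under /proc; the docker inspect line is commented out because it assumes a running container named "web", which is hypothetical:

```shell
# Every process's namespaces appear as symlinks under /proc/<pid>/ns
ls /proc/self/ns

# For a container, look up its host PID first (hypothetical container "web"):
# PID=$(docker inspect --format '{{.State.Pid}}' web)
# sudo ls -l /proc/$PID/ns
```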
Control groups
Docker Engine on Linux also relies on another technology called control groups
(cgroups). A cgroup limits an application to a specific set of resources. Control
groups allow Docker Engine to share available hardware resources among
containers and optionally enforce limits and constraints. For example, you can
limit the memory available to a specific container.
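docker run exposes these cgroup limits as flags; the limit values below are just examples, and the commands assume a running Docker daemon:

```shell
# Cap the container at 256 MB of RAM and half a CPU
docker run -d --memory 256m --cpus 0.5 nginx:latest

# The applied memory limit (in bytes) can be checked afterwards
docker inspect --format '{{.HostConfig.Memory}}' <container_id>
```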
Union file systems
Union file systems, or UnionFS, are file systems that operate by creating layers,
making them very lightweight and fast. Docker Engine uses UnionFS to provide
the building blocks for containers. Docker Engine can use multiple UnionFS
variants, including AUFS, btrfs, vfs, and DeviceMapper.
Container format
Docker Engine combines the namespaces, control groups, and UnionFS into a
wrapper called a container format. The default container format is libcontainer.
In the future, Docker may support other container formats.
How to install and set up Docker:
Following are the steps to install Docker on an Ubuntu 17.10 machine.
1. Install required packages
2. Set up the Docker repository
3. Install Docker on Ubuntu
1. Install Required Packages:
There are certain packages you require in your system for installing Docker.
Execute the below command to install those packages.
sudo apt-get install curl apt-transport-https ca-certificates
software-properties-common
2. Set Up Docker Repository:
Now, import Docker's official GPG key to verify package signatures before
installing packages with apt-get. Run the below command in a terminal:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Now, add the Docker repository on your Ubuntu system which contains Docker
packages including its dependencies, for that execute the below command:
sudo add-apt-repository "deb [arch=amd64]
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
3. Install Docker on Ubuntu:
Now you need to update the apt index and install Docker Community Edition;
for that, execute the below commands:
sudo apt-get update
sudo apt-get install docker-ce
Congratulations! You have successfully installed Docker.
Docker CommandsTutorial
1. docker --version
This command is used to get the currently installed version of Docker
2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the docker repository
(hub.docker.com)
3. docker run
Usage: docker run -it -d <image name>
This command is used to create a container from an image
4. docker ps: This command is used to list the running containers
5. docker ps -a: This command is used to show all the running and exited
containers
6. docker exec: This command is used to access the running container
Usage: docker exec -it <container id> bash
7. docker stop: This command stops a running container
Usage: docker stop <container id>
8. docker kill: This command kills the container by stopping its execution
immediately. The difference between 'docker kill' and 'docker stop' is that
'docker stop' gives the container time to shut down gracefully; in situations
where stopping the container is taking too much time, one can opt to
kill it
Usage: docker kill <container id>
9. docker commit: This command creates a new image from an edited container on
the local system
Usage: docker commit <container id> <username/imagename>
10. docker login: This command is used to login to the docker hub repository
11. docker push: This command is used to push an image to the docker hub
repository.
Usage: docker push <username/imagename>
12. docker images: This command lists all the locally stored docker images
13. docker rm: This command is used to delete a stopped container
Usage: docker rm <container id>
14. docker rmi: This command is used to delete an image from local storage
Usage: docker rmi <image-id>
15. docker build: This command is used to build an image from a specified Dockerfile.
Usage: docker build <path to Dockerfile>
Introduction to Docker Compose and Docker Swarm
Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a YAML file to configure your application’s services.
Then, with a single command, you create and start all the services from your
configuration.
Compose works in all environments: production, staging, development, testing,
as well as CI workflows.
Using Compose is basically a three-step process:
1. Define your app's environment with a Dockerfile so it can be reproduced
anywhere.
2. Define the services that make up your app in docker-compose.yml so they can
be run together in an isolated environment.
3. Run docker-compose up and Compose starts and runs your entire app.
A docker-compose.yml looks like this:
version: '2.0'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
      - logvolume01:/var/log
    links:
      - redis
  redis:
    image: redis
volumes:
  logvolume01: {}
Compose has commands for managing the whole lifecycle of your application:
Start, stop, and rebuild services
View the status of running services
Stream the log output of running services
Run a one-off command on a service
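The lifecycle operations above map directly to Compose subcommands; these assume a docker-compose.yml in the current directory that defines a service named web (the service name is illustrative):

```shell
docker-compose up -d        # create and start all services in the background
docker-compose ps           # view the status of running services
docker-compose logs -f web  # stream the log output of the "web" service
docker-compose run web env  # run a one-off command in a new "web" container
docker-compose stop         # stop services without removing them
docker-compose down         # stop and remove containers, networks, etc.
```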
Docker Swarm
A swarm is a cluster of one or more computers running Docker. It can consist of
one or more machines, so you can use swarm functionality on your local
machine. With the introduction of swarm mode, Docker received built-in
orchestration, clustering and scheduling capabilities.
When you run containers with the docker command, swarm mode does not
come into play by default. If you are using swarm mode, you are dealing at a
higher abstraction level: services instead of containers. You can deal with both
kinds on the same machine.
When a container is running as part of a service, swarm mode takes over and
manages the life cycle of the service. It strives to bring up the desired number
of replicas, as specified in the service definition. That part of the
functionality is very similar to Kubernetes.
So, Docker Swarm is something to orchestrate clusters, but it also brought some
functionality that previously belonged to docker-compose into the Docker engine
itself. With the stack command, the Docker engine takes care of bringing up
stacks of containers using docker stack deploy, which was previously taken care
of by docker-compose.
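A Compose file can be deployed to a swarm with the stack command; the stack name mystack is illustrative, and the commands assume a Docker daemon and a docker-compose.yml in the current directory:

```shell
# Turn the local engine into a single-node swarm (if not already one)
docker swarm init

# Deploy the services defined in docker-compose.yml as a stack
docker stack deploy -c docker-compose.yml mystack

# Inspect and tear down the stack
docker stack services mystack
docker stack rm mystack
```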
Overview of Docker Networking
Docker networking allows you to attach a container to as many networks as you
like. You can also attach an already running container.
Docker containers are nothing but run-time instances of Docker images.
These images are uploaded to Docker Hub (a GitHub-like registry for Docker
images), which contains public/private repositories.
So, you can pull images from public repositories, and you can also upload
your own images to Docker Hub. Then, from Docker Hub, various teams
such as Quality Assurance or Production teams will pull that image and prepare
their own containers. These individual containers communicate with each other
through a network to perform the required actions, and this is nothing but
Docker Networking.
So, you can define Docker Networking as a communication passage through
which all the isolated containers communicate with each other in various
situations to perform the required actions.
Container Network Model(CNM)
Libnetwork is an open-source Docker library which implements all of
the key concepts that make up the CNM.
So, the Container Network Model (CNM) standardizes the steps required to provide
networking for containers using multiple network drivers. CNM requires a
distributed key-value store, like Consul, to store the network configuration.
The CNM has interfaces for IPAM plugins and network plugins.
The IPAM plugin APIs are used to create/delete address pools and
allocate/deallocate container IP addresses, whereas the network plugin APIs are
used to create/delete networks and add/remove containers from networks.
The CNM is mainly built on five objects: Network Controller, Driver, Network,
Endpoint, and Sandbox.
Network Controller: Provides the entry-point into Libnetwork that exposes
simple APIs for Docker Engine to allocate and manage networks. Since
Libnetwork supports multiple inbuilt and remote drivers, Network Controller
enables users to attach a particular driver to a given network.
Driver: Owns the network and is responsible for managing it. Multiple drivers
can participate to satisfy various use-cases and deployment scenarios.
Network: Provides connectivity between a group of endpoints that belong to
the same network and are isolated from the rest. So, whenever a network is
created or updated, the corresponding Driver will be notified of the event.
Endpoint: Provides the connectivity for services exposed by a container in a
network with other services provided by other containers in the network. An
endpoint represents a service and not necessarily a particular container; an
Endpoint also has a global scope within a cluster.
Sandbox: Created when users request to create an endpoint on a network. A
Sandbox can have multiple endpoints attached to different networks,
representing a container's network configuration, such as IP address,
MAC address, routes, and DNS.
Network Drivers
There are mainly 5 network drivers: Bridge, Host, None, Overlay, Macvlan.
Bridge: The bridge network is a private default internal network created by
Docker on the host. All containers get an internal IP address and can access
each other using this internal IP. Bridge networks are usually used when your
applications run in standalone containers that need to communicate.
Host: This driver removes the network isolation between the Docker host and
the Docker containers, to use the host's networking directly. With this, you
will not be able to run multiple web containers on the same host on the same
port, as the port is now common to all containers in the host network.
None: In this kind of network, containers are not attached to any network and
do not have any access to the external network or other containers. This
network is used when you want to completely disable the networking stack on a
container and only create a loopback device.
Overlay: Creates an internal private network that spans all the nodes
participating in the swarm cluster. Overlay networks facilitate
communication between a swarm service and a standalone container, or
between two standalone containers on different Docker daemons.
Macvlan: Allows you to assign a MAC address to a container, making it appear
as a physical device on your network. The Docker daemon then routes traffic to
containers by their MAC addresses. The Macvlan driver is the best choice when
containers are expected to be directly connected to the physical network,
rather than routed through the Docker host's network stack.
How the networks are created and containers communicate with each other:
Suppose you want to store course names and course IDs, for which you will
need a web application. Basically, you need one container for the web
application and one more container running MySQL for the backend; that MySQL
container should be linked to the web application container.
For the execution of the above scenario, the following steps are involved:
1. Initialize Docker Swarm on the machine to form a Swarm cluster.
2. Create an overlay network.
3. Create services for both the web application and MySQL.
4. Connect the applications through the network.
Step 1: Initialize Docker Swarm on the machine.
docker swarm init --advertise-addr 192.168.56.101
The --advertise-addr flag configures the manager node to publish its address
as 192.168.56.101. The other nodes in the swarm must be able to access the
manager at this IP address.
Step 2: Now, if you want to join a worker node to this manager, copy the join
command that you get when you initialize the swarm and run it on the worker node.
Step 3: Create an overlay network.
docker network create -d overlay myoverlaynetwork
Where myoverlaynetwork is the network name and -d specifies the network driver
to use, in this case overlay.
Step 4.1: Create a service webapp1 and use the network you have created to
deploy this service over the swarm cluster.
docker service create --name webapp1 -d --network myoverlaynetwork -p
8001:80 hshar/webapp
Where -p is for port forwarding, hshar is the account name on Docker Hub, and
webapp is the name of the web application image already present on Docker Hub.
Step 4.2: Now, check whether the service is created or not.
docker service ls
Step 5.1: Now, create a MySQL service and use the network you have created to
deploy the service over the swarm cluster.
docker service create --name mysql -d --network myoverlaynetwork -p
3306:3306 hshar/mysql:5.5
Step 5.2: Now, check whether the service is created or not.
docker service ls
Step 6.1: After that, check which container is running on your master node and
go into the hshar/webapp container.
docker ps
Step 6.2: So, you can see that only the webapp service is on the manager node.
So, get into the webapp container.
docker exec -it container_id bash
nano /var/www/html/index.php
The docker ps command lists your running containers with their respective
container IDs. The docker exec command opens an interactive shell inside
that container.
Step 7: Now, change $servername from localhost to mysql and $password
from "" to "edureka", fill in the other database details required, and save
your index.php file using the keyboard shortcut Ctrl+X, then Y to confirm the
save, and press Enter.
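The same edit can be made non-interactively with sed. This sketch assumes index.php contains assignments of the form $servername = "localhost"; and $password = ""; (hypothetical, based on the step above):

```shell
# Inside the webapp container: point the app at the "mysql" service name
# and set the password, without opening an editor. The variable names
# are assumptions about the index.php shipped in the image.
sed -i 's/$servername = "localhost"/$servername = "mysql"/' /var/www/html/index.php
sed -i 's/$password = ""/$password = "edureka"/' /var/www/html/index.php
```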
Step 8: Now, go into the mysql container, which is running on another node.
docker exec -it container_id bash
Step 9: Once you go inside the mysql container, enter the below commands to
use the database in MySQL.
Step 9.1: Get access to the MySQL shell inside the mysql container.
mysql -u root -pedureka
Where -u specifies the user and -p supplies the MySQL root password (here, edureka).
Step 9.2: Create a database in mysql which will be used to get data from
webapp1.
CREATE DATABASE HandsOn;
Step 9.3: Use the created database.
USE HandsOn;
Step 9.4: Create a table in this database which will be used to get data from
webapp1.
CREATE TABLE course_details (course_name VARCHAR(10), course_id VARCHAR(11));
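You can also exercise the table straight from the shell by passing statements to the mysql client with -e; the course values here are illustrative:

```shell
# Insert a sample row into course_details and read it back.
# The course name and ID are made-up example values.
mysql -u root -pedureka HandsOn \
  -e "INSERT INTO course_details VALUES ('Docker', 'DO-101');
      SELECT * FROM course_details;"
```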
Step 9.5: Now, exit MySQL and then the container, using the exit command in each.
Step 10: Go to your browser and enter the address as localhost:8001/index.php.
This will open up your web application. Now, enter the details of courses and
click on Submit Query.
Step 11: Once you click on Submit Query, go to the node on which your MySQL
service is running and then go inside the container.
docker exec -it container_id bash
mysql -u root -pedureka
USE HandsOn;
SHOW tables;
select * from course_details;
This will show you all the courses whose details you have filled in.
Dockerizing the Application
Step 1: Docker installation: Install Docker and check that it is installed and
working correctly. (The installation process and commands are given in the
sections above.)
Step 2: Dockerizing the Application
Deploying a web app with Docker: Create a Docker Image
We will dockerize a simple REST-service Spring Boot application built via Gradle.
I have a Gradle Spring Boot application as shown below and have created a
Dockerfile inside the project, without any extension.
We have to add the below snippet to the Dockerfile:
FROM java:8
VOLUME /tmp
EXPOSE 8080
ADD build/libs/<jar_name>.jar <docker_jar_name>.jar
ENTRYPOINT ["java","-jar","spring-boot-docker-1.0.jar"]
FROM: the base image. VOLUME: mounts a directory.
EXPOSE: sets the port the Docker container listens on. ADD: copies files into the image.
ENTRYPOINT: configures the container to run as an executable.
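Putting the placeholders together, the Dockerfile for this project can be written from the shell as follows; the jar name assumes the Gradle build above produced build/libs/spring-boot-docker-1.0.jar:

```shell
# Write the Dockerfile at the project root. The jar path is an
# assumption based on the Gradle build described above.
cat > Dockerfile <<'EOF'
FROM java:8
VOLUME /tmp
EXPOSE 8080
ADD build/libs/spring-boot-docker-1.0.jar spring-boot-docker-1.0.jar
ENTRYPOINT ["java","-jar","spring-boot-docker-1.0.jar"]
EOF
```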
Build a Docker Image
Build the Docker image using the following command, including the trailing dot:
$ docker build -f Dockerfile -t <docker_image_name> .
-t: tags the image (in my case the tag is spring-boot-docker)
-f: points to the Dockerfile location
Using the docker images command, check all Docker images like below:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
spring-boot-docker latest 4c46f96846e7 18 seconds ago 658MB
hello-world latest 725dcfab7d63 2 weeks ago 1.84kB
java 8 d23bdf5b1b1b 10 months ago 643MB
And if you run docker container ps -a you can see only the hello-world container
listed, as we have not yet run the image created above (spring-boot-docker),
which will instantiate another container.
Run Docker Image
Run the Spring Boot application using this command, mapping your machine's
port 8080 to the container's published port 8080 using -p:
docker run -p <machine_port>:<container_port> <Docker_Image_Name>
$ docker run -p 8080:8080 spring-boot-docker
Finally, test the REST API using http://localhost:8080/ or use the curl command
as below:
$ curl http://localhost:8080/
Greetings from Gradle Spring Boot Application!
$ curl http://localhost:8080/hello?name=gaurang
Hello gaurang
Now if you run docker container ps -a you can see another container running
the spring-boot-docker application.
Share your Image
Log in with your Docker ID. If you don't have a Docker account, sign up for one at
hub.docker.com. Make note of your username.
$ docker login
Tag the image with:
docker tag <image> <username>/<repository>:<tag>
example :
$ docker tag spring-boot-docker itsgaurang/springboot-repo:1.0
Now if you run docker images you will see the newly tagged image listed:
$ docker images
Publish the image
$ docker push itsgaurang/springboot-repo:1.0
Pull and run the image from remote repository
$ docker run -p 8080:8080 itsgaurang/springboot-repo:1.0
It will run the application, and you can test it using http://localhost:8080/ or
the curl command.
Kubernetes vs Docker Swarm
What is Kubernetes
Kubernetes was developed by Google. It is a container orchestration tool that has
been specifically designed to simplify the scaling of containerized workloads.
It has the ability to automate the deployment, scaling, and operation of
application containers.
By using Kubernetes, you will be able to define how your applications should run
and the manner in which they interact with other applications. Empowering the
user with interfaces and composable platform primitives, Kubernetes allows
high degrees of flexibility and reliability.
Clusters in Kubernetes include several components:
● Pods: These are a group of containers on the same node that are created,
scheduled, and deployed together.
● Labels: These are key-value tags that are assigned to elements like pods,
services, and replication controllers to identify them.
● Services: These are used to give names to Pod groups, and as a result they can
act as load balancers to direct traffic to running containers.
● Replication Controllers: These are frameworks that have been specifically designed
to ensure that, at any given moment, a certain number of pod
replicas are scheduled and running.
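These components can be seen together in a minimal kubectl sketch; this assumes kubectl is already configured against a cluster and reuses the hshar/webapp image from earlier purely as an example:

```shell
# Create a Deployment (which manages replicas of a Pod), expose it as a
# Service, and list the Pods with their labels. Names are illustrative.
kubectl create deployment webapp1 --image=hshar/webapp
kubectl expose deployment webapp1 --port=80 --type=LoadBalancer
kubectl get pods --show-labels
```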
What is Docker Swarm
Docker Swarm has been a popular open-source standard for packaging and
distributing containerized applications. It is native clustering for Docker: a pool
of Docker hosts can be turned into a single virtual host.
The open-source community edition of Docker, Docker CE, quickly rose to
prominence since it is highly relevant for developers and do-it-yourself Ops
teams. The Docker Enterprise Edition, on the other hand, is mostly used to run
mission-critical applications.
A Swarm is a cluster of nodes that consists of the following components:
● Manager Nodes: Tasks involved here include Control Orchestration, Cluster
Management, and Task Distribution.
● Worker Nodes: Functions here include running containers and services that
have been assigned by the manager node.
● Services: These describe the blueprint by which individual containers are
distributed across the nodes.
● Tasks: These are the atomic scheduling units of a swarm; each task runs a
single container on an assigned node.
Kubernetes Vs Docker Swarm — A Look at the Differences
1. Set up, Installation and Cluster Configuration
Kubernetes: Installing Kubernetes is easy when it comes to installing it in a test
bed, though running it at scale requires more planning and effort.
Docker Swarm: Swarm uses the Docker CLI to run its programs. Hence, only a
single set of tools needs to be learned in order to build environments and
configurations. Since the Swarm program runs on your current Docker
installation, you can begin by simply opting into Swarm.
2. Building and Running Containers
Kubernetes: Kubernetes embraces uniqueness by having its own API, client, and
YAML definitions. These are in high contrast to the standard Docker equivalents
since it is not possible to use Docker Compose or Docker CLI to define
containers.
Docker Swarm: The same Docker CLI is used, and new containers can be spun up
with a single command. Although the Swarm API supports multiple tools that
work with Docker, it can prove to be a hassle if Docker lacks a specific operation.
3. Logging and Monitoring
Kubernetes: Inbuilt tools have been included in Kubernetes for the purpose of
Logging and Monitoring. Logging helps to understand the cause of failures
through the analysis of past records/logs. On the other hand, monitoring
enables us to be constantly aware of the health of the nodes and the services
that are containerized by them.
Docker Swarm: Swarm lacks inbuilt tools for logging and monitoring, though you
can make use of third-party tools to achieve this. The ELK stack is one such
example for logging; on the other hand, tools like Riemann can be used for
monitoring.
4. Scalability
Kubernetes: Kubernetes is known to be slightly better than Swarm when it
comes to maintaining the strength of the clusters. Though it is not as scalable as
Swarm owing to the complexity involved. A unified set of APIs is what adds
complexity to Kubernetes, along with a strong focus on the cluster state.
Though when it comes to Auto-scaling, Kubernetes is way ahead in the race
since it can analyze the server load and scale up or down in tandem with your
requirements. This makes traffic handling a breeze.
Docker Swarm: Swarm is known to be more scalable than Kubernetes.
Containers can be deployed faster both when it comes to large clusters and high
cluster fill stages. Only a single update command is enough to deploy new
replicas.
On the other hand, auto-scaling is not as smooth with Swarm as it is with
Kubernetes, but AWS auto-scaling is possible with the help of StackStorm. The
idea is to create additional worker nodes when a cluster fills up on load,
and then add them to the cluster.
5. GUI
Kubernetes: If you are a stickler for dashboards, Kubernetes is the one for you.
The GUI provided is a reliable dashboard that can be used to effortlessly
control the cluster. This can be a huge boon if you are not from a
technical background, since it requires little technical effort and the
instructions are in plain English.
Docker Swarm: Docker hosts and Swarm clusters can be managed with the help
of third-party tools such as Portainer.io, which provide an easy management UI.
Alternatively, the Universal Control Plane in the Enterprise edition of Docker
provides you an interface to manage the clusters.
6. Load Balancing
Kubernetes: Load balancing is possible in Kubernetes when container Pods are
defined as Services. Apart from this, you need to configure the load-balancing
settings manually. Access to each service is restricted to a certain set of pods
by policies.
Docker Swarm: Docker Swarm provides an inbuilt load-balancing facility. All
containers in the same cluster join a common network, which allows any node
to connect to any container.
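Swarm's built-in load balancing can be seen with a replicated service; this sketch reuses the hshar/webapp image from earlier as an example:

```shell
# Publish port 8001 on every node via the routing mesh. Requests to any
# node's port 8001 are load-balanced across the three replicas,
# regardless of which nodes actually run them.
docker service create --name webapp1 --replicas 3 -p 8001:80 hshar/webapp
```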
Comparison Table
Pros of Kubernetes
1. Open-source and modular solution
2. Ability to run on any operating system
3. Easy organization of services with pods
4. Developed by Google, who bring years of valuable industry experience to
the table
Cons of Kubernetes
1. The installation and configuration can proveto be too complex for first
timers.
2. Not compatible with existing Docker CLI and Compose tools
Pros of Docker Swarm
1. Efficient and easier initial set up
2. Integrates and works with existing Docker tools
3. Lightweight installation
4. Open sourcesolution
Cons of Docker Swarm
1. Limited functionality as per the availability in the Docker API.
2. Limited fault tolerance.
Kubernetes is a superior and stronger market competitor today.