A brief overview of the Docker ecosystem and the paradigm shift it brings to development and operations processes. While Docker has a lot of potential, it is still maturing into a viable production system that has proven itself secure and stable.
2. Where we are going…
• Basics
• Advanced (If we have time)
• Practical Use and Integration
3. Basics
• Docker allows you to package an application with all of its
dependencies into a standardized unit for software
development.
• Containers running on a single machine all share the
same operating system kernel so they start instantly and
make more efficient use of RAM. Images are constructed
from layered filesystems so they can share common files,
making disk usage and image downloads much more
efficient.
4. Basics
• Each virtual machine includes the application,
the necessary binaries and libraries and an
entire guest operating system
Virtual Machines
5. Basics
• Docker shares the hardware and kernel at a
minimum; each Container then runs as an
isolated process in userspace.
• Individual Containers can also share an Image
to further optimize resource usage.
• “Containers are to Virtual Machines as threads
are to processes.”
Docker Containers
6. Basics
• An Image is a read-only package made of one or
more layers that contains the OS filesystem,
binaries, and resources needed to run your
application.
• A Container is a running instance of an Image
on the Docker daemon.
• When an Image is run, a writable layer is
added on top of the filesystem to make it R/W.
Images vs. Containers
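The Image/Container relationship above can be sketched with a few CLI commands (the image tag and container name are just examples):

```shell
# Pull a read-only Image (a stack of layers) from a registry.
docker pull ubuntu:16.04

# Run it: the daemon adds a thin writable layer on top and starts
# the container as an isolated process in userspace.
docker run -it --name demo ubuntu:16.04 bash

# Changes made inside the container live only in its writable layer;
# "docker diff" lists them, and the underlying Image is untouched.
docker diff demo
```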
7. Architecture
• A Docker Registry is a repository of Images.
• Docker Hub is a hosted Registry of public and private Images.
• A Registry can also be hosted on private servers (the commercial offering is Docker Trusted Registry).
Docker Registry
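A sketch of the pull/push flow between Docker Hub and a private Registry (the hostname `registry.example.com:5000` is a placeholder):

```shell
# Pull an image from Docker Hub, the default public Registry.
docker pull nginx

# Re-tag it for a self-hosted private Registry, then push it there.
docker tag nginx registry.example.com:5000/nginx
docker push registry.example.com:5000/nginx
```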
8. Architecture
• At the core of the Docker platform is Docker Engine, a lightweight runtime and robust tooling that
builds and runs your Docker containers. Docker Engine runs on Linux to create the operating
environment for your distributed applications. The in-host daemon communicates with the Docker
client to execute commands to build, ship and run containers.
Docker Engine (daemon)
9. Architecture
• The Docker Client is a tool to manage images and containers running on a Docker
daemon. It is usually installed on the same server as the daemon, but it can also run
from a remote computer.
Docker Client
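Pointing a local client at a remote daemon is a one-line change; the address below is an example, and in practice the remote daemon must be configured to listen on TCP (ideally with TLS):

```shell
# Tell the client which daemon to talk to.
export DOCKER_HOST=tcp://192.168.1.10:2376

# All subsequent commands now run against the remote host.
docker ps
```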
12. Architecture
• Distributed applications consist of many small
applications that work together. Docker
transforms these applications into individual
containers that are linked together. Instead of
having to build, run and manage each individual
container, Docker Compose allows you to define
your multi-container application with all of its
dependencies in a single file, then spin your
application up in a single command. Your
application’s structure and configuration are held
in a single place, which makes spinning up
applications simple and repeatable everywhere.
Docker Compose
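A minimal sketch of the "single file" Compose describes, assuming a hypothetical two-service app (a web service built from the local directory, plus Redis), in the version 2 file format current at the time of this deck:

```yaml
# docker-compose.yml
version: '2'
services:
  web:
    build: .            # build the web image from the local Dockerfile
    ports:
      - "5000:5000"
  redis:
    image: redis        # pull the stock redis image from the Registry
```

With this file in place, `docker-compose up -d` builds and starts both containers in one command.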
13. Architecture
• Docker Machine automatically sets up
Docker on your computer, on cloud
providers, and inside your data center.
Docker Machine provisions the hosts,
installs Docker Engine on them, and then
configures the Docker client to talk to the
Docker Engines.
Docker Machine
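As a sketch of that workflow, creating a local VirtualBox host ("default" is just a machine name):

```shell
# Provision a VM and install Docker Engine on it.
docker-machine create --driver virtualbox default

# Configure the local client to talk to that Engine.
eval "$(docker-machine env default)"
docker ps
```

Swapping `--driver virtualbox` for a cloud driver provisions the host on a provider instead of locally.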
14. Architecture
• The nature of distributed applications
requires compute resources that are also
distributed. Docker Swarm provides
native clustering capabilities to turn a
group of Docker engines into a single,
virtual Docker Engine. With these pooled
resources, you can scale out your
application as if it were running on a
single, huge computer.
Docker Swarm
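A hedged sketch using the classic (standalone) Swarm that shipped around the time of this deck; the IP addresses are placeholders and `<token>` stands for the cluster token the first command prints:

```shell
# Create a cluster and get a discovery token.
docker run --rm swarm create

# On each node: join the cluster.
docker run -d swarm join --addr=192.168.1.10:2375 token://<token>

# On the manager: expose the pooled Engines on one endpoint.
docker run -d -p 3375:3375 swarm manage token://<token>

# Point the client at the manager; it now behaves like a single,
# virtual Docker Engine spanning every node.
export DOCKER_HOST=tcp://192.168.1.10:3375
docker info
```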
15. Architecture
• Docker Universal Control Plane (UCP) is
an enterprise on premise solution that
enables IT operations teams to deploy
and manage their Dockerized applications
in production, while giving developers the
agility and portability they need, all from
within the enterprise firewall.
Docker Universal Control Plane
17. Currently
• For testing and dev, a server is built.
• The dev then builds out the needed application config and loads the code.
• Repeat for production on a cloud server… or hand the code off to a client.
• Months or years later, OS updates and app updates require manual
maintenance and upkeep, server replacement, etc.
18. The Future
• Developers create a Docker image to automate that process.
• The Docker image can be tested on the devs’ local computers or on internal
servers with a single command to run it, or the whole cycle can be automated
by Jenkins.
• When the app is ready for primetime, a simple change to the run
command or automated build process can move it from a local computer
to a local server room, or up into a cloud hosting platform.
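The "simple change on the run command" can be as small as the `-H` (host) flag; the image name `myapp` and the addresses below are hypothetical:

```shell
# Same image, different targets: only the daemon address changes.
docker -H unix:///var/run/docker.sock run -d myapp      # local machine
docker -H tcp://10.0.0.5:2376 run -d myapp              # local server room
docker -H tcp://cloud.example.com:2376 run -d myapp     # cloud host
```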
19. How is this possible?
• Filters - Constraints are key/value pairs associated with particular nodes.
You can think of them as node tags.
• Affinity - An --affinity:<filter> creates “attractions” between
containers.
• Strategy - the scheduler’s strategy for ranking nodes.
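These are scheduling features of the classic Docker Swarm. A hedged sketch of all three, using that era's syntax; the labels, image names, and `<token>` placeholder are examples:

```shell
# Filter: label a node when its daemon starts (acts as a node tag).
docker daemon --label storage=ssd

# Constraint: only schedule this container on ssd-labelled nodes.
docker run -d -e constraint:storage==ssd mysql

# Affinity: schedule next to an existing container named "frontend".
docker run -d -e affinity:container==frontend logger

# Strategy: chosen when the Swarm manager starts
# (spread, binpack, or random).
docker run -d swarm manage --strategy spread token://<token>
```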
20. Integration
• Docker integrates well with Jenkins CI to create an
automated build environment that pulls code from source,
updates Docker images, and runs the containers.
• Xen Orchestra supports management of Docker nodes and
containers directly from its web interface.
• XenServer supports native management and integration of
Docker hosts and containers from XenCenter.
• Docker Engine can integrate directly with Rackspace and
other cloud providers to manage your cloud and local
Docker hosts from a single location.
• CoreOS is an extremely lightweight, stable, and secure
platform for running Docker containers.
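A hedged sketch of what a Jenkins build step in such a pipeline might run; the registry host, image name, and container name are hypothetical, and `$BUILD_NUMBER` is the variable Jenkins sets for each build:

```shell
# Build an image from the checked-out code and tag it per build.
docker build -t registry.example.com/myapp:$BUILD_NUMBER .
docker push registry.example.com/myapp:$BUILD_NUMBER

# Replace the running container with the new build.
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp registry.example.com/myapp:$BUILD_NUMBER
```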
21. The Reality
• Docker and containerization are here, but still young.
• The automation and management of large, easily
scalable applications is in its infancy and requires a lot of
work and many technologies to make it completely
automated.
• But if you’re not running Google, your applications are
relatively small and you don’t need to build that kind of
production architecture yourself.