What is Docker?
Learn what Docker is and how it differs from regular virtualization.
January 25, 2015
Q: What is Docker?
A: Docker is an open-source solution that provides isolation and portability for Linux applications (and support is also coming to Windows). Docker's primary purpose is to simplify the deployment of applications that have grown increasingly complex over time. If you know App-V, the concept of isolating applications from one another through virtual layers will feel familiar; Docker works in a similar way. The following figure illustrates the difference between traditional virtualization and Docker containers.
Note that in traditional virtualization, each application runs in its own private virtual machine with its own OS instance, as well as its own libraries and binaries. This provides strong isolation but also incurs significant overhead, management effort, and provisioning time. Traditional virtualization also doesn't help in moving applications between OS instances or between environments, such as from development to test to production.
With Docker containers, each application runs in its own container with its own binaries and libraries, in an isolated process with its own virtual file system, but shares the host OS with the other Docker containers running on the same OS instance. In addition, Docker containers enable different versions of an application to be maintained through snapshots, which build a tree-like structure of layers representing the various versions as changes are made.
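To make the isolation and sharing concrete, here's a minimal sketch using the Docker SDK for Python; it is not from the original article and assumes a local Docker engine plus the docker Python package, with illustrative image and container names.

import docker

# Connect to the local Docker engine (assumes the daemon is running).
client = docker.from_env()

# Start two isolated containers from the same image; each gets its own
# process space and virtual file system but shares the host OS kernel.
web1 = client.containers.run("nginx:latest", detach=True, name="web1")
web2 = client.containers.run("nginx:latest", detach=True, name="web2")

# List all containers currently running on this OS instance.
for c in client.containers.list():
    print(c.short_id, c.name, c.status)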
It's possible to view the change history between the various checkpoints, as well as the differences between them. The different versions of an application share common binaries and libraries, making them very lightweight, because only the changes are stored. In addition, it's easy to move a Docker container between OS instances, making it simple and fast not only to deploy applications initially but also to move them between OS instances and environments.
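As an illustration of inspecting those layers and changes, here's another hedged sketch with the Docker SDK for Python; the image name and the "web1" container from the previous sketch are assumptions, not details from the article.

import docker

client = docker.from_env()

# An image is a stack of read-only layers; history() lists them along with
# the command that created each layer and its size.
image = client.images.get("nginx:latest")
for layer in image.history():
    print(layer.get("Id"), layer.get("CreatedBy", ""), layer.get("Size", 0))

# diff() reports only what has changed in a running container's file system
# relative to its image (Kind: 0 = modified, 1 = added, 2 = deleted).
container = client.containers.get("web1")
for change in container.diff():
    print(change["Kind"], change["Path"])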
Note that Linux had containers before Docker; however, those containers were complex to create and secure. Docker provides not only the container technology itself, through the Docker engine, but also a packaging tool to create containers and Docker Hub, a place where applications can be shared. In addition, Docker provides a standard that has achieved critical mass, enabling standardization across almost every distribution of Linux (and soon Windows). Docker has capabilities such as letting the application listen on a certain port while the container maps it to a different port, enabling environmental changes without modifying the actual application configuration. It's also possible to digitally sign containers to provide attestation of their content. Each container also supports controls on the amount of resources, such as CPU, memory, and storage, that it can consume, to avoid "noisy neighbor" problems when multiple containers run on a single OS instance.
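The port mapping and resource controls can be sketched the same way; the values below (host port 8080, a 256 MB memory cap, roughly half a CPU) are illustrative assumptions, not recommendations from the article.

import docker

client = docker.from_env()

# The application inside the container listens on port 80, but the container
# maps it to port 8080 on the host, so the environment can change without
# touching the application's own configuration. mem_limit and the CPU quota
# cap what this container can consume, helping avoid "noisy neighbor" issues.
web = client.containers.run(
    "nginx:latest",
    detach=True,
    name="web3",
    ports={"80/tcp": 8080},   # container port 80 -> host port 8080
    mem_limit="256m",         # cap memory at 256 MB
    cpu_period=100000,
    cpu_quota=50000,          # roughly half of one CPU core
)
print(web.name, web.status)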
For more information about Docker, check out the What is Docker page and the Microsoft TechEd session "Docker and Microsoft: How Azure is Bringing the World of Windows and Linux Together." Note that with Server App-V being removed from System Center Virtual Machine Manager (SCVMM), this new container functionality coming to Windows will be important. Also of interest is the Microsoft Research Drawbridge technology, which enables similar capabilities.