Docker vs. VMs? Combining Both for Cloud Portability Nirvana

Docker, and container technology in general, is very interesting to us because it promises to help simplify cloud portability (that is, running the same application in different clouds) as well as some aspects of configuration management. Today the RightScale multi-cloud management platform provides a portability framework that combines multi-cloud base images with configuration management tools such as Chef and Puppet to let users deploy workloads across clouds and hypervisors. With Docker, we can offer cloud users another approach to portability. In this post I’d like to share our experience with Docker so far and what we’ve observed.

While you most likely have heard of Docker and containers in general, you may not have looked into the details or explored the various ways to use containers. Container technology is more than a decade old. What’s different about Docker is that it makes containers easier to use, and because it is gaining broad support across the ecosystem, it offers a path to cloud portability. Basically a container encapsulates applications and defines their interface to the surrounding system, which should make it simpler to drop applications into VMs running Docker in different clouds.

Often containers are compared to virtualization. While that’s certainly valid, I don’t think it’s actually the most useful comparison. Here’s why: In a virtual machine you’ll find a full operating system install with the associated overhead of virtualized device drivers, memory management, etc., while containers use and share the OS and device drivers of the host. Containers are therefore smaller than VMs, start up much faster, and have better performance — however this comes at the expense of less isolation and greater compatibility requirements due to sharing the host’s kernel.

Diagram source: Docker Inc.

Virtual machines have a full OS with its own memory management, device drivers, daemons, etc. Containers share the host’s OS and are therefore lighter weight.
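The startup difference in particular is easy to see on any host that already has Docker installed (a quick informal check, not a benchmark; it assumes the ubuntu:14.04 image has already been pulled):

time docker run --rm ubuntu:14.04 /bin/true   # typically well under a second
# compare with the tens of seconds it takes to boot a fresh cloud VM (see the boot-time table below)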

The more interesting comparison is between containers and processes: Containers really are processes with their full environment. A computer science textbook will define a process as having its own address space, program, CPU state, and process table entry. But in today’s software environment this is no longer the full story. The program text is actually memory-mapped from the filesystem into the process address space and often consists of dozens of shared libraries in addition to the program itself, so all these files are really part of the process.

In addition, the running process usually requires a number of files from the environment (typically in /etc or /var/lib on Linux) and this is not just for config files or to store program data. For example, any program making SSL connections needs the root CA certs, most programs need locale info, etc. All these shared libraries and files make the process super-dependent on its filesystem environment. What containers do is encapsulate the traditional process plus its filesystem environment. The upshot of all this is that it’s much more appropriate to think of a container as being a better abstraction for a process than as a slimmed-down VM.
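To make this concrete, a quick look on any Linux host shows how much of a “normal” process actually lives in the filesystem (the binary and paths below are just illustrative):

# Shared libraries that get memory-mapped into the process alongside the program itself
ldd /usr/bin/curl
# Environment files the same program depends on at runtime
ls /etc/ssl/certs | head     # root CA certs needed for SSL connections
locale -a | head             # locale data most programs expect to find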

The right way to think about Docker is thus to view each container as an encapsulation of one program with all its dependencies. The container can be dropped into (almost) any host and it has everything it needs to operate. This way of using containers leads to small and modular software stacks and follows the Docker principle of one concern per container. A blog post by Jerome Petazzoni has a good discussion on all this.

An aspect of Docker that leads many first-time users astray is that most containers are built by installing software into a base OS container, such as Ubuntu, CentOS, etc. It’s thus easy to think that the container is actually going to “run” this base OS. But that’s not true; rather, the base OS install only provides a standard filesystem environment that the program running in the container can rely on. One could install just the files actually required by the program into the container, but targeting such minimalist container content is often difficult, cumbersome, and inflexible. Besides, the way Docker uses overlay filesystems minimizes the cost of the extra unused files.
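As a minimal sketch (the image and package here are purely illustrative, not one of the containers used later), building on a base OS image looks like this, yet only one process ever runs:

cat > Dockerfile <<'EOF'
# The Ubuntu layer only supplies filesystem content (libraries, /etc files, etc.)
# that the program relies on; nothing from Ubuntu "boots" when the container starts.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y redis-server
# One concern per container: the only process is redis-server itself
CMD ["redis-server"]
EOF
docker build -t redis-example .
docker run -d --name redis-example redis-example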

Starting out with a full OS base install also often leads users to want to run a suite of system daemons in a container, such as init, syslogd, crond, sshd, etc. But it’s really better to think of a container as running just one process (or process tree) and having all its filesystem dependencies encapsulated.

All this being said, it is possible to treat containers as slimmed-down VMs in some use cases, but I won’t be focusing on those here.

Trying It Out for Real

To test out the premise that containers make it easier to port apps across clouds, I took one of our newest containerized apps and ran some tests. The app consists of seven containers:

  • Two Ruby app containers (two separate apps)
  • A ZooKeeper container (Java app)
  • A Kafka container (Scala app running in a JVM)
  • A Redis container (in-memory database)
  • A MariaDB container (a fork of MySQL)
  • A graphite-statsd container (monitoring)

In a typical production environment, several of these subsystems would be launched in their own multi-server clusters, but for this first experiment I tested an all-in-one configuration where everything runs on a single server. I used a RightScale ServerTemplate™ consisting of four RightScripts: the first to install Docker, switch it to the btrfs filesystem, and format the ephemeral drives with btrfs; the second to set up keys to pull private Docker repositories; the third to launch the off-the-shelf containers; and the fourth to launch our two app containers. Launching the containers is pretty simple and consists of a number of docker run invocations with the appropriate port and volume mappings to stitch the various containers together:

docker run -d --name zookeeper -p $ZOOKEEPER_PORT:2181 \
 -p 2888:2888 -p 3888:3888 jplock/zookeeper
docker run --name redis -d -p $REDIS_PORT:6379 dockerfile/redis
docker run -d -t --name kafka -p $KAFKA_PORT:9092 --link zookeeper:zk -e BROKER_ID=0 \
 -e HOST_IP=0.0.0.0 -e PORT=9092 wurstmeister/kafka:0.8.1
docker run --name mariadb -d -p $MARIADB_PORT:3306 -e MARIADB_PASS=admin tutum/mariadb
docker run -d -i -t --name graphite-statsd -p $GRAPHITE_PORT:80 -p 2003:2003 \
 -p $STATSD_PORT:8125/udp -v /var/log/graphite:/var/log \
 -v /opt/graphite/storage -v /opt/graphite/conf \
 hopsoft/graphite-statsd /opt/hopsoft/graphite-statsd/start
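The two app containers are then wired to these services in the same way; something along these lines (the image name, port, and environment variable below are hypothetical, since our app containers are private):

# hypothetical example; the real app image names and settings differ
docker run -d --name app1 -p $APP1_PORT:8080 \
 --link redis:redis --link mariadb:db --link kafka:kafka \
 -e RAILS_ENV=production \
 rightscale/app1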

From a production perspective this is a bit oversimplified because each container launch should really be wrapped in something like upstart or supervisord so that it is automatically restarted on failure, for example. Logging is another aspect that a production configuration needs to deal with.
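One way to do that (a sketch only, assuming Upstart on Ubuntu; the job name is made up) is to let Upstart supervise the container:

cat > /etc/init/redis-container.conf <<'EOF'
description "redis container"
start on started docker
stop on runlevel [!2345]
respawn
script
  # "docker start -a" attaches to the existing container, so Upstart can
  # track its lifetime and restart it if it dies
  exec /usr/bin/docker start -a redis
end script
EOF
start redis-container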

The part that worked well is the isolation between the VM environment, where I could deal with the idiosyncrasies of each cloud, and the containers, which remained unchanged across the clouds and operating systems I played with. For example, on AWS it was easiest for me to run using the official Ubuntu 14.04 AMI published by Canonical, and I used a local SSD drive as storage for the running containers. I had to install the latest version of Docker at boot time, format the SSD filesystem, and install a couple of utilities. On Google Compute Engine there is no official Ubuntu image, but Google provides a “container-optimized” Debian image. That image already has the latest version of Docker installed, so I didn’t have to do that, and it’s set up to store containers on the root filesystem. The latter is not ideal, and I should have written a script to relocate that storage to a mounted persistent disk, but that will have to wait for the next iteration. Furthermore, in both clouds I should have created and mounted volumes for MariaDB and Kafka (EBS volumes on AWS and persistent disks on Google Compute Engine); the procedures for that vary from cloud to cloud, but the result is transparent to the software running in the containers. The bottom line is that, until a giant wave of further standardization comes in, there are lots of cloud-dependent things to do in the VM environment around the containers.
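For reference, the AWS-side preparation boiled down to something like this (a rough sketch; the device name and install URL are assumptions, not copied from the actual RightScript):

apt-get update && apt-get install -y btrfs-tools
curl -sSL https://get.docker.com/ | sh       # install the latest Docker
service docker stop
mkfs.btrfs -f /dev/xvdb                      # format the local SSD (assumed device name)
mkdir -p /var/lib/docker
mount /dev/xvdb /var/lib/docker              # keep container storage on the SSD
echo 'DOCKER_OPTS="--storage-driver=btrfs"' >> /etc/default/docker
service docker start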

When putting this demo together I encountered two issues I didn’t expect. The first is that before switching to btrfs I got filesystem corruption in the containers. This appears to be a problem in the Linux kernel rather than in Docker, and it is being tracked as an issue. The second issue is that the machines took a long time to boot. To dig into that, I wrote a set of comparison RightScripts (shell scripts) that install the same software directly on the same base Ubuntu image as used for the Docker version on AWS. It downloads and installs everything, including the required JVM and Ruby 2.1.2: lots and lots of apt-get installs, git clones, and bundle installs. Here’s the boot-time comparison (I ran each version three times on c3.large instances in us-east; the timings were very consistent):

Boot time step                                                        | Docker version | RightScript version
Launch and boot                                                       | 53s            | 49s
Install RightScale agent                                              | 1s             | 1s
Prep VM environment                                                   | 36s            | 16s
Prep user creds, etc.                                                 | 0s             | 0s
Install & launch zookeeper, redis, kafka, mariadb, graphite, statsd   | 4m57s          | 1m5s
Install ruby                                                          | n/a            | 54s
Install & launch custom apps                                          | 2m23s          | 3m3s
TOTAL                                                                 | 8m50s          | 6m8s

The point here is not whether writing scripts is better but that Docker’s download times and sizes need some dramatic optimizations to be competitive! Before this experiment, I had a mental picture of the container images just flying onto the instance in mere seconds and everything starting up in a flash, but the reality is that it takes a … long … long … time.

Most people react by saying “must be a network problem” or “a CDN problem,” but that’s not the case. Docker has two issues in particular. The first is that the downloading of containers suffers from buffering and pipelining issues, which are actively being worked on. The second is that the filesystem layers underlying Docker containers often contain lots of deleted or superseded data. While this is recognized, I haven’t seen work to address it, but CenturyLink has a nice explanatory blog with tips. (The containers used in this example don’t carry a lot of deleted files, but some do contain files that could be deleted, and thus could be shrunk if it weren’t for this issue.)
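A small experiment makes the layer problem visible (a contrived example, not one of the images above):

cat > Dockerfile <<'EOF'
FROM ubuntu:14.04
RUN dd if=/dev/zero of=/tmp/big.bin bs=1M count=100   # this layer now carries ~100MB
RUN rm /tmp/big.bin                                   # deleted, yet the 100MB layer above must still be downloaded
EOF
docker build -t layer-demo .
docker history layer-demo    # lists each layer and its size; the dd layer is still ~100MB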

Summary So Far

As we discovered in our tests, Docker did work nicely to install our application across different clouds and operating systems. Putting the RightScale ServerTemplate for Docker together took much less time than putting the script-based version of the application together. The containerized version is also much simpler to maintain and to port to new cloud environments. Despite the hiccups, none of the issues encountered with Docker are fundamental problems, and we expect they’ll be resolved over time. However, they do highlight that Docker is still early in its maturity and that early adopters should expect some bumps in the road.

The great promise of Docker in our view is that it makes it easier to port application stacks across clouds by using each cloud’s VM environment to deal with the idiosyncrasies of the cloud. This represents a useful capability — one that has to be implemented in a more complex and tedious way today without containers. In effect, the cloud provider uses the hypervisor to provide a standard environment for VMs and the ops team uses the VM to provide a standard environment for containers.

One thing that is important to understand is that while containers are small and portable, in a production setting there is quite a bit of infrastructure required around them. In particular, many aspects of automation, configuration and governance need to be dealt with, including:

  • Logging (syslog): placing logs in a standard location, rotating them, and shipping them to a central storage and analysis location (one simple approach is sketched after this list)
  • Monitoring: monitoring the VM itself and each app in each container, sending monitoring data to a central system
  • User access: managing interactive user access (if only for troubleshooting purposes), including SSH keys and such
  • Security: operating security software from OSSEC to Tripwire
  • Disks: mounting various disks (root, ephemeral, network-attached), partitioning/striping, and creating filesystems, managing snapshots with attendant filesystem freezes, etc.
  • Network: managing multiple network interfaces, host-based firewall rules, VPN connections
  • Operating system: some clouds have preferred operating systems that always have the latest drivers and performance tweaks (such as Amazon Linux on AWS and Debian on Google Compute Engine), and with containers the application’s OS environment is not tied to the VM’s OS
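As a simple illustration of the logging point (a sketch only; the paths and image name are made up), one common pattern is to surface a container’s logs in the VM’s standard location so the host’s existing rotation and shipping tools can handle them:

docker run -d --name app \
 -v /var/log/app:/var/log \
 example/app
# the host can now rotate and ship /var/log/app/* like any other log directory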

Eventually these concerns will be addressed by cluster management tools, such as Kubernetes, Fleet, and Mesos. These tools automate lower-level concerns such as the placement of Docker containers on hosts, and they will become increasingly important in the Docker ecosystem. In the future, as these tools mature, they could eliminate the need for an enclosing VM.

Dealing with all the above and more is what a broader cloud management platform like RightScale offers — which is why we see exciting new technologies like Docker and the associated cluster management tools as complementary to what we do, and as technologies we will leverage moving forward.