Docker Machine – not quite ready for prime time under Windows? (Quick Note)

Docker Machine was recently announced as a quick way to get a new VM provisioned and ready to run Docker containers. That is, it creates a new VM for you with the Docker engine installed, setting up the various certificate files needed for authentication. Out of the box it knows how to talk to VirtualBox (locally), Azure, Amazon, Digital Ocean and more, so it takes some of the pain out of the provisioning process. Once you have a Docker host running, you can issue Docker commands from your local environment to talk to that remote host. This sounded convenient, so I gave it a go under Windows.
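
As a minimal sketch of what provisioning looks like, assuming the ‘machine’ binary is on your PATH (the exact flags and subcommands may differ between these early releases):

> machine create -d virtualbox dev
> machine ls
> machine url dev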

However, I quickly hit a problem. The ‘machine’ command was looking for public-key.json, whereas boot2docker uses cert.pem and key.pem for identity. I created an issue with the Docker Machine project, though it may just be my ignorance. There have been previous issues reporting a similar problem, but those reporters seem to have got the JSON files working. I suspect it is just a bit too early for ‘machine’ – there are still a few kinks to iron out.

Until then, boot2docker and the Windows Docker executable seem to work together quite nicely. I had a slightly old installation of boot2docker, so I followed the upgrade instructions (stop the boot2docker VM in VirtualBox, then run boot2docker download), which gave me Docker 1.4. Running boot2docker shellinit then produced the following output:

> boot2docker shellinit
Writing C:\Users\akent\.boot2docker\certs\boot2docker-vm\ca.pem
Writing C:\Users\akent\.boot2docker\certs\boot2docker-vm\cert.pem
Writing C:\Users\akent\.boot2docker\certs\boot2docker-vm\key.pem
export DOCKER_CERT_PATH=C:\Users\akent\.boot2docker\certs\boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376

The export commands work as-is under Cygwin; from a regular Windows command prompt, change ‘export’ to ‘set’ and run the commands to set the environment variables. For example,

set DOCKER_CERT_PATH=C:\Users\akent\.boot2docker\certs\boot2docker-vm
set DOCKER_TLS_VERIFY=1
set DOCKER_HOST=tcp://192.168.59.103:2376
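
With those environment variables set, a quick way to confirm the Windows client can reach the Docker daemon inside the VM is to ask for the version (which reports both the client and server versions):

> docker.exe version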

I then used the native Windows docker executable (described in a previous post) to run a container and build a local Docker image from the Windows command line, with the work itself done inside the boot2docker VirtualBox VM.

> docker.exe run busybox echo hello world
hello world
> docker.exe build .
Sending build context to Docker daemon 3.072 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04
ubuntu:14.04: The image you are pulling has been verified
3fd9d7da: Pulling fs layer
5d1cca71: Pulling fs layer
eddc05dc: Pulling fs layer
4ff06b53: Pulling fs layer
Status: Downloaded newer image for ubuntu:14.04
---> 8eaa4ff06b53
Step 1 : RUN apt-get update && apt-get install -y firefox
...

That is, directly under Windows I could build and run Docker images from the Windows command line. Nice!
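
For reference, the two build steps shown in the output correspond to a Dockerfile along these lines (reconstructed from the build output, so consider it a sketch):

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox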

While I suspect the new Docker ‘machine’ command might not be quite ready for prime time under Windows, it does look promising as a way to easily create new Docker hosts across a range of hosting providers. Then you can run local Docker commands to deploy applications to the Docker hosts.

Also in the announcement was information about Docker Swarm and Docker Compose. The Docker Compose command looks like a way to define, in a configuration file, what software should be deployed on which Docker hosts, which should reduce the errors that creep in when deployments are done by hand. One thing I am not yet clear on, however, is how to make sure data is not lost during such deployments. You do not want to lose your Magento MySQL database contents, for example, when deploying a new version of the application; the same is true for uploaded media images. So I am still curious to see how well these technologies support upgrading an existing deployment without losing data in the process. I suspect Docker Compose might take a little while to become flexible enough to support a good upgrade path.
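
To make that concrete, a configuration file in the Fig style (which Compose appears to build on, as the comment below notes) might describe a small web-plus-database stack something like this; the service names, image names and settings here are purely illustrative:

web:
  image: example/magento
  links:
    - db
  ports:
    - "80:80"
db:
  image: mysql
  environment:
    MYSQL_ROOT_PASSWORD: secret
  volumes:
    - /var/lib/mysql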

But I continue to like Docker as a way to have standard images for Redis, MySQL, Varnish etc. that you can deploy in different topologies across Docker hosts. Default configurations won’t be as performant as what an expert system integrator or partner can put together, but they can take some of the complexity out of a Magento installation. This is an area I plan to think about some more.
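
As a simple illustration, the stock Redis and MySQL images on the Docker Hub each start with a single command (the settings here are the bare defaults, not something tuned for production):

> docker.exe run -d --name redis redis
> docker.exe run -d --name mysql -e MYSQL_ROOT_PASSWORD=secret mysql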


One comment

  1. Docker Compose seems a lot like Fig. Indeed there’s evidence to suggest Fig inspired Compose (https://github.com/docker/docker/issues/9459). The trick to not losing your data between deploys (at least with Fig) is to use a Docker Volume Container. If a new deploy replaces the container, the volume (which is actually stored on the Docker host) is attached to any new containers that get created.
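
To sketch the pattern the commenter describes (the container names here are made up): create a container whose only purpose is to own the volume, then attach that volume to the real database container with --volumes-from:

> docker.exe run --name magento-db-data -v /var/lib/mysql busybox true
> docker.exe run -d --name magento-db --volumes-from magento-db-data -e MYSQL_ROOT_PASSWORD=secret mysql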
