In this blog post I describe a possible Docker-based development pipeline. I use building a Magento 2 web site as an example, but the approach is equally valid for any web site. The following draws together several Docker usage patterns in use today.
Development occurs on your desktop or laptop (not in production!). With Magento 2, projects will typically start with a composer.json project file to download officially published Magento Composer packages and any additional extensions to be installed. (Magento will provide a Composer download site as a part of the new Magento Connect site.) On top of this will be added locally developed modules or themes specific to the site.
I run Windows on my laptop with PHP Storm, and have been experimenting with using Docker to run the exact same version of PHP as I would in production. However, frankly I would not recommend any of the combinations I have tried so far. Using a VirtualBox shared file system works, but is REALLY slow. I have tried a suggestion to mount the Docker container file system exposed via Samba from Windows, but it has also been pretty slow for me so far. (PHP Storm takes forever to index all the files on disk.) I have a few more variations to try, but no good news yet. (In response to a tweet I sent asking for good solutions, Aad Mathijssen @aadmathijssen replied May 18 “@akent99 Assuming the situation is similar to using Vagrant, there is none. I did myself a favour to switch to native Linux and OS X setups.”)
One approach not based on a shared file system is using the PHP Storm ability to push saved files via FTP when they change. (Andrii Kasian @MrKAndy tweeted May 18 “@akent99 @docker samba + phpstorm autodeploy.”) This seems like an option, but I worry about changes made outside PHP Storm not being pushed correctly when expected.
So how then to take the web site code base and turn it into a Docker image? The public images I created in the past installed PHP, Apache, and the Magento source code all in the one image. Because the Magento source code is added towards the end of the Dockerfile, the previous steps are cached courtesy of the Docker union file system. So it’s pretty quick to create a new image.
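A minimal sketch of such a Dockerfile, assuming a typical Ubuntu/Apache/PHP stack (the base image, package names, and paths are illustrative):

```dockerfile
# Installing the stack first means these layers are cached between builds.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y apache2 php5 php5-mysql

# Copying the Magento source code last means only this final layer
# is rebuilt when the code changes.
COPY ./magento /var/www/html
```

The ordering is the whole trick: a code change only invalidates the final COPY layer, so rebuilds avoid re-running the slow package installation steps.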
But how should the container be used after that? Ideally it should go through a CI/CD pipeline where functional tests are run against the container, and then finally the container is deployed into production. While undergoing testing, obviously you don’t want the container hooked up to your production payment processing system – you don’t want tests charging you real money! There is also a question of whether you want the full deployment set up (e.g. with load balancers, Varnish, Redis, etc) or a subset when testing. I believe in being as close as possible to production, but there will always be some environmental differences between your test, staging, and production environments (whichever ones you use).
One approach is to use environment variables as switches to control the Docker container behavior. For example, if you want to test without Varnish, have an environment variable to turn Varnish on/off. This is completely reasonable. However, another approach is to use Docker volumes.
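As a sketch of the environment variable approach, an entrypoint script inside the container could check a switch before deciding what to start. The variable name `VARNISH_ENABLED` and the script below are illustrative, not part of any standard image; the echo lines stand in for the real startup commands:

```shell
#!/bin/sh
# entrypoint.sh - illustrative sketch; VARNISH_ENABLED is a made-up
# switch name, defaulting to "true" when not set by docker run -e.
VARNISH_ENABLED=${VARNISH_ENABLED:-true}

if [ "$VARNISH_ENABLED" = "true" ]; then
    # A real script would start varnishd here, e.g. in front of Apache.
    echo "starting varnish"
else
    echo "varnish disabled"
fi
# A real script would then exec the web server as the foreground process.
```

You would then run tests with something like `docker run -e VARNISH_ENABLED=false my-magento-image` to exercise the site without the cache in front of it.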
Docker volumes allow one container to mount the files exposed by another container running on the same host. (It does not work across networks.) If you recall, containerization in Docker is about controlling which files, processes, network connections, etc. you can see; mounting a Docker volume basically allows one container to see files that the container security would normally not let it see. It’s that simple. Remember that all container files are stored on the underlying host operating system. Docker is containerization (restricting access to the host OS) not virtualization (defining an abstraction layer on top of the host OS).
Packaging then is creating a Docker image with just the set of Magento files in it, and making those files available as a Docker volume. A separate Docker image containing PHP and the web server then mounts the Docker volume containing the Magento source code. This means you can easily have different standard container images using say Apache or Nginx, or different deployment configurations for test, staging, and production environments, sharing the Docker volume holding the Magento files. This has a slight benefit in decoupling the web server configuration from your actual web site. For example, you can update or patch your web server Docker image without changing the Docker volume holding the Magento code base.
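With the Docker command line tooling, this pattern could look like the following. All image and container names here are illustrative, and the commands need a running Docker daemon:

```shell
# Create (but never start) a data container from the code-only image.
# The command argument is required by docker create but is never executed.
docker create --name magento-code my-magento-code /bin/true

# The web server container mounts that container's volumes.
docker run -d --name web --volumes-from magento-code my-apache-php
```

Swapping Apache for Nginx is then just a matter of running a different web server image with the same `--volumes-from magento-code` flag.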
Another little benefit is if you do frequent deployments to production, keeping all the Magento volumes you deploy will take up less disk space than if the Magento source files are in the same container as the web server and other tools. You can use a minimal base Docker image such as ‘scratch’ to copy files into.
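A code-only image built on ‘scratch’ needs almost nothing in its Dockerfile (the source path and volume location are illustrative):

```dockerfile
# 'scratch' is the empty base image - no OS, just the files we copy in.
FROM scratch
COPY ./magento /var/www/html

# Expose the code base as a volume for other containers to mount.
VOLUME /var/www/html
```

Because the image contains only the site files, each deployed version costs only the size of the code base, not a full OS layer.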
Improving Site Security with Docker
Using the Docker volume approach has an additional benefit. Docker supports the ability to mount a volume in read-only mode. That means in production you can lock down the code so it cannot be modified, improving security. (This is even more secure than using chmod: when a file system is mounted read-only, not even root can modify the files.)
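Read-only mounting is a matter of adding a `:ro` suffix. Both forms below are sketches with illustrative names, and assume a Docker daemon:

```shell
# Bind-mount a host directory read-only into the container:
docker run -d -v /srv/magento:/var/www/html:ro my-apache-php

# Or mount another container's volumes read-only:
docker run -d --volumes-from magento-code:ro my-apache-php
```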
Even further, from Docker 1.5 onwards you can run a container with its entire root file system mounted read-only. This means the whole OS can be easily made read-only, not just the Magento code base.
Linux and Magento both need some writable areas on disk, such as for logs and /tmp for scratch files. This can be addressed by mounting additional writable volumes for the selected directories where writable files are required.
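Putting these pieces together, a production invocation could look like this (the writable directories chosen here are illustrative; a real Magento site has more, such as var/ and media/):

```shell
# --read-only locks down the container's root file system.
# The -v flags without a host path create anonymous writable volumes
# for just the directories that need write access.
docker run -d --read-only \
    -v /tmp \
    -v /var/log/apache2 \
    --volumes-from magento-code:ro \
    my-apache-php
```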
The end result is a site where most files can only be modified by deploying a completely new site. It is not possible for a web server security exploit to modify the production files on your site, because the files are mounted read-only inside the container. This does not remove all security concerns, but it certainly makes a site more secure against whole classes of potential exploits.
There are multiple ways in which technologies such as Docker can be used as a part of your development pipeline. This post highlighted the use of Docker volumes and read-only file systems to do so while improving your site security.