There has been a lot of buzz around Docker recently and, as always, I couldn't resist the temptation to try it out. There was another driver behind my curiosity: for a long time now I've been looking for a way to package up my workshop material and distribute it amongst attendees.
There are lots of introductory articles on Docker, so I will only give you a high-level overview.
Docker is an open platform for developers and sysadmins to build, ship and run distributed applications.
With Docker you create images. Normally you start off with a 'base image', which is a 'clean' OS installation, and then install applications on top of it - for example node.js. In other words, an image is a set of layers that you've built up.
Once you start a process from an image, it becomes an active container - a stateful instantiation of that image.
For the rest of the article I am going to assume that you have already installed Docker on Mac OS X. To verify a successful installation you can open up the command line and type in docker -v
which should return the Docker version information.
Docker utilises Boot2Docker (a tool that is installed alongside Docker), which is a lightweight Tiny Core Linux installation running inside VirtualBox. This is what actually allows us to run Docker containers on Mac OS X.
To start Boot2Docker you can execute boot2docker start
in your command line.
(The first time you do this you'll be asked to set up your environment variables - this is advised. Simply open up your ~/.bash_profile
and copy the export
statements in there.)
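For reference, those export statements look something like this. The IP address and paths below are placeholders for illustration only - copy the exact lines that boot2docker prints for your machine rather than these:

```shell
# Placeholder values - boot2docker start prints the real ones for your setup
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
```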
Let's start building an image. Images can be built automatically by reading the instructions set out in a file called Dockerfile
. This file specifies the base OS and all further installation and setup steps. In this article I am going to set up a Docker image using a Dockerfile
that installs MarkLogic (a NoSQL document database) along with node.js. The image will then start both a MarkLogic instance and a node.js application, and it'll expose three different ports - two for managing the MarkLogic (database) instance and one for accessing the node.js application. Under normal circumstances a Docker container runs a single process, but in this article I'd like to show you how you can start multiple applications inside your containers.
Let's first put together a simple node.js/Express application that displays some text. The directory structure should look like this:
- project's root folder
  - /src
    - app.js
    - package.json
  - Dockerfile
This is what app.js
contains:
'use strict';
var express = require('express');
var router = express.Router();
var app = express();
app.set('port', 8080);
router.route('/').get(function (req, res) {
res.send('Hello from the Docker container.');
});
app.use('/', router);
app.listen(app.get('port'));
console.log('Magic happens on port ' + app.get('port'));
And this is the content of package.json:
{
"name": "docker",
"version": "1.0.0",
"description": "",
"main": "app.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "Tamas Piros",
"license": "MIT",
"dependencies": {
"express": "^4.11.1",
"nodemon": "^1.3.2"
}
}
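Before containerising anything you can sanity-check the app locally. This local run is not part of the Docker setup and assumes you have node and npm installed on your machine:

```shell
# From the project's root folder
cd src
npm install      # installs express and nodemon as listed in package.json
node app.js      # prints 'Magic happens on port 8080'
# now visit http://localhost:8080/ in a browser
```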
The first thing that needs to be specified in a Dockerfile
is the base image using the FROM
keyword:
FROM centos:centos6
After this we can install node.js:
RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs
Using the ADD
keyword we can add local resources to the image. So of course we need to add the content of the /src
folder:
ADD src/* /src/
With the Dockerfile
you need to follow your standard setup logic. So once you've installed node what would be the next step? You'd of course create the source of your application, add a package.json
file, run npm install
and finally run node app.js
, right? This is what we need to replicate now:
WORKDIR /src
RUN npm install
EXPOSE 8080
CMD ["node", "app.js"]
With the WORKDIR
command we set the working directory for any RUN
or CMD
instructions. An alternative way to set this up could have been the following (note that the cd
only applies to that single RUN
instruction, so the CMD
has to use the full path to app.js):
RUN cd /src; npm install
EXPOSE 8080
CMD ["node", "/src/app.js"]
This is how the whole Dockerfile
should look:
FROM centos:centos6
RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs
ADD src/* /src/
WORKDIR /src
RUN npm install
EXPOSE 8080
CMD ["node", "app.js"]
The CMD
instruction takes every argument as a separate item in an array. Note that this exec form is parsed as JSON, which is why single quotes won't work - only double quotes do the job.
Time to build the image. Navigate to the root of the project's folder (to the folder where the Dockerfile
lives) and execute this command in your terminal:
docker build -t node-test .
(Please don't forget to run boot2docker start
beforehand, as per the earlier instructions. Also notice the 'dot' at the end of that line - it tells Docker to build the image from the contents of its current working folder.)
This will build an image called 'node-test'. If done correctly you should see the build steps being executed inside the terminal window. Once you see the 'Successfully built .....' message you can also execute docker images
and you should be able to see your image in the list.
Time to start up the image and see it in action. The following command will achieve this:
docker run -p 18080:8080 node-test
This tells Docker to run the image named 'node-test'. With the -p
option we publish a container's port to the host: port 18080 on your machine now maps to port 8080 inside the container, which in turn means you can reach your node.js/Express app there. Running the command above should also display the console.log() message from app.js.
To verify that the container is running, you can execute docker ps
in your terminal, which lists all running Docker containers.
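The output of docker ps looks roughly like this (illustrative only - the container ID, name and timestamps will differ on your machine):

```
CONTAINER ID   IMAGE             COMMAND         CREATED         STATUS         PORTS                     NAMES
1f8d3c2ab9e4   node-test:latest  "node app.js"   2 minutes ago   Up 2 minutes   0.0.0.0:18080->8080/tcp   sharp_turing
```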
Fire up your browser and navigate to your Docker instance's IP address (which you can get by running echo $(boot2docker ip)
in your terminal) on the port specified in the run command, and you should see your app:
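If you prefer the terminal, a quick curl against the published port works too (assuming the container from the run command above is still up):

```shell
curl "http://$(boot2docker ip):18080/"
# should respond with: Hello from the Docker container.
```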
So far so good, but we can do better. This container only runs one application. To start more than one app we need to utilise a tool called supervisor (more precisely, its daemon, supervisord), which needs to be installed as well. Let's also add a local .rpm file and install everything.
The project folder structure changes slightly:
project's root folder
  /etc
    supervisord.conf
  /src
    app.js
    package.json
  /tmp
    MarkLogic-8.0-20141124.x86_64.rpm
  Dockerfile
Our updated Dockerfile
now looks like this:
#Use the CentOS 6 base image
FROM centos:centos6
#install node.js
RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs
#install easy_install and supervisor
RUN yum install -y python-setuptools
RUN easy_install supervisor
#add the MarkLogic rpm from a local folder
ADD tmp/MarkLogic-8.0-20141124.x86_64.rpm /tmp/MarkLogic-8.0-20141124.x86_64.rpm
#install the MarkLogic database
RUN yum -y install /tmp/MarkLogic-8.0-20141124.x86_64.rpm
#add files to image
ADD etc/supervisord.conf /etc/supervisord.conf
ADD src/* /src/
#install npm packages globally
RUN cd /src; npm install -g
#expose two MarkLogic management ports and the node.js port
EXPOSE 8000 8001 8080
#start up the supervisor daemon
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]
The only difference (other than the extra ADD
statements, of course) is that we now start up the supervisord daemon, which takes our configuration file as an argument. The instructions to start up the MarkLogic database as well as the node.js/Express app are of course inside the supervisor configuration file:
[supervisord]
nodaemon=true
[program:node]
command=/bin/bash -c "cd /src && nodemon app.js"
[program:marklogic]
command=/bin/bash -c "/etc/rc.d/init.d/MarkLogic start && tail -F /var/opt/MarkLogic/Logs/ErrorLog.txt"
If you want to run other processes (an Apache server, for example) then all you'd have to do is make sure you install them from rpm by adding the appropriate instruction inside your Dockerfile
and then add a new entry inside the configuration file above.
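As a sketch, adding Apache could look like the fragment below. The package name httpd is the standard CentOS one, but the flag that keeps Apache in the foreground varies between Apache versions, so check yours:

```
# In the Dockerfile: install Apache and expose its port
RUN yum install -y httpd
EXPOSE 80

# In etc/supervisord.conf: add a new program entry
# (-DFOREGROUND keeps httpd attached so supervisord can manage it)
[program:apache]
command=/bin/bash -c "/usr/sbin/httpd -DFOREGROUND"
```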
Build this image as well by running docker build -t ml-node .
Once you see the successful build message you can run it by executing:
docker run -p 18080:8080 -p 18000:8000 -p 18001:8001 ml-node
Notice how we are binding multiple ports now.
This is a pretty nifty image now, but we could make it even better. Remember how, in the supervisor configuration file, we specified nodemon app.js
instead of node app.js
? The reason behind that was to allow nodemon to capture any changes in the code and automatically restart our node app. But, the question I hear you ask is: 'how do you change the source of app.js if it's running inside the container?' Well, the answer is by mounting the folder. Check out this docker run
statement:
docker run -p 18080:8080 -p 18000:8000 -p 18001:8001 -v /path/to/source/on-your-computer/src/app.js:/src/app.js ml-node
I'm mapping the local version of my app.js to the /src/app.js file inside the container. This means that I can now edit my file and the changes will be automatically picked up by the container:
I really liked working with Docker and I look forward to exploring its other features - I'm sure I will learn a lot more. At the moment I'm waiting for proper command line support for the Docker CLI on Windows environments, as unfortunately that part is not very well implemented yet.