
Docker running Node.js on Mac OS X



This article was published 11 years ago. Some information may be outdated or no longer applicable.

I couldn’t resist trying Docker. I’d been hunting for a way to package up workshop material and distribute it to attendees, and Docker looked like it might be the answer.

There are plenty of introductory articles floating around, so here’s just a quick sketch.

Docker is an open platform for developers and sysadmins to build, ship and run distributed applications.

With Docker you create images. You start with a ‘base image’ (a clean OS installation), then layer applications on top of it, like Node.js. An image is essentially a stack of those layers.

When you start a process from an image, it becomes a container: a running, stateful instance of that image.

For the rest of this article I’m going to assume you’ve already installed Docker on Mac OS X. Run docker -v in your terminal to verify. You should see version information.

On OS X, Docker relies on Boot2Docker (bundled with the Docker installer), a lightweight Tiny Core Linux distribution running inside VirtualBox. It's what actually executes the containers.

Fire up Boot2Docker with boot2docker start in your terminal.

(The first time you do this, you’ll be asked to set up environment variables. Open your ~/.bash_profile and paste the export statements in there.)
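The export statements look something like this — the exact IP and paths vary per machine, so treat these values as purely illustrative:

```shell
# Illustrative values only; Boot2Docker prints the real ones for your machine
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
```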

Let’s build an image. Images can be built automatically from a file called Dockerfile, which spells out the base OS and every installation step. We’re going to set up an image that installs MarkLogic (a NoSQL document database) alongside Node.js. The image will start both a MarkLogic instance and a Node.js app, exposing three ports: two for managing MarkLogic and one for the Node.js application. Normally a Docker container runs a single process, but I want to show how you can spin up multiple applications inside one container.

Let’s start with a simple Node.js/Express app that displays some text. The directory structure should look like this:

- project's root folder
  - /src
    - app.js
    - package.json
  - Dockerfile

This is what app.js contains:

'use strict';

var express = require('express');
var router = express.Router();
var app = express();

app.set('port', 8080);

router.route('/').get(function (req, res) {
  res.send('Hello from the Docker container.');
});

app.use('/', router);

app.listen(app.get('port'));

console.log('Magic happens on port ' + app.get('port'));

And this is the content of package.json:

{
  "name": "docker",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "Tamas Piros",
  "license": "MIT",
  "dependencies": {
    "express": "^4.11.1",
    "nodemon": "^1.3.2"
  }
}

The first thing in a Dockerfile is the base image, specified with the FROM keyword:

FROM centos:centos6

Then we install Node.js:

RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs

The ADD keyword copies local resources into the image. We need the contents of our local src folder:

ADD src/* /src/

In a Dockerfile you follow the same logic as a manual setup. After installing Node, what would you do next? Create your source files, add a package.json, run npm install, then start the app with node app.js. We replicate those steps here:

WORKDIR /src
RUN npm install
EXPOSE 8080
CMD ["node", "app.js"]

WORKDIR sets the working directory for any RUN or CMD instructions that follow. An alternative approach:

RUN cd /src && npm install
EXPOSE 8080
CMD ["node", "app.js"]

Here’s the complete Dockerfile:

FROM centos:centos6
RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs
ADD src/* /src/

WORKDIR /src
RUN npm install
EXPOSE 8080
CMD ["node", "app.js"]

The CMD instruction's exec form takes each argument as a separate item in an array. Single quotes won't work here: the array is parsed as JSON, and JSON strings must be double-quoted.
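To make the contrast concrete, the exec form below is parsed as JSON, while the shell form is handed to /bin/sh -c; both start the same app, but only the exec form avoids the extra shell process:

```dockerfile
# Exec form: a JSON array, so double quotes are required
CMD ["node", "app.js"]

# Shell form: no JSON parsing, but the command runs via /bin/sh -c
CMD node app.js
```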

Time to build the image. Navigate to your project’s root folder (where the Dockerfile lives) and run:

docker build -t node-test .

(Don’t forget to run boot2docker start first. And notice the dot at the end: it tells Docker to use the current folder as the build context.)

This builds an image called ‘node-test’. You’ll see your commands executing in the terminal. Once you see the ‘Successfully built …’ message, run docker images to confirm it’s in the list.

Now start the image:

docker run -p 18080:8080 node-test

The -p flag publishes a container’s port to the host. Port 18080 on your machine maps to port 8080 inside the container, which means you’ll hit your Node.js/Express app. Running this command should print the console.log() message from app.js.

To verify the container is running, execute docker ps in another terminal window.

Open your browser and navigate to your Docker VM’s IP address (grab it by running echo $(boot2docker ip)) on the port specified in the run command. You should see your app’s greeting.
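If you prefer the command line, the same check can be done with curl, assuming the port mapping from the run command above:

```shell
# Hits the Node.js app through the published port on the Boot2Docker VM
curl http://$(boot2docker ip):18080/
# should print: Hello from the Docker container.
```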

Good so far, but this container only runs one application. To start multiple apps we’ll use Supervisor (specifically its daemon, supervisord), a process manager that has to be installed in the image as well. We’ll also add a local .rpm file and install everything from the Dockerfile.

The project folder structure changes slightly:

- project’s root folder
  - /etc
    - supervisord.conf
  - /src
    - app.js
    - package.json
  - /tmp
    - MarkLogic.rpm
  - Dockerfile

The updated Dockerfile:

# Use the CentOS 6 base image
FROM centos:centos6
# Install Node.js
RUN curl -sL https://rpm.nodesource.com/setup | bash -
RUN yum install -y nodejs
# Install easy_install and supervisor
RUN yum install -y python-setuptools
RUN easy_install supervisor
# Add the MarkLogic rpm from a local folder
ADD tmp/MarkLogic-8.0-20141124.x86_64.rpm /tmp/MarkLogic-8.0-20141124.x86_64.rpm
# Install the MarkLogic database
RUN yum -y install /tmp/MarkLogic-8.0-20141124.x86_64.rpm

# Add files to the image
ADD etc/supervisord.conf /etc/supervisord.conf
ADD src/* /src/

# Install npm packages globally
RUN cd /src; npm install -g

# Expose two MarkLogic management ports and the Node.js port
EXPOSE 8000 8001 8080

# Start the supervisor daemon
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisord.conf"]

The key difference (beyond the extra ADD statements) is that we’re now starting the supervisord daemon, which takes our configuration file as an argument. The instructions to start MarkLogic and the Node.js/Express app live inside that configuration file:

[supervisord]
nodaemon=true

[program:node]
command=/bin/bash -c "cd /src && nodemon app.js"

[program:marklogic]
command=/bin/bash -c "/etc/rc.d/init.d/MarkLogic start && tail -F /var/opt/MarkLogic/Logs/ErrorLog.txt"

If you need to start other services (like an Apache server), install the rpm via the Dockerfile and add a new entry in the configuration file above.
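For example, a hypothetical entry for Apache’s httpd could reuse the same pattern as the MarkLogic entry above — start the service via its init script, then tail a log file so supervisord sees a long-running foreground process. The paths are illustrative and depend on how httpd was installed:

```ini
[program:apache]
command=/bin/bash -c "/etc/rc.d/init.d/httpd start && tail -F /var/log/httpd/error_log"
```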

Build this image with docker build -t ml-node . and then run it:

docker run -p 18080:8080 -p 18000:8000 -p 18001:8001 ml-node

Notice we’re binding multiple ports now.

Here’s where it gets clever. Remember how we specified nodemon app.js instead of node app.js in the supervisor configuration? That lets nodemon watch for code changes and automatically restart the Node app. But how do you change the source of app.js when it’s running inside a container?

You mount the folder. Check out this docker run statement:

docker run -p 18080:8080 -p 18000:8000 -p 18001:8001 -v /path/to/source/on-your-computer/src/app.js:/src/app.js ml-node

I’m mapping the local version of app.js to the /src/app.js file inside the container. Edit the file on your machine, and nodemon inside the container picks up the change and restarts the app automatically.
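You can also mount the whole src folder instead of a single file. One caveat: by default Boot2Docker only shares the /Users directory with its VirtualBox VM, so the host path has to live under /Users for the mount to work (the path below is illustrative):

```shell
# Mount the entire source folder instead of a single file
docker run -p 18080:8080 -p 18000:8000 -p 18001:8001 \
  -v /Users/you/projects/docker-demo/src:/src ml-node
```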

I enjoyed working with Docker and I’m looking forward to exploring its other features. At the time of writing, I’m waiting for proper command line support on Windows environments, as that part hasn’t been well implemented yet.