Getting started with Docker and MeanJS

Matthias Lübken

Sep 23, 2014

To get started with a new technology it’s great to have a small yet non-trivial example. Docker is an open-source project for creating lightweight, portable, and self-sufficient containers from any application. We use Docker as our core technology. In this tutorial we demonstrate how to use Docker by dockerizing MEAN.JS, a popular Node.js web stack.

One of the great features of Docker is its low barrier to entry. With the proper Dockerfile, and docker and docker-compose installed, it basically takes a single docker-compose up to get everything up and running.


Now let’s take a step back and see what this is all about.


The MEAN stack

MEAN is an abbreviation for:

  • MongoDB: a document database,
  • Express: a web application framework for Node,
  • Angular.JS: a client-side JavaScript framework, and
  • Node.JS: a somewhat popular JavaScript runtime.

Without debating the individual components and possible alternatives, it’s fair to say that they are all very popular. If you are a JavaScript developer, MEAN gives you a collection of effective and proven tools.

The term MEAN was coined on the MongoDB blog and characterizes the collection of these tools; there is no single canonical “MEAN stack”. With its growing popularity, various projects have emerged that glue these components together and make it very easy to get started. The most prominent are a Yeoman MEAN generator, MEAN.IO, and its fork MEAN.JS.

This tutorial discusses the dockerization of MEAN.JS, but it should be easily applicable to the other MEAN stacks.

Motivation for Dockerization

When starting out with a web application, it’s common to begin with one machine. But as the application grows, the demand on its different parts changes. And as many software developers know: once things are mixed up, it’s hard to split them apart again. So why not start with separate components from the get-go?

Looking at the MEAN.JS example: why not build one container for the database and one for the web server, and run each in its own container? In development they could still run on the same host, but when going into production these containers could be put on different machines.

The greatest objection is the increased effort of creating, starting, and maintaining these containers. We believe that the initial effort is very low and the long-term benefit very high. Let us show you how easy it is to get started.


Installation

If you haven’t done so already, please install Docker. For this tutorial, please also install docker-compose, which will make development much easier.

Ensure that both tools are installed:

$ docker version
$ docker-compose --version

Build and run

The easiest way to get started is to clone the MEAN.JS fork with the proper Dockerfile:

$ git clone
$ cd mean

The easiest way to build and run our two containers is docker-compose. This tool reads the project’s docker-compose.yml and builds and runs the Docker containers accordingly.

Build the Docker containers with docker-compose:

$ docker-compose build

Run the Docker containers with docker-compose:

$ docker-compose up

That’s it.

Building and running your containers with pure Docker is just a little more complicated. Only the mean container needs to be built since the mongo container uses a predefined image.

Build the container with mean installed:

$ docker build -t mean .

Here -t mean names the built image “mean”.

To get the application running, the MongoDB container needs to be started first:

$ docker run -p 27017:27017 -d --name db mongo
  • -p 27017:27017 exposes the MongoDB port so the mean container can connect to it
  • -d runs it as a background process
  • --name db gives this container a name so it can be referenced
  • mongo is the image name that should be run

Next the mean container has to be started and connected to the db:

$ docker run -p 3000:3000 --link db:db_1 mean
  • -p 3000:3000 exposes the web server port 3000
  • --link db:db_1 links the db container so the application can reach it

Again. That’s it.
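How does the application inside the mean container actually find the database? With --link db:db_1, Docker adds a db_1 entry to the container’s /etc/hosts and injects environment variables such as DB_1_PORT_27017_TCP_ADDR and DB_1_PORT_27017_TCP_PORT. The helper below is a sketch of deriving a MongoDB URL from them; it is an assumption for illustration, not MEAN.JS’s actual config code.

```javascript
// Sketch only: shows the environment variables Docker's legacy links
// inject for a container linked under the alias "db_1".
function mongoUrl(env) {
  // Prefer the address Docker injected; fall back to the "db_1"
  // hostname alias (or edit this default for local development).
  var host = env.DB_1_PORT_27017_TCP_ADDR || 'db_1';
  var port = env.DB_1_PORT_27017_TCP_PORT || '27017';
  return 'mongodb://' + host + ':' + port + '/mean';
}

console.log(mongoUrl(process.env));
```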

docker-compose actually does the same thing. And to recapitulate, let’s examine docker-compose’s YAML file docker-compose.yml:

web:
  build: .
  links:
    - db
  ports:
    - "3000:3000"

db:
  image: mongo
  ports:
    - "27017:27017"

For a detailed explanation see the docker-compose reference.

To the VirtualBox / boot2docker users: don’t forget to enable port forwarding for the appropriate ports.

VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcp-port3000,tcp,,3000,,3000"

Whether we started with docker-compose or with plain docker, opening the browser at localhost:3000 should show the MEAN.JS start page.

Now, let’s examine what the two containers are about.

The database container

The database container doesn’t require custom code, because it uses the official predefined mongo image from the Docker Hub.

Here are some commands for fooling around with the mongo image:

# Pull the official image:
$ docker pull mongo
# Run a mongo container and open a bash shell in it:
$ docker run -i -t mongo /bin/bash
# Inspect the different layers
$ docker history mongo
# Show low-level information on the image
$ docker inspect mongo

With databases, the foremost question is: where does it store its data? An examination of the MongoDB Dockerfile shows near the end that it marks /data/db as a volume: VOLUME /data/db.

For detailed information on volumes see the Docker user guide, but to get started you might want to examine the volume and back up the data:

# Run docker inspect and filter for the mounted volumes
$ docker inspect --format '{{ .Volumes }}' mean_db_1
# Run a simple backup of the data volume:
$ docker run --volumes-from mean_db_1 -v $(pwd):/backup \
  ubuntu tar cvf /backup/backup.tar /data/db

MEAN.JS Dockerfile

The container with the mean application is a little more interesting:

FROM node

WORKDIR /home/mean

# Install Mean.JS Prerequisites
RUN npm install -g grunt-cli
RUN npm install -g bower

# Install Mean.JS packages
ADD package.json /home/mean/package.json
RUN npm install

# Manually trigger bower.
ADD .bowerrc /home/mean/.bowerrc
ADD bower.json /home/mean/bower.json
RUN bower install --config.interactive=false --allow-root

# Make everything available for start
ADD . /home/mean

# currently only works for development
ENV NODE_ENV development

# Port 3000 for server
# Port 35729 for livereload
EXPOSE 3000 35729
CMD ["grunt"]

A Dockerfile is a set of instructions for building an image. After each instruction, docker build commits a new layer. If a layer hasn’t changed, it doesn’t need to be rebuilt on the next build; the cached layer is reused instead. This layering system is one of the reasons why Docker is so fast.

Let’s go through a brief overview of the instructions used in this Dockerfile:

  • FROM tells Docker which image to start from. Every Docker image builds on a root image; in this case it is the official node image.
  • WORKDIR sets the working directory for subsequent instructions.
  • RUN executes arbitrary commands during the build process. The first two install the tools grunt-cli and bower.
  • ADD copies files from the host file system into the image. Here, package.json is added on its own so that the packages can be installed in a subsequent step.
  • Note that there are several separate ADD and RUN instructions. The reason is layering: as long as package.json is unchanged, Docker can reuse the existing layers, so NPM packages only get installed when needed.
  • ENV sets environment variables.
  • EXPOSE exposes ports from the container.
  • CMD sets the command that is executed when the container is run.

Next steps

We hope that this tutorial gives a good impression of how to use Docker with a small web project, in particular with MEAN.JS. A Dockerfile gives an impressive head start, since a developer now only needs a docker-compose up to provision and start its containers. At the same time, the Dockerfile comprehensively documents how this is achieved, so best practices can be shared within a team.

The next steps are to further modularize our application and to get the containers running on a server. This might be part of a future tutorial. Meanwhile, get your hands dirty, fool around with the Dockerfile, and leave comments and pull requests with improvement suggestions.

Bringing this setup into production requires some more thought. We at Giant Swarm are working hard to make this next step as seamless as the first one.
