Setting up a Replicated MongoDB using Authentication in Docker on DigitalOcean

Hello there! I recently (.. today) had to set up a replicated MongoDB, dockerized, hosted on VPS's on the great site DigitalOcean. As I only found outdated tutorials online that pushed me along rather than walked me through what had to happen, I thought I'd make a more 2015 version; even though we're leaning towards '16 (maybe I'll come back in a couple of months and pretend I've updated something).

This tutorial is heavily inspired by Deploy a MongoDB Cluster in 9 steps Using Docker, but that one is old, and this one is much longer and actually works. I did it today; it was unpleasant.

The setup we want is 3 MongoDB’s running in Docker on 3 different VPS’ from DigitalOcean.

Step 1: We're going to create 3 droplets with Docker on DigitalOcean,

Step 2: and start 3 MongoDB's that know of each other and that we can connect to.

As will be apparent from this tutorial, you will not learn much about hostnames. I'm certain there's a better way to get around the entire fake-hostname thing, e.g. by just using the IPs of the different servers, but I developed this while following a pretty outdated tutorial, and when I was done I didn't have the energy to go back and do it over. Maybe I'll revisit this, say, in a "couple of months". I always say, "make work, make pretty"; for now, consider this a tutorial that works.

We're going to be using Docker 1.8.2 on Ubuntu 14.04 and MongoDB 3.0 for this.

Step 1: Create the droplets

(All pictures are probably copyright of DigitalOcean, I took them using the Windows Snipping Tool)

Go sign up, add a credit card, and press Create Droplet!

Choose a suitable name, e.g. Mongo1. Select the size of your droplet; I've chosen the smallest, as I only need this for a proof of concept.


Select the region, preferably somewhere near where it's meant to be used. I'm from Denmark, so I chose Amsterdam, but I suppose Frankfurt would've done just as nicely.


Select the image; here you're going to want to choose the tab called Applications and find Docker. At the time of writing it's version 1.8.2 running on Ubuntu 14.04.


Add an SSH key, either by selecting one you've already added or by pasting a new one into the big field. If you plan to connect from a Windows host through PuTTY, you might want to check out this guide: How To Use SSH Keys with PuTTY on DigitalOcean Droplets (Windows users).


Don't use passwords; they don't "protect" your servers and they're just so '90s… They help you make SSH keys in that guide, and YES, you should add a proper SSH key, for about 2^2048 reasons.

Click on Create Droplet, and you’re off! … Now do it again 2 more times to create Mongo2 and Mongo3!.. Don’t worry, I’ll wait.

Step 2: Configuring the MongoDB’s

Let’s get an overview

There’s a lot to be done, but “luckily” it’s tiny steps.

First we SSH into the Mongo1 droplet. This’ll be our initial primary for the MongoDB replica set and we have to do some configuration here. We have to,

  1. Export Environment Variables that point to all three MongoDB Servers
  2. Start up mongo initially
  3. Add Admin users to this database, which will be used for authenticating users,
  4. Stop the mongo again, and remove it,
  5. Create a keyfile that the different servers will use to authenticate themselves to one another, and copy this keyfile to our host system,
  6. Start the mongo with the keyfile,
  7. Authenticate ourselves with one of the Admin users we created,
  8. Start the Replica set and WAIT

Then we SSH into the Mongo2 and Mongo3 droplets, and on both we,

  1. Export Environment Variables that point to all three MongoDB Servers
  2. Add the keyfile,
  3. Start mongo with the keyfile

Lastly we SSH back into Mongo1 (if you’re not already there…), and

  1. Authenticate ourselves with one of the Admin users we created (skip this if you didn’t close the connection..)
  2. Add references to the two other MongoDB’s

Boom, you’re done. Now let’s get started!


Step 1: Export Environment Variables that point to all three MongoDB Servers

.. okay this is not a tiny step..

Identify the IP addresses of your droplets; you'll find them on the front page of DigitalOcean when you're logged in, one for each of Mongo1, Mongo2 and Mongo3.

SSH into your three droplets using your favorite SSH client. PuTTY will do nicely on a Windows computer (see How To Use SSH Keys with PuTTY on DigitalOcean Droplets (Windows users), or, if you've already created the image without an SSH key, perhaps more importantly, How To Create SSH Keys with PuTTY to Connect to a VPS), while the Terminal will do fine on a Mac or Linux machine. Remember that you need to have your SSH key added to your host system or the server will deny you access. Now export the IPs into environment variables.

This is easily done with a small script (the filename is up to you; I'll call it set-ips.sh here) containing, with the droplet IPs from the DigitalOcean front page filled in,

export mongo1=<Mongo1-IP>
export mongo2=<Mongo2-IP>
export mongo3=<Mongo3-IP>

Remember to leave an empty line at the bottom of the script (Why is it recommended to have empty line in the end of file?) and execute it in a shell like so,

$ chmod 755 set-ips.sh   # always chmod because history...
$ source set-ips.sh      # or ". set-ips.sh"; "." is a shorthand for "source".

We use source because we want the environment variables set in our current shell; if we simply executed the script, it would run in a subshell and the variables would be gone the moment it finished.
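To see the difference for yourself, here's a tiny self-contained demonstration; the script name and IP are made up for the example:

```shell
# Write a throwaway script that exports a variable (the IP is made up).
echo 'export mongo1=10.0.0.1' > /tmp/set-ips-demo.sh
unset mongo1

# Executing it normally runs it in a subshell: the variable is lost.
bash /tmp/set-ips-demo.sh
echo "after bash:   mongo1='${mongo1}'"      # prints mongo1=''

# Sourcing it runs it in the current shell: the variable sticks.
source /tmp/set-ips-demo.sh
echo "after source: mongo1='${mongo1}'"      # prints mongo1='10.0.0.1'
```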

You're totally going to do this on Mongo2 and Mongo3 as well, so feel free to look up SCP and copy that wonderful script down to your host machine so you can send it to the other servers in a little while.

Step 2: Start up mongo initially

Yay, more snippets!.. We run this in our shell,

$ docker run --name mongo                 \
-v /home/core/mongo-files/data:/data/db   \
--hostname="<mongo1-hostname>"            \
-p 127.0.0.1:27017:27017 -d mongo

There's a bit packed in there,

  • We Run a container.
  • Name (--name) it mongo.
  • Attach a volume (-v, a "folder"): /home/core/mongo-files/data on the host maps to the folder /data/db inside the container, such that we get persistent storage. MongoDB stores its data in /data/db, and by attaching a volume to this location, MongoDB will instead store its data on the host, and it's not lost when we close the container (documentation on hub.docker mongo). This is useful because it means it's actually saving the users we're adding.
  • We give it a hostname.
  • Forward the port on our host, 127.0.0.1:27017, to a port inside the mongo container, 27017.
  • Lastly, we want the container to maintain itself on the host, so we run it detached (-d).
  • The container is a mongo.

First trap: a lot of the time you'll just see -p 27017:27017. A lot of the time that didn't work for me; I couldn't connect to the container from another docker container before I explicitly added the localhost IP to the assignment. -p looks a bit weird, but don't worry; Docker parses it as -p ip:host_port:container_port (docs.docker#binding ports).

But try it and see what happens. When you mess up a container-run you can remove it with docker rm -f mongo (mongo: the name we assigned the container).

You should be able to see that you have the container running by typing docker ps -s; this is a useful command, learn it. Hint: use docker ps -a to spot dead containers and get them removed so they don't take up disk space or a container name you want to use. If you don't supply a container name, Docker will generate something ridiculous for you..

Step 3: Add Admin users to this database which will be used for authenticating users

Part 1: First we need to connect to the database from another mongo container

This is done with a cool snippet from Mongo’s Hub.Docker Repository.

$ docker run -it --link some-mongo:mongo --rm mongo sh      \
-c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'

The only difference is that we did not call our other mongo container some-mongo, so we change the snippet a little, into,

$ docker run -it --link mongo:mongo --rm mongo sh           \
-c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'

This runs an interactive container. It's linked to (--link, "knows about..") another container named mongo and refers to it by the name mongo. The container is automatically removed (--rm) after use. It's a mongo container and we execute it with a shell command (sh -c) that tells it to start (exec) a mongo client that connects to the address defined by the environment variables $MONGO_PORT_27017_TCP_ADDR and $MONGO_PORT_27017_TCP_PORT (which are set by the --link flag) and chooses the database test initially.
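If you're curious what that sh -c line actually expands to, here's a hypothetical illustration of the variable substitution; the address is made up here, whereas inside a real linked container Docker sets these variables for you:

```shell
# Docker's --link flag injects variables like these (values made up here):
MONGO_PORT_27017_TCP_ADDR=172.17.0.2
MONGO_PORT_27017_TCP_PORT=27017

# The quoted part of the sh -c command therefore expands to a plain
# host:port/database connection string for the mongo client:
uri="$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"
echo "$uri"    # 172.17.0.2:27017/test
```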

Part 2: Adding users

When you’re connected to the MongoDB you’re going to want to switch to the Admin database, using,

$ use admin

Now we can add users. Create a new Site Admin User called siteUserAdmin, (and replace the passwords….)

$ db.createUser( {
    user: "siteUserAdmin",
    pwd: "greatPassword1",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
} )

and a Root User called siteRootAdmin,

$ db.createUser( {
    user: "siteRootAdmin",
    pwd: "evenGreaterPassword1",
    roles: [ { role: "root", db: "admin" } ]
} )

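Before exiting, you can sanity-check that both users landed in the admin database; db.getUsers() lists the users of the current database:

```
$ db.getUsers()   // should show siteUserAdmin and siteRootAdmin
```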
Now exit the mongo-container with the aptly named command exit.

Step 4: Stop the mongo again, and remove it

$ docker stop mongo
$ docker rm -f mongo

Step 5: Create a keyfile that the different servers will use to authenticate themselves to one another, and copy this keyfile to our host system,

Part 1: generate a keyfile

Stolen brutally from the MongoDB documentation,

$ openssl rand -base64 741 > /home/core/mongodb-keyfile
$ chmod 600 /home/core/mongodb-keyfile
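In case the numbers look arbitrary: 741 random bytes base64-encode to just under mongod's 1024-character keyfile limit, and the chmod 600 matters because mongod refuses keyfiles that are readable by group or others. A quick local sanity check, using /tmp so we don't touch the real file:

```shell
# Generate a throwaway keyfile the same way and inspect it.
openssl rand -base64 741 > /tmp/demo-keyfile
chmod 600 /tmp/demo-keyfile

wc -c < /tmp/demo-keyfile    # roughly 1000 characters of base64
ls -l /tmp/demo-keyfile      # -rw------- : only the owner may read it
```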

Part 2: Copy the file to your host system

Seriously, look up SCP. I’ll wait.
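Alright, since you waited: a sketch of the two copies. The IPs are made up, and the root user and /home/core paths are assumptions from this setup; we only build and print the commands here, so substitute your real addresses before running them:

```shell
# Made-up droplet IPs -- use your own from the DigitalOcean front page.
mongo1_ip=203.0.113.10
mongo2_ip=203.0.113.11

# Pull the keyfile from Mongo1 down to your local machine...
pull="scp root@${mongo1_ip}:/home/core/mongodb-keyfile ."
# ...then push it up to Mongo2 (and likewise Mongo3) in a little while.
push="scp ./mongodb-keyfile root@${mongo2_ip}:/home/core/"

echo "$pull"
echo "$push"
```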

Step 6: Start the mongo with the keyfile,

We have a keyfile, and we’re ready to fire up our mongo with the longest one-liner I’ve ever used,

$ docker run --name mongo                      \
-v /home/core/mongo-files/data:/data/db        \
-v /home/core:/opt/keyfile                     \
--hostname="<mongo1-hostname>"                 \
--add-host <mongo1-hostname>:${mongo1}         \
--add-host <mongo2-hostname>:${mongo2}         \
--add-host <mongo3-hostname>:${mongo3}         \
-p 27017:27017 -d mongo                        \
sh -c "exec mongod                             \
--keyFile /opt/keyfile/mongodb-keyfile         \
--replSet \"rs0\""

A metric tonne of stuff is going on here, and this is also where I differ a lot from the other article, so let’s have a look,

  • docker run, we run a container,
  • --name mongo, name it mongo,
  • -v /home/core/mongo-files/data:/data/db, attach the volume we previously attached to /data/db to obtain persistent storage,
  • -v /home/core:/opt/keyfile, attach the volume on the host containing the keyfile to a folder inside the container for later reference,
  • --hostname="<mongo1-hostname>", give the container a hostname,
  • --add-host <mongo1-hostname>:${mongo1}, add a reference for this hostname to the /etc/hosts file inside the container; the reference points to the IP stored in the environment variable ${mongo1} that we set in Step 1. This means the container will route requests to the right address when anything tries to contact our fake hostname, which doesn't otherwise exist.
  • --add-host <mongo2-hostname>:${mongo2}, same as above,
  • --add-host <mongo3-hostname>:${mongo3}, same as above,
  • -p 27017:27017, open a port from the host to the container. MongoDB uses port 27017 as standard; by using this we don't have to set anything else, and the other servers will be able to connect to the container without us configuring them explicitly. NB: we do NOT add the 127.0.0.1 address here.
  • -d mongo, start a detached container of mongo,
  • sh -c "exec mongod, now THIS is where we get really cool. I had a metric sh*t-tonne of problems with the container dying right after launch because "mongodb didn't have permission to use the file" when I was using the script from the other article, because I was merely passing the flags below to the mongo-container. After sitting with it for a couple of hours and playing around with weird hacks online and linux file permissions, I finally came up with the idea of executing the flags inside the container, just like on hub.docker when they start the mongo client.

More precisely, I got the error,

2015-11-05T15:36:54.425+0000 I ACCESS error opening file: /opt/keyfile/mongodb-keyfile: Permission denied

You get this because you haven’t got the proper rights to use the file on your host system. If you have the solution for this please let me know! (If you actually have a running, tested, solution. Your “lol, just chmod it” has no power here! – Gandalf)

  • --keyFile /opt/keyfile/mongodb-keyfile, we pass the --keyFile flag to the mongod instance to tell it to use this file for authentication,
  • --replSet \"rs0\"", lastly we set a flag that names, or helps refer to, the replica set. Note that we escape the quotes, because they're inside the sh -c quotes.
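For the record, with hypothetical hostnames and made-up droplet IPs, the three --add-host flags leave the container's /etc/hosts with extra lines roughly like,

```
203.0.113.10   <mongo1-hostname>   # ${mongo1}, the Mongo1 droplet
203.0.113.11   <mongo2-hostname>   # ${mongo2}, the Mongo2 droplet
203.0.113.12   <mongo3-hostname>   # ${mongo3}, the Mongo3 droplet
```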

The other article ended its command with,

-d mongo --keyFile /opt/keyfile/mongo-files/mongodb-keyfile --replSet rs0

Whereas we end it with,

-d mongo sh -c "exec mongod --keyFile /opt/keyfile/mongodb-keyfile --replSet \"rs0\""

Note the difference: we're shifting which user actually passes the flags to the mongod instance, because the command is executed inside the container!

Also note that they're pointing to the wrong location for the keyfile in the first place. Not sure they actually ran the thing.

Step 7: Authenticate ourselves with one of the Admin users we created,

Again, use the infamous snippet from Mongo’s Hub.Docker Repository to start up a mongo

$ docker run -it --link mongo:mongo --rm mongo sh          \
-c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'

and hope for the best. If it doesn’t want to connect (mine didn’t), do the following.

  1. remove the other mongo instance with docker rm -f mongo,
  2. start it again with the explicit port binding from Step 2, -p 127.0.0.1:27017:27017,
  3. connect to it with the mongo client and perform the necessary setup.
$ use admin
$ db.auth("siteRootAdmin", "evenGreaterPassword1");

Step 8: Start the Replica set and WAIT…

$ rs.initiate()
$ rs.conf()

This will start the replica set with Mongo1 as the primary.

If you started the container with "-p 127.0.0.1:27017:27017" you now have to nuke it again, and start it with just "-p 27017:27017" so the other servers can connect to it.

Mongo2 and Mongo3

These are somewhat easy: SSH into them, and do the following 3 steps on both servers.

Step 1: Export Environment Variables that point to all three MongoDB Servers

This was step 1 in the section above (Mongo1).

Step 2: Add the keyfile,

In Mongo1: Step 5 you copied this to your host system and got familiar with scp; use it now and put the keyfile in /home/core.

Step 3: Start mongo with the keyfile

This example shows what to write for Mongo2 (notice the similarities with Mongo1: Step 6, and you'll surely deduce what to write for Mongo3),

$ docker run --name mongo                      \
-v /home/core/mongo-files/data:/data/db        \
-v /home/core:/opt/keyfile                     \
--hostname="<mongo2-hostname>"                 \
--add-host <mongo1-hostname>:${mongo1}         \
--add-host <mongo2-hostname>:${mongo2}         \
--add-host <mongo3-hostname>:${mongo3}         \
-p 27017:27017                                 \
-d mongo sh -c "exec mongod                    \
--keyFile /opt/keyfile/mongodb-keyfile         \
--replSet \"rs0\""

That’s all there is to this. No config, no users, no nothing. They get everything from the primary.


Okay, so now you've set up the two other MongoDB's, Mongo2 and Mongo3; all that's left is to get back into the shell on the Mongo1 server and,

Step 1: Authenticate ourselves with one of the Admin users we created (skip this if you didn’t close the connection..)

$ use admin
$ db.auth("siteRootAdmin", "evenGreaterPassword1");

Step 2: Add references to the two other MongoDB's

$ rs.add("<mongo2-hostname>:27017")
$ rs.add("<mongo3-hostname>:27017")

And boom.. you’re done! You finally have your Replicated MongoDB using Authentication in Docker on DigitalOcean!

Let’s celebrate with a couple of tricks.

$ rs.status()    # You can see the status of a Replica Set from any server with this.
$ rs.stepDown()  # You can demote the "Primary" and a "Secondary" will be 
                 #     automatically promoted. This is run on the "Primary".
$ rs.slaveOk()   # You can "allow reads" with this. This is run on a "Secondary",
                 #     but be cautious of consistency errors; see the MongoDB docs
                 #     on read preference for more.

If you’ve read so far, thank you very much. Let me know below what problems you had or if you have any comments on the article!

I only spent five hours doing this setup the first time, and another five writing this article, so I hope that, with time, it'll save somebody else those hours.

