Managing Terraform and Ansible secrets with Make

We all know that managing secrets when developing infrastructure as code can be really tricky. Today we are starting to see some good solutions, such as HashiCorp Vault, and probably others are being developed.

This weekend I decided to take a simpler approach, using tools that everyone reading this article most probably already has installed on their computer. I’m currently using Ubuntu; nevertheless, any other distribution, WSL or macOS will most probably work.

The problem

In my daily work I need to share code not only with my teammates but also with other departments, such as the development department. We all know that sharing a production password or any other kind of secret can have a very negative impact: not only would it be a security breach, but the secret could also end up in the hands of a developer who may be tempted to access some service that he or she otherwise would not be able to access.

On the other hand, we also want to make sure that we do not need to input these passwords manually every time we wish to deploy something. Our goal is to automate as much as possible; the more we automate, the fewer errors we encounter.

So the question that was puzzling me was: “How can I protect the secrets within my code, keep it simple, and at the same time be able to run it through an automated pipeline using Jenkins, GitLab or any other tool?”

The tools

As I said earlier, I will be using tools that we already have installed on our computers, so the tools I used were:

  • GNU Make
  • GnuPG (gpg)
  • Ansible Vault
  • Git

This project is related to infrastructure as code, so the technologies I’ll be showing here are Terraform and Ansible, but the concept can be extended to other kinds of technologies.

This article does not intend to explain how to write makefiles, nor shell scripting; there is already a lot of information about both on the internet, so if you are not used to reading or writing these kinds of files you may feel somewhat lost, but don’t worry, just try to understand the logic behind it.
The source code is also available here.

The Vault and the Vault Password

In this approach we will use two passwords:

  • A master password called VAULT_PASSWORD
  • An encryption password used by Ansible Vault, stored in a file referenced by VAULT_PASSWORD_FILE

This approach lets Ansible Vault read its password from a file instead of having it passed as an argument on the command line.
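
To make the difference concrete, here is a quick illustration (the file name is just a placeholder):

# password typed interactively at a prompt
ansible-vault encrypt --ask-vault-pass secrets.tfvars

# password read from a file, which is what makes automation possible
ansible-vault encrypt --vault-password-file=security/vault-password secrets.tfvars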

As you may have guessed by now, VAULT_PASSWORD is the password we use to lock/unlock the vault password file; this password is passed to the make command through an environment variable with that name.

As you can see, the vault part is quite simple in concept. Having a master password that opens the vault and allows Ansible Vault to encrypt/decrypt secrets lets us automate the process; we just make sure that, for instance, we have a very well restricted/protected EC2 instance running on AWS with this environment variable set, and the deployment will run smoothly.
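
As a rough illustration, a CI job could look like the sketch below (the deploy target and the way the CI tool injects the secret are my assumptions, not part of this project):

#!/bin/bash
# VAULT_PASSWORD is assumed to be injected by the CI tool,
# e.g. a Jenkins credential or a GitLab CI masked variable
set -euo pipefail
make unlock    # decrypt the vault password file and the secrets
make deploy    # hypothetical target running Terraform/Ansible
make lock      # re-encrypt everything when we are done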

Locking/Unlocking the vault

Let’s now see how it’s done.

Two make targets were created for these actions, make lock and make unlock; remember that for this to work you must have the environment variable set.

VAULT_PASSWORD=my_very_strong_master_password make lock
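
Unlocking works the same way, just with the other target:

VAULT_PASSWORD=my_very_strong_master_password make unlock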

Let’s see the code now; for now, ignore the targets related to encrypting files, we will get into those later:

VAULT_PASSWORD_FILE=security/vault-password
VAULT_UNENCRYPTED := $(shell [ -e ${VAULT_PASSWORD_FILE} ] && echo 1 || echo 0 )

# IF VAULT ALREADY OPENED
ifeq ($(VAULT_UNENCRYPTED), 1)
open-vault:
    @echo "$@: vault already opened, ignoring"
close-vault:
    @echo "$@: closing vault"
    @gpg --quiet -c --batch --armor --passphrase "${VAULT_PASSWORD}" -o ${VAULT_PASSWORD_FILE}.lock ${VAULT_PASSWORD_FILE}
    @rm ${VAULT_PASSWORD_FILE}
lock: vault-set-env encrypt-ssh-keys encrypt-terraform-files close-vault
unlock: vault-set-env decrypt-ssh-keys decrypt-terraform-files
# IF VAULT CLOSED
else
open-vault:
    @echo "$@: opening vault"
    @gpg --quiet -d --batch --armor --passphrase "${VAULT_PASSWORD}" -o ${VAULT_PASSWORD_FILE} ${VAULT_PASSWORD_FILE}.lock
    @rm ${VAULT_PASSWORD_FILE}.lock
close-vault:
    @echo "$@: vault already closed, ignoring"
lock: vault-set-env open-vault encrypt-ssh-keys encrypt-terraform-files
    @echo "$@: closing vault"
    @gpg --quiet -c --batch --armor --passphrase "${VAULT_PASSWORD}" -o ${VAULT_PASSWORD_FILE}.lock ${VAULT_PASSWORD_FILE}
    @rm ${VAULT_PASSWORD_FILE}
unlock: vault-set-env open-vault decrypt-ssh-keys decrypt-terraform-files
endif
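
One target referenced above, vault-set-env, is not shown in this excerpt. Judging by its name and position, a minimal sketch of it (my assumption) would simply fail fast when the master password is missing:

vault-set-env:
    @[ -n "${VAULT_PASSWORD}" ] || { echo "$@: VAULT_PASSWORD is not set, aborting"; exit 1; }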

As you can see from the code, we first check whether VAULT_PASSWORD_FILE exists. This may look a little awkward, but it’s very easy to understand: every time we lock the vault we remove the unencrypted version.

    @echo "$@: closing vault"
    @gpg --quiet -c --batch --armor --passphrase "${VAULT_PASSWORD}" -o ${VAULT_PASSWORD_FILE}.lock ${VAULT_PASSWORD_FILE}
    @rm ${VAULT_PASSWORD_FILE}

This means that if an unencrypted version of the file exists, the vault is most probably already unlocked; that is why we make this check at the beginning. Depending on whether the vault is already unlocked, the lock/unlock targets will behave differently.

As we can see, the vault is a very simple concept: every time we lock it we encrypt it with the master password using GnuPG, and to unlock it we revert the process.

Having the vault open allows Ansible Vault to easily pick up the password from the file and encrypt/decrypt the secrets.
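
With the vault open, any ansible-vault or ansible-playbook invocation can point at the same file; for example, running the playbook from the project structure shown below:

ansible-playbook --vault-password-file=security/vault-password ansible/project.yml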

Encrypting / Decrypting secrets

Now that we know how to open and close the vault, let’s focus on how we will protect our secrets. In this particular case I’ll be protecting private SSH keys and some Terraform var-files.

My project structure looks like this:

Project
|----ansible
|    |----inventory
|    |    |----non-prod
|    |    |----prod
|    |    |----rnd
|    |----roles
|    |    | < my roles here >
|    |----project.yml
|----security
|    |----non-prod
|    |    |----ssh-keys
|    |    |    | < my private ssh keys here >
|    |    |----terraform
|    |    |    | < my private terraform variables here >
|    |----prod
|    |    |----ssh-keys
|    |    |    | < my private ssh keys here >
|    |    |----terraform
|    |    |    | < my private terraform variables here >
|    |----rnd
|    |    |----ssh-keys
|    |    |    | < my private ssh keys here >
|    |    |----terraform
|    |    |    | < my private terraform variables here >
|    |----vault-password
|----terraform
|    |----backend
|    |    |----non-prod
|    |    |    |----config.yml
|    |    |----prod
|    |    |    |----config.yml
|    |    |----rnd
|    |    |    |----config.yml
|    |----vars
|    |    |----non-prod
|    |    |    |---- < my var files >
|    |    |----prod
|    |    |    |---- < my var files >
|    |    |----rnd
|    |    |    |---- < my var files >

Probably not the best layout, but again, this is just a proof of concept. The security folder is where I want my secrets encrypted and protected, so don’t worry about the rest of the folders for now.

Let’s again look into the code:

SSH_PRIVATE_KEYS=security/${ENV}/ssh-keys
TERRAFORM_SECRETS=security/${ENV}/terraform
VAULT_PASSWORD_FILE=security/vault-password
SSH_KEYS := $(foreach ENV,$(ENVS),$(wildcard ${SSH_PRIVATE_KEYS}/*.key))
TERRAFORM_SECRET_VARS := $(foreach ENV,$(ENVS),$(wildcard ${TERRAFORM_SECRETS}/*.tfvars))

ifeq ($(strip ${SSH_KEYS}),)
encrypt-ssh-keys:
    @echo "$@: no private ssh keys, skipping"
decrypt-ssh-keys:
    @echo "$@: no private ssh keys, skipping"
else
encrypt-ssh-keys:
    @ansible-vault encrypt --vault-password-file=${VAULT_PASSWORD_FILE} ${SSH_KEYS} 2>/dev/null && echo "$@ : ${SSH_KEYS} encrypted" || echo "$@ : ${SSH_KEYS} already encrypted, skipping"
decrypt-ssh-keys:
    @ansible-vault decrypt --vault-password-file=${VAULT_PASSWORD_FILE} ${SSH_KEYS} 2>/dev/null && echo "$@ : ${SSH_KEYS} decrypted" || echo "$@ : ${SSH_KEYS} already decrypted, skipping"
endif

ifeq ($(strip ${TERRAFORM_SECRET_VARS}),)
encrypt-terraform-files:
    @echo "$@: no terraform secrets, skipping"
decrypt-terraform-files:
    @echo "$@: no terraform secrets, skipping"
else
encrypt-terraform-files:
    @ansible-vault encrypt --vault-password-file=${VAULT_PASSWORD_FILE} ${TERRAFORM_SECRET_VARS} 2>/dev/null && echo "$@ : ${TERRAFORM_SECRET_VARS} encrypted" || echo "$@ : ${TERRAFORM_SECRET_VARS} already encrypted, skipping"
decrypt-terraform-files:
    @ansible-vault decrypt --vault-password-file=${VAULT_PASSWORD_FILE} ${TERRAFORM_SECRET_VARS} 2>/dev/null && echo "$@ : ${TERRAFORM_SECRET_VARS} decrypted" || echo "$@ : ${TERRAFORM_SECRET_VARS} already decrypted, skipping"
endif
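
Note that the ENVS variable used in the foreach loops is assumed to be defined elsewhere in the Makefile; a definition matching the folder layout above would be:

ENVS := non-prod prod rnd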

These targets take care of encrypting/decrypting the secrets using ansible-vault, asking it to use our password file. As long as the vault is open, ansible-vault will be able to encrypt/decrypt the files.

The ifeq statements let us make sure that there actually are files in the folders to be encrypted/decrypted.

Protecting Git from committing unprotected passwords

As I explained at the beginning of this article, our main goal is to ensure that our secrets stay protected. One risk we face is an authorized team member committing unencrypted secrets to the repository.

This is where Git hooks come into play: Git allows shell scripts to run before a commit is created (in this case a pre-commit hook, stored in .git/hooks/pre-commit). We just need to make sure we check the staged files for unencrypted secrets and take action before they are committed.

Again let’s look into the code:

#!/bin/bash
TERRAFORM_SECRETS="security/*/terraform"
SSH_PRIVATE_KEYS="security/*/ssh-keys"
VAULT_PASSWORD_FILE="security/vault-password"

# Colour helpers (assumed; their definition is not shown in the original post)
BOLD=$(tput bold); RED=$(tput setaf 1); GREEN=$(tput setaf 2)
YELLOW=$(tput setaf 3); RESET=$(tput sgr0)

if [ -z "${VAULT_PASSWORD}" ]; then
    printf "\n${BOLD}${RED}***** CRITICAL ERROR *****${RESET}\n"
    printf "${YELLOW}Please set ${BOLD}VAULT_PASSWORD${RESET}${YELLOW} environment variable before working${RESET}\n"
    printf "${YELLOW}with this project, for more information check README.md!${RESET}\n\n"
    exit 1
fi

exec 1>&2

if [ -e ${VAULT_PASSWORD_FILE}.lock ]; then 

    if gpg --batch --passphrase "${VAULT_PASSWORD}" --trust-model always --quiet --decrypt ${VAULT_PASSWORD_FILE}.lock 2>/dev/null >/dev/null; then
        printf "${BOLD}${GREEN}Right vault password!${RESET}\n"
    else
        printf "${BOLD}${RED}Wrong encrypted vault! Security alert! Please verify your VAULT_PASSWORD!${RESET}"
        exit 1
    fi
else
    VAULT_UNLOCKED=1
    # git diff --name-only always exits 0, so test its output instead
    if [ -n "$(git diff --name-only --cached ${VAULT_PASSWORD_FILE})" ]; then
        printf "${BOLD}${YELLOW}Found unlocked vault on commit, removing it!${RESET}\n"
        git rm --cached ${VAULT_PASSWORD_FILE} 2>/dev/null
    fi
fi

for SSH_PRIV_KEY in $(find ${SSH_PRIVATE_KEYS} -type f -name "*.key"); do 

    # check if the file is about to be committed

    if [ -n "$(git diff --name-only --cached ${SSH_PRIV_KEY} 2>/dev/null)" ]; then

        if ssh-keygen -y -e -f ${SSH_PRIV_KEY} 2>/dev/null >/dev/null; then
            printf "${BOLD}${YELLOW}SSH private key not encrypted, removing from current commit (%s)${RESET}\n" "${SSH_PRIV_KEY}"
            git rm --cached ${SSH_PRIV_KEY} 2>/dev/null
            UNENCRYPTED_SSH_KEYS+=("${SSH_PRIV_KEY}")
            ERROR=1
        else
            printf "${BOLD}${GREEN}SSH private key encrypted (%s)${RESET}\n" "${SSH_PRIV_KEY}"
        fi
    fi

done

for TF_SECRET_FILE in $(find ${TERRAFORM_SECRETS} -type f -name "*.tfvars"); do 

    # check if the file is about to be committed

    if [ -n "$(git diff --name-only --cached ${TF_SECRET_FILE} 2>/dev/null)" ]; then

        # Check if the file is an ansible encrypted file
        #
        # 1st step: check for the signature according to ansible format
        # ( https://docs.ansible.com/ansible/latest/user_guide/vault.html#vault-format )
        # 
        if egrep '\$ANSIBLE_VAULT;[0-9]+(\.[0-9]+)+(;([a-z;A-Z,1-9,\.]*))*' --quiet ${TF_SECRET_FILE}; then 

            # 2nd step: check if the lines between header and last line have
            # 80 characters ( 81 if we count the line break ).
            # Again, according to the vault format, all of these lines should
            # have 80 characters, with the exception of the last line.

            # Get the number of lines and subtract two (first is the header,
            # second is the last line) and also get the number of characters

            NUM_LINES=$(wc -l < ${TF_SECRET_FILE})
            NUM_LINES=$(( ${NUM_LINES} - 2 ))
            NUM_CHARACTERS=$(egrep '\$ANSIBLE_VAULT;[0-9]+(\.[0-9]+)+(;([a-z;A-Z,1-9,\.]*))*' -v ${TF_SECRET_FILE} | egrep -v "$(tail -n 1 ${TF_SECRET_FILE})" | wc -c )
            
            # if the file is correct, NUM_CHARACTERS should be divisible by 81
            # and the quotient should be equal to NUM_LINES

            CHECK_DIVISION=$(printf "%.3f" $(( 1000 * ${NUM_CHARACTERS} / 81 ))e-3)

            if printf "%s" "${CHECK_DIVISION}" | egrep --quiet "[0-9]+.000"; then

                if [ $(( ${NUM_CHARACTERS} / 81 )) -ne ${NUM_LINES} ]; then
                    printf "${BOLD}${YELLOW}Terraform secret file is not encrypted, removing from current commit (%s)${RESET}\n" "${TF_SECRET_FILE}"
                    git rm --cached ${TF_SECRET_FILE} 2>/dev/null
                    UNENCRYPTED_TF_FILES+=("${TF_SECRET_FILE}")
                    ERROR=1
                else
                    printf "${BOLD}${GREEN}Terraform secret is encrypted (%s)${RESET}\n" "${TF_SECRET_FILE}"
                fi
            
            else
                printf "${BOLD}${YELLOW}Terraform secret file is not encrypted, removing from current commit (%s)${RESET}\n" "${TF_SECRET_FILE}"
                git rm --cached ${TF_SECRET_FILE} 2>/dev/null
                UNENCRYPTED_TF_FILES+=("${TF_SECRET_FILE}")
                ERROR=1
            fi
        else
            printf "${BOLD}${YELLOW}Terraform secret file is not encrypted, removing from current commit (%s)${RESET}\n" "${TF_SECRET_FILE}"
            git rm --cached ${TF_SECRET_FILE} 2>/dev/null
            UNENCRYPTED_TF_FILES+=("${TF_SECRET_FILE}")
            ERROR=1
        fi

    fi

done

if [ "${ERROR:-0}" -eq 1 ]; then

    # Make sure that we lock all secrets
    printf "${BOLD}${YELLOW}Locking secrets${RESET}\n"
    make lock

    # Add unprotected files to the commit again
    printf "${BOLD}${YELLOW}Adding files again to the commit${RESET}\n"
    for SSH_PRIV_KEY in "${UNENCRYPTED_SSH_KEYS[@]}"; do
        printf "${BOLD}${GREEN}Adding private ssh-key: %s${RESET}\n" "${SSH_PRIV_KEY}"
        git add ${SSH_PRIV_KEY} 2>/dev/null
    done

    for TF_SECRET_FILE in "${UNENCRYPTED_TF_FILES[@]}"; do
        printf "${BOLD}${GREEN}Adding terraform secret: %s${RESET}\n" "${TF_SECRET_FILE}"
        git add ${TF_SECRET_FILE} 2>/dev/null
    done

    printf "${BOLD}${GREEN}Adding locked vault password${RESET}\n"
    git add ${VAULT_PASSWORD_FILE}.lock 2>/dev/null

fi

printf "${BOLD}${YELLOW}Proceeding with the commit${RESET}\n"

The above script is called every time we try to commit something. It looks for unencrypted secret files in the commit; if it finds an unprotected file it unstages it, and at the end it locks all files again and re-stages the ones that were unprotected.

The only thing we need to ensure is that every team member who uses this repo and has commit rights has this hook installed on his or her machine. What I came up with, thanks to this post, was a make target that each team member has to call as soon as they start working on the code, as sketched below.
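
A minimal sketch of such a target, assuming the hook script is versioned at scripts/pre-commit (a path I am inventing for illustration), could be:

install-hooks:
    @cp scripts/pre-commit .git/hooks/pre-commit
    @chmod +x .git/hooks/pre-commit
    @echo "$@: pre-commit hook installed"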

Conclusion

As I said earlier, this is one of multiple approaches one can take to protect secrets. I just wanted to show that sometimes we can tackle complex issues with simpler solutions rather than adopting a new technology. The important thing is that, at the end of the day, the secrets stay protected.

In the next post I’ll show how I’ve used make to call Terraform in an easier way, without the need to add a -var-file argument whenever I add a new file, and also how I’ve integrated it with this vault system.

Docker, new server and more

Docker deployed

At the end of 2017 I started exploring the world of containers. After testing and deploying on my own server, I took the next step and deployed Docker on our company server: our GPS tracking system, document management solution, ticketing system and HRM software have been fully running as Docker containers for 4 months now.

New home

Recently I acquired a VPS to host my own services, as the server at home was becoming unsuitable for the job. Being a Linux geek, I chose to install Ubuntu on this VPS to make the migration easier.

Smooth as silk

As expected from the beginning, the migration went as smoothly as possible: after setting up Docker and transferring all the files to the new server, it was as easy as running docker-compose up.

Conclusion

Containers make a system administrator’s life easier. Moving from my own server at home to a new location on a VPS was not only fast, it also took only a few steps, and fewer steps means fewer errors.

Future

In the coming months I will be testing Docker from a developer’s perspective. In our company we are customizing our own version of Odoo, and I’ve been developing an online PDF signature framework for internal use. This will give me the developer’s perspective on how containers help in the process of going from development to production.


Hello Docker

This post is not a how-to or tutorial about using Docker; fortunately you have a lot of resources on that topic. This is just my personal experience with the technology, my motivation behind it and my thoughts about it.

Lately I’ve been looking into the container world. Since the appearance of LXC I’ve been curious about this technology, but until now I didn’t have the time to explore it, and since nowadays everyone is talking about Docker I decided to give it a try.

The motivation

In my latest challenge at the company I work for, I chose virtualization to deploy our ERP software, PMS software and other systems, because the software solutions were mostly Windows based. The ERP is even an old dinosaur: it’s a desktop-based solution and we had to deploy a Remote Desktop solution for users to access it. Not the state-of-the-art solution, but hey, when you have a job to do you’ve got to do it well… but this coming year we will be deploying the HRM, and I’m looking into doing things a little bit differently.

This new solution is web based and runs on Linux (hurray, feeling happy now), therefore containers are a technology I can look into, as they make more sense here than virtualization. My goal is to make the deployment and development of this new software as smooth as possible, making it easy for the IT department to customize and extend it in the near future.

So this is where this journey begins.

The test lab

Before diving into the company servers and deploying Docker there, I thought it would be a good exercise to migrate my own personal server to containers.

I live on the small island of Principe. Unfortunately we do not have cable TV, but we do have a pretty good Internet connection, and in my home country, Portugal, my ISP gives me 100 Mbit/s of upload bandwidth, so this personal server was built to allow me to have quality TV channels streamed into my home here in Principe.

Some months ago, after setting up the streaming service, hosting this blog and my file-sharing service on the same server felt like the way to go. The machine already had Ubuntu 16.04 LTS installed, so installing MySQL, Apache, PHP, WordPress and Nextcloud on it was very easy; automating the SSL certificate generation was also possible using the Let’s Encrypt bot, and even placing the application folders and database files on a different hard drive made sense for backup and scalability purposes (and also because the OS is installed on an SSD drive).
Everything was working as planned: automatic backups were being done every day and transferred late at night to my external USB drive connected to my Raspberry Pi here in Principe, which by the way is used as a set-top box to receive my TV channels.

So there it was, my own cloud hosted on my personal server in Portugal. And the next step? Using it as a test lab to install Docker and migrate WordPress and Nextcloud to a container-based solution. This would not only allow me to solve some issues I had before with apt upgrade, but also let me do a test drive before doing it on the company server.

Hello Docker

I had been playing with Docker on my MacBook for a while, trying to understand how it relates to my past experience with virtualization. I’m well aware that containerization at its core is very different from virtualization, but as a concept for separating services at an abstract level it feels very familiar to me. I’ve been using virtualization not only as a means to take advantage of the resources we have in the company, but also as a way to separate the network services in a logical manner.

  • The authentication system is a Samba AD controller from which all other systems authenticate using Kerberos or LDAP.
  • A Windows 2012 Server machine is connected to the gateway, working as a Remote Desktop Gateway and a reverse proxy.
  • The Terminal Desktop is another Windows 2012 Server from which users connect to launch the ERP application.
  • The PMS is an IIS-hosted application on a Windows 2016 Server machine, which is the frontend for the e-commerce page of each of the hotels.
  • The monitoring framework is an Icinga2 appliance.

Having all of these machines operating on bare metal would make our infrastructure more expensive than it actually was, and the way it is designed allows us to migrate it to a cloud infrastructure like AWS or Azure in the future.

After taking a test drive with Docker on my machine I could see how it was built for this purpose: launching and composing new services is as easy as creating a Docker Compose file, using or building the right images to create complex architectures and develop all kinds of business solutions. Today we have open source solutions for almost everything; building an IT solution that grows in the same direction as the business it supports is like playing with LEGO, give me the right building blocks and I’ll build you what you need.

So with this in mind I moved on to migrating my server to the container world. I knew I would have to use a reverse proxy to allow access from the outside to WordPress and Nextcloud, and I would also need to automate the SSL certificate generation for my domains.

Making it work

The first step was installing Docker; after following the instructions on their website, everything was ready to go.

https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
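
For a test lab there is also Docker’s official convenience script, which boils the installation down to a single line (not recommended for production machines):

curl -fsSL https://get.docker.com | sh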

Next I needed to migrate the MySQL server to a container. This step was relatively easy since I already had the MySQL files in a different location; as mentioned before, I used a different hard drive for all my service files, which was already mounted on /srv, including the MySQL data files.

After stopping the MySQL server instance on my machine and taking advantage of the bind mounts that Docker offers, getting MySQL up and running was as easy as:

docker container run --publish 3306:3306 --name mysqldb --mount type=bind,source=/srv/databases/mysql,target=/var/lib/mysql --detach mysql:latest

To test if everything was up and running I tried to connect to the server:

mysql --protocol=tcp -h localhost -u root -p

First problem: I wasn’t allowed to connect to the server from 172.17.0.1. The solution was to connect from inside the MySQL container and fix the permissions; again, Docker lets us get inside with Bash.

docker container exec -it mysqldb bash
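
Once inside the container, fixing the permissions comes down to a few statements in the mysql client, roughly along these lines (a sketch only: the user, host pattern and password are placeholders, and the exact statements depend on the MySQL version):

# inside the mysqldb container
mysql -u root -p <<'SQL'
CREATE USER IF NOT EXISTS 'root'@'%' IDENTIFIED BY 'my_password';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
SQL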

Connected to the server, updated the user permissions, logged out and voilà, I was now able to connect to the database from the “outside” world. Awesome, the database was up and running in a container; the next step was moving Nextcloud to a container. Here I took a different road: instead of running the container with the docker command, I preferred to take my chances with Docker Compose. This tool is perfect when you need to launch a lot of services at the same time and those services have dependencies.

Let’s compose it

Instead of writing the docker-compose.yml from scratch, I found a similar configuration on GitHub.

https://github.com/gilyes/docker-nginx-letsencrypt-sample

This was the base for my Compose instructions, and again, already having the application files on /srv made it very easy to migrate.

First dependency: Nginx as a reverse proxy with Let’s Encrypt automation. George Ilyes’ GitHub files already had what I needed for my setup.

 nginx:
   restart: always
   image: nginx
   container_name: nginx
   ports:
     - "80:80"
     - "443:443"
   volumes:
     - /srv/etc/nginx/conf.d:/etc/nginx/conf.d
     - /srv/etc/nginx/vhost.d:/etc/nginx/vhost.d
     - /srv/apps/default:/usr/share/nginx/html
     - /srv/etc/nginx/certs:/etc/nginx/certs:ro

There is an image that I want to take a deeper look into, called nginx-gen. As far as I could understand, it connects to the Docker daemon using the Unix socket and watches for new containers being launched; when this happens it automatically updates the Nginx configuration. A very useful tool.

nginx-gen:
  restart: always
  image: jwilder/docker-gen
  container_name: nginx-gen
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
    - /srv/etc/templates/nginx.tmpl:/etc/docker-gen/templates/nginx.tmpl:ro
  volumes_from:
    - nginx
  entrypoint: /usr/local/bin/docker-gen -notify-sighup nginx -watch -wait 5s:30s /etc/docker-gen/templates/nginx.tmpl /etc/nginx/conf.d/default.conf

And finally the Let’s Encrypt automation mechanism, again another useful image.

letsencrypt-nginx-proxy-companion:
  restart: always
  image: jrcs/letsencrypt-nginx-proxy-companion
  container_name: letsencrypt-nginx-proxy-companion
  volumes_from:
    - nginx
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock:ro
    - /srv/etc/nginx/certs:/etc/nginx/certs:rw
  environment:
    - NGINX_DOCKER_GEN_CONTAINER=nginx-gen

So there it was: Nginx as a reverse proxy plus Let’s Encrypt certificate automation. Now I was ready to compose my own services.

Let’s start with MySQL.

db:
  restart: always
  container_name: mysqldb
  image: mysql
  volumes:
    - /srv/databases/mysql:/var/lib/mysql
  environment:
    - MYSQL_ROOT_PASSWORD

Redis was another Nextcloud dependency, used for caching; an official Docker image is already available.

redis:
  restart: always
  container_name: redis
  image: redis
  volumes:
    - /srv/apps/redis/data:/data

And finally, Nextcloud:

cloud:
  restart: always
  image: nextcloud
  container_name: nextcloud
  environment:
    - VIRTUAL_HOST=cloud.mydomain.com
    - VIRTUAL_NETWORK=nginx-proxy
    - VIRTUAL_PORT=80
    - LETSENCRYPT_HOST=cloud.mydomain.com
    - LETSENCRYPT_EMAIL=me@mydomain.com
  links:
   - db
   - redis
  volumes:
   - /srv/apps/nextcloud/apps:/var/www/html/apps
   - /srv/apps/nextcloud/config:/var/www/html/config
   - /srv/apps/nextcloud/data:/var/www/html/data

With everything ready to go, the next step was crossing my fingers and running:

docker-compose up

And there it was: Docker Compose prepared everything and some seconds later Nextcloud was running in a container.

Getting WordPress up was even easier:

 blog:
   restart: always
   image: wordpress
   links: 
     - db
   volumes:
     - /srv/apps/blog/wp-content:/var/www/html/wp-content
   environment:
     - VIRTUAL_HOST=blog.mydomain.com
     - VIRTUAL_NETWORK=nginx-proxy
     - VIRTUAL_PORT=80
     - LETSENCRYPT_HOST=blog.mydomain.com
     - LETSENCRYPT_EMAIL=me@mydomain.com
     - WORDPRESS_DB_HOST=db:3306
     - WORDPRESS_DB_USER=myuser
     - WORDPRESS_DB_PASSWORD=mypassword

Conclusion

This test drive made me realize that containers are in fact a big game changer in the way we deploy applications. At the end of this journey I realized how easy it will be from now on to design and deploy solutions that depend on multiple services. Containers do make our life easier when it comes to replicating the same conditions on any machine: if I need to hack on WordPress or Nextcloud with exactly the same conditions as on my server, I just need to take the docker-compose.yml and fire it up on my machine; I can even build my own images and use them for my own deployments.

And now, what about virtualization? It’s a fact that you can set up the same kind of automation with virtual machines; scripts help us in multiple ways and there are tools like Vagrant that make our life much easier. In my personal opinion, virtual machines will not go away any time soon; many applications aren’t ready to migrate to the container world yet, and a lot of services depend on old technologies that will only be available through a virtualized solution. But containers will probably slowly take control of the cloud, and virtualization will likely be left for compatibility purposes.

In the past we have seen advancements in hardware to reduce the hypervisor overhead; Intel and AMD have made, and are still making, investments in this technology, so I wouldn’t be surprised if in the near future we start seeing hybrid solutions between containers and hypervisors, since that can add an extra layer of security.

What’s next

My next goal is to explore containers in a VLAN environment. This is another concern in a network environment, especially when you have a network with multiple segments and services like IPTV, VoIP, IP cameras and so on.

I hope you enjoyed this post; I’ll keep updating this blog with my personal experiences and challenges.