PEM--/devops-tuts

Language: Nginx

git: https://github.com/PEM--/devops-tuts

Meteor Devops on OSX with Docker set for Ubuntu

README.md

Meteor Devops on OSX with Docker set for Ubuntu 15.04

Introduction

While using Meteor in development is an easy task and deploying it on Meteor's infrastructure is a no-brainer, things may start to get messy if you need to deploy, secure, and scale it on your own cloud, especially if your customer imposes a specific constraint on cloud sovereignty. The easiest way to achieve deployment is to use the excellent Meteor Up tool. But if it fails, or if you need to go a bit further in your infrastructure deployment, I recommend that you start using Docker to get familiar with this handy DevOps tool.

I hope that this tutorial will lead you onto the appropriate tracks.

Versions applied in this tutorial

As you may need to update this tutorial for your own DevOps use cases, here is the complete list of versions used in this tutorial:

  • OSX 10.10.5 as the development platform
  • Ubuntu 15.04 as the Docker host system
  • Debian Jessie 7 with the latest updates as the Docker container system
  • Docker 1.9.1
  • Docker Registry 2
  • Docker Machine 0.5.1
  • Docker Compose 1.5.1
  • VirtualBox 5.0.10
  • Vagrant 1.7.4
  • Meteor 1.1.0.3
  • NGinx 1.8.0-1
  • NodeJS 0.10.41
  • NPM 3.3.12
  • Mongo 3.0.6 - WiredTiger

Software architecture

Why Debian Jessie instead of Debian Wheezy? Simple: a gain of 30MB in footprint. Note that we could have based this tutorial on other, even smaller Linux distributions for our Docker images, like Alpine Linux. But at the time of this writing, these smaller distributions do not offer the packages required for installing Meteor (namely, MongoDB and node-fibers).

Installing the tooling

If you have never done it before, install Homebrew and its plugin Caskroom:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install caskroom/cask/brew-cask

Then install VirtualBox and Vagrant:

brew cask install virtualbox vagrant

Now install Docker and its tools:

brew install docker docker-machine docker-compose

To ease access to the VMs and servers, we use an SSH key installer:

brew install ssh-copy-id

For parsing and querying the JSON produced by Docker, we use jq:

brew install jq
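
As a tiny example of the kind of query we will run later (assuming the dev Docker Machine created further down), jq can pull a single field out of docker-machine's JSON output:

# Print the IP address recorded for the 'dev' Docker Machine
docker-machine inspect dev | jq -r '.Driver.IPAddress'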

Some file structure

To differentiate the Meteor project from the DevOps project, we store our files like so:

.
├── app
└── docker

The app folder contains the root of the Meteor sources and the docker folder contains the root of the DevOps sources.

Create your virtual machines as Docker Machines

Create a Vagrantfile that matches your production environment. Here, we are using Ubuntu 15.04 with Docker pre-installed.

hosts = {
  "dev" => "192.168.1.50",
  "pre" => "192.168.1.51"
}
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/vivid64"
  config.ssh.insert_key = false
  hosts.each do |name, ip|
    config.vm.define name do |vm|
      vm.vm.hostname = "%s.example.org" % name
      #vm.vm.network "private_network", ip: ip
      vm.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: ip
      vm.vm.provider "virtualbox" do |v|
        v.name = name
      end
      vm.vm.provision "shell", path: "provisioning.sh"
    end
  end
end

I've provided 2 network configurations here. The first one is a private network leading to 2 virtual machines that are not accessible from your local network (only from your local OSX). The second one bridges your local OSX network driver so that your VMs gain public access within your LAN. Note that for both of these network configurations, I've used static IPs.

Before creating our virtual machines, we need to set up a provisioning.sh:

#!/bin/bash
# Overriding bad Systemd default in Docker startup script
sudo mkdir -p /etc/systemd/system/docker.service.d
echo -e '[Service]\n# workaround to include default options\nEnvironmentFile=-/etc/default/docker\nExecStart=\nExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS' | sudo tee /etc/systemd/system/docker.service.d/ubuntu.conf
# Set up the Docker daemon with the following properties:
# * The daemon listens to external requests and is exposed on port 2376, the default Docker port.
# * Docker uses the AUFS driver for file storage.
# * The daemon uses Docker's provided certification chain.
# * The daemon has a generic label.
# * The daemon is able to resolve DNS queries using Google's DNS.
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic --dns 8.8.8.8 --dns 8.8.4.4"'  | sudo tee /etc/default/docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Enable Docker on server reboot
sudo systemctl enable docker
# Remove and clean unused packages
sudo apt-get autoremove -y
sudo apt-get autoclean -y

Now, we start our virtual hosts and declare them as Docker Machines:

vagrant up --no-provision

Throughout these terminal sessions, we need some environment variables. We store them in a local_env.sh file that we fill in step by step and source each time we open a new terminal session:

export HOST_IP_DEV='192.168.1.50'
export HOST_IP_PRE='192.168.1.51'
# Use preferably your FQDN (example.org)
export HOST_IP_PROD='YOUR_SITE_FQDN'

If you are using Fish like me, use the following content:

set -x HOST_IP_DEV '192.168.1.50'
set -x HOST_IP_PRE '192.168.1.51'
# Use preferably your FQDN (example.org)
set -x HOST_IP_PROD 'YOUR_SITE_FQDN'
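
Either way, remember to load these variables in every new session before running the commands below; for instance:

# Bash/Zsh
source ./local_env.sh
# Fish (assuming you keep the Fish variant in, say, local_env.fish)
source ./local_env.fish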

This should provide easy access to all parts of the following network architecture:

Open 3 terminal sessions. In the first session, launch the following command:

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_DEV \
  --generic-ssh-user vagrant \
  --generic-ssh-key ~/.vagrant.d/insecure_private_key \
  dev

In the second session, launch the following command:

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_PRE \
  --generic-ssh-user vagrant \
  --generic-ssh-key ~/.vagrant.d/insecure_private_key \
  pre

Now, in the last session, wait for the 2 previous sessions to be blocked on the following repeated message: Daemon not responding yet: dial tcp XX.XX.XX.XX:2376: connection refused, then issue the following command:

vagrant provision

What's going on here? Actually, the current state of Docker for Ubuntu 15.04 doesn't support DOCKER_OPTS. This is due to Ubuntu's transition from Upstart to Systemd. Plus, when we create our Docker Machines from our local OSX, Docker Machine re-installs Docker on the host. Thus, we end up with a broken installation on the host, unable to speak to the outside world (leading to the message Daemon not responding yet: dial tcp 192.168.33.X:2376: connection refused). Basically, the Vagrant provisioning script patches both Vagrant virtual servers. You can reuse the content of this script on your production server when you create the associated Docker Machine. For this, you can use the following command:

ssh root@$HOST_IP_PROD "bash -s" < ./provisioning.sh

In this last provisioning step, we finish the configuration of our development and pre-production hosts by installing Docker Machine and securing their open ports with simple firewall rules. The script we use is named postProvisioning.sh:

#!/bin/bash
# Install Docker Machine
curl -L https://github.com/docker/machine/releases/download/v0.4.0/docker-machine_linux-amd64 | sudo tee /usr/local/bin/docker-machine > /dev/null
sudo chmod u+x /usr/local/bin/docker-machine

# Install Firewall
sudo apt-get install -y ufw
# Allow SSH
sudo ufw allow ssh
# Allow HTTP and WS
sudo ufw allow 80/tcp
# Allow HTTPS and WSS
sudo ufw allow 443/tcp
# Allow Docker daemon port and forwarding policy
sudo ufw allow 2376/tcp
sudo sed -i -e "s/^DEFAULT_FORWARD_POLICY=\"DROP\"/DEFAULT_FORWARD_POLICY=\"ACCEPT\"/" /etc/default/ufw
# Enable and reload
yes | sudo ufw enable
sudo ufw reload

We execute this script on both VMs using simple SSH commands, like so:

ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_DEV "bash -s" < ./postProvisioning.sh
ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_PRE "bash -s" < ./postProvisioning.sh

Now you can access your VMs via Docker, Vagrant, or plain SSH. To finish our VM configuration, we are going to allow full root access to the VMs without requiring a password. For that, you need a public and a private SSH key on your local machine. If you haven't generated them before, simply use the following command:

ssh-keygen -t rsa

Now, using our Vagrant SSH access, copy the content of your ~/.ssh/id_rsa.pub into each VM's /root/.ssh/authorized_keys.
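
As a sketch, since we already have SSH access as the vagrant user, something like the following should do it (adjust to your own setup):

# Append your local public key to root's authorized_keys on both VMs
cat ~/.ssh/id_rsa.pub | ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_DEV \
  "sudo mkdir -p /root/.ssh && sudo tee -a /root/.ssh/authorized_keys > /dev/null"
cat ~/.ssh/id_rsa.pub | ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_PRE \
  "sudo mkdir -p /root/.ssh && sudo tee -a /root/.ssh/authorized_keys > /dev/null"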

Reference your production host as a Docker Machine

In this example, we are using a VPS from OVH with Ubuntu 15.04 and Docker pre-installed. These VPS start at €2.99 (around $3.50) per month and come with interesting features such as anti-DDoS and real-time monitoring.

A pre-installed VPS comes with OpenSSH access. Therefore, we will use the generic-ssh driver for our Docker Machine, just like we did for the Vagrant VMs for development and pre-production. And like before, we use 2 terminal sessions to overcome the Docker installation issue on Ubuntu 15.04.

In the first terminal session, we set up root SSH access without a password, like so:

ssh-copy-id root@$HOST_IP_PROD
# Now, you should check if your key is properly copied
ssh root@$HOST_IP_PROD "cat /root/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub
# These 2 last commands should return the exact same key

I've been tricked by an ssh-user-agent issue there. Docker wasn't reporting any issue, even in debug mode, and was just exiting with a default error code. So be careful that your public key is exactly the same on your local machine, your VMs, and your production host.

Next, and still in the same terminal session, we declare our production host:

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_PROD \
  --generic-ssh-user root \
  prod

And in the second terminal session, when the message Daemon not responding yet: dial tcp X.X.X.X:2376: connection refused appears in the first session, we launch:

ssh root@$HOST_IP_PROD "bash -s" < ./provisioning.sh

The last remaining step consists of solidifying our security by enabling a firewall on the host and removing old packages:

ssh root@$HOST_IP_PROD "bash -s" < ./postProvisioning.sh

Creating your own registry

Basically, what we want to achieve is a micro-services setup that sticks to a multi-tier architecture:

This architecture could then be spread over a Docker Swarm of multiple servers or kept on a single one. But playing with multiple containers in development quickly becomes a pain. We can leverage the power of Docker Compose and a local registry to speed up our development of Docker images.

In your first terminal session, activate your development Docker Machine:

eval "$(docker-machine env dev)"
# In Fish
eval (docker-machine env dev)

Create a Docker Compose file, registry.yml:

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
  volumes:
    - /var/lib/registry:/var/lib/registry

Now, we will use the development Docker Machine as our local registry:

ssh root@$HOST_IP_DEV "mkdir /var/lib/registry"
docker-compose -f registry.yml up -d
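
You can check that the registry is alive with the standard Registry v2 HTTP API; at this point it should report an empty catalog:

curl http://$HOST_IP_DEV:5000/v2/_catalog
# {"repositories":[]}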

To make it visible to our pre-production VM, we need to update our default firewall rules:

ssh root@$HOST_IP_DEV ufw allow 5000

Now we edit the /etc/default/docker configuration file on both our development and pre-production VMs, adding this insecure-registry flag:

# On 192.168.1.50 & 192.168.1.51, in /etc/default/docker, we add in DOCKER_OPTS:
--insecure-registry 192.168.1.50:5000

We need to restart our Docker daemons and relaunch the Docker registry on the development VM:

ssh root@$HOST_IP_DEV systemctl restart docker
ssh root@$HOST_IP_PRE systemctl restart docker
eval "$(docker-machine env dev)"

Our final step in registry management is to log your pre-production VM and your production server in to Docker Hub using your Docker credentials:

eval "$(docker-machine env pre)"
docker login
eval "$(docker-machine env prod)"
docker login

Note that our registry isn't published outside our LAN. This makes it unusable for our production host; this development chain uses Docker Hub for publishing your images. Exposing this private registry to the outside world would require some additional configuration to tighten its security, and a server with a publicly exposed IP. While you could rely solely on Docker Hub for publishing your images, pushing to and pulling from the outside world of your LAN are lengthy operations, though they have gotten lighter since Docker 1.6 and Docker Registry 2.

Building Mongo

Our mongo/Dockerfile is based on Mongo's official one. It adds to the picture the configuration of a small ReplicaSet for making the OPLOG available:

# Based on: https://github.com/docker-library/mongo/blob/d5aca073ca71a7023e0d4193bd14642c6950d454/3.0/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    apt-get upgrade -y -qq --no-install-recommends && \
    apt-get install -y -qq --no-install-recommends apt-utils && \
    apt-get install -y -qq --no-install-recommends \
      ca-certificates curl psmisc apt-utils && \
    apt-get autoremove -y -qq && \
    apt-get autoclean -y -qq && \
    rm -rf /var/lib/apt/lists/*

# Grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
    curl -sS -o /usr/local/bin/gosu -L "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" && \
    curl -sS -o /usr/local/bin/gosu.asc -L "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" && \
    gpg --verify /usr/local/bin/gosu.asc && \
    rm /usr/local/bin/gosu.asc && \
    chmod +x /usr/local/bin/gosu

# Install MongoDB
ENV MONGO_MAJOR 3.0
ENV MONGO_VERSION 3.0.6
RUN groupadd -r mongodb && \
    useradd -r -g mongodb mongodb && \
    apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 492EAFE8CD016A07919F1D2B9ECBEC467F0CEB10 && \
    echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/$MONGO_MAJOR main" > /etc/apt/sources.list.d/mongodb-org.list && \
    apt-get update && \
    apt-get install -y -qq --no-install-recommends \
      mongodb-org=$MONGO_VERSION \
      mongodb-org-server=$MONGO_VERSION \
      mongodb-org-shell=$MONGO_VERSION \
      mongodb-org-mongos=$MONGO_VERSION \
      mongodb-org-tools=$MONGO_VERSION && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/lib/mongodb && \
    mv /etc/mongod.conf /etc/mongod.conf.orig && \
    apt-get autoremove -y -qq && \
    apt-get autoclean -y -qq && \
    rm -rf /var/lib/apt/lists/* && \
    # Prepare environment for Mongo daemon: Use a Docker Volume container
    mkdir -p /db && chown -R mongodb:mongodb /db

# Launch Mongo
COPY mongod.conf /etc/mongod.conf
CMD ["gosu", "mongodb", "mongod", "-f", "/etc/mongod.conf"]

We need a configuration file, mongo/mongod.conf, for this Docker image to be built:

storage:
  dbPath: "/db"
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
    collectionConfig:
      blockCompressor: snappy
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
net:
  port: 27017
  wireObjectCheck : false
  unixDomainSocket:
    enabled : true

We could build this image and run it, but I prefer using a Docker Compose file. These files ease the process of building, running, and deploying your Docker images, acting as a project file when multiple Docker images are required to work together as an application. Here's the minimal docker/docker-compose.yml that we will enrich in the next steps of this tutorial:

# Persistence layer: Mongo
db:
  build: mongo
  volumes:
    - /var/db:/db
  expose:
    - "27017"

Before building or launching this Docker image, we need to prepare the volume on each host that receives and persists Mongo's data:

ssh root@$HOST_IP_DEV "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"
ssh root@$HOST_IP_PRE "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"
ssh root@$HOST_IP_PROD "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"

To build our Mongo Docker image:

docker-compose build db
# Or even faster, for building and running
docker-compose up -d db

And once it's running, initialize a single-instance ReplicaSet to make Oplog tailing available:

docker-compose run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"
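
If you want to double-check the ReplicaSet before moving on, the standard rs.status() Mongo shell helper works the same way:

docker-compose run --rm db mongo db:27017/admin --quiet --eval "rs.status();"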

Some useful commands while developing a container:

# Access to a container in interactive mode
docker run -ti -P docker_db

# Delete all stopped containers
docker rm $(docker ps -a -q)
# Delete all images that are not being used in a running container
docker rmi $(docker images -q)
# Delete all images that failed to build (untagged images)
docker rmi $(docker images -f "dangling=true" -q)

# In Fish
# Delete all stopped containers
docker rm (docker ps -a -q)
# Delete all images that are not being used in a running container
docker rmi (docker images -q)
# Delete all images that failed to build (dangling images)
docker rmi (docker images -f "dangling=true" -q)

Building Meteor

Meteor is fairly easy to build: it's a simple NodeJS app. We start by creating our docker/meteor/Dockerfile:

# Based on: https://github.com/joyent/docker-node/blob/master/0.10/wheezy/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    apt-get upgrade -qq -y --no-install-recommends && \
    apt-get install -qq -y --no-install-recommends \
      # CURL
      ca-certificates curl wget \
      # SCM
      bzr git mercurial openssh-client subversion \
      # Build
      build-essential && \
    apt-get autoremove -qq -y && \
    apt-get autoclean -qq -y && \
    rm -rf /var/lib/apt/lists/*

# Install NodeJS
ENV NODE_VERSION 0.10.40
ENV NPM_VERSION 3.3.12
RUN curl -sSLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" && \
    tar -xzf "node-v$NODE_VERSION-linux-x64.tar.gz" -C /usr/local --strip-components=1 && \
    rm "node-v$NODE_VERSION-linux-x64.tar.gz" && \
    npm install -g npm@"$NPM_VERSION" && \
    npm cache clear

# Add PM2 for process management
RUN npm install -g pm2

# Import sources
COPY bundle /app

# Install Meteor's dependencies
WORKDIR /app
RUN (cd programs/server && npm install)

# Launch application
COPY startMeteor.sh /app/startMeteor.sh
CMD ["./startMeteor.sh"]

Before building this Docker image, we need to prepare the volume on each host that receives the settings.json used for storing your Meteor secrets:

ssh root@$HOST_IP_DEV "mkdir /etc/meteor"
ssh root@$HOST_IP_PRE "mkdir /etc/meteor"
ssh root@$HOST_IP_PROD "mkdir /etc/meteor"

Now copy your settings.json files onto each host using regular SCP. Mine are slightly different depending on the target where I deploy my Meteor app.

# Just an example, adapt it to suit your needs
scp ../app/dev.json root@$HOST_IP_DEV:/etc/meteor/settings.json
scp ../app/dev.json root@$HOST_IP_PRE:/etc/meteor/settings.json
scp ../app/prod.json root@$HOST_IP_PROD:/etc/meteor/settings.json
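
For reference, a Meteor settings.json keeps client-visible values under the public key, while everything else stays server-side only. A minimal, purely hypothetical example (both keys and values are placeholders):

{
  "public": {
    "analyticsId": "UA-XXXXXXXX-1"
  },
  "someApiSecret": "only-visible-server-side"
}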

Note that we do not include our secrets in our code repository (thanks to a .gitignore file), nor in our Docker images.

We import our Meteor sources using a shared script, docker/buildMeteor.sh, for both the Meteor container and the NGinx container:

#!/bin/bash
rm -rf meteor/bundle nginx/bundle
cd ../app
meteor build --architecture os.linux.x86_64 --directory ../docker/meteor
cd -
cp -R meteor/bundle nginx

To avoid importing too many files into our Docker image, we create a docker/meteor/.dockerignore file, which removes the parts dedicated to the clients that will be served by NGinx:

bundle/README
bundle/packages/*/.build*
bundle/packages/*/.styl
bundle/*/*.md*
bundle/programs/web.browser/app

The last file we need is a script, docker/meteor/startMeteor.sh, that starts Meteor with the private settings we add as a specific volume:

#!/bin/bash
METEOR_SETTINGS=$(cat /etc/meteor/settings.json) pm2 start -s --no-daemon --no-vizion main.js

Note that we launch Meteor with PM2. As we will see, this is not a mandatory step, since we use Docker's restart policy in our Docker images. However, this process management utility can be used to get some metrics on NodeJS's status.
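
For instance, once the server container is up (Docker Compose will name it docker_server_1 later in this tutorial), you should be able to query PM2 inside it with its standard commands; a sketch:

# Assuming the running container is named docker_server_1
docker exec -it docker_server_1 pm2 list
docker exec -it docker_server_1 pm2 monit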

For building and launching, we extend our docker/docker-compose.yml file:

# Application server: NodeJS (Meteor)
server:
  build: meteor
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
    ROOT_URL: "https://192.168.1.50"
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"

To build and launch our Meteor Docker image:

docker-compose up -d db server

Building NGinx

Now it's our front container's turn to be created. Let's start with our docker/nginx/Dockerfile:

# Based on: https://github.com/nginxinc/docker-nginx/blob/master/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Add NGinx official repository
RUN apt-key adv --keyserver pgp.mit.edu --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
RUN echo "deb http://nginx.org/packages/debian/ wheezy nginx" >> /etc/apt/sources.list
ENV NGINX_VERSION 1.8.0-1~wheezy

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN groupadd -r www && \
    useradd -r -g www www && \
    apt-get update && \
    apt-get upgrade -qq -y --no-install-recommends && \
    apt-get install -qq -y --no-install-recommends \
      ca-certificates nginx=${NGINX_VERSION} && \
    apt-get autoremove -qq -y && \
    apt-get autoclean -qq -y && \
    rm -rf /var/lib/apt/lists/*

# Forward request and error logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

# Configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf host-specific /etc/nginx/conf/

# Mount points for volumes
RUN mkdir -p /etc/certs /var/cache/nginx /var/tmp

# Source
# Raw source files exposed as HTTP and HTTPS
COPY raw /www/
# Project files exposed as HTTPS
COPY  bundle/programs/web.browser/*.js \
      bundle/programs/web.browser/*.css \
      bundle/programs/web.browser/packages \
      bundle/programs/web.browser/app \
      /www/

# Ensure proper rights on static assets
RUN chown -R www:www /www /var/cache /var/tmp

# Launch NGinx
COPY startNginx.sh /startNginx.sh
RUN chmod u+x /startNginx.sh
CMD ["/startNginx.sh"]

As with the Meteor container, we use the same import script. This time, we remove the server part of our container using the same technique, in docker/nginx/.dockerignore:

bundle/README
bundle/packages/*/.build*
bundle/packages/*/.styl
bundle/*/*.md*
bundle/programs/server

To build it, we enhance our docker/docker-compose.yml file:

# Front layer, static file, SSL, proxy cache: NGinx
front:
  build: nginx
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"

Our NGinx requires certificates set up on the hosts in /etc/certs. For the production host, you need SSL certificates from a certificate authority known to the browser vendors. For the development and pre-production hosts, we can use self-signed certificates that we create on our hosts:

ssh root@$HOST_IP_DEV "mkdir -p /etc/certs; openssl req -nodes -new -x509 -keyout /etc/certs/server.key -out /etc/certs/server.crt -subj '/C=FR/ST=Paris/L=Paris/CN=$HOST_IP_DEV'"
ssh root@$HOST_IP_PRE "mkdir -p /etc/certs; openssl req -nodes -new -x509 -keyout /etc/certs/server.key -out /etc/certs/server.crt -subj '/C=FR/ST=Paris/L=Paris/CN=$HOST_IP_PRE'"

We need 2 additional volumes exposed on each host: one for NGinx's cache and another for NGinx's temporary files:

ssh root@$HOST_IP_DEV "mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"
ssh root@$HOST_IP_PRE "mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"
ssh root@$HOST_IP_PROD "mkdir -p /etc/certs; mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"

Into our Docker container, we have already imported the static parts of the Meteor application that will be exposed over HTTPS. Our NGinx server will also act as a static file server over plain HTTP: simply put your static assets into the docker/nginx/raw folder.

While there is no point in serving our Meteor application's files over HTTP, exposing some static assets without protection can be useful (this is, for instance, what some SSL certificate providers require during domain validation).
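
For example, assuming you drop a hypothetical validation file into docker/nginx/raw before building the front image, it becomes reachable over plain HTTP:

# docker/nginx/raw/proof.txt ends up as /www/proof.txt inside the image
curl http://$HOST_IP_DEV/proof.txt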

We now need the configuration files. This configuration is mostly forked and customized from the HTML5 Boilerplate NGinx server configs. I won't explain all of it, just the parts interesting for Meteor and required by our multi-host configuration. Our entry point is docker/nginx/nginx.conf:

# Run as a less privileged user for security reasons.
user www www;
# How many worker threads to run;
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes 1;
# Maximum open file descriptors per process;
# should be > worker_connections.
worker_rlimit_nofile 8192;
events {
  # When you need > 8000 * cpu_cores connections, you start optimizing your OS,
  # and this is probably the point at which you hire people who are smarter than
  # you, as this is *a lot* of requests.
  worker_connections 8000;
}
# Default error log file
# (this is only used when you don't override error_log on a server{} level)
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
# Main configuration
http {
  # Hide nginx version information.
  server_tokens off;
  # Proxy cache definition
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
  proxy_temp_path /var/tmp;
  # Define the MIME types for files.
  include conf/mimetypes.conf;
  default_type application/octet-stream;
  # Update charset_types due to updated mime.types
  charset_types text/xml text/plain text/vnd.wap.wml application/x-javascript application/rss+xml text/css application/javascript application/json;
  # Format to use in log files
  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
  # Default log file
  # (this is only used when you don't override access_log on a server{} level)
  access_log /var/log/nginx/access.log main;
  # How long to allow each connection to stay idle; longer values are better
  # for each individual client, particularly for SSL, but means that worker
  # connections are tied up longer. (Default: 65)
  keepalive_timeout 20;
  # Speed up file transfers by using sendfile() to copy directly
  # between descriptors rather than using read()/write().
  sendfile        on;
  # Tell Nginx not to send out partial frames; this increases throughput
  # since TCP frames are filled up before being sent out. (adds TCP_CORK)
  tcp_nopush      on;
  # GZip Compression
  include conf/gzip.conf;
  # Error pages redirections
  error_page 404 /404.html;
  error_page 500 502 503 504  /50x.html;
  # HTTP server
  server {
    # Server name
    include conf/servername.conf;
    # Protocol HTTP
    listen [::]:80 ipv6only=on;
    listen 80;
    # Static files with fallback to HTTPS redirect
    include conf/staticfile-with-fallback.conf;
    # Redirect non-SSL to SSL
    location @fallback {
      rewrite  ^ https://$server_name$request_uri? permanent;
    }
  }
  # Upstream server for the web application server and load balancing
  include conf/upstream-server-and-load-balancing.conf;
  # Upgrade proxy web-socket connections
  include conf/websocket-upgrade.conf;
  # HTTPS server
  server {
    # Server name
    include conf/servername.conf;
    # Protocols HTTPS, SSL, SPDY
    listen [::]:443 ipv6only=on ssl spdy;
    listen 443 ssl spdy;
    # SSL configuration
    include conf/ssl.conf;
    # SPDY configuration
    include conf/spdy.conf;
    # Static files with fallback to proxy server
    include conf/staticfile-with-fallback.conf;
    # Proxy pass to server node with websocket upgrade
    location @fallback {
      include conf/proxy-pass-and-cache.conf;
    }
  }
}

Depending on which host launches NGinx, we need a way to set the proper server name. For this, we create 3 files:

  • docker/nginx/host-specific/servername-dev.conf:
# Server name
server_name  192.168.1.50;
  • docker/nginx/host-specific/servername-pre.conf:
# Server name
server_name  192.168.1.51;
  • docker/nginx/host-specific/servername-prod.conf:
# Server name (the real FQDN of your production server)
server_name  example.org;

To serve the static files exposed over HTTP, we use a simple root declaration for the front and an @fallback location in case no file is found. This is declared in docker/nginx/conf/staticfile-with-fallback.conf:

# Serve static file and use a fallback otherwise
location / {
  charset utf-8;
  root /www;
  # Basic rules
  include conf/basic.conf;
  # Try static files and redirect otherwise
  try_files $uri @fallback;
  # Expiration rules
  include conf/expires.conf;
}

In the HTTP part of our main configuration, you can see that traffic is redirected to HTTPS via a URL rewrite. Our SSL configuration, docker/nginx/conf/ssl.conf, uses the /etc/certs Docker volume we exposed:

# SSL configuration
ssl on;
# SSL key paths
ssl_certificate /etc/certs/server.crt;
ssl_certificate_key /etc/certs/server.key;
# Trusted cert must be made up of your intermediate certificate followed by root certificate
# ssl_trusted_certificate /path/to/ca.crt;
# Optimize SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 1m;
# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default
# Protect against the BEAST and POODLE attacks by not using SSLv3 at all. If you need to support older browsers (IE6) you may need to add
# SSLv3 to the list of protocols below.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla (Intermediate Set)
# - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DES-CBC3-SHA:!ADH:!AECDH:!MD5;
ssl_prefer_server_ciphers on;
# OCSP stapling...
ssl_stapling on;
ssl_stapling_verify on;
# DNS resolution on Google's DNS and DynDNS
resolver 8.8.8.8 8.8.4.4 216.146.35.35 216.146.36.36 valid=60s;
resolver_timeout 2s;
# HSTS (HTTP Strict Transport Security)
# This header tells browsers to cache the certificate for a year and to connect exclusively via HTTPS.
add_header Strict-Transport-Security "max-age=31536000;";
# This version tells browsers to treat all subdomains the same as this site and to load exclusively over HTTPS
#add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
add_header X-Frame-Options DENY;

We also add SPDY to our HTTPS configuration, in docker/nginx/conf/spdy.conf:

# SPDY configuration
add_header Alternate-Protocol  443:npn-spdy/3;
# Adjust connection keepalive for SPDY clients:
spdy_keepalive_timeout 300; # up from 180 secs default
# enable SPDY header compression
spdy_headers_comp 9;

HTTP/2 support is on its way. Once it is integrated into NGinx, this configuration will be upgraded to leverage it.

With SSL and SPDY set up, we can serve the static files exposed over HTTPS with the same configuration as for HTTP. But this time, the fallback mechanism redirects traffic to our Meteor application (our server container): if no static file is found, traffic is sent to our Meteor app through a caching proxy, in docker/nginx/conf/proxy-pass-and-cache.conf:

proxy_http_version 1.1;
proxy_pass http://server;
proxy_headers_hash_max_size 1024;
proxy_headers_hash_bucket_size 128;
proxy_redirect off;
# Upgrade proxy web-socket connections
proxy_set_header Upgrade $http_upgrade; # allow websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forward-Proto http;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
proxy_set_header X-Nginx-Proxy true;
proxy_cache one;
proxy_cache_key prj$request_uri$scheme;
proxy_cache_bypass $http_upgrade;
# Expiration rules
if ($uri != '/') {
  expires 30d;
}

Our proxy cache needs to upgrade HTTPS connections to WSS. This is achieved in our docker/nginx/conf/upstream-server-and-load-balancing.conf:

# Upstream server for the web application server
upstream server {
  # server is included in each dynamic /etc/hosts by Docker
  server server:3000;
  # Load balancing could be done here, if required.
}

To point our NGinx at the appropriate configuration, we use a simple environment variable, HOST_TARGET, which can be dev, pre, or prod, and a script, docker/nginx/startNginx.sh, that uses this variable:

#!/bin/bash
if [ ! -f /etc/nginx/conf/servername.conf ]
then
  ln -s /etc/nginx/conf/servername-$HOST_TARGET.conf /etc/nginx/conf/servername.conf
fi
nginx -g "daemon off;"

As with the other containers before it, we build and launch it with:

docker-compose up -d

You should now have a complete development host.

Application logs

While starting, stopping, and refreshing our services, Docker generates a log for each container that you can easily access from the CLI:

docker-compose logs
# Or only for the db
docker-compose logs db
# Or only for the server
docker-compose logs server
# Or only for the server and the front...
docker-compose logs server front
# ...

As you can see, it may start to get a bit verbose. Still, you can inspect any single Docker container's log with tailing, like so:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                      NAMES
82a7489e41a0        docker_front        "/startNginx.sh"         4 hours ago         Up 4 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   docker_front_1
4b0656669213        docker_server       "./startMeteor.sh"       27 hours ago        Up 4 hours          3000/tcp                                   docker_server_1
fe6a7238328a        docker_db           "mongod -f /etc/mongo"   45 hours ago        Up 4 hours          27017/tcp                                  docker_db_1
1a878c646094        registry:2          "/bin/registry /etc/d"   46 hours ago        Up 46 hours         0.0.0.0:5000->5000/tcp                     docker_registry_1

$ docker logs --tail 4 -f docker_db_1
2015-09-03T12:20:49.298+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49051 #22 (18 connections now open)
2015-09-03T12:20:49.314+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49052 #23 (19 connections now open)
2015-09-03T12:20:49.315+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49053 #24 (20 connections now open)
2015-09-03T16:36:13.666+0000 I QUERY    [conn10] g...

Docker logs are not regular /var/log entries. They are specific to each of your containers. There is a significant risk of filling up your disks quickly, depending on your logging usage. Fortunately, since Docker 1.8, a specific log driver can be attached to our running containers. We use logrotate here, but you could set up a specific server for an ELK stack or any other logging solution you prefer. To configure logrotate on each of our hosts, add a new configuration for Docker:

ssh root@$HOST_IP_DEV "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
ssh root@$HOST_IP_PRE "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
ssh root@$HOST_IP_PROD "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
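
You can dry-run the rule with logrotate's debug flag, which parses the configuration and prints what would be rotated without touching anything:

ssh root@$HOST_IP_DEV "logrotate -d /etc/logrotate.d/docker"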

Now, we update docker/docker-compose.yml and set our Docker containers to use the json-file log driver, so that each container's log ends up in /var/lib/docker/containers/[CONTAINER_ID]/[CONTAINER_ID]-json.log, where the logrotate rule above picks it up:

# Persistence layer: Mongo
db:
  build: mongo
  log_driver: "json-file"
  volumes:
    - /var/db:/db
  expose:
    - "27017"
# Application server: NodeJS (Meteor)
server:
  build: meteor
  log_driver: "json-file"
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
    ROOT_URL: "https://192.168.1.50"
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  build: nginx
  log_driver: "json-file"
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"

要获取此新的日志记录配置,只需发出以下命令:

# This stops the current running containers
docker-compose stop
# This rebuilds all images
docker-compose build
# This starts all containers
docker-compose up -d

Pushing to your local registry

When you are satisfied with the development of your containers, you can save your Docker images into your local registry so that they can be deployed in pre-production.

For Mongo:

docker tag -f docker_db $HOST_IP_DEV:5000/mongo:v1.0.0
docker push $HOST_IP_DEV:5000/mongo:v1.0.0
docker tag -f docker_db $HOST_IP_DEV:5000/mongo:latest
docker push $HOST_IP_DEV:5000/mongo:latest

For Meteor:

docker tag -f docker_server $HOST_IP_DEV:5000/meteor:v1.0.0
docker push $HOST_IP_DEV:5000/meteor:v1.0.0
docker tag -f docker_server $HOST_IP_DEV:5000/meteor:latest
docker push $HOST_IP_DEV:5000/meteor:latest

For NGinx:

docker tag -f docker_front $HOST_IP_DEV:5000/nginx:v1.0.0
docker push $HOST_IP_DEV:5000/nginx:v1.0.0
docker tag -f docker_front $HOST_IP_DEV:5000/nginx:latest
docker push $HOST_IP_DEV:5000/nginx:latest
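
As the images land in the local registry, the same Registry v2 API as before lets you verify what it now holds:

# List repositories, then the tags of one of them
curl http://$HOST_IP_DEV:5000/v2/_catalog
curl http://$HOST_IP_DEV:5000/v2/mongo/tags/list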

Deploying in pre-production

For deployment, we refactor docker/docker-compose.yml a bit to avoid duplicating Docker Compose directives depending on the host you are playing with.

We create a docker/common.yml file that centralizes the values used for all hosts:

# Persistence layer: Mongo
db:
  build: mongo
  log_driver: "json-file"
  volumes:
    - /var/db:/db
  expose:
    - "27017"
# Application server: NodeJS (Meteor)
server:
  build: meteor
  log_driver: "json-file"
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  log_driver: "json-file"
  build: nginx
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"

Now, we can refactor docker/docker-compose.yml so that it only sets the remaining Docker directives required for development:

# Persistence layer: Mongo
db:
  extends:
    file: common.yml
    service: db
# Application server: NodeJS (Meteor)
server:
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://192.168.1.50"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"

Now, to ease deployment on the pre-production host, we reuse our common configuration in a docker/deploy-pre.yml file, which simplifies pulling and launching your services:

# Persistence layer: Mongo
db:
  image: 192.168.1.50:5000/mongo:v1.0.0
  extends:
    file: common.yml
    service: db
  restart: always
# Application server: NodeJS (Meteor)
server:
  image: 192.168.1.50:5000/meteor:v1.0.0
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://192.168.1.51"
  restart: always
# Front layer, static file, SSL, proxy cache: NGinx
front:
  image: 192.168.1.50:5000/nginx:v1.0.0
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "pre"
  restart: always

Point your Docker Machine at your pre-production host, launch your services, and make sure the ReplicaSet creation is applied:

eval "$(docker-machine env pre)"
docker-compose -f deploy-pre.yml up -d
docker-compose -f deploy-pre.yml run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"

Once you are satisfied with your containers, it's time to make them available to your production server.

Pushing to Docker Hub

Now we go back to our development host to publish these containers on the public Docker Hub:

eval "$(docker-machine env dev)"

We publish Mongo's container:

docker tag -f docker_db YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
docker tag -f docker_db YOUR_DOCKER_HUB_LOGIN/mongo:latest
docker push YOUR_DOCKER_HUB_LOGIN/mongo:latest

For Meteor:

docker tag -f docker_server YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
docker tag -f docker_server YOUR_DOCKER_HUB_LOGIN/meteor:latest
docker push YOUR_DOCKER_HUB_LOGIN/meteor:latest

For NGinx:

docker tag -f docker_front YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
docker tag -f docker_front YOUR_DOCKER_HUB_LOGIN/nginx:latest
docker push YOUR_DOCKER_HUB_LOGIN/nginx:latest

Deploying in production

As with the deployment in pre-production, we leverage Docker Compose to ease pulling and running our Docker containers. For this, we create a docker/deploy-prod.yml file:

# Persistence layer: Mongo
db:
  image: YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
  extends:
    file: common.yml
    service: db
  restart: always
# Application server: NodeJS (Meteor)
server:
  image: YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://YOUR_SITE_FQDN"
  restart: always
# Front layer, static file, SSL, proxy cache: NGinx
front:
  image: YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "prod"
  restart: always

Before running everything in production, we have to pull the images. This happens behind the scenes, so our users won't notice the change; then we stop the currently running containers, start our new ones, and finish with the ReplicaSet configuration:

eval "$(docker-machine env prod)"
docker-compose -f deploy-prod.yml pull
docker stop "$(docker ps -a -q)"
docker-compose -f deploy-prod.yml up -d
docker-compose -f deploy-prod.yml run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"

FAQ

When I use tap:i18n, why are my translation files unavailable?

Simply add the TAPi18n Bundler to your Meteor project.

Links

Sources of this tutorial:
  • The GitHub repository
  • The blog post

Information used for this tutorial:
  • Homebrew
  • Caskroom
  • Easily send your public SSH key to a remote server
  • Docker documentation
  • Docker installation on Ubuntu
  • Securing Docker
  • The dangers of UFW + Docker
  • OpenSSL Howto
  • Control and configure Docker with Systemd
  • How to configure Docker on Ubuntu 15.04 (workaround)
  • Ulexus/Meteor: a Docker container for Meteor
  • OVH's VPS SSD
  • Your Docker Hub account
  • Creating a single-instance MongoDB replica set for Meteor
  • jq, a lightweight and flexible command-line JSON processor
  • How to add environment variables to nginx.conf
  • MongoDB configuration options
  • MongoDB example YAML file
  • The magic of Meteor oplog tailing
  • Docker: containers for the masses - using Docker
  • How to create an SSL certificate on Nginx for Ubuntu 14.04
  • SSL and Meteor.js
  • The HTML5 Boilerplate NGinx server configs

Going further:
  • HTTP/2 and strong TLS deployment with nghttp2 and Nginx
  • HTTP/2.0 with Nginx and NGHTTP2
  • Secure MongoDB server on Docker
  • Docker with no root user, on the host and in containers

本文使用googletrans自动翻译,仅供参考, 原文来自github.com

en_README.md

Meteor Devops on OSX with Docker set for Ubuntu 15.04

Introduction

While using Meteor in development is an easy task and deploying it on Meteor's
infrastructure is a no brainer, things may start to get messy if you need to
deploy it, secure it and scale it on your cloud. Especially if your customer
imposes you a specific constraint on cloud sovereignty. The best way to
achieve easy deployment is using the excellent
Meteor Up tool. But if it fails or
if you need to go a bit further in your infrastructure deployment,
I recommend that you start using Docker to get
familiar with this handy DevOps tool.

I hope that this tutorial will lead you on the appropriate tracks.

Versions applied in this tutorial

As you may need to update this tutorial for your own DevOps use cases, here is
the complete list of versions used in this tutorial:

  • OSX 10.10.5 as the development platform
  • Ubuntu 15.04 as Docker host system
  • Debian Jessie 7 with latest updates as Docker container system
  • Docker 1.9.1
  • Docker Registry 2
  • Docker Machine 0.5.1
  • Docker Compose 1.5.1
  • VirtualBox 5.0.10
  • Vagrant 1.7.4
  • Meteor 1.1.0.3
  • NGinx 1.8.0-1
  • NodeJS 0.10.41
  • NPM 3.3.12
  • Mongo 3.0.6 - WiredTiger

Software architecture

Why Debian Jessie instead of Debian Wheezie? Simple, a gain of 30MB of
footprint. Note that we could have set this tutorial on other even smaller
Linux distributions for our Docker Images, like Alpine Linux. But as time of
this writing, these smaller distributions do not offer the package required
for installing Meteor (namely, MongoDB and node-fibers).

Installing the tooling

If you have never done it before install Homebrew and its plugin Caskroom.

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install caskroom/cask/brew-cask

Then install VirtualBox and Vagrant:

brew cask install virtualbox vagrant

Now install Docker and its tools:

brew install docker docker-machine docker-compose

For easing the access to VM and servers, we are using an SSH key installer:

brew install ssh-copy-id

For parsing and querying JSON produced by Docker, we are using ./jq:

brew install jq

Some file structure

For differentiating the Meteor project from the DevOps project, we
store our files like so:

.
├── app
└── docker

The app folder contains the root of Meteor sources and the docker
folder contains the root of DevOps sources.

Create your virtual machines as Docker Machine

Create a Vagrantfile that matches your production environment.
Here, we are using an Ubuntu 15.04 with Docker pre-installed.

hosts = {
  "dev" => "192.168.1.50",
  "pre" => "192.168.1.51"
}
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/vivid64"
  config.ssh.insert_key = false
  hosts.each do |name, ip|
    config.vm.define name do |vm|
      vm.vm.hostname = "%s.example.org" % name
      #vm.vm.network "private_network", ip: ip
      vm.vm.network "public_network", bridge: "en0: Wi-Fi (AirPort)", ip: ip
      vm.vm.provider "virtualbox" do |v|
        v.name = name
      end
      vm.vm.provision "shell", path: "provisioning.sh"
    end
  end
end

I've provided 2 network configurations here. The first one is a private network
leading to 2 virtual machines that are not accessible to your local network (
only your local OSX). The second bridges your local OSX network driver so that
your VMs gain public access within your LAN. Note that for both of these
network configurations, I've used static IPs.

Before creating our virtual machine, we need to setup a provisioning.sh:

#!/bin/bash
# Overriding bad Systemd default in Docker startup script
sudo mkdir -p /etc/systemd/system/docker.service.d
echo -e '[Service]\n# workaround to include default options\nEnvironmentFile=-/etc/default/docker\nExecStart=\nExecStart=/usr/bin/docker -d -H fd:// $DOCKER_OPTS' | sudo tee /etc/systemd/system/docker.service.d/ubuntu.conf
# Set Docker daemon with the following properties:
# * Daemon listen to external request and is exposed on port 2376, the default Docker port.
# * Docker uses the AUFS driver for file storage.
# * Daemon uses Docker's provided certification chain.
# * Dameon has a generic label.
# * Daemon is able to resolve DNS query using Google's DNS.
echo 'DOCKER_OPTS="-H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic --dns 8.8.8.8 --dns 8.8.4.4"'  | sudo tee /etc/default/docker
sudo systemctl daemon-reload
sudo systemctl restart docker
# Enable Docker on server reboot
sudo systemctl enable docker
# Remove and clean unused packages
sudo apt-get autoremove -y
sudo apt-get autoclean -y

Now, we are starting our virtual hosts and declare it as a Docker Machine:

vagrant up --no-provision

Throughout this terminal sessions, we need some environment variables.
We store them in a local_env.sh file that we fill step by step and source
each time we open a new terminal session:

export HOST_IP_DEV='192.168.1.50'
export HOST_IP_PRE='192.168.1.51'
# Use preferably your FQDN (example.org)
export HOST_IP_PROD='YOUR_SITE_FQDN'

If you are using Fish like me, use the following content:

set -x HOST_IP_DEV '192.168.1.50'
set -x HOST_IP_PRE '192.168.1.51'
# Use preferably your FQDN (example.org)
set -x HOST_IP_PROD 'YOUR_SITE_FQDN'

This should provide an easy access to all parts of the following network architecture:
Network architecture

Open 3 terminal sessions. In the first session, launch the following commands:

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_DEV \
  --generic-ssh-user vagrant \
  --generic-ssh-key ~/.vagrant.d/insecure_private_key \
  dev

In the second session, launch the following commands:

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_PRE \
  --generic-ssh-user vagrant \
  --generic-ssh-key ~/.vagrant.d/insecure_private_key \
  pre

Now, in the last session, wait for the 2 previous sessions to be blocked
on the following repeated message
Daemon not responding yet: dial tcp XX.XX.XX.XX:2376: connection refused
and issue the following command:

vagrant provision

What's going on here? Actually, the current state of Docker for Ubuntu 15.04
doesn't support DOCKER_OPTS. This is due to the transition in Ubuntu from
upstart to Systemd. Plus, when we are creating our Docker Machine in
our local OSX, Docker Machine re-install Docker on the host. Thus, we end up
with a screwed installation on the host unable to speak to the outside world
(leading to the message Daemon not responding yet: dial tcp 192.168.33.X:2376: connection refused).
Basically, the vagrant provisioning script patches both vagrant virtual servers.
You can reuse the content of this script on your production server when you
create the associated Docker Machine. For this, you can use the following command:

ssh root@$HOST_IP_PROD "bash -s" < ./provisioning.sh

In this last section, we will finish our configuration of our development and
pre-production hosts by installing Docker Machine and securing their open ports
with simple firewall rules. The script that we are using is named postProvisioning.sh.

#!/bin/bash
# Install Docker Machine
curl -L https://github.com/docker/machine/releases/download/v0.4.0/docker-machine_linux-amd64 | sudo tee /usr/local/bin/docker-machine > /dev/null
sudo chmod u+x /usr/local/bin/docker-machine

# Install Firewall
sudo apt-get install -y ufw
# Allow SSH
sudo ufw allow ssh
# Allow HTTP and WS
sudo ufw allow 80/tcp
# Allow HTTPS and WSS
sudo ufw allow 443/tcp
# Allow Docker daemon port and forwarding policy
sudo ufw allow 2376/tcp
sudo sed -i -e "s/^DEFAULT_FORWARD_POLICY=\"DROP\"/DEFAULT_FORWARD_POLICY=\"ACCEPT\"/" /etc/default/ufw
# Enable and reload
yes | sudo ufw enable
sudo ufw reload

We execute this script on both VM using simple SSH commands like so:

ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_DEV "bash -s" < ./postProvisioning.sh
ssh -i ~/.vagrant.d/insecure_private_key vagrant@$HOST_IP_PRE "bash -s" < ./postProvisioning.sh

Now you can access your VM either via Docker, Vagrant and plain SSH. To finish
our VM configuration, we are going to allow full root access to the VM without
requiring to use password. For that, you need a public and a private SSH keys
on your local machine. If you haven't done it before simply use the following
command:

ssh-keygen -t rsa

Now, using Vagrant, copy the content of your ~/.ssh/id_rsa.pub in each of the
VM's /root/.ssh/authorized_key.

Reference your production host as a Docker Machine

In this example, we are using a VPS from OVH with a pre-installed Ubuntu 15.04
with Docker. These VPS starts at 2.99€ (around $3.5) per month and comes with
interesting features such as Anti-DDos, real time monitoring, ...

Preinstalled VPS comes with an OpenSSH access. Therefore, we will be using
the generic-ssh driver for our Docker Machine just like we did for the
Vagrant VM for development and pre-production. And like before, we are using
2 terminal sessions to overcome the Docker installation issue on Ubuntu 15.04.

In the first terminal session, we setup a root SSH access without password like so:

ssh-copy-id root@$HOST_IP_PROD
# Now, you should check if your key is properly copied
ssh root@$HOST_IP_PROD "cat /root/.ssh/authorized_keys"
cat ~/.ssh/id_rsa.pub
# These 2 last commands should return the exact same key

I've been tricked by some ssh-user-agent issue there. Docker wasn't reporting
any issue even in debug mode and was just exiting with a default error code.
So, be careful that your public key is exactly the same on your local machine,
your VM and your production host.

Next and still on the same terminal session, we declare our production host :

docker-machine -D create -d generic \
  --generic-ip-address $HOST_IP_PROD \
  --generic-ssh-user root \
  prod

And on the second terminal session, when the message
Daemon not responding yet: dial tcp X.X.X.X:2376: connection refused appears
on the first session, we launch:

ssh root@$HOST_IP_PROD "bash -s" < ./provisioning.sh

The last remaining step consists into solidifying our security by enabling
a firewall on the host and removing the old packages:

ssh root@$HOST_IP_PROD "bash -s" < ./postProvisioning.sh

Creating your own registry

Basically, what we want to achieve is micro-services oriented to stick to
a multi-tiers architecture:
Docker architecture

This architecture could then be spread over a Docker Swarm of multiple servers
or kept on a single one. But playing with multiple containers in development
is quickly a pain. We can leverage the power of Docker Compose and a local
registry to fasten our development of Docker images.

In your first terminal session, activate your development Docker Machine:

eval "$(docker-machine env dev)"
# In Fish
eval (docker-machine env dev)

Create a Docker Compose file registry.yml:

registry:
  restart: always
  image: registry:2
  ports:
    - 5000:5000
  environment:
    REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
  volumes:
    - /var/lib/registry:/var/lib/registry

Now, we will use the development Docker Machine as our local registry:

ssh root@$HOST_IP_DEV "mkdir /var/lib/registry"
docker-compose -f registry.yml up -d

For making it visible to our preproduction VM, we need to update our default
firewall rules:

ssh root@$HOST_IP_DEV ufw allow 5000

Now we are editing our /etc/default/docker configuration file for adding this
insecure registry in both our development and preproduction VM with this new
flag:

# On 192.168.1.50 & 192.168.1.51, in /etc/default/docker, we add in DOCKER_OPTS:
--insecure-registry 192.168.1.50:5000

We need to restart our Docker daemon and restart the Docker registry on the
development VM:

ssh root@$HOST_IP_DEV systemctl restart docker
ssh root@$HOST_IP_PRE systemctl restart docker
eval "$(docker-machine env dev)"

Our final step in the registry management is to login your preproduction VM and
your production server to Docker Hub using your Docker credential.

eval "$(docker-machine env pre)"
docker login
eval "$(docker-machine env prod)"
docker login

Note that our registry isn't published outside our LAN. This makes it unusable
for our production host. This development chain uses Docker Hub for publishing
your images. Exposing this private registry to the outside world would require
some additional configurations to tighten its security and server with a
publicly exposed IP. While you could solely rely on Docker Hub for publishing
your images, pushing and pulling to the outside world of your LAN are lengthy
operations though lighten since Docker 1.6 and Docker Registry 2.

Building Mongo

Our mongo/Dockerfile is based on Mongo's official one. It adds to the
picture the configuration of small ReplicaSet for making OPLOG available:

# Based on: https://github.com/docker-library/mongo/blob/d5aca073ca71a7023e0d4193bd14642c6950d454/3.0/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    apt-get upgrade -y -qq --no-install-recommends && \
    apt-get install -y -qq --no-install-recommends apt-utils && \
    apt-get install -y -qq --no-install-recommends \
      ca-certificates curl psmisc apt-utils && \
    apt-get autoremove -y -qq && \
    apt-get autoclean -y -qq && \
    rm -rf /var/lib/apt/lists/*

# Grab gosu for easy step-down from root
RUN gpg --keyserver ha.pool.sks-keyservers.net --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && \
    curl -sS -o /usr/local/bin/gosu -L "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture)" && \
    curl -sS -o /usr/local/bin/gosu.asc -L "https://github.com/tianon/gosu/releases/download/1.2/gosu-$(dpkg --print-architecture).asc" && \
    gpg --verify /usr/local/bin/gosu.asc && \
    rm /usr/local/bin/gosu.asc && \
    chmod +x /usr/local/bin/gosu

# Install MongoDB
ENV MONGO_MAJOR 3.0
ENV MONGO_VERSION 3.0.6
RUN groupadd -r mongodb && \
    useradd -r -g mongodb mongodb && \
    apt-key adv --keyserver ha.pool.sks-keyservers.net --recv-keys 492EAFE8CD016A07919F1D2B9ECBEC467F0CEB10 && \
    echo "deb http://repo.mongodb.org/apt/debian wheezy/mongodb-org/$MONGO_MAJOR main" > /etc/apt/sources.list.d/mongodb-org.list && \
    apt-get update && \
    apt-get install -y -qq --no-install-recommends \
      mongodb-org=$MONGO_VERSION \
      mongodb-org-server=$MONGO_VERSION \
      mongodb-org-shell=$MONGO_VERSION \
      mongodb-org-mongos=$MONGO_VERSION \
      mongodb-org-tools=$MONGO_VERSION && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/lib/mongodb && \
    mv /etc/mongod.conf /etc/mongod.conf.orig && \
    apt-get autoremove -y -qq && \
    apt-get autoclean -y -qq && \
    rm -rf /var/lib/apt/lists/* && \
    # Prepare environment for Mongo daemon: Use a Docker Volume container
    mkdir -p /db && chown -R mongodb:mongodb /db

# Launch Mongo
COPY mongod.conf /etc/mongod.conf
CMD ["gosu", "mongodb", "mongod", "-f", "/etc/mongod.conf"]

We need a configuration file for this Docker image to be built mongo/mongod.conf:

storage:
  dbPath: "/db"
  engine: "wiredTiger"
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1
    collectionConfig:
      blockCompressor: snappy
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
net:
  port: 27017
  wireObjectCheck : false
  unixDomainSocket:
    enabled : true

We could build this image and run it, but I prefer using a Docker Compose file.
These file eases the process of build, run and deploys of your Docker images
acting as a project file when multiple Docker images are required to work
together for an application. Here's the minimal docker/docker-compose.yml
that we will enrich in the next steps of this tutorial:

# Persistence layer: Mongo
db:
  build: mongo
  volumes:
    - /var/db:/db
  expose:
    - "27017"

Before building or launching this Docker image, we need to prepare the
volume on each host that receives and persists Mongo's data:

ssh root@$HOST_IP_DEV "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"
ssh root@$HOST_IP_PRE "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"
ssh root@$HOST_IP_PROD "rm -rf /var/db; mkdir /var/db; chmod go+w /var/db"

For building our Mongo Docker image:

docker-compose build db
# Or even faster, for building and running
docker-compose up -d db

And once it's running, initialize a single instance ReplicaSet for making
Oplog tailing available:

docker-compose run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"

Some useful commands while developing a container:

# Access to a container in interactive mode
docker run -ti -P docker_db

# Delete all stopped containers
docker rm $(docker ps -a -q)
# Delete all images that are not being used in a running container
docker rmi $(docker images -q)
# Delete all images that failed to build (untagged images)
docker rmi $(docker images -f "dangling=true" -q)

# In Fish
# Delete all stopped containers
docker rm (docker ps -a -q)
# Delete all images that are not being used in a running container
docker rmi (docker images -q)
# Delete all images that failed to build (dangling images)
docker rmi (docker images -f "dangling=true" -q)

Building Meteor

Meteor is fairly easy to build. It's a simple NodeJS app. We start by creating
our docker/meteor/Dockerfile:

# Based on: https://github.com/joyent/docker-node/blob/master/0.10/wheezy/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
    apt-get upgrade -qq -y --no-install-recommends && \
    apt-get install -qq -y --no-install-recommends \
      # CURL
      ca-certificates curl wget \
      # SCM
      bzr git mercurial openssh-client subversion \
      # Build
      build-essential && \
    apt-get autoremove -qq -y && \
    apt-get autoclean -qq -y && \
    rm -rf /var/lib/apt/lists/*

# Install NodeJS
ENV NODE_VERSION 0.10.40
ENV NPM_VERSION 3.3.12
RUN curl -sSLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.gz" && \
    tar -xzf "node-v$NODE_VERSION-linux-x64.tar.gz" -C /usr/local --strip-components=1 && \
    rm "node-v$NODE_VERSION-linux-x64.tar.gz" && \
    npm install -g npm@"$NPM_VERSION" && \
    npm cache clear

# Add PM2 for process management
RUN npm install -g pm2

# Import sources
COPY bundle /app

# Install Meteor's dependencies
WORKDIR /app
RUN (cd programs/server && npm install)

# Launch application
COPY startMeteor.sh /app/startMeteor.sh
CMD ["./startMeteor.sh"]

Before building this Docker image, we need to prepare the
volume on each host that receives its settings.json used for storing your
secrets in Meteor:

ssh root@$HOST_IP_DEV "mkdir /etc/meteor"
ssh root@$HOST_IP_PRE "mkdir /etc/meteor"
ssh root@$HOST_IP_PROD "mkdir /etc/meteor"

Now copy your settings.json files on each hosts using a regular SCP. Mine
are slightly different depending on the target where I deploy my Meteor apps.

# Just an exammple, adapt it to suit your needs
scp ../app/dev.json root@$HOST_IP_DEV:/etc/meteor/settings.json
scp ../app/dev.json root@$HOST_IP_DEV:/etc/meteor/settings.json
scp ../app/prod.json root@$HOST_IP_DEV:/etc/meteor/settings.json

Note that we do not include our secrets, nor in our code repository by
using a .gitgignore file, nor in our Docker Images.

We import our Meteor sources using a shared script docker/buildMeteor.sh for
the Meteor container and the NGinx container:

#!/bin/bash
rm -rf meteor/bundle nginx/bundle
cd ../app
meteor build --architecture os.linux.x86_64 --directory ../docker/meteor
cd -
cp -R meteor/bundle nginx

In order to avoid importing too much files in our Docker image, we create
a docker/meteor/.dockerignore file which removes the parts dedicated to
the clients wich will be serverd by NGinx:

bundle/README
bundle/packages/*/.build*
bundle/packages/*/.styl
bundle/*/*.md*
bundle/programs/web.browser/app

Our last required file is a script docker/meteor/startMeteor.sh for starting
Meteor with the private settings that we add as a specific volume:

#!/bin/bash
METEOR_SETTINGS=$(cat /etc/meteor/settings.json) pm2 start -s --no-daemon --no-vizion main.js

Note that we launch Meteor with PM2. As we
will see it, it's not a mandatory step as we are using Docker's restart
policy in our Docker images. However this process management utility could be
used to get some metrics on NodeJS's status.

For building and launching, we are extending our /docker/docker-compose.yml file:

# Application server: NodeJS (Meteor)
server:
  build: meteor
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
    ROOT_URL: "https://192.168.1.50"
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"

For building and launching our Meteor Docker image:

docker-compose up -d db server
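
You can check that both containers came up properly before moving on:

docker-compose ps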

Building NGinx

Now it's the front container's turn to be created. Let's start with our docker/nginx/Dockerfile:

# Based on: https://github.com/nginxinc/docker-nginx/blob/master/Dockerfile
FROM debian:wheezy
MAINTAINER Pierre-Eric Marchandet <YOUR_DOCKER_HUB_LOGIN@gmail.com>

# Add NGinx official repository
RUN apt-key adv --keyserver pgp.mit.edu --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
RUN echo "deb http://nginx.org/packages/debian/ wheezy nginx" >> /etc/apt/sources.list
ENV NGINX_VERSION 1.8.0-1~wheezy

# Update system
ENV DEBIAN_FRONTEND noninteractive
RUN groupadd -r www && \
    useradd -r -g www www && \
    apt-get update && \
    apt-get upgrade -qq -y --no-install-recommends && \
    apt-get install -qq -y --no-install-recommends \
      ca-certificates nginx=${NGINX_VERSION} && \
    apt-get autoremove -qq -y && \
    apt-get autoclean -qq -y && \
    rm -rf /var/lib/apt/lists/*

# Forward request and error logs to Docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

# Configuration files
COPY nginx.conf /etc/nginx/nginx.conf
COPY conf host-specific /etc/nginx/conf/

# Mount points for volumes
RUN mkdir -p /etc/certs /var/cache/nginx /var/tmp

# Source
# Raw source files exposed as HTTP and HTTPS
COPY raw /www/
# Project files exposed as HTTPS
COPY  bundle/programs/web.browser/*.js \
      bundle/programs/web.browser/*.css \
      bundle/programs/web.browser/packages \
      bundle/programs/web.browser/app \
      /www/

# Ensure proper rights on static assets
RUN chown -R www:www /www /var/cache /var/tmp

# Launch NGinx
COPY startNginx.sh /startNginx.sh
RUN chmod u+x /startNginx.sh
CMD ["/startNginx.sh"]

Like the Meteor container, we use the same import script. This time, we
exclude the server part from our container using the same technique in
docker/nginx/.dockerignore:

bundle/README
bundle/packages/*/.build*
bundle/packages/*/.styl
bundle/*/*.md*
bundle/programs/server

For building it, we enhance our docker/docker-compose.yml file:

# Front layer, static file, SSL, proxy cache: NGinx
front:
  build: nginx
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"

Our NGinx requires certificates set on the hosts in /etc/certs. For the
production host, you need SSL certificates issued by a certificate authority
known to the browser vendors. For the development and pre-production hosts,
we can use self-signed certificates that we create on our hosts:

ssh root@$HOST_IP_DEV "mkdir -p /etc/certs; openssl req -nodes -new -x509 -keyout /etc/certs/server.key -out /etc/certs/server.crt -subj '/C=FR/ST=Paris/L=Paris/CN=$HOST_IP_DEV'"
ssh root@$HOST_IP_PRE "mkdir -p /etc/certs; openssl req -nodes -new -x509 -keyout /etc/certs/server.key -out /etc/certs/server.crt -subj '/C=FR/ST=Paris/L=Paris/CN=$HOST_IP_PRE'"

We need 2 additional volumes exposed on each host, one for NGinx's cache and
another one for NGinx temporary files:

ssh root@$HOST_IP_DEV "mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"
ssh root@$HOST_IP_PRE "mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"
ssh root@$HOST_IP_PROD "mkdir -p /etc/certs; mkdir /var/cache; chmod go+w /var/cache; mkdir /var/tmp; chmod go+w /var/tmp"

In our Docker container, we have already imported the static part of our Meteor
app that will be exposed through HTTPS. Our NGinx server will also act as a
static file server over HTTP. Simply put your static assets in the
docker/nginx/raw folder for that.

While serving our Meteor application over plain HTTP has no interest, it can be
useful to expose some static assets without protection (this is sometimes
required by SSL certificate providers).

We now need the configuration files for our front. This configuration is mostly
forked and customized from HTML5 Boilerplate for NGinx servers.
I will not explain all of them, only the interesting parts that Meteor
and our multi-host configuration require. Our entry point is docker/nginx/nginx.conf.

# Run as a less privileged user for security reasons.
user www www;
# How many worker threads to run;
# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes 1;
# Maximum open file descriptors per process;
# should be > worker_connections.
worker_rlimit_nofile 8192;
events {
  # When you need > 8000 * cpu_cores connections, you start optimizing your OS,
  # and this is probably the point at which you hire people who are smarter than
  # you, as this is *a lot* of requests.
  worker_connections 8000;
}
# Default error log file
# (this is only used when you don't override error_log on a server{} level)
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
# Main configuration
http {
  # Hide nginx version information.
  server_tokens off;
  # Proxy cache definition
  proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:8m max_size=3000m inactive=600m;
  proxy_temp_path /var/tmp;
  # Define the MIME types for files.
  include conf/mimetypes.conf;
  default_type application/octet-stream;
  # Update charset_types due to updated mime.types
  charset_types text/xml text/plain text/vnd.wap.wml application/x-javascript application/rss+xml text/css application/javascript application/json;
  # Format to use in log files
  log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
  # Default log file
  # (this is only used when you don't override access_log on a server{} level)
  access_log /var/log/nginx/access.log main;
  # How long to allow each connection to stay idle; longer values are better
  # for each individual client, particularly for SSL, but means that worker
  # connections are tied up longer. (Default: 65)
  keepalive_timeout 20;
  # Speed up file transfers by using sendfile() to copy directly
  # between descriptors rather than using read()/write().
  sendfile        on;
  # Tell Nginx not to send out partial frames; this increases throughput
  # since TCP frames are filled up before being sent out. (adds TCP_CORK)
  tcp_nopush      on;
  # GZip Compression
  include conf/gzip.conf;
  # Error pages redirections
  error_page 404 /404.html;
  error_page 500 502 503 504  /50x.html;
  # HTTP server
  server {
    # Server name
    include conf/servername.conf;
    # Protocol HTTP
    listen [::]:80 ipv6only=on;
    listen 80;
    # Static files with fallback to HTTPS redirect
    include conf/staticfile-with-fallback.conf;
    # Redirect non-SSL to SSL
    location @fallback {
      rewrite  ^ https://$server_name$request_uri? permanent;
    }
  }
  # Upstream server for the web application server and load balancing
  include conf/upstream-server-and-load-balancing.conf;
  # Upgrade proxy web-socket connections
  include conf/websocket-upgrade.conf;
  # HTTPS server
  server {
    # Server name
    include conf/servername.conf;
    # Protocols HTTPS, SSL, SPDY
    listen [::]:443 ipv6only=on ssl spdy;
    listen 443 ssl spdy;
    # SSL configuration
    include conf/ssl.conf;
    # SPDY configuration
    include conf/spdy.conf;
    # Static files with fallback to proxy server
    include conf/staticfile-with-fallback.conf;
    # Proxy pass to server node with websocket upgrade
    location @fallback {
      include conf/proxy-pass-and-cache.conf;
    }
  }
}

Depending on which host launches NGinx, we need a method to set a proper
server name. For this, we create 3 files:

  • docker/nginx/host-specific/servername-dev.conf:
# Server name
server_name  192.168.1.50;
  • docker/nginx/host-specific/servername-pre.conf:
# Server name
server_name  192.168.1.51;
  • docker/nginx/host-specific/servername-prod.conf:
# Server name (the real FQDN of your production server)
server_name  example.org;

For accessing the static files exposed over HTTP, we simply declare the root
of the front and use a @fallback location in case no file has been found.
This is declared in docker/nginx/conf/staticfile-with-fallback.conf:

# Serve static file and use a fallback otherwise
location / {
  charset utf-8;
  root /www;
  # Basic rules
  include conf/basic.conf;
  # Try static files and redirect otherwise
  try_files $uri @fallback;
  # Expiration rules
  include conf/expires.conf;
}

In the HTTP part of our main configuration, you can see that the traffic is
redirected to HTTPS via a URL rewriting technique. Our SSL configuration
docker/nginx/conf/ssl.conf uses the exposed Docker volume /etc/certs:

# SSL configuration
ssl on;
# SSL key paths
ssl_certificate /etc/certs/server.crt;
ssl_certificate_key /etc/certs/server.key;
# Trusted cert must be made up of your intermediate certificate followed by root certificate
# ssl_trusted_certificate /path/to/ca.crt;
# Optimize SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout 1m;
# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default
# Protect against the BEAST and POODLE attacks by not using SSLv3 at all. If you need to support older browsers (IE6) you may need to add
# SSLv3 to the list of protocols below.
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
# Ciphers set to best allow protection from Beast, while providing forwarding secrecy, as defined by Mozilla (Intermediate Set)
# - https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DES-CBC3-SHA:!ADH:!AECDH:!MD5;
ssl_prefer_server_ciphers on;
# OCSP stapling...
ssl_stapling on;
ssl_stapling_verify on;
# DNS resolution on Google's DNS and DynDNS
resolver 8.8.8.8 8.8.4.4 216.146.35.35 216.146.36.36 valid=60s;
resolver_timeout 2s;
# HSTS (HTTP Strict Transport Security)
# This header tells browsers to cache the certificate for a year and to connect exclusively via HTTPS.
add_header Strict-Transport-Security "max-age=31536000;";
# This version tells browsers to treat all subdomains the same as this site and to load exclusively over HTTPS
#add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
add_header X-Frame-Options DENY;

We have also added SPDY to our HTTPS configuration in the docker/nginx/conf/spdy.conf:

# SPDY configuration
add_header Alternate-Protocol  443:npn-spdy/3;
# Adjust connection keepalive for SPDY clients:
spdy_keepalive_timeout 300; # up from 180 secs default
# enable SPDY header compression
spdy_headers_comp 9;

HTTP/2 support is on its way. When integrated to NGinx, this configuration will
be upgraded for taking advantage of it.

Now that SSL and SPDY are set, we can serve the static files exposed via HTTPS
with the same configuration as before for HTTP. But this time, the fallback
mechanism redirects the traffic to our Meteor application (our server container).
If no static file is found, the traffic is sent to our Meteor application using
a proxy with cache, configured in docker/nginx/conf/proxy-pass-and-cache.conf:

proxy_http_version 1.1;
proxy_pass http://server;
proxy_headers_hash_max_size 1024;
proxy_headers_hash_bucket_size 128;
proxy_redirect off;
# Upgrade proxy web-socket connections
proxy_set_header Upgrade $http_upgrade; # allow websockets
proxy_set_header Connection $connection_upgrade;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto http;
proxy_set_header X-Nginx-Proxy true;
proxy_cache one;
proxy_cache_key prj$request_uri$scheme;
proxy_cache_bypass $http_upgrade;
# Expiration rules
if ($uri != '/') {
  expires 30d;
}

The upstream that our proxy forwards requests to is defined in our
docker/nginx/conf/upstream-server-and-load-balancing.conf:

# Upstream server for the web application server
upstream server {
  # server is included in each dynamic /etc/hosts by Docker
  server server:3000;
  # Load balancing could be done here, if required.
}
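
The actual upgrade of proxied connections to WebSockets (WSS over HTTPS) relies
on the $connection_upgrade variable used in our proxy headers. It comes from the
conf/websocket-upgrade.conf file included in the main configuration; a minimal
version of it is the classic NGinx map:

# Map the Upgrade request header to the Connection header value
map $http_upgrade $connection_upgrade {
  default upgrade;
  ''      close;
}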

To point our NGinx at the appropriate configuration, we use a simple
environment variable HOST_TARGET that can be dev, pre or prod, and a
script docker/nginx/startNginx.sh that uses this variable:

#!/bin/bash
if [ ! -f /etc/nginx/conf/servername.conf ]
then
  ln -s /etc/nginx/conf/servername-$HOST_TARGET.conf /etc/nginx/conf/servername.conf
fi
nginx -g "daemon off;"

Like before for the other containers, we build it and launch it with:

docker-compose up -d

You should now have a full development host.
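
To quickly check the front from OSX, probe both entry points (-k skips the
verification of our self-signed certificate):

curl -I http://192.168.1.50
curl -kI https://192.168.1.50

The first request should answer with a permanent redirect to HTTPS, the second
with your Meteor application.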

Application logging

When launching, stopping, or refreshing our services, Docker produces a log
for each container that you can easily access from your CLI:

docker-compose logs
# Or only for the db
docker-compose logs db
# Or only for the server
docker-compose logs server
# Or only for the server and the front...
docker-compose logs server front
# ...

As you can see, this can quickly get a bit verbose. Still, you can inspect
any Docker container log with a tail like this:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                      NAMES
82a7489e41a0        docker_front        "/startNginx.sh"         4 hours ago         Up 4 hours          0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   docker_front_1
4b0656669213        docker_server       "./startMeteor.sh"       27 hours ago        Up 4 hours          3000/tcp                                   docker_server_1
fe6a7238328a        docker_db           "mongod -f /etc/mongo"   45 hours ago        Up 4 hours          27017/tcp                                  docker_db_1
1a878c646094        registry:2          "/bin/registry /etc/d"   46 hours ago        Up 46 hours         0.0.0.0:5000->5000/tcp                     docker_registry_1

$ docker logs --tail 4 -f docker_db_1
2015-09-03T12:20:49.298+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49051 #22 (18 connections now open)
2015-09-03T12:20:49.314+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49052 #23 (19 connections now open)
2015-09-03T12:20:49.315+0000 I NETWORK  [initandlisten] connection accepted from 172.17.0.64:49053 #24 (20 connections now open)
2015-09-03T16:36:13.666+0000 I QUERY    [conn10] g...

Docker logs are not regular /var/log entries. They are specific to each one of
your containers. There is a real risk of filling up your disk pretty fast
depending on your log usage. Fortunately, since Docker 1.8, a specific log
driver can be attached to your running containers. We are using logrotate here,
but you could set up a dedicated server for an ELK stack
or any other of your favorite logging solutions. To configure logrotate on
each host, add a new configuration for Docker:

ssh root@$HOST_IP_DEV "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
ssh root@$HOST_IP_PRE "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
ssh root@$HOST_IP_PROD "echo -e '/var/lib/docker/containers/*/*.log {  \n  rotate 7\n  daily\n  compress\n  size=1M\n  missingok\n  delaycompress\n  copytruncate\n}' > /etc/logrotate.d/docker"
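
You can dry-run the new configuration to make sure logrotate parses it
correctly (-d only debugs, it does not rotate anything):

ssh root@$HOST_IP_DEV "logrotate -d /etc/logrotate.d/docker"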

Now, we update our docker/docker-compose.yml and explicitly set our Docker
containers to use the json-file log driver, whose files live in
/var/lib/docker/containers/[CONTAINER_ID]/[CONTAINER_ID]-json.log and are now
rotated by logrotate:

# Persistence layer: Mongo
db:
  build: mongo
  log_driver: "json-file"
  volumes:
    - /var/db:/db
  expose:
    - "27017"
# Application server: NodeJS (Meteor)
server:
  build: meteor
  log_driver: "json-file"
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
    ROOT_URL: "https://192.168.1.50"
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  build: nginx
  log_driver: "json-file"
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"
  log_driver: "json-file"

To apply this new logging configuration, just issue the following commands:

# This stops the current running containers
docker-compose stop
# This rebuilds all images
docker-compose build
# This starts all containers
docker-compose up -d
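
Once restarted, you can confirm which log driver a container uses:

docker inspect -f '{{.HostConfig.LogConfig.Type}}' docker_front_1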

Push to your local registry

When you are satisfied with the development of your containers, you can save
your Docker images into your local registry before deploying them to pre-production.

For Mongo:

docker tag -f docker_db $HOST_IP_DEV:5000/mongo:v1.0.0
docker push $HOST_IP_DEV:5000/mongo:v1.0.0
docker tag -f docker_db $HOST_IP_DEV:5000/mongo:latest
docker push $HOST_IP_DEV:5000/mongo:latest

For Meteor:

docker tag -f docker_server $HOST_IP_DEV:5000/meteor:v1.0.0
docker push $HOST_IP_DEV:5000/meteor:v1.0.0
docker tag -f docker_server $HOST_IP_DEV:5000/meteor:latest
docker push $HOST_IP_DEV:5000/meteor:latest

For NGinx:

docker tag -f docker_front $HOST_IP_DEV:5000/nginx:v1.0.0
docker push $HOST_IP_DEV:5000/nginx:v1.0.0
docker tag -f docker_front $HOST_IP_DEV:5000/nginx:latest
docker push $HOST_IP_DEV:5000/nginx:latest
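
The registry exposes an HTTP API that lets you check that your images and tags
were pushed correctly (adapt the scheme if your registry is served over TLS):

curl http://$HOST_IP_DEV:5000/v2/_catalog
curl http://$HOST_IP_DEV:5000/v2/mongo/tags/list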

Deployment in pre-production

For deploying to pre-production, we are going to refactor our docker/docker-compose.yml
a bit to avoid repeating ourselves in the Docker Compose files for each host
you're targeting.

We create a docker/common.yml file which centralizes the values used for all hosts:

# Persistence layer: Mongo
db:
  build: mongo
  log_driver: "json-file"
  volumes:
    - /var/db:/db
  expose:
    - "27017"
# Application server: NodeJS (Meteor)
server:
  build: meteor
  log_driver: "json-file"
  environment:
    MONGO_URL: "mongodb://db:27017"
    MONGO_OPLOG_URL: "mongodb://db:27017/local"
    PORT: 3000
  volumes:
    - /etc/meteor:/etc/meteor
  expose:
    - "3000"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  log_driver: "json-file"
  build: nginx
  volumes:
    - /etc/certs:/etc/certs
    - /var/cache:/var/cache
    - /var/tmp:/var/tmp
  ports:
    - "80:80"
    - "443:443"

Now, we can refactor our docker/docker-compose.yml so that it only sets the
remaining values required for development:

# Persistence layer: Mongo
db:
  extends:
    file: common.yml
    service: db
# Application server: NodeJS (Meteor)
server:
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://192.168.1.50"
# Front layer, static file, SSL, proxy cache: NGinx
front:
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "dev"

Now, to ease the deployment on the pre-production host, we reuse our common
configuration in a docker/deploy-pre.yml file that simplifies pulling and
launching your services:

# Persistence layer: Mongo
db:
  image: 192.168.1.50:5000/mongo:v1.0.0
  extends:
    file: common.yml
    service: db
  restart: always
# Application server: NodeJS (Meteor)
server:
  image: 192.168.1.50:5000/meteor:v1.0.0
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://192.168.1.51"
  restart: always
# Front layer, static file, SSL, proxy cache: NGinx
front:
  image: 192.168.1.50:5000/nginx:v1.0.0
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "pre"
  restart: always

Connect Docker Machine to your pre-production host, start your services
and ensure that your ReplicaSet creation is applied:

eval "$(docker-machine env pre)"
docker-compose -f deploy-pre.yml up -d
docker-compose -f deploy-pre.yml run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"
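
If anything looks wrong, you can inspect the state of the ReplicaSet with the
same one-off container technique:

docker-compose -f deploy-pre.yml run --rm db mongo db:27017/admin --quiet --eval "rs.status();"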

Once you are satisfied with your containers, it's time to make them
available to your production server.

Push to Docker Hub

Now we go back to our development host to publish these containers on
the public Docker Hub:

eval "$(docker-machine env dev)"

And we publish our containers for Mongo:

docker tag -f docker_db YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
docker tag -f docker_db YOUR_DOCKER_HUB_LOGIN/mongo:latest
docker push YOUR_DOCKER_HUB_LOGIN/mongo:latest

For Meteor:

docker tag -f docker_server YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
docker tag -f docker_server YOUR_DOCKER_HUB_LOGIN/meteor:latest
docker push YOUR_DOCKER_HUB_LOGIN/meteor:latest

For NGinx:

docker tag -f docker_front YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
docker push YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
docker tag -f docker_front YOUR_DOCKER_HUB_LOGIN/nginx:latest
docker push YOUR_DOCKER_HUB_LOGIN/nginx:latest

Deployment in production

Like the deployment in pre-production, we are leveraging the capabilities
of Docker Compose for easing the pulling and running of Docker containers.
For this, we create a docker/deploy-prod.yml file:

# Persistence layer: Mongo
db:
  image: YOUR_DOCKER_HUB_LOGIN/mongo:v1.0.0
  extends:
    file: common.yml
    service: db
  restart: always
# Application server: NodeJS (Meteor)
server:
  image: YOUR_DOCKER_HUB_LOGIN/meteor:v1.0.0
  extends:
    file: common.yml
    service: server
  links:
    - db
  environment:
    ROOT_URL: "https://YOUR_SITE_FQDN"
  restart: always
# Front layer, static file, SSL, proxy cache: NGinx
front:
  image: YOUR_DOCKER_HUB_LOGIN/nginx:v1.0.0
  extends:
    file: common.yml
    service: front
  links:
    - server
  environment:
    # Can be: dev, pre, prod
    HOST_TARGET: "prod"
  restart: always

Before running everything in production, we must pull our images behind the
scenes so that our users don't notice the changes. Then we stop our currently
running containers, launch the new ones and finish with the ReplicaSet
configuration:

eval "$(docker-machine env prod)"
docker-compose -f deploy-prod.yml pull
docker stop "$(docker ps -a -q)"
docker-compose -f deploy-prod.yml up -d
docker-compose -f deploy-prod.yml run --rm db mongo db:27017/admin --quiet --eval "rs.initiate(); rs.conf();"
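
Finally, check that all services are up and running:

docker-compose -f deploy-prod.yml ps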

FAQ

When I use tap:i18n, why are my translation files not available?

Simply add TAPi18n Bundler to your
Meteor project.

Links

Sources for this tutorial:
Github's repository
Blog article

Information used for this tutorial:
Homebrew
Caskroom
Easy sending your public SSH key to your remote servers
Docker documentation
Docker Installation on Ubuntu
Secure Docker
The dangers of UFW + Docker
OpenSSL Howto
Control and configure Docker with Systemd
How to configure Docker on Ubuntu 15.04 (workaround)
Ulexus/Meteor: A Docker container for Meteor
VPS SSD at OVH
Your Docker Hub account
Creating a single instance MongoDB replica set for Meteor
jq is a lightweight and flexible command-line JSON processor
How to add environment variables to nginx.conf
MongoDB configuration options
MongoDB sample YAML files
The magic of Meteor oplog tailing
Docker: Containers for the Masses -- using Docker
How To Create an SSL Certificate on Nginx for Ubuntu 14.04
SSL and Meteor.js
HTML5's boilerplate for NGinx servers

Going further:
Deploying HTTP/2 and Strong TLS with Nghttp2 and Nginx
HTTP/2.0 with Nginx & NGHTTP2
A secured MongoDB server on Docker (article in French)
Docker without a root user on the host and in the containers (article in French)