Deploying Docker with OpsWorks

Using Docker to run several self-contained web applications on a single server is ground-breaking; but how easy is it to deploy Docker with OpsWorks, or for that matter with Chef? I hope to show just how easy it can be. This article assumes you already have your LAMP web application Docker images built, as well as their Nginx proxy image. Each LAMP container should expose its own hostname(s) in a fashion similar to my earlier blog post, Nginx Proxy Through Linked Containers.

If you haven't already, read my previous post Why Deploy Docker with OpsWorks?. It's a good primer on what you'd get out of OpsWorks to aid in your Docker venture.

OpsWorks & Docker

For this post, all of my sample recipes can be found in my GitHub sample cookbook repository.

Building an Image

Docker needs a recent version of the Linux kernel to run properly. Updating the kernel generally requires a reboot, something Chef doesn't like to do in the middle of a build. So before beginning, it's easiest to build an Amazon Machine Image (AMI) that you'll use later.

You can create a new AMI via the instructions below, or you can use my public AMI, with an id of ami-9282eba2, "Ubuntu 12.04 Docker Ready".

From the EC2 management console, start by launching a new instance, using Ubuntu 12.04 LTS as your flavor. The other defaults are fine for this; just ensure you use a security group that will allow you to SSH in.

Once the instance has launched and you have SSH'd in, we'll run the Docker Ubuntu Installation steps (below) to update the kernel to 3.8. This image will be used by instances running Docker, so keep the modifications minimal.

# install the backported kernel
sudo apt-get update  
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

# reboot
sudo reboot  

Once rebooted, SSH in again. We don't want to actually install Docker on this instance, so that Chef can manage the specific version later, but we can make that easier by adding the necessary repository and key now.

# Add the repository to your APT sources
sudo su  
echo deb https://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list  
# Then import the repository key
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9  
apt-get update  

From the EC2 console, stop the instance and create an AMI from it. Use a memorable name that will help you easily find it in OpsWorks.
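
If you prefer to script that step, a rough AWS CLI equivalent looks like the following (the instance ID and AMI name are placeholders):

# Stop the instance, then register an AMI from it (ID and name are placeholders)
aws ec2 stop-instances --instance-ids i-12345678
aws ec2 create-image --instance-id i-12345678 --name "ubuntu-12.04-docker-ready"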

Create your Cookbook Repository

While one could use the Docker API to accomplish these tasks, I found it quicker to just use the CLI I already knew. The API version of these recipes is something I'll tackle another day -- maybe when Docker is production ready.

First, if you don't already have one, create a cookbook repo on GitHub or another repository host. OpsWorks loads all of your cookbooks from one repository source. Sadly, one of my gripes with OpsWorks is that there is only one cookbook repository per Stack. If you want to include an open source recipe or cookbook, you'll need to commit a copy of it to your cookbook repository (possibly using Git subtrees, as sketched below). To that end, I also recommend namespacing your cookbooks so you can easily distinguish your own from third-party cookbooks in the repository. For example, instead of naming your cookbook "webapp", name it "my_webapp".
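
For example, to vendor an open source cookbook into your repo with a Git subtree (the prefix, URL, and branch below are placeholders), something like this does the trick:

# Pull a copy of a third-party cookbook into this repo (prefix and URL are placeholders)
git subtree add --prefix some_cookbook https://github.com/example/some_cookbook.git master --squash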

Create your Project's Cookbook

Now that you have your repo, it's time to start building a cookbook to deploy your project. My projects have been Apache and PHP based, so this will lean towards that approach. It's not a full LAMP stack, as I don't build MySQL: since we're in AWS, I save time and lower complexity by taking advantage of Amazon RDS.

Create a folder called "my_webapp". This is now our cookbook for this sole project. Inside of it, you'll need another folder called "recipes".
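
On disk that's nothing more than:

# Skeleton for this project's cookbook
mkdir -p my_webapp/recipes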

Recipe: Install Docker

First and foremost is a simple recipe to install Docker. If you want to maintain version consistency when creating instances, I recommend locking Docker at a certain version.

Also, installing Docker on Ubuntu with apt-get automatically adds it to the startup services, so there's no need to do that with Chef. I had problems at one point using Chef's service enable feature, where Docker didn't restart on reboot, so I'm avoiding it for the time being.

Create a file inside your recipes folder called "setup.rb". Open it and add the following. Note the explicit version.

package "docker" do  
  package_name "lxc-docker-0.9.0"
  action :install
end  

If your Docker containers are hosted on a private, secured Registry, this is a good recipe to also install your .dockercfg file. That's a topic for another post, however.
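
In the meantime, a minimal sketch of that idea, assuming you ship the credentials file inside the cookbook's files directory (the source name and destination path here are just illustrative):

# Hypothetical sketch: install registry credentials shipped in the cookbook's files directory
cookbook_file "/root/.dockercfg" do
  source "dockercfg"
  owner "root"
  group "root"
  mode "0600"
  action :create
end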

Remove Running Containers

All of my containers are volatile, so in the event I need to restart or rebuild them, I don't bother restarting the containers; I just remove and recreate them. This makes it easier to add containers to the mix via linking at container start, and makes load balancing multiple instances of a container possible.

I have a recipe that safely stops and removes all running containers. I use it a lot for tasks run by other recipes, so I keep it in a reusable recipe called kill_containers.rb. Note the escaped \#: without it, Chef would interpolate those variables when the recipe is compiled, instead of leaving them for the embedded Ruby to evaluate at runtime, which is what we need for this to work.

script "kill_all_containers" do  
  interpreter "ruby"
  user "root"
  code <<-EOH
    `docker ps -q`.split("\n").each do |container_id|
      `docker stop \#{container_id}`
    end
    `docker ps -a -q`.split("\n").each do |container_id|
      `docker rm \#{container_id}`
    end
  EOH
end  

Pull New Images

Pulling images with Docker is not the fastest thing I've ever witnessed. However, it enables one of Docker's stronger features. In a production environment, you don't want currently running code (or containers) to be touched while deploying new code, as deploying can be a lengthy process. Docker can pull the latest version of your app's image while the current one hums along untouched. So, prior to stopping any containers, you'll want to pull all your new images. This recipe, pull_images.rb, should be placed high in Chef's run list. It assumes you'll have a Custom JSON value for :my_apps (we'll get to that later); each entry has two parts, the app's name and its image repository. In this recipe, the name is ignored. For public images, the repo would be something like mynamespace/myimage:mytag.

# Pull each of our defined apps
node[:my_apps].each do |name, image|  
  script "pull_app_#{name}_image" do
    interpreter "bash"
    user "root"
    code <<-EOH
      docker pull #{image}
    EOH
  end
end  
# Pull latest Nginx
script "pull_nginx_image" do  
  interpreter "bash"
  user "root"
  code <<-EOH
    docker pull #{node[:my_nginx]}
  EOH
end  

Run New Containers

Lastly, you need a recipe that starts containers based on your new images. Simply name this one run_containers.rb.

# Run each app. We don't expose any ports since Nginx will handle all incoming traffic as a proxy
node[:my_apps].each do |name, image|  
  script "run_app_#{name}_container" do
    interpreter "bash"
    user "root"
    code <<-EOH
      docker run -d --name=#{name} #{image}
    EOH
  end
end

# Run Nginx, linking it to all the other Apache containers, and expose its port.
script "run_nginx_container" do  
  links = ''
  node[:my_apps].keys.each do |name|
    # prefixed with "app_" here for ease of discovery
    # see: http://jaredmarkell.com/nginx-proxy-through-linked-docker-containers/
    links += " --link=#{name}:app_#{name}"
  end
  interpreter "bash"
  user "root"
  code <<-EOH
    docker run -d -p 80:80 --name=nginx #{links} #{node[:my_nginx]}
  EOH
end  

Create an ELB

Creating an ELB is not part of OpsWorks, so it's best to build it before getting into OpsWorks. In the EC2 console, click on Load Balancers, then Create Load Balancer. Give it a name; I try to name mine after the Stack and Layer it will serve. So if the Stack is named "My Platform QA" and the Layer will be "My WebApps", then a good name for the ELB would be "My Platform QA WebApps".

At the bare minimum, you'll want an HTTP 80 -> 80 forwarding Listener, but you can also set up your HTTPS 443 -> 80 forwarding here, and install or choose your existing SSL certificate.

OpsWorks ELB Listeners
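
If you'd rather create the same thing from the CLI, a classic ELB with the HTTP 80 -> 80 listener can be built roughly like this (the name and availability zone are placeholders):

# Create the load balancer with an HTTP 80 -> 80 listener (name and zone are placeholders)
aws elb create-load-balancer \
  --load-balancer-name my-platform-qa-webapps \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --availability-zones us-east-1a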

The health check can be tricky at first. You have a couple of options: you can add a configuration block in Nginx that answers a health-check path, or use a URI that all of your containers serve, like "/" or "/robots.txt". Note that the health check does not send a Host header that Nginx is listening for, so it could resolve to any of your LAMP containers depending on how you have Nginx set up. The response body doesn't matter, so long as it returns a 200 OK. This health check tells the ELB when an instance is ready to join the group, so at a bare minimum it needs to indicate that the containers are up and running. It helps to keep the Healthy Threshold low so you don't have to wait as long to see a new instance in the ELB; it can be increased after going to production.

OpsWorks ELB Health Check
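
The CLI equivalent, assuming you settle on /robots.txt as the check path, would be something like:

# Point the health check at a path every container serves (values are examples)
aws elb configure-health-check \
  --load-balancer-name my-platform-qa-webapps \
  --health-check Target=HTTP:80/robots.txt,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2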

Don't add any instances, as OpsWorks will do this for you.

Create your Stack

Now that we have our recipes and ELB, we can create our Stack in OpsWorks. Create a new Stack using mostly defaults, except: choose "Use Custom AMI", supply your cookbook repository information, ensure an SSH key pair is chosen, and create your Custom JSON. The Custom JSON needs to supply the name and image of each web app, as well as the Nginx image. It should look something like this:

{
  "my_apps": {
    "app1": "myimage:latest",
    "app2": "myimage2:latest"
  },
  "my_nginx": "mynginximage:latest"
}

If you're using a private Docker Registry, you can prefix your image names with your Registry URL, e.g. myregistry.example.com:443/myimage:latest. Don't forget you'll need your .dockercfg set up or OpsWorks will fail to authenticate with your Registry.

Stack's Security Group

I won't get into detail here, but using OpsWorks' default Security Groups is generally fine. We will be using "Custom" Layers, which open SSH, HTTP and HTTPS to the world on your instances. If a layer is behind an ELB, you could tighten this down to SSH from your own IP only, and HTTP open only to the ELB; that way, any traffic reaching the instances comes from you or the ELB. Likewise, since the ELB can terminate HTTPS and forward it as HTTP to the instance, the instances generally don't need port 443 open. Again, these are nice-to-haves; the default OpsWorks security groups do a great job for their ease of setup.

Create your Layer

For any layer you plan to use Docker instances on, you want to use the "Custom" layer type. Give it a name and continue.

Edit the layer, and add our recipes to the mix.
Custom Docker Recipes

Optionally, you can also attach your ELB here. If you're using HTTPS, I recommend terminating it at the ELB as mentioned above. For load-balanced instances, you'll also need the ELB.

Create an Instance

For the newly created Layer, it's time to add an Instance. Go to the Instances panel and click to add an Instance for the Layer.

Fill out this form to fit your needs. Your first instance should be a 24/7 instance. After you have successfully deployed your first 24/7 instance, you could create a Load-based instance and modify the Layer as needed.

Don't forget to choose your Docker AMI. Unfortunately, you can't currently set a default image on a Stack, so you have to choose the Docker AMI every time you create an Instance.

Creating an Instance in OpsWorks for Docker

Start your instance and wait for the magic. If there are any typos in the recipes or the Stack's Custom JSON, the launch could fail. In that event, I've found it easier to remove all the custom recipes from the Layer and boot a new instance bare; it will set up all the OpsWorks components except yours. Then you can fire off your recipes manually through the Stack, and watch them succeed or fail much faster than waiting for an instance to boot each time.

Load Based Instancing

Note that if you used an ELB, it will still take a couple of minutes for the Instance to be registered. You can also SSH into your instance and check that your containers are running and routing. Note that the SSH link provided by OpsWorks will be wrong: instead of ec2-user@ipaddress, it should be ubuntu@ipaddress.
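
A quick sanity check once you're in might look like this (the IP address and hostname are placeholders):

# SSH in as the ubuntu user, not ec2-user (IP is a placeholder)
ssh ubuntu@203.0.113.10
# Verify the containers came up
sudo docker ps
# Hit the Nginx proxy with one of your app hostnames (hostname is a placeholder)
curl -H "Host: app1.example.com" http://localhost/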

No Need for an App

Since your apps are contained in your Docker images, you don't need to create an App in OpsWorks. Deployment happens through your custom recipes, which fire on Instance boot and can also be triggered manually through OpsWorks' Stack Command Execution feature or an OpsWorks API request.
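
For example, an execute_recipes Stack command can be kicked off from the AWS CLI roughly like this (the stack ID is a placeholder; the recipes are the ones built above):

# Run the deploy recipes across the stack via the OpsWorks API (stack ID is a placeholder)
aws opsworks create-deployment \
  --stack-id 11111111-2222-3333-4444-555555555555 \
  --command '{"Name":"execute_recipes","Args":{"recipes":["my_webapp::pull_images","my_webapp::kill_containers","my_webapp::run_containers"]}}'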

Conclusion

I hope this helps you get started with Docker and OpsWorks for a powerful and affordable architecture. Enjoy the ease of Docker image building with the auto-management of OpsWorks for a worry-free system.

Docker is nearing production ready status and I can't wait to hone this process as it gets there. Continuous Integration and Deployment will be a breeze with this setup, especially after tying it into Docker's API.