Why Deploy Docker with OpsWorks?

[Image: OpsWorks & Docker]

I love Docker, and I love Amazon Web Services' OpsWorks. However, OpsWorks utilizes Chef, and I find Chef clunky, or at least OpsWorks' flavor of Chef Server. Apart from Chef, though, there is a lot to gain from OpsWorks. I've merged the best of OpsWorks with Docker and built a mostly pain-free deployment cycle.

I'll go into detail on how you can deploy Docker with OpsWorks in another post. This article focuses on why you'd still want to use OpsWorks if you don't need most of what Chef provides.

This list is in no way exhaustive, but it covers the parts I find most valuable. The more OpsWorks can handle for me, the more time I can spend making rock-solid Docker containers.

Management of environments through Stacks.

Managing environments directly through EC2 quickly gets out of hand. You can name your instances in a way that indicates which environment they belong to, but the console still looks like a pile of Legos on the floor when you're trying to find the instances you need to manage.

Stacks provide a nice layer of abstraction above EC2. I use Stacks as a way to manage environments. Create a QA stack, make it work like you expect, then clone the stack to create a Staging stack, then a Production stack. Each stack will manage its own instances, so you won't have to touch EC2.
[Image: OpsWorks Stacks]
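If you script your stack management, cloning is a single API call. Here's a minimal sketch using boto3; the stack ID, service role ARN, and stack name below are placeholders, not values from an actual account:

```python
# A minimal sketch of cloning a QA stack into a Staging stack with boto3.
# The SourceStackId and ServiceRoleArn are placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

response = opsworks.clone_stack(
    SourceStackId="11111111-2222-3333-4444-555555555555",  # your QA stack's ID
    ServiceRoleArn="arn:aws:iam::123456789012:role/aws-opsworks-service-role",
    Name="Staging",
    ClonePermissions=True,  # carry over user permissions from the QA stack
)
print("New stack ID:", response["StackId"])
```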

Easy load balancing and instance auto-healing.

OpsWorks makes using ELBs (Elastic Load Balancers) trivial. After creating a layer, create and attach a new ELB and you're off. If you're expecting moderate traffic, enable load-based instances on the layer in OpsWorks. Create two instances, one running 24/7 and the other load-based, and OpsWorks will automatically start the second when the first comes under heavy load. The best part? You only pay for the time they're running.

Auto-healing is another nice feature: OpsWorks detects when an instance is going bad, removes it from the ELB, and starts one of your backup (load-based) instances in its place until the problem is resolved.
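To give a sense of how little setup this is, here's a hedged sketch of the same configuration through boto3; the layer ID and ELB name are placeholders, and the CPU thresholds are just illustrative values:

```python
# A sketch: attach an existing ELB to a layer, enable load-based scaling,
# and make sure auto-healing is on. All IDs and names are placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")
layer_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # your app layer's ID

# Attach a load balancer you've already created in EC2/ELB.
opsworks.attach_elastic_load_balancer(
    ElasticLoadBalancerName="my-app-elb",
    LayerId=layer_id,
)

# Start one load-based instance when average CPU stays above 80%,
# and stop it again when CPU drops below 30%.
opsworks.set_load_based_auto_scaling(
    LayerId=layer_id,
    Enable=True,
    UpScaling={"InstanceCount": 1, "CpuThreshold": 80.0, "ThresholdsWaitTime": 5},
    DownScaling={"InstanceCount": 1, "CpuThreshold": 30.0, "ThresholdsWaitTime": 10},
)

# Auto-healing is a layer-level setting; ensure it's enabled.
opsworks.update_layer(LayerId=layer_id, EnableAutoHealing=True)
```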

Scheduled task singularity.

A problem I've had to deal with before is scheduled tasks (crontab jobs) running on too many instances in a load-balanced environment. If you have a crontab entry that should only run on one instance (say, a job that does some work and stores the result in MySQL; there's no reason to run it everywhere), OpsWorks saves the day. Create a layer specifically for your crontab tasks and add just one of your instances to it. An instance in OpsWorks can belong to more than one layer, which lets a single production instance run the crontab tasks while the backup servers are none the wiser.
[Image: Instance in Cron Layer]
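Here's a rough sketch of what that looks like via the API, assuming boto3 and placeholder stack and layer IDs. An instance picks up its layer memberships at creation time, so you pass both layers in one call:

```python
# A sketch of launching one instance that belongs to both the app layer
# and a dedicated cron layer, so only it runs the scheduled tasks.
# All IDs are placeholders.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

response = opsworks.create_instance(
    StackId="11111111-2222-3333-4444-555555555555",
    LayerIds=[
        "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # app layer
        "ffffffff-1111-2222-3333-444444444444",  # cron layer
    ],
    InstanceType="t2.medium",
)
opsworks.start_instance(InstanceId=response["InstanceId"])
```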

Deployment API.

If Continuous Deployment is your thing, you can use OpsWorks' API to run Chef recipes, such as a deploy, from your Continuous Integration platform. One of my favorite setups has been using CodeShip to deploy to OpsWorks: once our code had been reviewed and merged in a GitHub pull request, CodeShip grabbed the code, ran the tests, then deployed to OpsWorks. Even with Docker containers, this can be a breeze.
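For illustration, the deploy trigger your CI runs can be as small as this boto3 sketch (the stack and app IDs are placeholders; the equivalent `aws opsworks create-deployment` CLI call works too):

```python
# A minimal sketch of triggering an OpsWorks deploy from CI.
# StackId and AppId are placeholders for your own values.
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")

deployment = opsworks.create_deployment(
    StackId="11111111-2222-3333-4444-555555555555",
    AppId="99999999-8888-7777-6666-555555555555",
    Command={"Name": "deploy"},  # runs the layer's deploy recipes
    Comment="Deployed from CI after merged pull request",
)
print("Deployment ID:", deployment["DeploymentId"])
```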

Order, not Chaos.

If you've ever done IT work, you've been in a scenario where you need to debug a server you didn't build. The first thing you usually wonder is what tweaks the creator of the server made without documenting them: small changes to a .conf file somewhere, custom SSH keys hidden in some folder in the wrong place.

With OpsWorks (especially with Docker), there's none of that. Every instance is built programmatically. You can add a new instance in OpsWorks, boot it up, and everything will install and launch exactly the same way it did on the previous instances. I also treat the instances as volatile, meaning that if I need to, I can throw away an instance and replace it with a freshly built one.

One example of why this is nice: suppose you get hacked. The first thing you do is find out how the attackers got in. The second is to plug that hole (hopefully programmatically, not with a kludge on a running instance), close that port, and so on. Because OpsWorks instances are volatile, you can create and boot a new instance, and once it's up and ready, shut down and throw away the compromised one. It sure beats trying to find every file they touched or every piece of malware they installed.
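As a rough sketch of that throw-away-and-replace flow with boto3 (all IDs are placeholders, and real code would want timeouts and error handling rather than an open-ended poll):

```python
# A sketch: boot a fresh instance, wait for it to come online,
# then stop and delete the compromised one. IDs are placeholders.
import time
import boto3

opsworks = boto3.client("opsworks", region_name="us-east-1")


def wait_for_status(instance_id, status):
    """Poll until the instance reaches the given OpsWorks status."""
    while True:
        desc = opsworks.describe_instances(InstanceIds=[instance_id])
        if desc["Instances"][0]["Status"] == status:
            return
        time.sleep(30)


old_instance_id = "00000000-1111-2222-3333-444444444444"

new = opsworks.create_instance(
    StackId="11111111-2222-3333-4444-555555555555",
    LayerIds=["aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"],
    InstanceType="t2.medium",
)
opsworks.start_instance(InstanceId=new["InstanceId"])
wait_for_status(new["InstanceId"], "online")

# Retire the compromised instance; it must be fully stopped
# before delete_instance will succeed.
opsworks.stop_instance(InstanceId=old_instance_id)
wait_for_status(old_instance_id, "stopped")
opsworks.delete_instance(InstanceId=old_instance_id)
```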

Conclusion

I will work on a post about how to deploy Docker with OpsWorks, but hopefully this whets your appetite. While a growing number of services will deploy and host Docker containers for you, there's nothing quite like doing it yourself. OpsWorks provides that while still being easy on the learning curve (Chef aside).

Stay tuned.