Today we announced support for deploying Enterprise Agents with Docker containers. If you have Enterprise Agents installed, or if you already use Docker, you may be wondering whether containers are a good deployment option for you. In this post we’ll cover the news and, by walking through our own journey managing agents in both virtualized and Docker environments, give you some insight into whether Docker deployment is right for you.
Enterprise Agents Now Support Docker Containers
Enterprise Agents are software probes located within the corporate network that provide visibility into service availability and network performance by correlating network behavior with application performance. You can deploy Enterprise Agents as Linux packages or as a virtual appliance, and now we’re announcing support for deployments using Docker containers. This means you can pick the deployment model that suits you best.
Docker provides a unifying platform that allows Enterprise Agents to be deployed across different Linux distributions. Enterprise Agents on Docker increase operational efficiency when it comes to deploying and managing large clusters; deployment of large numbers of Enterprise Agents can be automated with container orchestrators like Kubernetes or Docker Swarm. Containers can also be dynamically spawned or rebuilt across virtualized infrastructure, providing high availability and significantly reducing maintenance window times to optimize IT operations. With infrastructure service providers like AWS, Azure or Google Compute Engine natively supporting Docker, it also opens up the possibility of deploying Enterprise Agents within an IaaS network.
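As a sketch of how lightweight a container deployment can be, the commands below pull an agent image and run it in the background. The image name, environment variable, and paths here are illustrative placeholders, not the exact values for your account; consult the agent setup instructions in your dashboard for the real ones.

```shell
# Illustrative sketch only -- image name, env var, and volume path
# are assumptions; substitute the values from your agent settings.
docker pull thousandeyes/enterprise-agent

docker run -d \
  --name enterprise-agent-1 \
  --hostname my-docker-agent \
  -e TEAGENT_ACCOUNT_TOKEN=<your-account-token> \
  -v /var/docker/agent1/log:/var/log \
  --restart unless-stopped \
  thousandeyes/enterprise-agent
```

Because the whole deployment reduces to one declarative command, it is straightforward to template this in a Kubernetes manifest, a Docker Swarm service, or a configuration management tool and stamp out agents across a fleet.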
Container. VM. What’s the Difference?
Since its inception, a common question concerning container technology has been: what is the difference between virtual machines and containers? It’s not surprising that there is a widespread misconception that containers and VMs are the same. Rather than reiterate what’s already been written elsewhere, let me direct you to the following blog written by Docker’s Mike Coleman: Containers are not VMs. The blog also happens to include a very interesting analogy!
In short, with VMs the hypervisor abstracts the physical hardware on which it is hosted. Each VM has its own operating system, file system, storage, CPU, network adapters and so on. Containers, on the other hand, share the host’s operating system kernel, which makes them lightweight and faster to start. Multiple containers can share the same operating system, while each VM is tied to its own, as shown in Figure 1.
Evolving Towards Docker
How did it all start? At ThousandEyes, adoption of Docker originated as an internal project within the operations team, which was looking for a more efficient way to deploy and maintain our constellation of Cloud Agents around the world.
Cloud Agents use the same software building blocks as Enterprise Agents, but instead provide visibility into the network from an external perspective. Today there are over 1,000 agents deployed in more than 117 cities across 40 countries in Tier 2 and Tier 3 ISP networks. This number is constantly increasing, so the team was looking for a better way to maintain and manage these agents in preparation for continued growth. Cloud Agents have been in production for over 5 years now, giving us ample opportunity to validate different technology trends in server virtualization.
As with any technology adoption, it was an evolution before we arrived at Docker as the de facto choice! In the rest of this post we review our journey through various technologies, highlighting the pros and cons relative to our Cloud Agent production environment.
Our Experience with Virtualization
When we started, Virtual Private Server (VPS) hosting was the easiest available option. As the number of agents installed on VPS increased, so did the issues resulting from multi-tenancy. Performance drops grew frequent, making it harder to maintain and troubleshoot the agents. The next logical progression was to move to a virtualized environment that we could control ourselves. We decided to go ahead with VMware’s ESXi hypervisor. ESXi installations were readily available from most hosting vendors and barely needed any input from our end. At first, this was very successful. Agents were more stable, and we ended up deploying over 400 agents on ESXi. However, with the growing number of agents we needed a configuration management and provisioning system, and this proved cost prohibitive with ESXi. This led us to our third virtualization platform, Xen.
As a lot of internal development projects were using Xen, we leveraged our familiarity with the platform to deploy our Cloud Agents. In addition to being cost efficient, Xen gave us better control of the environment. However, it was difficult to get a homogeneous setup from hosting providers. Differences in Xen versions, host distributions and manual network changes due to architectural differences among providers proved challenging.
This led us to Docker. While Docker did come with a steep learning curve, the benefits outweighed the challenges we faced. With Docker we could now deploy agents within seconds, maintaining and automating Docker clusters with Puppet. We were pleased when our rate of deploying Cloud Agents spiked with Docker: we found we were installing over 33 Cloud Agents per month with Docker, compared to only 15 per month with Xen. That’s more than twice the throughput!
In Figure 2, we map the progression of different virtualization technologies to the operational strengths and weaknesses we witnessed in our Cloud Agent environment. Each of these virtualization technologies has its own strengths, better suited to particular environments. In our environment, where scale, automation and manageability were key concerns, the economics of OS abstraction proved more beneficial than hardware abstraction.
Deploying with Docker — Lessons Learned
Early adoption of new technology is always challenging and comes with a learning curve. However, we learned a ton of lessons while migrating to Docker for our own deployments and hope the following approaches and workarounds can help you as you do the same.
Running Multiple Services per Container
Docker has always been positioned as a single-service container, with the recommendation to run only one application per container. Even though we were replacing a complete VM with a Docker instance, our application stack still needed a few services. The simplicity Docker offered was so compelling that, against the generally received wisdom, we decided to experiment with running multiple services inside one container. We leveraged Phusion’s baseimage-docker to run multiple services within Docker. We did consider moving to LXC, but wanted to stay away from maintaining and running multiple operating systems.
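The baseimage-docker pattern boils down to running its init system as PID 1 and dropping a run script per service under /etc/service. A minimal sketch of such a Dockerfile might look like the following; the service names and tag are hypothetical, so pin a baseimage tag appropriate for your base OS:

```dockerfile
# Minimal multi-service sketch based on phusion/baseimage-docker.
# Service names and scripts below are hypothetical placeholders.
FROM phusion/baseimage:0.9.19

# my_init supervises every executable named /etc/service/<name>/run
COPY agent-core.sh  /etc/service/agent-core/run
COPY agent-watch.sh /etc/service/agent-watch/run
RUN chmod +x /etc/service/agent-core/run /etc/service/agent-watch/run

# Use baseimage's init system as PID 1 so all services are supervised
# and zombie processes are reaped correctly.
CMD ["/sbin/my_init"]
```

If one service dies, the supervisor restarts it without tearing down the whole container, which is exactly the behavior we wanted from a VM-like application stack.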
Adding Public IP Addressing
The ThousandEyes Cloud Agent infrastructure is built on public IP addresses, and out-of-the-box support for public IPs did not exist with Docker containers. In our previous implementations with other virtualization technologies, we used bridged configurations on the host and the agent VM (for example, the xenbr0 bridge for Xen hypervisors).
With Docker, we were able to use a simple yet powerful tool called pipework. One of the earliest members of the Docker ecosystem, pipework is a single script that makes configuring container networking straightforward. For example, adding an interface to a container and connecting it to a bridge on the host takes a single command:
pipework br0 -i eth0 container1 ip/netmask@gateway
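Filled in with concrete (placeholder) values, such an invocation might look like this; the address and gateway below come from the reserved documentation range and must be replaced with your allocated public IP:

```shell
# Hypothetical values: 203.0.113.10/24 and gateway 203.0.113.1 are
# documentation-range placeholders. This creates eth0 inside container1,
# attaches it to bridge br0 on the host, assigns the address, and sets
# the default route via the gateway.
pipework br0 -i eth0 container1 203.0.113.10/24@203.0.113.1
```

Because pipework acts after the container starts, it can be scripted into the same automation that launches the container.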
With over 800 publicly accessible agents in 100 cities, security is an important concern. Despite general concerns about Docker’s security, we were able to adopt a simpler and more effective security methodology. Instead of running an entire virtual machine, we run only the critical components in Docker. This eliminated the overhead of managing security updates for the rest of the components in a VM. Even pushing security updates became easier: we simply rebuild the image to install new updates.
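A sketch of that rebuild-to-patch workflow is below; the image and container names are hypothetical. Building with --no-cache forces package installation layers to re-run, which pulls in the latest security updates:

```shell
# Hypothetical image/container names. Rebuild the image from scratch
# so base packages are fetched fresh, then replace the running container.
docker build --no-cache -t agent-image:latest .
docker stop agent1 && docker rm agent1
docker run -d --name agent1 agent-image:latest
```

Since containers start in seconds, replacing one this way keeps the maintenance window far shorter than patching and rebooting a full VM.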
To further strengthen this model, we also controlled the privileges of the Docker containers by running them with only specific capabilities, ensuring that the actions of one container cannot affect the host or the other containers.
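In practice this means dropping all Linux capabilities and adding back only those the workload needs. The capability choices below are illustrative for a network-probing workload, not our exact configuration:

```shell
# Sketch: drop every capability, then grant back only what a
# network-measurement process plausibly needs (e.g. NET_RAW for
# raw-socket probes such as ping/traceroute). Names are illustrative.
docker run -d --name agent1 \
  --cap-drop ALL \
  --cap-add NET_RAW \
  --cap-add NET_ADMIN \
  agent-image:latest
```

Running with a minimal capability set limits the blast radius if a process inside the container is ever compromised.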
Mastering Docker through our Cloud Agents taught us that Docker can serve as a unifying platform for Enterprise Agents as well. Enterprise Agents deployed as Docker containers are versatile and independent of the environment they are hosted on, which makes operating and managing them efficient. With automation tools and Docker management engines, large numbers of Enterprise Agents can be deployed quickly. We hope the flexibility to deploy Enterprise Agents on Docker proves beneficial to you as well.