Docker has been a major focus of IT conversation lately because it both increases the number of applications you can run on the same servers and simplifies the packaging and shipping of apps.
- Why the Rush to Docker?
- Docker’s Relationship with LXC
- Summary & Conclusion
“Docker’s literally incredible. I’ve never been able to setup server networks for clients so quickly.” – tweet from Linux sysadmin Oliver Dunk, Jul 21, 2015.
Last year, Docker was one of the technologies that really got everyone’s attention, exploding onto the scene as many businesses adopted it for the first time – including three major financial institutions, according to Docker VP James Turnbull. It’s remarkable that banks, of all organizations, were willing to adopt a version 1.0 release, given how paramount security is to them.
Well, it’s surprising and it isn’t, because the open-source Docker quickly developed some important relationships – with Red Hat, Canonical, and even Microsoft (particularly compelling because Microsoft’s own platforms are, of course, proprietary).
Why the Rush to Docker?
What is it that is fundamentally driving everyone to Docker and containers in general? Parallels virtualization chief James Bottomley says that the reason people are switching to Docker has to do with the nature of virtual machine hypervisors. Hypervisors are “based on emulating virtual hardware, [which means] they’re fat in terms of system requirements,” he notes.
With containers, operating systems are shared, allowing them to use resources more efficiently. Rather than going the route of virtualization, containers use one Linux instance as their basis. With this tactic, organizations are able to “leave behind the useless 99.9% VM junk,” explains Bottomley, “leaving you with a small, neat capsule containing your application.”
The impact of this different way of building systems is profound. With a properly configured container environment, you can potentially increase the number of server application instances you run by 300–500% over KVM or Xen virtual servers.
Containers may sound like a revolutionary concept, but they actually aren’t. The technological approach has been around since at least FreeBSD Jails, which first appeared in 2000.
Actually, Steven J. Vaughan-Nichols of ZDNet points out that you have likely been a user of container systems for quite a while without knowing it. “Google has its own open-source, container technology lmctfy (Let Me Contain That For You),” he explains. “Anytime you use some of Google functionality — Search, Gmail, Google Docs, whatever — you are issued a new container.”
Docker’s Relationship with LXC
Docker was originally built on Linux Containers (LXC), OS-level virtualization that lets you run multiple isolated containers on a single control host. The main factor separating VMs from containers is the level of abstraction: for a hypervisor it is the entire computer, while for a container system it is the OS kernel.
Hypervisors have a distinct advantage here: you aren’t stuck with a single OS or kernel. Your Docker containers, on the other hand, all share the same OS and the same kernel.
You don’t necessarily need multiple operating systems, obviously. If you just want to get a bunch of apps running on the smallest number of physical servers, Docker makes sense.
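The shared-kernel model described above is easy to see firsthand. The following is an illustrative session, assuming a Linux host with a Docker daemon installed and access to the public `alpine` image (output will vary by host, so none is shown):

```shell
# Containers share the host's kernel instead of booting their own,
# so the host and a container report the same kernel version.
uname -r                                    # kernel version of the host
docker run --rm alpine uname -r             # same version, printed from inside a container
docker run --rm alpine cat /etc/os-release  # a different userland (Alpine) on that shared kernel
```

Contrast this with a VM, where each guest boots its own kernel and would report its own version, independent of the host.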
With Docker, cloud hosting providers and data centers are able to slash their utility and equipment costs.
Docker has been able to popularize the container approach in part because it has improved the security and simplicity of container environments. Interoperability is also enhanced by its collaboration with major companies – such as Google, Canonical, and Red Hat – on its open-source component, libcontainer.
Bottomley notes that Docker is also useful for the packaging and shipping of apps: you can immediately move your app wherever you need it.
In this way, Docker really found a way to meet a need of the typical enterprise. Enterprises want their apps to be portable, and they want to be able to distribute them effectively, but that process is often a source of inconsistency, says 451 Research analyst Jay Lyman. “Just as GitHub stimulated collaboration and innovation by making source code shareable,” he notes, “Docker Hub, Official Repos and commercial support are helping enterprises answer this challenge by improving the way they package, deploy and manage applications.”
Finally, Docker containers are easy to deploy in a cloud scenario. Docker integrates with typical DevOps tools (Ansible, Puppet, etc.) or runs standalone. The main reason it’s so popular is simplification, says Ben Lloyd Pearson, writing for opensource.com. You can:
- do local development within a system identical to a live server;
- deploy multiple development environments from one host, each with its own software, OS, and settings;
- easily run tests across multiple servers;
- and create an identical set of configurations, so collaborative work is never hindered by the parameters of the local host.
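The package-once, run-anywhere workflow described above can be sketched with a minimal Dockerfile. This is a hypothetical example – the base image, file names, port, and app are illustrative, not a prescribed setup:

```dockerfile
# Bake the app and its dependencies into one portable image.
FROM python:3-slim                     # base image with the runtime preinstalled
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # dependencies live inside the image, not on the host
COPY . .
EXPOSE 8000                            # document the port the app listens on
CMD ["python", "app.py"]               # process to run when the container starts
```

Built once with `docker build -t myorg/myapp .` (a hypothetical tag), the same image can be pushed to a registry with `docker push` and run unchanged on any host with a Docker daemon via `docker run -d -p 8000:8000 myorg/myapp` – which is exactly the portability and consistency Lyman describes.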
Summary & Conclusion
In summary, there are three basic reasons for Docker’s success, according to Vaughan-Nichols. First, “[i]t can get more applications running on the same hardware than other technologies.” Second, “it makes it easy for developers to quickly create ready-to-run container applications.” And finally, “it makes managing and deploying applications much easier.”
Everyone is interested in Docker, and it’s easy to see why. So how do you get started? With a one-click app, you can be up and running in 30 seconds. At Atlantic.Net, we offer SSD Docker Hosting in international data centers with per-second billing, so you’re never overcharged.