The Economics Of Cloud Computing Are, In A Word, Confusing


CIO Central Guest, Contributor

June 10, 2015

The economics of cloud computing are, in a word, confusing.

While the cloud has solved some of the inefficiencies that have long plagued companies managing their own servers and datacenters, others remain. In some ways, the cloud has even created problems of its own. It hasn’t been clear how, or if, cloud computing providers will be able to resolve them. But change is in the air, and it smells a lot like containers. For the purposes of this discussion, containers are a feature of the Linux operating system that isolates resources and allows multiple Linux environments (or groups of them) to run on a single Linux host. The resulting “containers” are similar in theory to hypervisor-based virtual machines (think what VMware provides) but are often much smaller in terms of compute footprint and come without the computational overhead of a hypervisor layer. Driven by open source technologies (and companies) such as Docker, containers are finally catching on and showing CIOs a new way of delivering on the cloud’s promises of business agility and cost savings.

A recipe for confusion and costs

For the better part of a decade, some very smart folks have spilled untold gallons of ink (sometimes on actual paper) debating the actual financial impact of adopting cloud computing. They ask heady questions such as, “When is it better to rent versus buy servers?” and “What are the pros and cons of moving IT budgets from capital to operational expenditures?”

They haven’t yet reached a consensus, and probably never will.

That’s because so much of the discussion focuses, rightly, on the world of infrastructure as a service, or IaaS. It’s the largest cloud computing market in terms of revenue — home to big-name cloud providers such as Amazon Web Services, Microsoft Azure, Google Compute Engine and Rackspace — and also the most confusing.

For example, Amazon Web Services, the world’s largest cloud provider, offers the following classes of cloud servers:

  • Standard on-demand instances billed by the hour.
  • Reserved instances that cost less than on-demand ones, but require long-term commitments and upfront payments.
  • Spot instances, which are essentially unused capacity that can be had for pennies on the dollar (and disappear in an instant) depending on how much users are willing to bid for them.

In each class of instances, there are dozens of different configurations to choose from, each one differing in terms of the number of CPU cores, memory and local storage attached to it.

While some cloud-native companies such as Pinterest seem to have mastered this complexity, it has no doubt overwhelmed the majority. At the very least, it has led to a glut of options that’s misused at best and underutilized at worst. It’s no wonder that, according to some estimates, average utilization of cloud servers ranges between 7 percent and 20 percent.

That’s not much better than the paltry utilization rates often cited for legacy datacenters, if it’s better at all. Yes, good old-fashioned overprovisioning persists even in the cloud computing era. It might be less costly and less noticeable — cloud users don’t need to lease datacenter space and buy hundreds of HP servers upfront to meet demand for the five days per year that it spikes — but every overprovisioned server is money spent on nothing.

Other times, cloud customers are battling a relatively new type of waste called cloud sprawl. Instances are turned on, say for a demonstration or to test a new service, and are never turned off. The servers just sit there doing nothing until someone realizes what’s going on.

While the macro effect of all this IaaS innovation has been overwhelmingly positive — alone, the sheer number of startups that exist because of on-demand access to resources justifies its existence — the effects aren’t always optimal at the micro level.

Containers mean consolidation

Broadly speaking, application containers represent a solution to these types of utilization problems — both in the cloud and in the datacenter. They take consolidation even further than virtual machines before them, because hundreds if not thousands of small tasks, isolated in containers, can run inside a host server without the overhead of a separate guest OS for each one.

Even if it runs longer than it should, a container consuming a fraction of a server’s resources is more cost-effective than an entire server or virtual machine sitting there doing nothing.

This is because of how containers work. Today, most are cordoned-off sections of the Linux operating system that isolate applications and their resources — often far fewer than those allocated to a traditional VM, and without the computational overhead of a hypervisor — from the rest of the machine. Popular technologies such as Docker are essentially user-friendly platforms and specialized file formats that tie into lower-level Linux container technologies such as control groups (cgroups) and namespaces.

With a good resource-management system in place, developers, data scientists and others launching new services don’t even have to worry about where to deploy their containers. The system knows what resources are available in every machine under its command, and will launch new workloads wherever there is room. It’s like how a good bagboy can sort your groceries and pack each bag tightly, and safely, despite the different shapes of the items.
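The “bagboy” logic above is, in computing terms, a bin-packing problem. As a minimal, hypothetical sketch — the machine sizes and task list below are invented purely for illustration, not taken from any real scheduler — a first-fit placement routine might look like this:

```python
# Hypothetical first-fit scheduler: place each task on the first
# machine with enough free CPU and memory, the way a cluster
# manager packs containers onto hosts.

def first_fit(machines, tasks):
    """machines: list of dicts with free 'cpu' and 'mem'.
    tasks: list of dicts with a 'name' and required 'cpu' and 'mem'.
    Returns {task_name: machine_index} for every task that fits."""
    placement = {}
    for task in tasks:
        for i, m in enumerate(machines):
            if m["cpu"] >= task["cpu"] and m["mem"] >= task["mem"]:
                m["cpu"] -= task["cpu"]   # reserve the resources
                m["mem"] -= task["mem"]
                placement[task["name"]] = i
                break
    return placement

machines = [{"cpu": 4, "mem": 8}, {"cpu": 8, "mem": 16}]
tasks = [
    {"name": "web", "cpu": 1, "mem": 2},
    {"name": "cache", "cpu": 2, "mem": 4},
    {"name": "batch", "cpu": 6, "mem": 8},
]
print(first_fit(machines, tasks))  # {'web': 0, 'cache': 0, 'batch': 1}
```

Real systems like Mesos or Kubernetes weigh many more factors (availability, affinity, fault domains), but the core idea is the same: the system, not the developer, decides where each container lands.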

This is how Google is able to launch billions of containers per day to power nearly everything running inside its datacenters. Manually creating, configuring and launching all those containers would be a nightmare, especially in cases where many containers need to work together as part of a distributed system or micro-services architecture. Google is able to operate its massive infrastructure so efficiently because there is precious little human effort and computational waste as new workloads are containerized and automatically placed wherever there’s room.

Coming soon to a cloud near you

So it’s a good thing that container automation is coming soon to a cloud (or datacenter) near you. For example, Apache Mesos, an open source project inspired by Google’s internal scheduling systems, is catching on among some of the world’s largest tech companies, including Twitter, Apple, Airbnb and eBay. [Disclosure: My company is a vendor of Mesos software.]

Among its capabilities, Mesos automatically launches new workloads wherever there is room on the cluster of machines it manages. To ensure maximum efficiency, availability and isolation of resources, it launches all workloads, from web services to Hadoop jobs, either as user-specified Docker containers or as generic Linux containers. Mesos doesn’t care whether the host server is bare metal, a traditional VM or a cloud instance, as long as it’s running Linux.

A level up the stack, divorced from the core management of server resources, Google has open sourced a Docker-centric take on its resource-management platform, called Kubernetes, and also offers a commercial version called Container Engine on its cloud platform. Amazon Web Services has its new EC2 Container Service for managing Docker containers. And Docker is working on its own system, called Swarm.

Another Docker-like container runtime is being pushed by a startup called CoreOS. Microsoft, VMware, Pivotal, IBM, Rackspace and just about every other tech company that matters have expressly embraced containers by supporting Docker, CoreOS’s rkt or their own versions.

All that cloud complexity is suddenly a good thing

For cloud computing users, the advent of commercially viable containers and resource automation opens up a plethora of new opportunities. It might be easier, for example, to commit to long-term contracts for reserved instances because higher utilization rates mean reserving only 10 machines instead of 50. Because containerized workloads will share resources, it should also be easier to parse the mass of cloud-server configurations in order to find the right balance of CPU, memory and storage for the host machine(s).
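The arithmetic behind the 10-versus-50 claim is simple. Assuming, purely for illustration, a fleet of dedicated VMs running at 15 percent utilization versus a containerized fleet packed to 75 percent, the same amount of useful work needs a fraction of the machines:

```python
# Illustrative arithmetic only: the utilization figures below are
# assumptions for the sake of the example, not measurements.
servers_before = 50          # fleet size at low utilization
low_utilization = 0.15       # idle-heavy fleet of dedicated VMs
high_utilization = 0.75      # densely packed containerized fleet

# The useful work the fleet performs stays constant...
useful_work = servers_before * low_utilization
# ...so packing it more densely needs far fewer reserved machines.
servers_after = round(useful_work / high_utilization)
print(servers_after)  # 10
```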

Really innovative users might start going crazy with options like Amazon’s Spot Instances and Google’s Preemptible VMs, which let users rent resources for pennies on the dollar — just as long as they’re unused and no one else is willing to pay more for them. For jobs that can handle unpredictable starting and stopping, or a loss of state, a single beefy cloud server packed with containers could do a lot of work for well under $1 per hour.
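To see why the per-job cost gets so low, consider a hypothetical example — the spot price and container count here are invented for illustration, not real AWS or Google quotes:

```python
# Hypothetical spot-market math: one large instance rented cheaply
# on the spot market, shared by many interruption-tolerant jobs.
spot_price_per_hour = 0.80   # assumed hourly spot price for a beefy server
containers_packed = 40       # small containerized jobs sharing the host

cost_per_container_hour = spot_price_per_hour / containers_packed
print(round(cost_per_container_hour, 3))  # 0.02
```

Two cents per container-hour, under these assumptions — as long as every job on that host can tolerate being preempted when someone outbids you.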

Cloud computing has been a remarkably positive force for transforming IT, and now it looks like containers will be a force to change the cloud for the better. Anything that helps companies harness all the variety the cloud has to offer, while also helping them optimize the amount they’re spending, can only be a good thing.

This article was written by CIO Central Guest from Forbes and was legally licensed through the NewsCred publisher network.
