One of the most aggravating things about technology is that, too often, there’s no one right answer. We spend years swinging on a pendulum, trying to solve problems whose solutions all have drawbacks.
It’s frustrating – especially for left-brained engineers – because there are areas where there is a right, or at least a widely accepted, answer. Ethernet? Yeah, we’ve all agreed on that. TCP/IP? Yep, we’re good.
But in other ways, we’re still wandering in the woods, trying to figure out what works best. The glass house gave way to client-server computing. Terminals were consigned to the dustbin of history, but we still find value in virtualized desktops. At the same time, the growing popularity of laptops made the boundaries of where employees logged on amorphous. Now the boom in smaller, smarter mobile devices has pushed us back toward consolidated data centers.
But we still aren’t sure what those look like: is it a private cloud? A public cloud? A hybrid cloud? If so, what are the parameters for what goes where, and when? Do we use blade or rack servers? And what about microservers, which bring competition to Intel in the processor space? That doesn’t even account for the different kinds of capital vs. operational options, such as modular data centers and outsourced facilities. If you want a quick overview of all your options – confusing though they might be – check out this CRN piece from earlier this month, 10 Data Center Predictions For 2014, or this Data Center Knowledge piece from this week, Five Great Ways to Optimize Your Data Center.
This is why I find the data center space so fascinating; it brings to mind the old saw about the ideograms for crisis and opportunity being the same.
Complicating the issue – just as software-defined networking is complicating the network space – is the concept of the software-defined data center, promulgated by vendors as diverse as VMware, BMC, Symantec, Oracle, and Hewlett-Packard.
Gartner analyst Henrique Cecci touches on the “software-defined everything” issue in this SearchDataCenter Q&A published last week. He says, “In terms of cost efficiency, you can manage your workloads in a simple, less expensive way because you don’t need to physically manage it; you can manage with software, not hardware.” On the other hand, one of the reasons I think we keep looking for new solutions to old questions is that it’s so hard to manage and monitor everything that goes on in a highly distributed data center.
The value of software in the data center, proponents say, is the ability to automate everything – to make the data center a finely tuned machine that runs smoothly and scales to any scenario. We’re a long way away from that integrated utopia, however. Anyone who thinks software can solve all our problems should look at the challenge of writing mobile applications that run on any platform, and then dial back the hype.
The challenge here is that CIOs face a dizzying array of possibilities in terms of hardware and software. IT Business Edge’s Arthur Cole tied together the concepts of SDDC and enterprise architecture in a column earlier this month.
Cole notes that “with many enterprises still working out how to effectively implement basic server virtualization, the details of this still-theoretical enterprise architecture may seem like something for the back burner. But … decisions made today, even those regarding lowly physical infrastructure, can have a significant impact on the kinds of architectures available to you tomorrow.” (Disclosure: Cole links to my earlier story on enterprise architecture.)
The problem is that, faced with all these options, CIOs could succumb to “paralysis by analysis” and fail to investigate technologies that could, for them, for now, provide that one right answer.
Email CIO Next Community Manager Howard Baldwin if you’re a CIO who wants to spout off in an opinion piece on a technology-related issue like data centers.