Hyperconverged IT systems (or hyper-converged networks if you prefer a hyphen and a rather less generic term than ‘systems’) are the new darlings of the software management community. Well, they will be if the major vendors behind these products and methodologies have their way.
Often spoken of in the same breath as the (dreaded) so-called ‘business and technology transformation process’, hyperconvergence is a logical bedfellow for the cloud computing, dashboard (and console-view) management and software-defined worlds that it emanates from. But what is it?
Sexy infrastructure management, surely not?
Trying to make infrastructure management sexy is tough, so let’s try by working out how this all started. Well now, first the Earth cooled and then the dinosaurs came. Sometime after the ice age, the French Renaissance and the invention of hot dogs, the modern age of networking arrived around teatime in the late 1950s. Not long afterwards we started plugging personal computers into those networks… and the rest is history.
But early ‘computer networks’ were almost an oxymoron: yes, they were networked, but only in silos run by often quite distinct administrative groups according to their central role and function. Cloud computing has of course lured us with tantalising promises of a truly global converged network world; but until we can set out a jurisdiction above and beyond network-defined tasks devoted to storage, compute (processing) and management, there’s not much point in a network existing in the first place.
Software to describe, prescribe and ascribe
But we’re getting ahead of ourselves. Of course the cloud is capable of knowing the difference between the various elements of the network, and much of this comes down to what we now call software-defined architecture. The ability to describe, prescribe and ascribe converged operations inside the network through software is how we arrive at the term software-defined in the first place.
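To make ‘describe, prescribe and ascribe’ slightly more concrete, here is a minimal, vendor-neutral sketch in Python (all names hypothetical, not any product’s API): infrastructure is described as plain data, and a control loop prescribes whatever operations are needed to reconcile what actually exists with what was asked for.

```python
# A minimal sketch of the software-defined idea: desired network state is
# plain data, and a control loop reconciles reality against it.
# All names here are hypothetical illustrations, not any vendor's API.
from dataclasses import dataclass

@dataclass
class ResourceSpec:
    name: str       # e.g. "web-tier-storage"
    kind: str       # "compute", "storage" or "network"
    capacity: int   # abstract capacity units

def reconcile(desired, actual):
    """Describe the desired state, compare it with what exists and
    prescribe the operations needed to close the gap."""
    ops = []
    for spec in desired:
        have = actual.get(spec.name, 0)
        if have < spec.capacity:
            ops.append(f"provision {spec.capacity - have} units for {spec.name}")
        elif have > spec.capacity:
            ops.append(f"reclaim {have - spec.capacity} units from {spec.name}")
    return ops

desired = [ResourceSpec("web-tier-storage", "storage", 500),
           ResourceSpec("app-tier-compute", "compute", 64)]
print(reconcile(desired, {"web-tier-storage": 300}))
# -> ['provision 200 units for web-tier-storage',
#     'provision 64 units for app-tier-compute']
```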
But this is only convergence; what about hyperconvergence?
These are systems in which the individual components of the architecture are pre-engineered so that they can be integrated and made to operate and perform together as we wish. Crucially then, hyperconverged systems exist inside modular blocks of software-centric architecture where compute, storage, networking and virtualisation resources all come together as one inside a single ‘commodity’ hardware box.
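In code terms, the difference is between managing four separate silos and composing one repeatable building block. The rough illustration below (hypothetical names, not any vendor’s product) bundles compute, storage, networking and the virtualisation layer into a single commodity node; the system then scales by adding identical nodes rather than by growing any one silo.

```python
# A rough illustration of the hyperconverged building block: one commodity
# node bundles all four resource types, and capacity scales by adding
# identical nodes. Hypothetical names only, not any vendor's product.
from dataclasses import dataclass

@dataclass
class HyperconvergedNode:
    cpu_cores: int
    storage_tb: int
    network_gbps: int
    hypervisor: str   # the virtualisation layer shipped inside the box

def cluster_capacity(nodes):
    """Aggregate capacity across the cluster: scaling out means
    appending another identical node, not re-architecting a silo."""
    return {
        "cpu_cores": sum(n.cpu_cores for n in nodes),
        "storage_tb": sum(n.storage_tb for n in nodes),
        "network_gbps": sum(n.network_gbps for n in nodes),
    }

node = HyperconvergedNode(cpu_cores=32, storage_tb=24,
                          network_gbps=10, hypervisor="ESXi")
print(cluster_capacity([node] * 4))
# -> {'cpu_cores': 128, 'storage_tb': 96, 'network_gbps': 40}
```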
So who is doing hyperconvergence? SimpliVity waves a hyperconverged infrastructure and software-defined datacenter flag over the recent release of its OmniStack Data Virtualization Platform. The firm’s technology works with Cisco UCS to ‘assimilate’ eight to twelve core datacenter functions on Cisco Unified Computing Systems C240 rack-mount servers, including the hypervisor, compute, storage, network switching, backup, replication, cloud gateway and caching.
“We are not the ‘first to market,’ however, we offer the most complete solution,” claims Doron Kempel, chairman and CEO at SimpliVity Corporation. “As we engaged with customers and partners across the globe, demand for an integrated solution with Cisco UCS became a recurring theme. SimpliVity is committed to delivering the best of both worlds: on one hand, x86 cloud economics, reducing TCO by 3x; on the other hand, tier-1 enterprise capabilities: performance, data-efficiency, data protection and global unified management.”
The must-haves in this market are a) the ability to deliver hyperconverged infrastructure, b) a reduction in complexity for current datacenter architectures and c) significant cost savings, i.e. not altogether an easy combination of feats to pull off simultaneously.
Managing resource utilisation inside hyperconverged systems
CiRBA Inc is aiming to line up alongside those technologies vying to control hyperconverged software-defined infrastructure with its new support for Amazon Web Services and IBM SoftLayer. The firm says that without analytics to assess application requirements against infrastructure capabilities, organisations cannot make smart decisions about how to manage their own internal infrastructure resources, let alone expand into external cloud.
“Even the most sophisticated spreadsheets and manual processes are fundamentally incapable of answering the complex question of where to host workloads. Cirba automates the process and eliminates hosting risk by ensuring all the critical criteria are considered, including resource utilisation, technical requirements, compliance, redundancy, storage requirements and even software licensing,” said Andrew Hillier, co-founder and CTO of CiRBA.
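Hillier’s spreadsheet point is essentially combinatorial: every workload has to be checked against every host on several criteria at once. The toy scorer below sketches the shape of that problem under simplified assumptions (two hard constraints plus one balancing heuristic); it is an illustration of the idea, not Cirba’s actual analytics.

```python
# A toy workload-placement scorer illustrating why spreadsheets struggle:
# every workload must be checked against every host on several criteria
# at once. A sketch of the problem's shape, not Cirba's analytics.
def feasible(workload, host):
    """Hard constraints: compliance zone must match, spare resources must fit."""
    return (workload["compliance_zone"] == host["compliance_zone"]
            and workload["cpu"] <= host["cpu_free"]
            and workload["storage_gb"] <= host["storage_free_gb"])

def best_host(workload, hosts):
    """Among feasible hosts, prefer the one whose remaining CPU and
    storage stay most balanced (a simple bin-packing heuristic)."""
    candidates = [h for h in hosts if feasible(workload, h)]
    if not candidates:
        return None  # no safe placement: flag for review, don't guess
    def imbalance(h):
        cpu_left = (h["cpu_free"] - workload["cpu"]) / h["cpu_total"]
        sto_left = (h["storage_free_gb"] - workload["storage_gb"]) / h["storage_total_gb"]
        return abs(cpu_left - sto_left)
    return min(candidates, key=imbalance)["name"]

hosts = [
    {"name": "ucs-01", "compliance_zone": "pci", "cpu_total": 32, "cpu_free": 12,
     "storage_total_gb": 4000, "storage_free_gb": 900},
    {"name": "ucs-02", "compliance_zone": "pci", "cpu_total": 32, "cpu_free": 20,
     "storage_total_gb": 4000, "storage_free_gb": 2500},
]
print(best_host({"compliance_zone": "pci", "cpu": 8, "storage_gb": 600}, hosts))
# -> 'ucs-01'
```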
Looking to VMware, the firm’s CTO Chris Wolf has explained that the company brands its hyperconverged offerings as “EVO” because it sees them as an evolutionary technology. If you can stomach the marketing spin there, EVO is a preconfigured, pre-integrated software-defined datacenter (SDDC) stack that is now offered by several VMware hardware partners.
Do we rip and replace to get to hyperconverged?
So if we buy the new spin, do we need to rip and replace to get to hyperconverged? No, that’s not how it works.
“Our hyperconverged EVO offerings should not be seen as a replacement, but rather as a complement. Converged infrastructure solutions that leverage traditional enterprise infrastructure architectures such as networked-based storage will remain a staple for critical business applications for a very long time,” said VMware’s Wolf.
What we are really talking about here, at the end of the day, is taking ‘jobs’ or ‘tasks’ (server control, storage blocks and network intelligence, along with security provisioning and maintenance) and providing them as a commodity service supplied in a pre-engineered hardware and software stack that is maintained, managed and supported by a vendor. Hyperconvergence done right will (theoretically) reduce OpEx costs and allow a firm to concentrate on its own applications and core competencies. This is part of our journey to cloud; it should be casual bedtime reading at the very least.