Invisible Infostructure #3 – Build, Release, Run, Repeat
Enterprises are raving about DevOps: the agile fusion of IT Development and IT Operations that supercharges deployments while maintaining near-perfect quality. Using a single platform with a unified, highly automated ‘train’ of tools, DevOps teams develop, test, integrate, and package applications for deployment. They then promote to live in a continuous, uninterrupted flow, releasing many times a day without fail if necessary. It requires a thorough understanding of the components that constitute a state-of-the-art DevOps platform, as well as true mastery of the agile approach at the business level. Done well, it removes the barriers across the solution life cycle, bringing experts together in high-productivity teams that ‘never sleep’.
The speed of application development, and notably of application change, is increasing, particularly in ‘3rd platform’ areas around Cloud, mobile, social, and real-time data. The typical Car and Scooter dynamics require going through the entire solution life cycle in days, hours, or even minutes.
But at the same time, the necessity of a rock-solid quality of solutions is paramount: our very business performance depends on Digital and we cannot afford mistakes just because we’re in a hurry.
Here’s the conundrum: we want solutions delivered ultra-fast and at the highest quality, while remaining totally in control.
A new, exciting approach addresses this apparent oxymoron: DevOps. A portmanteau of ‘Development’ and ‘Operations’, it’s a concept that connects developers, quality assurance, and infrastructure operations in such a way that the entire build, release, run, repeat process operates as a continuously producing factory.
The team-centric DevOps ethos tears down traditional silos to tightly integrate business, development, and operations, driving agility and service-delivery excellence across the entire life cycle. It features clear roles, responsibilities, inputs, and outputs, and as such requires mature, established governance.
The main aim of DevOps (check out our definitional white paper) is to revolutionize the change process, de-risk IT deployments, banish the stereotypical “but it worked on my system”, and eliminate the silos between developers, testers, release managers, and system operators.
And the DevOps promises are high: increase agility by a factor of 30, speed up deployments by a factor of up to 8,000, double reliability, and achieve a mean time to recover up to 12 times faster. That could double market share and productivity, which in turn could lead to a 50% increase in market capitalization.
However, DevOps is still a young capability, and due to an unbalanced focus on tools, many DevOps implementations fail to deliver on the promise. A DevOps implementation can become a large people-change program, and the key to success is finding the right balance between people, process, and tools (see our ‘DevOps: don’t be left behind’ deck).
The tools and products being developed in this space all focus on automation to maximize predictability, visibility, and flexibility, while keeping an eye on stability and integrity. With the advent of open source and Virtual Lego, a DevOps team can simply construct any environment it needs. It’s an area many tend to focus on first: creating a train of specialized tools that allows for an almost automatic execution of the solution life cycle — all the way from change requests via versioning, development, integration, testing, configuration, and packaging to deployment on the live-run environment.
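The essence of such a ‘tool train’ is an ordered sequence of automated stages that halts the moment any stage fails. A minimal sketch in Python, purely illustrative (the stage names and pass/fail checks are assumptions, not taken from any specific product):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    """One link in the 'tool train': a named step that reports success or failure."""
    name: str
    run: Callable[[], bool]  # returns True on success

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Execute stages in order; stop at the first failure.

    Returns the names of the stages that completed successfully.
    """
    completed = []
    for stage in stages:
        if not stage.run():
            print(f"Pipeline halted at stage: {stage.name}")
            break
        completed.append(stage.name)
    return completed

# Illustrative stages; a real train would invoke the actual tools here.
pipeline = [
    Stage("checkout", lambda: True),
    Stage("build", lambda: True),
    Stage("unit-test", lambda: True),
    Stage("package", lambda: True),
    Stage("deploy", lambda: True),
]

print(run_pipeline(pipeline))
```

The point of the stop-on-failure design is that nothing downstream (packaging, deployment) ever runs against an artifact that has not passed every earlier gate.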
Examples of these ‘tool train’ components include Docker, Puppet, and Chef, as well as newer entrants like VMware’s vRealize Code Stream. In addition to these tools, many vendors are now focusing on DevOps platforms that run the entire ‘tool train’. IBM’s Bluemix – operated on SoftLayer – includes a large number of predefined tools that can be used out of the box to hook tools together. Even mainframe capabilities are being opened up, with Bluemix providing CICS interfaces.
Let’s look at an example. Imagine you’re a developer writing code on SuSE Linux, building a 3rd-platform application in a Cloud-based development environment. To test your application, you need to move the code plus configuration information to a separate unit-test environment; once tested, the application needs to be installed in a ‘user acceptance test’ environment. Once users have OK’d the app, it requires a final performance and security test — all before it can be deployed to the live environment, which sits in a hybrid Cloud.
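That promotion path can be thought of as a chain of gated environments: an application only advances when the current environment’s checks pass. A hedged Python sketch, where the environment names and gate checks are illustrative assumptions:

```python
# Promotion chain mirroring the example above: unit test, user acceptance,
# performance/security, then live. Names are illustrative only.
ENVIRONMENTS = ["unit-test", "user-acceptance", "perf-and-security", "live"]

def promote(app: str, gates: dict) -> str:
    """Promote `app` through each environment whose gate check passes.

    `gates` maps an environment name to a check function; promotion stops
    at the first missing or failing gate. Returns the last environment reached.
    """
    reached = "development"
    for env in ENVIRONMENTS:
        gate = gates.get(env)
        if gate is None or not gate(app):
            break
        reached = env
    return reached

# Illustrative run in which every gate passes:
all_pass = {env: (lambda app: True) for env in ENVIRONMENTS}
print(promote("my-app", all_pass))
```

If users never OK the app, the chain simply stops at user acceptance — the app can never reach the live environment by skipping a gate.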
Before the era of DevOps, you would have requested each and every move and construction via an Environment Manager, using an internal (and maybe only PC-supported) proprietary change management system, taking days and sometimes weeks. Not to forget, all the issues that would arise from subtle differences between the approach of various “sysadmins” involved, resulting in a divergent nightmare of test and target platforms.
Fast forward: in a DevOps team, you take the expressway. You work with the same UI and tools as all your colleagues in a tight, multi-disciplinary team; you can create, deploy, and destroy an environment using standard templates and blueprints, narrowing fault analysis to your own code only. You can kick off full, pre-defined install sequences, eradicating the need to install anything manually. Better still, it supports any target platform, be it Unix, Linux, Windows, or even Mac or mainframe, installed on- or off-premise, virtualized or not.
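Creating and destroying environments from standard templates is what makes the divergence nightmare disappear: every environment is stamped from the same blueprint. A minimal sketch, assuming a simple in-memory template registry (the template names and attributes are invented for illustration):

```python
# Illustrative blueprints; a real platform would hold machine images,
# network layouts, install sequences, etc.
TEMPLATES = {
    "unit-test": {"os": "SuSE Linux", "size": "small"},
    "user-acceptance": {"os": "SuSE Linux", "size": "medium"},
}

class EnvironmentManager:
    """Create and destroy environments from named templates."""

    def __init__(self):
        self.active = {}  # environment name -> its configuration

    def create(self, name: str, template: str) -> dict:
        # Copy the blueprint so every environment starts identical.
        self.active[name] = dict(TEMPLATES[template])
        return self.active[name]

    def destroy(self, name: str) -> None:
        # Tearing down is as routine as creating.
        self.active.pop(name, None)

mgr = EnvironmentManager()
mgr.create("ut-1", "unit-test")
print(list(mgr.active))
mgr.destroy("ut-1")
print(list(mgr.active))
```

Because environments are disposable and identical by construction, a failing test points at your code rather than at some sysadmin’s hand-tuned platform.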
You’re not cutting corners here. You simply benefit from the highest degree of automation and standardization to repeat the entire solution life cycle over and over and over again, at supersonic speed.
It requires mastering agility at all stages, and it needs perfectly aligned teams with committed specialists from all crucial disciplines: developers, testers, and operations. Now would be a good time to get them acquainted. And it probably makes sense to start exploring the new approach in the most suitable areas first: mobile and Cloud-based hybrid applications, rather than the critical core-application space.
Once in flow, an optimally tuned DevOps team can set a shining example to the rest of the enterprise.
Build, Release, Run, Repeat. All before lunch. What if the business could do that too?
Expert: Gunnar Menzel
Part of Capgemini’s TechnoVision 2016 update series. See the full overview here.
This article was written by Ron Tolido from Capgemini: CTO Blog and was legally licensed through the NewsCred publisher network.