Digital start-ups born into the cloud era can quickly disrupt traditional business models and grow unencumbered by legacy IT platforms. It’s a different story for many established enterprises. They can’t always avoid moving large, complex, highly integrated legacy applications to the cloud, which can be far from straightforward.
Yet the benefits of migration often (but not always) outweigh the difficulties, so one challenge is how to manage the transition with minimum impact on business performance. And it’s not just about minimizing the impact during transition, but about making the transition unnoticeable to end users altogether. Whether the system serves 100 or 100,000 users, they simply want it to work and for their technology to deliver competitive advantage, whatever platform it’s on.
De-risking the migration
Easier said than done? Not necessarily. We have found that it is possible to de-risk the transition, and we offer three pragmatic tips for doing so. Indeed, this is how we delivered some of the biggest mission-critical migrations for our clients with zero business disruption.
Getting the desired results came down to a lot of careful and detailed up-front planning. Building an IT system can be analogous to building a new house. The foundations, plumbing, and electrical are among the first things to be built, well ahead of the people moving in. Moving legacy systems to the cloud can be a bit like trying to do major renovations and rebuild the foundations, plumbing, and electrical while the occupants are still living in the house. People are stressed, there’s dust and debris everywhere, and the work takes much longer than if you’d started from a green field.
Here are the three tips we’ve used to make things a little less stressful:
Tip Number 1 – plan up front for a phased migration
Where possible, build your green-field cloud platform before starting your migration. Work out the “unit of migration” and then adopt an incremental approach, moving one part of the legacy system at a time. This might mean moving the solution’s technology tiers, or moving system functionality to the new hosting capability. The downside is that you pay for two lots of hosting for a period, but in practice the price of cloud hosting means the impact on the business case is probably very small and more than offset by the benefits of moving users onto a proven service.
How you define the “unit of migration” is important. There are often practical limitations, such as network latency and data dependencies, that drive you to migrate multiple systems or bits of multiple systems together at the same time. There is no “one size fits all” approach, but in general we find that migration by system component and then user groups (agreed upon by the business) is a good place to start. That’s because it allows you to plan effectively around business events.
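The idea of migrating one unit at a time, only after everything it depends on has moved, can be sketched in a few lines. The component names and dependencies below are hypothetical; in a real migration they would come from your dependency analysis:

```python
def plan_waves(dependencies):
    """Group components into ordered migration waves.

    dependencies maps each component to the components it depends on;
    a component can only migrate once all of its dependencies have moved.
    Returns a list of waves (components that can move together).
    """
    remaining = set(dependencies)
    for deps in dependencies.values():
        remaining |= set(deps)
    migrated = set()
    waves = []
    while remaining:
        # A component is ready when everything it depends on has migrated.
        wave = {c for c in remaining
                if set(dependencies.get(c, ())) <= migrated}
        if not wave:
            raise ValueError("circular dependency: no valid migration order")
        waves.append(sorted(wave))
        migrated |= wave
        remaining -= wave
    return waves

# Hypothetical example: the web tier depends on the app tier and the
# database; the app tier and reporting each depend on the database.
deps = {
    "web": ["app", "db"],
    "app": ["db"],
    "reporting": ["db"],
    "db": [],
}
print(plan_waves(deps))  # → [['db'], ['app', 'reporting'], ['web']]
```

The same structure also makes the practical limitations visible: if two components can never sit on different sides of the network because of latency or shared data, you simply model them as a single unit.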
Tip Number 2 – identify the system service performance
Of course, Tip 1 is a fairly simple starting point and there’s much more to ensuring your transition from legacy to cloud doesn’t disrupt business performance. The second tip therefore drills deeper into the practicalities of how cloud migration has an impact on service levels and user experience.
Before, during, and after the transition, you need to understand the business service performance of the system against a baseline. How is the system being used? Which applications are used most heavily? How long do different transactions take to complete, and which functions are used most often?
This is a highly granular understanding of performance that is very foreign to traditional availability management, which asked only “Is the system up or down?” and “What is the memory and CPU usage?” A smooth migration requires a much greater level of insight into the user experience and system performance, particularly when only parts of a solution are migrated to the cloud.
The higher level of granularity will enable you to better plan priorities and grapple with any data challenges before they make an impact on the business. This is especially important if you take an incremental approach based on system components. For example, if part of your data is housed in a data center in Amsterdam and another part in the UK, what impact will this have on transaction response times when you need to access both data sets simultaneously, and will the application even work? You must understand the performance characteristics of the service you are migrating before you start, as well as during and afterwards. Use early technical proving to validate that your emerging solutions will perform.
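A minimal sketch of what that per-transaction baseline comparison can look like. The transaction names, timings, and tolerance below are hypothetical; in practice the samples would come from an APM tool or application logs:

```python
def p95(samples):
    """95th-percentile response time (ms) from a list of samples."""
    ordered = sorted(samples)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

def compare_to_baseline(baseline, current, tolerance=1.2):
    """Flag transactions whose p95 has regressed beyond the tolerance.

    Returns {transaction: (baseline_p95, current_p95)} for regressions.
    """
    regressions = {}
    for name, samples in current.items():
        before, after = p95(baseline[name]), p95(samples)
        if after > before * tolerance:
            regressions[name] = (before, after)
    return regressions

# Hypothetical response-time samples (ms) captured before and after a wave.
baseline = {"login": [120, 130, 125, 140], "search": [300, 320, 310, 305]}
current = {"login": [118, 135, 122, 138], "search": [450, 470, 460, 455]}
print(compare_to_baseline(baseline, current))  # → {'search': (320, 470)}
```

Running this continuously during a phased migration is what turns “is it up or down?” monitoring into the user-experience view the migration actually needs.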
Tip Number 3 – don’t just “lift and shift”
Adopting a “lift and shift” approach, in which minimal changes are made to the application stack you are moving to the cloud, is perfectly sensible in many scenarios. However, while the “bonnet is open,” making changes to the application stack and engineering processes can yield far more of the benefits and is, in fact, often necessary to make legacy systems work on commodity platforms. The more aspects of the system you commoditize and the more deployment mechanisms you automate in the process of moving to the cloud, the more you amplify the flexibility, cost, and agility benefits and reduce vendor lock-in. When changing the plumbing, you might as well add a new bathroom suite!
Thinking in a different way like this can stretch to other aspects of the migration. For example, instead of an extended testing period prior to go-live, is there an opportunity to gradually migrate users, opening up the possibility of testing in the live environment, which could deliver significant risk and cost reductions? In other words, use the opportunity to challenge more traditional or established ways of doing things.
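Gradual user migration of this kind is often implemented as deterministic percentage-based routing. The sketch below is one common pattern, not a prescription from the article; the platform names and percentages are hypothetical. Hashing the user ID gives each user a stable bucket, so the same user always lands on the same platform, and dialling the percentage up never moves a cloud user back to legacy:

```python
import hashlib

def route(user_id, cloud_percent):
    """Send a stable cloud_percent slice of users to the cloud platform."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket 0-99 per user
    return "cloud" if bucket < cloud_percent else "legacy"

# Dialling the rollout up from 10% to 50%: users already on the cloud
# stay there, because bucket < 10 implies bucket < 50.
users = ("alice", "bob", "carol", "dave")
early = {u for u in users if route(u, 10) == "cloud"}
later = {u for u in users if route(u, 50) == "cloud"}
assert early <= later  # monotonic rollout: nobody is moved back
```

Because routing is deterministic, any performance regression seen by the migrated slice can be compared against the legacy slice in real time, which is what makes testing in live a genuine alternative to a long pre-go-live test phase.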
You can read more about Capgemini’s Cloud Advise services on our Cloud Strategy page.
This article was written by Robert Kingston from Capgemini: Capping IT Off and was legally licensed through the NewsCred publisher network.