How historical data helps you manage your IT infrastructure

Author

Chris Churchey

February 15, 2017

When Ross joined a technology reseller as a system administrator, he felt like he was making decisions about the IT infrastructure in the dark. The company had not been monitoring their infrastructure, and he did not have the data he needed to do his job well.

Of course, Ross wanted to be able to fix problems that caused slowdowns and downtime. More importantly, his goal was to make the infrastructure more resilient and, thus, prevent issues from occurring. He resorted to writing scripts and exporting the data himself, but the data added up fast, filling his storage and causing more problems.

Ross looked at various monitoring solutions and discovered they usually had a limit on how long you could keep the data. He found one solution that enabled him to keep historical data for as long as he liked. Even better, it stored the information in the cloud, so he did not have to worry about its effect on the company’s infrastructure. He convinced leadership to invest in it by explaining the benefits of having historical information at his fingertips. They include the ability to do the following:

1. Perform before and after analyses

In technology, the only thing that remains constant is change.

If you’re like other IT leaders, you’re frequently making technological investments. After all, you need to accommodate growth and incorporate more advanced technology into your environment as it becomes available. But do you know how much value you derive from these investments? Do you know how they impact other areas of your infrastructure?

For example, we’ve seen two entirely different results after a company installs flash storage. One company may be thrilled because it flattens out the high-latency periods that were causing slowdowns; likely, they have read-intensive workloads with high IOPS. Another business may think flash isn’t working. By looking at their before-and-after data, however, they’re likely to discover that the increased IOPS has saturated their CPU or network, forming a new bottleneck.

Before-and-after analyses are also useful when you move to cloud hosting. If you have data on the applications you transfer to the cloud before the move, during your trial period, and after your migration, you can determine whether performance is improving, degrading, or remaining constant. If there’s a downturn in performance after your trial, you can discuss it with your cloud vendor and make sure they are meeting the requirements in your service level agreement (SLA).
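
As a concrete illustration, the core of a before-and-after analysis is just a comparison of the same metric across two windows. The sketch below uses only Python’s standard library; the function name and all sample values are hypothetical, not taken from any particular monitoring tool:

```python
from statistics import mean

def before_after_summary(before_ms, after_ms):
    """Compare mean latency (ms) before and after an infrastructure change.

    before_ms / after_ms: lists of latency samples (illustrative names).
    Returns the two means and the percent change (negative = improvement).
    """
    b, a = mean(before_ms), mean(after_ms)
    pct_change = (a - b) / b * 100
    return b, a, pct_change

# Hypothetical samples: latency before and after installing flash storage.
before = [12.0, 15.0, 11.0, 40.0, 13.0]  # occasional high-latency spikes
after = [3.0, 2.5, 3.2, 2.8, 3.1]        # flattened after the upgrade

b, a, pct = before_after_summary(before, after)
print(f"before: {b:.1f} ms, after: {a:.1f} ms, change: {pct:+.0f}%")
```

In practice you would feed in exported samples from your monitoring history and compare percentiles as well as means, since averages can hide the spikes that users actually feel.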

2. Forecast technology needs

To increase technology efficiencies, you need to forecast your needs accurately. Because data that provides a clear picture of, say, storage usage and capacity is often unavailable, IT departments tend to over-provision. Unfortunately, because over-provisioning raises both capital and operating costs, it’s an expensive way to ensure performance.

To predict the future, you need data from the past that will help you determine trends. You should be able to use whatever data is relevant, whether it’s the past three years that show steady growth, or the past three months after you picked up that new large customer.
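
A simple way to turn such historical data into a forecast is to fit an ordinary least-squares trend line over the relevant window and extrapolate. The sketch below assumes evenly spaced samples (e.g., monthly storage usage); the figures are made up for illustration:

```python
def linear_forecast(history, periods_ahead):
    """Fit a least-squares trend line to evenly spaced samples and
    extrapolate `periods_ahead` periods past the last observation."""
    n = len(history)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(history) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
             / sum((x - x_mean) ** 2 for x in xs))
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + periods_ahead)

# Hypothetical monthly storage usage in TB, growing steadily.
usage_tb = [40, 42, 44, 46, 48, 50]
print(round(linear_forecast(usage_tb, 6), 1))  # projected usage six months out
```

A straight-line fit is only the crudest model; the point is that with retained history you can choose the window that reflects your reality, whether that’s three years of steady growth or three months after landing a large customer.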

3. Recover from disasters

Let’s say servers that are critical to production go down when a broken pipe floods your data center. You need to get the applications on those servers up and running quickly by moving them to your backup servers. You’re not sure, however, whether you have enough capacity. If you have a monitoring solution that keeps historical data (that’s not in the same data center!), you can check the history to determine your memory and CPU needs and whether the applications will fit on non-production servers in another location.
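
The capacity check described above can be sketched as a greedy first-fit placement of each application’s historical peak demand onto the backup servers’ capacity. This is a deliberate simplification (real placement must also weigh network, storage, and licensing), and every name and figure below is hypothetical:

```python
def fits_on_backups(apps, servers):
    """Greedy first-fit check: can each app's peak (cpu_ghz, mem_gb) demand
    be placed on a backup server with enough remaining headroom?"""
    free = [list(s) for s in servers]  # mutable copies of (cpu, mem) capacity
    for cpu, mem in sorted(apps, reverse=True):  # place largest apps first
        for slot in free:
            if slot[0] >= cpu and slot[1] >= mem:
                slot[0] -= cpu
                slot[1] -= mem
                break
        else:
            return False  # no server had room for this app
    return True

# Hypothetical peaks from monitoring history: (CPU GHz, memory GB) per app.
apps = [(8, 32), (4, 16), (6, 24)]
servers = [(16, 64), (8, 32)]  # capacity of non-production servers elsewhere
print(fits_on_backups(apps, servers))
```

First-fit on sorted sizes can miss packings an exhaustive search would find, but as a quick "will it fit?" answer during an outage, peak figures from retained history are exactly the input you need.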

4. Troubleshoot and perform root cause analysis

Without historical data on your IT environment, it’s difficult to determine the cause of slowdowns and downtime. Teams can spend hours or days getting to the heart of an issue. If, however, you have a history of the performance of your servers, storage, SAN, and applications, as well as dashboards that present the information at a glance, you can quickly spot what changed and when, and address the issue.
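
At its simplest, "spotting what changed and when" means comparing each new sample against a recent baseline. The crude sketch below flags the first sample that jumps well above the trailing average; the sample values are hypothetical, and production monitoring tools use far richer statistics than this:

```python
def first_change_point(samples, window=3, threshold=2.0):
    """Return the index where a sample first exceeds `threshold` times the
    mean of the preceding `window` samples, or None if no shift is found."""
    for i in range(window, len(samples)):
        baseline = sum(samples[i - window:i]) / window
        if baseline > 0 and samples[i] > threshold * baseline:
            return i
    return None

# Hypothetical hourly response times (ms); a bad config change lands at index 6.
latency = [20, 22, 21, 20, 23, 21, 95, 98, 97]
print(first_change_point(latency))  # → 6
```

Once you know *when* the metric shifted, you can correlate that timestamp with deployments, patches, and configuration changes — which is precisely what retained history makes possible.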

To ensure excellent performance going forward, you need to understand what happened in the past. It will help you conduct before and after analyses, forecast your technology needs, recover from disasters and troubleshoot causes of slowdowns and downtime. When searching for an infrastructure performance management tool, don’t settle for less data than you need. And, ideally, look for one that will store your data in the cloud, so you don’t have to worry about its impact on your IT environment.


This article was written by Chris Churchey from CIO and was legally licensed through the NewsCred publisher network.
