From a high-level perspective, IT budgets have two parts: maintenance and strategic initiatives. Maintenance tends to grow at the expense of strategic initiatives and, left unchecked, ultimately stifles innovation. In mid to large IT organizations, this has resulted in the emergence of Application Portfolio Management, or APM. APM is a disciplined approach to aligning enterprise applications to maximize business value while minimizing lifecycle ownership costs.
A lack of APM results in uncontrolled application growth, which is sometimes called application sprawl. See Appendix 1 for typical causes of application sprawl and Appendix 2 for the resulting problems. Note that APM is a continuous process and not an objective.
The essentials of APM are:
- An inventory of enterprise applications. This can be as simple as a spreadsheet, or it can be portfolio management software.
- Regular review and analysis of enterprise applications, e.g. using a quadrant analysis and evaluating applications against business requirements.
- Execution, which is acting on the results of the analysis. This includes:
- Retiring obsolete applications
- Replacing applications that have high maintenance costs or poor functional fit
- Consolidating multiple applications where there are significant functional overlaps.
This article focuses on the review and analysis of applications, which culminates in software rationalization projects. It covers cloud and off-the-shelf software, and applications developed in-house.
A simple way to identify applications in need of rationalization is to plot them on the quadrant chart below in terms of ownership costs and business value. See Appendix 3 for examples of direct and indirect ownership costs. Applications in the bottom left quadrant are prime candidates for rationalization.
[Figure: Application Review Quadrant Chart]
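The quadrant placement can be sketched as a simple classification once each application has an estimated annual ownership cost and a business-value score. The field names and thresholds below are illustrative assumptions, and the prime candidates are taken to be the low-value, high-cost applications:

```python
# Place each application on the review quadrant chart by annual
# ownership cost and business value. Field names and thresholds
# are illustrative assumptions, not from the article.

COST_THRESHOLD = 100_000   # annual ownership cost, in dollars
VALUE_THRESHOLD = 5        # business value on a 1-10 scale

def review_quadrant(app):
    """Return a quadrant label; low-value, high-cost applications
    are treated as the prime rationalization candidates."""
    high_cost = app["ownership_cost"] >= COST_THRESHOLD
    high_value = app["business_value"] >= VALUE_THRESHOLD
    if high_value and not high_cost:
        return "keep"
    if high_value and high_cost:
        return "reduce ownership cost"
    if not high_value and not high_cost:
        return "tolerate or retire"
    return "rationalization candidate"
```

Adjust the thresholds (or use per-category thresholds) to match how the organization actually draws the chart.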
In each case, estimate the ROI of the replacement, which is the basis of the business case for undertaking a software rationalization project. Start with the highest ROI projects because they will bring the greatest return for the effort.
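As a rough sketch, the ROI estimate can be reduced to comparing ownership-cost savings over a planning horizon with the one-off cost of replacement. The function name, inputs, and three-year horizon below are illustrative assumptions:

```python
# Rough ROI of replacing an application: compare the ownership-cost
# savings over a planning horizon with the one-off replacement cost.
# The default three-year horizon is an illustrative assumption.

def replacement_roi(annual_savings, one_off_cost, years=3):
    """ROI as a fraction: (total savings - one-off cost) / one-off cost."""
    return (annual_savings * years - one_off_cost) / one_off_cost
```

For example, saving $100,000 a year against a $150,000 replacement cost yields an ROI of 1.0 (i.e. 100 percent) over three years.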
The software rationalization project
The core of a software rationalization project measures how well applications meet the needs of the organization, which means developing a comprehensive requirements profile that accurately captures those needs. Existing or potential new applications are measured against this frame of reference.
The requirements list
For the purposes of this article, a requirement is defined as an organizational need expressed in a quantifiable way. For any given type of software, most organizations have very similar requirements. What makes each organization unique is how important the individual requirements are to them.
There are three main parts to building a comprehensive list of requirements:
- Asking users and analyzing current business processes
- External sources of requirements, such as purchased lists or RFPs found on the web
- Reverse engineering features from potential products into requirements.
When developing software it is not possible to collect all requirements, but when buying software it is. Requirements are the foundation for selecting best-fit software, so they must be well written and sufficiently detailed.
Functional requirements specify what the application must do, and most people start here. Reverse engineer features from existing applications into requirements to capture current functionality. Ask users where these existing applications could be improved and capture their answers as requirements.
Next, look at potential replacement applications. Be sure to include the market leaders in the appropriate software category, which does not necessarily mean “big name” products. When considering mid-level systems, include the mid-level market leaders. Reverse engineer the features of those products into requirements, a critical step in developing a comprehensive requirements list because it captures unknown requirements and the latest advances in the market. It ensures existing applications are compared with the best the market has to offer.
Other requirement types
Many other requirement types should be considered when rationalizing software. Sometimes these are called non-functional requirements. Examples are:
- Compliance requirements: vendor compliance, quality, standards (ISO, SOX, HIPAA, 21 CFR Part 11, etc.), vendor standard operating procedures (SOPs), audit trails, tracking end user training.
- Contractual requirements: Legal, license, performance, contract terms and termination (All contracts eventually end; make sure there is a graceful way to exit).
- Security requirements, especially in the case of cloud or hosted applications: physical and logical security, configuration, security testing & audits, logging & reporting, authentication and passwords, encryption.
- System requirements: performance, monitoring, integration, configuration, compatibility, architecture (front & back end), user management, backups.
- Training requirements: content, delivery, training management.
- Usability requirements: user interface, navigation, searching, user help (including the ability to search it), languages.
- Vendor requirements: due diligence, implementation, support, payment arrangements, and application ecosystem: things like user groups, add-on products from other vendors.
Note: some of the above examples apply only to cloud or vendor hosted applications.
Rate requirements for importance
Requirements must be rated for importance to the organization. For the purposes of traceability (sometimes called a traceability matrix), be sure to record who wants each requirement, why they want it and how important it is to them. Employees rate requirements in their areas, for example, the Finance team rates financial requirements, the IT team rates security and usability requirements, and so on.
When rating requirements for importance, consider how important each requirement is now, and how important it will be in the next 3 to 5 years. Organizational subject matter experts provide invaluable input here. If there are no people with experience in specific areas, it pays to use outside help. For example, if software-licensing costs will run into tens of millions of dollars hire a licensing specialist to help develop those requirements and negotiate the deal.
The output of this process is a comprehensive requirements profile that accurately and adequately captures the needs of the organization. Current and potential replacement software will be rated against this reference standard.
Rate applications against the requirements profile
Once the requirements profile is complete, the next step is to evaluate current and potential replacement applications against that profile. This evaluation objectively measures how well those applications meet organizational needs. Knowledgeable users should rate current applications because they know these applications and their limitations.
RFPs and RFIs
Ratings for potential replacement applications are usually done by the vendors in the form of an RFI (or RFP). One of the challenges is getting vendors to respond. One way to improve responses is to reduce the amount of work the vendor needs to do, for example by using two rounds of RFIs. With the first round, send out only showstopper requirements; usually this is about 10% of the total number. Since there is much less work, more vendors will respond. Shortlist based on the RFI responses and send out the full RFI to only the top 6 or so vendors from the first round.
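The two-round RFI approach above can be sketched as a filter over the requirements list followed by a shortlist step. The field names ("importance", "score", "vendor") are illustrative assumptions:

```python
# Two-round RFI: first send only the showstopper requirements,
# then send the full RFI to a shortlist of the best responders.
# Field names are illustrative assumptions.

def first_round_requirements(requirements):
    """Select only the showstopper requirements (typically ~10% of the total)."""
    return [r for r in requirements if r["importance"] == "showstopper"]

def shortlist(responses, top_n=6):
    """Rank vendors by first-round score and keep the top few."""
    ranked = sorted(responses, key=lambda r: r["score"], reverse=True)
    return [r["vendor"] for r in ranked[:top_n]]
```

Only the vendors returned by `shortlist` receive the full RFI in the second round.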
Scoring RFIs or RFPs
When vendors return completed RFIs, the potential applications must be scored. A useful technique is to normalize scores: weight each requirement rating by its importance and express the total as a percentage of the maximum possible score, so an application that fully meets every requirement scores 100%. The advantage of normalized scores is that they provide an intuitive measure of how well applications meet organizational needs.
The Fit Score is defined like the Normalized Score above, but it excludes requirements against which the application has not been rated. By definition, the Fit Score equals the Normalized Score when an application is rated against all requirements.
The Fit Score distils an entire evaluation into one number that is used to rank applications. The advantage of the Fit Score is that applications can be compared before they are fully evaluated. By observing Fit Score trends when about 50 percent of the requirements have been rated, applications that clearly will not make the shortlist can be dropped from the evaluation.
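The two scores can be sketched as follows, assuming each requirement carries a numeric importance weight and applications are rated on a 0-4 scale (both assumptions for illustration):

```python
# Normalized Score: importance-weighted score over ALL requirements,
# with unrated requirements counting as zero.
# Fit Score: the same calculation, but only over requirements that
# have actually been rated, so partially evaluated applications
# can be compared. The 0-4 rating scale is an illustrative assumption.

def normalized_score(profile, ratings, max_rating=4):
    """Score over all requirements; unrated requirements count as zero."""
    total = sum(r["importance"] * max_rating for r in profile)
    earned = sum(r["importance"] * ratings.get(r["id"], 0) for r in profile)
    return 100.0 * earned / total

def fit_score(profile, ratings, max_rating=4):
    """Score over rated requirements only."""
    rated = [r for r in profile if r["id"] in ratings]
    total = sum(r["importance"] * max_rating for r in rated)
    earned = sum(r["importance"] * ratings[r["id"]] for r in rated)
    return 100.0 * earned / total if total else 0.0
```

When every requirement has been rated, the two functions return the same value, matching the definition above.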
The Gap Analysis is where current and potential new applications are evaluated against the requirements profile, and ranked by Fit Score. While it is unnecessary to fully rate every application in the evaluation, potential winning candidates should be rated against all showstopper and critical requirements, and against about 90 percent of all requirements in total. Once the gap analysis is complete and applications are ranked by Fit Score things start to get interesting.
The Fit Score objectively measures how well applications meet the requirements profile, and allows them to be compared and ranked. For example, take a post-merger scenario where an organization is deciding between two existing CRM applications, or if both CRM applications should be replaced by a market leader like Salesforce.
- If both existing CRMs have a very high Fit Score, e.g. > 95 percent, then it does not matter which is selected – both will do a good job.
- If one CRM has a significantly lower Fit Score, e.g. < 80 percent while the other has a high Fit Score of > 95 percent, pick the CRM with the highest score.
- If both existing CRMs have a relatively low Fit Score, e.g. < 75 percent, and something like Salesforce has a high Fit Score like > 90 percent, then it may be worth selecting Salesforce.
- If all applications have relatively low Fit Scores, e.g. < 75 percent, then the scope of the evaluation needs to be adjusted. Alternatively, other applications that could better meet the requirements (and were not evaluated) may need to be considered.
- Although this would not apply to the CRM example above, if all applications had exceptionally low Fit Scores, e.g. < 60 percent and adjusting the scope of the evaluation does not make a significant difference, then you have a prime candidate for internally built software.
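The rules of thumb above can be sketched as a small decision helper. The function name and return strings are illustrative, and the thresholds are simplified from the bullets:

```python
# Rule-of-thumb decision helper based on the Fit Score thresholds in
# the list above. Function name and labels are illustrative.

def rationalization_decision(existing_scores, leader_score=None):
    """`existing_scores` maps application name -> Fit Score (percent);
    `leader_score` is the Fit Score of a market leader, if evaluated."""
    scores = existing_scores.values()
    best = max(scores)
    if all(s > 95 for s in scores):
        return "either existing application will do a good job"
    if best > 95:
        return "keep the highest-scoring existing application"
    if all(s < 60 for s in scores) and (leader_score is None or leader_score < 60):
        return "consider building the application in-house"
    if best < 75 and leader_score is not None and leader_score > 90:
        return "select the market leader"
    if best < 75:
        return "adjust the evaluation scope or consider other applications"
    return "keep the highest-scoring existing application"
```

In practice the thresholds would be tuned to the organization, but the structure of the decision stays the same.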
Auditing RFIs
Some vendors can be “over optimistic” when responding to RFIs. If a new application is selected to replace an existing application, that vendor’s RFI response should be audited.
If the process outlined here was followed, applications with high ownership costs and low value were selected for potential rationalization. A comprehensive list of requirements was developed. Employees rated them for importance to create a requirements profile, an objective standard unique to the organization for that type of application.
In the gap analysis, current and potential replacement applications were evaluated against the requirements profile. The results of each application evaluation were distilled into one number, the Fit Score, which was used to measure how well applications meet organizational needs.
A data-driven analysis identified the best-fit application for the organization’s particular needs. The question of which existing applications should be kept, or if an entirely new application should be bought, has been rationally and objectively answered.
Appendix 1: Common causes of application sprawl
- Acquisitions & mergers
- Business strategy changes
- Business growth and the need for immediate solutions lead to software purchased just to solve a problem.
- In-house software developed as point solutions. Technology siloes develop where project teams fail to communicate.
- New software with better features overtakes existing applications, and the old applications are not retired.
- Compliance requirements cause obsolete applications to hang around. Access is not disabled, and some people continue using them.
- Organizational siloes where different departments bring cloud applications online to solve similar problems.
- Political purchases. New senior executives introduce software “because it worked well at my previous company”. This new software is a poor fit for the organization so the original software that was supposed to be replaced can’t be retired.
Appendix 2: Typical problems caused by application sprawl
- Unnecessary software costs for underused applications. This takes the form of annual software maintenance paid to vendors, or fees for cloud applications.
- Increased administration costs. All applications require some level of system administration, and these costs are often overlooked because they tend to come out of general IT budgets.
- Increased support costs. Each supported system requires helpdesk staff to support it.
- Increased training requirements for new users. Also, when there are too many applications, people tend to use each application less frequently and forget how to do things.
- User confusion caused by a duplication of functionality. Different departments use different applications for the same business processes.
- De-normalized data. The same information is stored in different systems in different formats. For example, after a merger two different sets of customers exist in two different CRMs. Some customers can be in both systems. Even if each customer is in one or the other system, automated reports covering the whole customer base cannot be obtained.
- Reduced efficiency. Older applications often don’t have the functionality or ease of use delivered by current applications.
- Increased interface costs. As the number of applications grows, the number of potential point-to-point interfaces grows roughly with the square of that number, and the costs of exchanging data between applications grow with it. Data tends to be siloed in different applications, which prevents users from getting the big picture.
- Increased development costs. Custom applications developed in-house may have to work with de-normalized data in multiple repositories with different APIs and data schemas. This significantly increases the cost of internal software development.
- Reduced security caused by an increased attack surface. More applications running mean more potential security holes that hackers can exploit.
- Unnecessary data center resources consumed. Organizations find the number of VMs explodes, but also find the usage of those VMs (and the applications that run on them) is lower than expected. More applications mean more systems to back up, and more effort to manage those backups.
Appendix 3: Application ownership costs
Ownership costs include all regular, ongoing direct and indirect costs associated with applications. They do not include one-off costs like implementation consulting or initial training. Examples are:
Commercial off-the-shelf software ownership costs
- Annual software maintenance costs
- Periodic upgrade costs
- Data center costs, including things like backup, failover, etc. Also indirect costs like power, cooling, floor space, physical security, etc.
Cloud or SaaS software ownership costs
- User access fees
- Option fees, e.g. base, standard or premium access.
In-house application ownership costs
- Bug fixes
- Release testing
- Change management
- Analyst & developer salaries & overheads
- Management of analysts, developers, testers, technical writers, etc.
- Recruiting costs for developers to maintain obsolete applications, e.g. those written in COBOL
- Application documentation costs
- Data center costs, including things like backup, failover, etc. Also indirect costs like power, cooling, floor space, physical security, etc.
Ownership costs common to all applications
- End user training
- Helpdesk support. Also, as support staff leave, replacements must be trained.
- Lost user productivity, e.g. when users should be able to do something with the software, but they need support
- Customer costs, e.g. when slow response from the organization caused by poor software fit results in customers being lost
- Opportunity costs of downtime
- Compliance & auditing costs
- Security testing, auditing
- Inter-application communication costs, where one application needs data from another, and that must be maintained.
- Reporting costs, where data from multiple applications must be normalized and merged. Often done manually in spreadsheets.
- Disaster recovery & business continuity planning and testing
- IT staff management
This article was written by Chris Doig from CIO and was legally licensed through the NewsCred publisher network.