This article originally appeared on The Next Web
In the latest Magic Quadrant report released by Gartner last year, Amazon Web Services (AWS) maintained its position as the king of cloud Infrastructure as a Service (IaaS) providers. Together with the runners-up, Microsoft Azure and Google Cloud Platform, the three cloud providers are often referred to as ‘hyperscale vendors.’
Even more striking was that AWS had more than 10 times the computing capacity in use as the next 14 largest cloud companies combined.
In addition, AWS announced a metric that was hard to contend with: over 1 million active enterprise customers, not individuals. This was the one metric that not even Microsoft was transparent about when it reported its own numbers.
So how come no one has managed to oust AWS from its IaaS throne? Below we provide a comparative analysis supported by Gartner, along with five AWS case studies drawn from Fortune 500 and Unicorn companies.
Magic Quadrant comparison
Every year, Gartner publishes a positioning analysis of the competing players in the major technology markets, called the Gartner Magic Quadrant.
Using graphical assistance and a set of evaluation criteria, a Magic Quadrant helps you quickly determine how technology providers are executing their visions and how well they are performing against Gartner’s market view.
In every Magic Quadrant, two axes representing Gartner’s evaluation criteria – ability to execute and completeness of vision – divide the chart into four quadrants used to map the competing players: Leaders, Visionaries, Niche Players and Challengers.
Leaders execute well against their current vision and are well positioned for tomorrow, and Visionaries understand where the market is going or have a vision for changing market rules, but do not yet execute as well as Leaders.
Challengers execute well today or may dominate a large segment, but don’t show an understanding of market direction, while Niche Players focus successfully on a small segment and do not out-innovate or outperform others.
Gartner: Magic Quadrant for Cloud Infrastructure as a Service 2015
So, what did Gartner see as Amazon’s strengths?
- Diverse customer base
- Broadest range of use cases (cloud-native applications, e-business hosting, general business applications, enterprise applications, development environments and batch computing), making it the provider most often chosen for strategic adoption
- Large tech partner ecosystem including software vendors that integrated their solutions with AWS
- Extensive network of partners that provide app development expertise, managed services and professional services such as data center migration
- Richest array of IaaS and Platform as a Service (PaaS) capabilities
- Rapid service offerings and higher-level solutions expansion
However, Gartner also offered some downsides to Amazon’s offering:
- Can be a complex vendor to manage
- Charges separately for optional items that are sometimes bundled with competing offerings
- Tier-based customer support depending on chosen support purchases, rather than ‘relationship’ or size-of-spend based
- Broad capabilities mean services that attract less customer interest will not get the same level of continued investment from AWS
- New capabilities often compete with products and services from AWS partners, which can lead to ecosystem conflicts
Amazon has been in the business longer than the other two giants, Microsoft and Google, and this gave it a first-mover advantage in what Amazon has called the cloud virtuous cycle: value-based pricing → more customers → more usage → more infrastructure → economies of scale → lower infrastructure costs → continued innovation → back to value-based pricing.
Here’s why big companies listed in the Fortune 500 and Unicorn chose to use AWS for fulfilling their cloud needs.
Huge capacity means timely solutions for your business challenges
Yes, yes, we have been talking about its computing capacity since the very beginning. But how big is it? Okay, let’s do some math.
AWS places its data centers across 33 availability zones within 12 regions worldwide. Each availability zone has at least one data center (some have as many as six) with redundant power, networking and connectivity for stability. Each data center holds between 50,000 and 80,000 servers with up to 102 Tbps of bandwidth.
If you assume an average of three data centers per zone and 65,000 servers per data center, you end up with roughly 6.4 million servers worldwide. For those of you who care about the availability and performance of your applications in the cloud, the huge computing capacity of AWS ensures higher fault tolerance and lower latency.
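That estimate is simple multiplication. As a quick sanity check, using the article's assumed averages:

```python
# Back-of-the-envelope estimate of AWS's server fleet,
# using the figures quoted above (assumed averages, not official numbers).
availability_zones = 33
data_centers_per_zone = 3         # assumed average (zones have 1 to 6)
servers_per_data_center = 65_000  # assumed average (50,000 to 80,000)

total_servers = availability_zones * data_centers_per_zone * servers_per_data_center
print(f"{total_servers:,}")  # 6,435,000 -- roughly 6.4 million servers
```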
Pfizer (56th on the Fortune 500), a global medicine company, uses Amazon Virtual Private Cloud (VPC) to handle its peak computing needs in a secure environment. With VPC, it carries out computations for its Worldwide Research and Development (WRD) division, which is responsible for supporting large-scale data analysis, research projects, clinical analytics and modeling.
Dr. Michael Miller, Head of High Performance Computing (HPC) for WRD, explained: “Research can be unpredictable, especially as the ongoing science raises new questions.” Assisted by VPC, he can now lead the WRD team in responding to these challenges by providing computing capacity that exceeds what the dedicated HPC system offers.
Automatically match load demands on your critical, high volume applications
Expedia, a leading online travel company (458th on the Fortune 500), has to deal with vast amounts of data when it comes to providing leisure and business travel to customers worldwide.
One big challenge with handling all that data is how to maintain critical, high volume applications without worrying about infrastructure stability.
One of those high volume applications is the Global Deals Engine (GDE), an engine that delivers deals to Expedia’s online partners and allows them to create custom websites and apps using Expedia’s Application Programming Interface (API) and product inventory tools.
Credit: Amazon Web Services
Expedia Global Deals Engine Architecture on AWS
With GDE, Expedia has to process approximately 240 requests per second. Given the volume of requests it has to handle from this engine alone, the company decided to run it on AWS because of Auto Scaling.
Murari Gopalan, technology director of Expedia, said, “The advantage of AWS is that we can use Auto Scaling to match load demand instead of having to maintain capacity for peak load in traditional data centers.”
Auto Scaling on AWS automatically adds or removes servers depending on the load. It can also detect when a server is unhealthy, terminate it, and launch a replacement. This way, Expedia achieved what it wanted: stable infrastructure.
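In practice, an Auto Scaling group is configured with minimum and maximum sizes, and a scaling policy adjusts the desired number of servers to keep a load metric near a target. A minimal, illustrative Python sketch of that control loop (this is the idea, not AWS's actual implementation; the numbers and function name are made up):

```python
import math

def desired_capacity(current_load, target_load_per_server,
                     min_servers=2, max_servers=20):
    """Return how many servers are needed so that the average load per
    server stays near the target -- the idea behind target tracking."""
    needed = math.ceil(current_load / target_load_per_server)
    # Clamp to the group's configured minimum and maximum sizes.
    return max(min_servers, min(max_servers, needed))

# At 240 requests/second with servers sized for ~40 req/s each,
# the group would scale out to 6 instances:
print(desired_capacity(240, 40))  # 6

# When traffic drops off overnight, it scales back in to the minimum:
print(desired_capacity(30, 40))   # 2
```

The clamp is what lets Expedia stop provisioning for peak load: capacity follows demand but can never fall below a safe floor or grow past a cost ceiling.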
Granular-level costs monitoring improves ROI assessment
In the cereal industry, profits are tight. Even for a company like Kellogg (210th on the Fortune 500), every dollar spent usually goes to the marketing division for coupons, special offers, sponsorships, even cereal placement on the grocery store shelf.
To stay competitive, Kellogg needed to invest in new IT infrastructure to run dozens of complex data simulations on TV ad spend, digital marketing and other promotions and keep tabs on the costs spent on the new IT investment.
So it decided to run its SAP HANA environment on AWS, since this infrastructure could accommodate terabytes of data, scale according to need and still stay within budget.
Credit: Amazon Web Services
Kellogg SAP HANA Deployment Architecture on AWS
In addition to this, thanks to Amazon CloudWatch, Kellogg could allocate costs to each department based on its infrastructure use. Stover McIlwain, senior director of IT Infrastructure Engineering at Kellogg, said, “AWS breaks down usage and cost to such a granular level that we can identify which costs come from which department, like a toll model.”
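The “toll model” amounts to tagging each resource with the department that owns it and then grouping billed usage by that tag. A minimal illustration of the idea (the records, amounts and department names here are hypothetical):

```python
from collections import defaultdict

# Hypothetical usage records, as might be exported from a billing report.
# Each record carries a cost-allocation tag identifying the department.
usage_records = [
    {"service": "EC2", "department": "marketing", "cost_usd": 120.50},
    {"service": "S3",  "department": "marketing", "cost_usd": 14.25},
    {"service": "EC2", "department": "analytics", "cost_usd": 310.00},
]

# Sum the billed cost per department tag -- the "toll" each one pays.
costs_by_department = defaultdict(float)
for record in usage_records:
    costs_by_department[record["department"]] += record["cost_usd"]

print(dict(costs_by_department))
# {'marketing': 134.75, 'analytics': 310.0}
```

With costs broken down this way, each department's spend can be compared against the value it produces, which is exactly the ROI assessment Kellogg was after.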
This way, Kellogg could make better decisions around the capacity each department needed to avoid waste, and assess the true return on investment of AWS.
Quick storage scalability without incurring long lead times for upgrades
Spotify (15th on the Unicorn list), the leading music streaming service, offers instant access to a growing catalogue of over 16 million licensed songs. Because of that huge collection, Spotify faces the eternal challenge of cataloging not only yesterday’s and today’s popular tracks, but also those that will be released in the future.
Operations director for Spotify, Emil Fredriksson, explained that Spotify needed a storage solution that could scale quickly, keeping up with the pace of its library growth.
To put a figure on it: Spotify adds over 20,000 tracks a day to its catalogue. Quick scalability is key, and a long lead time (the amount of time that elapses between when a process starts and when it’s completed) is out of the question.
Amazon Simple Storage Service (S3) provided exactly what Spotify was asking for: short lead times and scalability.
In the past, establishing a new storage solution required several months of preparation, but with Amazon S3, Spotify can adjust to changes in user demand on the fly.
Regarding the benefits of S3, Fredriksson commented, “By removing the restrictions incurred by in-house solutions, we enabled much faster development and deployment cycles.”
He also added, “The ability to go from a system architecture design and capacity requirements to an online and working production system in very little time is fantastic.”
Vast solutions for different needs all under one umbrella
What Airbnb (3rd on Unicorn list) experienced with AWS was all of the previous four benefits combined and more.
Only a year after Airbnb launched, the company decided to migrate almost all of its cloud computing to AWS due to a service administration problem with its previous provider. The initial interest was sparked by the ease of spinning up more servers without having to contact anyone and without minimum usage commitments. This, however, was just the beginning.
As the company continued to grow, so did its infrastructure demands. Today, Airbnb uses 200 Amazon Elastic Compute Cloud (EC2) instances (virtual servers offered in configurations for different needs) for its application, memcache (a system used to speed up websites) and search servers.
Combine that with Amazon S3, which hosts backups and static files including 10 TB of user pictures, and Amazon CloudWatch, which allows the company to easily supervise all of its EC2 instances, and Airbnb can keep the lights on for its millions of customers.
To maintain stability, Airbnb uses Elastic Load Balancing to automatically distribute incoming traffic between multiple instances. Additionally, Amazon’s Elastic MapReduce allows the company to easily process and analyze 50GB of data daily, and Amazon Relational Database Service (RDS) simplifies the time-consuming administrative tasks typically associated with databases.
Such tasks include, but are not limited to, replication (the frequent electronic copying of data from a database on one computer or server to another so that all users share the same information) and scaling. Notably, Airbnb was able to complete the entire database migration to RDS with only 15 minutes of downtime.
Thanks to the different solutions provided by AWS, Airbnb saved the expense of at least one operations position and gained the flexibility to meet the demands of future growth.
Interested to know more about AWS?
Whether you are new to the whole concept of cloud computing, want to dig deeper into AWS, or are preparing for your next AWS certification exam, this Amazon Web Services Engineer Bootcamp Bundle on TNW Deals will give you the knowledge and training to join the cloud king community for only $29.
➤ Get this 89 percent off deal
This article was written by TNW Deals from The Next Web and was legally licensed through the NewsCred publisher network.