Amazon and Microsoft Take Public Cloud Storage To The Next Level

Janakiram MSV, Contributor

April 21, 2015

Last week, Microsoft announced the general availability of Azure Premium Storage, its choice of storage for demanding workloads. A week earlier, at the AWS Summit, Amazon launched a new storage type on the public cloud called Elastic File System. Both announcements have a positive impact on public cloud adoption.

Amazon Elastic File System
Public cloud storage is typically available as object storage, block storage, and archival storage. Object storage is exposed through standard REST APIs for storing and retrieving files; applications must explicitly call the object storage API to take advantage of it. Block storage volumes are attached to a VM, after which they become available as local disks. Archival storage is an alternative to tape-based backups; often referred to as cold storage, it is where less frequently accessed data is dumped. While all three storage types address specific scenarios, what’s missing is the equivalent of a network file share on the public cloud. Customers have had to rely on complex configurations based on file systems such as Gluster.

Amazon Elastic File System (EFS) is the latest addition to the AWS storage offerings. The new service provides multiple EC2 instances with low-latency, shared access to a fully managed file system. According to AWS, Elastic File System provides elastic capacity that automatically grows and shrinks as files are added and removed. Based on the standard NFSv4 protocol, the file system is accessible from both Microsoft Windows and Linux operating systems. Since the file system is available as a multi-tenant, shared service, Amazon is backing it with SSD-based storage. The data is replicated across multiple availability zones for redundancy and high availability. The service integrates with Amazon’s security model based on Identity and Access Management (IAM) and VPC security groups. Administrators can use standard file and directory permissions to control access to the file system.
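Since EFS speaks standard NFSv4, mounting it from a Linux EC2 instance looks like any other NFS mount. The sketch below assumes a yum-based distribution; the mount-target DNS name is a placeholder, as the real name comes from the EFS console once a mount target exists in your VPC.

```shell
# Install the NFS client (assumption: a yum-based distribution such as Amazon Linux)
sudo yum install -y nfs-utils

# Create a mount point and mount the file system over NFSv4.
# The mount-target hostname below is a placeholder, not a real endpoint.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs

# The share now behaves like a local directory, and every EC2 instance
# that mounts the same file system sees the same files.
df -h /mnt/efs
```

Once mounted, multiple instances read and write the same directory tree concurrently, which is exactly the network-file-share gap the service is meant to fill.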

Before Amazon EFS, customers had to set up a dedicated file server based on NFS, Gluster, or a proprietary file system to share files and directories across running Amazon EC2 instances. This extra effort resulted in the additional cost of operating and maintaining a dedicated file server. With Amazon EFS, customers get a managed file sharing service backed by an SLA, and they pay only for what they use each month. Amazon charges $0.30 per GB per month, which is more expensive than Amazon S3 at $0.03 per GB per month, excluding access and bandwidth charges. However, the use case of Amazon EFS is very different from that of Amazon S3. While the data stored in Amazon S3 can be accessed from any application or code, the data in Amazon EFS is available only to instances running in Amazon EC2. Though Amazon EFS has an API for developers, it is meant primarily for administration and management, not for data access.
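The pricing gap is easy to quantify. A quick back-of-the-envelope comparison for a hypothetical 100 GB of data, using the per-GB rates above and ignoring S3’s request and bandwidth fees:

```shell
# Monthly storage cost for 100 GB at the published per-GB rates.
awk 'BEGIN { printf "EFS: $%.2f/month\n", 100 * 0.30 }'   # EFS: $30.00/month
awk 'BEGIN { printf "S3:  $%.2f/month\n", 100 * 0.03 }'   # S3:  $3.00/month
```

A tenfold difference per gigabyte, which is why the two services target different workloads rather than competing head to head.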

Enterprise applications like Microsoft SharePoint and Microsoft Dynamics CRM, as well as open source software such as Drupal and Joomla, can take advantage of Amazon EFS wherever they need shared storage.

AWS is not the first to offer a shared file system. Microsoft Azure announced File Service last year, and it is still in technical preview. The fundamental difference between Amazon EFS and Azure File Service is the protocol. Microsoft chose to expose Azure File Service through a Windows-centric protocol called Server Message Block (SMB). Though Linux machines can talk to SMB through Samba, the performance is not the same. Unlike Amazon EFS, Azure File Service has a REST API to add and delete files. Once it becomes generally available, Microsoft Azure File Service will be charged at $0.10 per GB per month.
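For comparison, mounting an Azure File Service share from Linux goes through the kernel’s CIFS client rather than NFS. The example below is a sketch: the storage account name, share name, and key are placeholders, and the `cifs-utils` package must be installed on the client.

```shell
# Mount an Azure File Service share over SMB from a Linux VM.
# The storage account, share name, and key are placeholders (assumptions).
sudo mkdir -p /mnt/azurefiles
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/azurefiles \
    -o username=mystorageacct,password=STORAGE_ACCOUNT_KEY,dir_mode=0777,file_mode=0777
```

The CIFS round trips are where the performance difference against a native SMB client on Windows tends to show up.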

Though not exactly a shared file system, Google Compute Engine allows attaching the same persistent disk to multiple VMs in read-only mode. This configuration is useful in pre-populating a block storage volume and sharing it across multiple instances.
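Using the `gcloud` CLI, that read-only sharing pattern looks roughly like this; the disk, instance, and zone names are hypothetical, and the disk must be populated from a single writer instance (and detached) before the read-only attachments.

```shell
# Create a persistent disk, populate it once from a single writer
# instance, then attach it read-only to multiple VMs.
# Disk, instance, and zone names are placeholders (assumptions).
gcloud compute disks create shared-data --size 200GB --zone us-central1-a
gcloud compute instances attach-disk web-1 --disk shared-data --mode ro --zone us-central1-a
gcloud compute instances attach-disk web-2 --disk shared-data --mode ro --zone us-central1-a
```

Because every attachment after the initial population is read-only, this works for distributing static content or reference data, not for general-purpose shared writes.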

Azure Premium Storage
When moving enterprise workloads to the cloud, customers want the performance to match their existing environment. One of the most visible drawbacks of cloud migration is the drop in I/O performance. In the last few years, public cloud providers have attempted to address this by moving to Solid State Drives (SSDs). Though SSD-based storage is more expensive than standard magnetic disk-based storage, customers prefer to run a set of workloads on it. Microsoft Azure Premium Storage promises to offer best-in-class public cloud storage for enterprise workloads.

According to Mark Russinovich, CTO of Microsoft Azure, Premium Storage is designed for Azure Virtual Machine workloads that require consistently high I/O performance and low latency, in order to host I/O-intensive workloads like OLTP, Big Data, and Data Warehousing on platforms like SQL Server, MongoDB, Cassandra, and others.

Premium Storage needs to be attached to Azure DS Series VMs in the form of a Page Blob or Data Disk. Customers can attach multiple disks to a VM to get up to 32 TB of storage per VM with more than 64,000 IOPS per VM.
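Attaching those disks might look like the sketch below, using the cross-platform Azure CLI of that era; the subcommand syntax is an assumption that varies by CLI version, and the VM name and disk sizes are placeholders. Striping the disks inside the guest is what aggregates per-disk IOPS toward the per-VM limit.

```shell
# Sketch: attach new data disks to a DS-series VM. The subcommand syntax
# is an assumption for the classic cross-platform Azure CLI, and the VM
# name and sizes are placeholders.
azure vm disk attach-new my-ds-vm 512    # first 512 GB premium data disk
azure vm disk attach-new my-ds-vm 512    # add more disks to scale IOPS

# Inside the VM, stripe the attached disks (e.g., mdadm on Linux or
# Storage Spaces on Windows) so per-disk IOPS add up toward the VM limit.
```

This is why the headline per-VM figures assume multiple disks working together rather than a single large volume.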

With the right configuration, VMs can reach 50,000 IOPS and beyond, which is considered the best performance on the public cloud. A benchmarking screenshot shared by Microsoft shows a VM achieving 100,000 IOPS.

The new storage type is available to both Microsoft Windows and Linux VMs. Available in 128 GB, 512 GB, and 1 TB configurations, Premium Storage disks are charged at $17.92, $66.56, and $122.88 per month, respectively.

Amazon Elastic File System and Microsoft Azure Premium Storage address customer concerns about shared storage and I/O performance when migrating enterprise workloads. These efforts certainly raise the bar for public cloud storage offerings.

This article was written by Janakiram MSV from Forbes and was legally licensed through the NewsCred publisher network.

