Amazon seems to believe in the principle of disrupting its own business before someone else does. After launching various services such as Trusted Advisor that help customers save millions of dollars, it is in the process of pushing a new paradigm that has the potential to become an alternative to its cash cow – Amazon EC2.
For any cloud provider, compute and storage are the essential drivers of revenue. Virtual machines are responsible for delivering the compute service in the cloud. When enterprises move complex workloads to the cloud, they start spinning up beefy virtual machines, which directly contribute to the bottom line. That’s one of the reasons why Amazon, Microsoft, and Google keep introducing powerful VM types.
With the rise of Docker, the focus has shifted to container images as the fundamental unit of deployment. Though containers run inside a VM, they are becoming the preferred mechanism to expose compute to the outside world. Containers as a Service (CaaS) delivers an aggregate pool of resources by abstracting the underlying VMs. Today, many cloud providers offer both VMs and container environments as the compute service to customers.
Amazon’s approach to computing is moving in a different direction. While continuing its investments in VMs and containers, the company is aggressively pushing the agenda of serverless computing. AWS Lambda, which was announced a couple of years ago at the re:Invent event, is the serverless computing layer from Amazon. Since its launch, the service has matured with additional languages, capabilities, tooling, and integrations with other AWS services.
Though there were many announcements from Amazon’s annual user conference, AWS re:Invent 2016, the updates related to AWS Lambda caught the attention of developers and customers. Amazon is taking Lambda to places no one expected.
Here is a quick summary of Lambda related announcements from re:Invent:
- Lambda supports C#
- Lambda comes to IoT devices and hubs
- Lambda is embedded in Snowball
- Lambda integrates with CloudFront
- Lambda supports tracing through X-Ray
- Step Functions bring orchestration to Lambda
By adding C# support to Lambda, Amazon wants to attract Microsoft shops with massive investments in .NET. It seized the opportunity presented by .NET Core, the open source version of .NET. Non-Windows developers using Linux and macOS will also be able to develop serverless applications on Lambda. AWS has created tools that integrate with .NET Core to easily develop and deploy Lambda functions from the command line. I think this is one of the smartest moves by Amazon to beat Microsoft on its home turf.
Apart from embedding .NET Core, Amazon also built a set of tools for Visual Studio that make the process of deploying C# functions to Lambda incredibly simple. It was not a coincidence that Microsoft announced Visual Studio tooling for Azure Functions, the AWS Lambda competitor on Microsoft’s public cloud, on the same day as re:Invent.
AWS Greengrass, the embedded Lambda compute in connected devices, is another strategic move from Amazon. The most complicated part of IoT solutions is device management in offline scenarios. Developers find it challenging to write a consistent layer that can handle device management and communication seamlessly across online and offline scenarios. With Greengrass, Amazon is now bringing a subset of Lambda to hubs and gateways that manage the devices locally. The IoT device SDK that runs on the sensor nodes can talk to the Lambda endpoints in the hub. When the hub gains connectivity to the cloud, it can seamlessly synchronize the state. Developers will write a single codebase for both local and cloud connectivity. This solves the complex problem of offline device and state management.
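The local-first pattern described above can be sketched as a Lambda-style handler. This is a minimal illustration, not the actual Greengrass SDK: the event fields, temperature threshold, and the `PENDING_SYNC` buffer are all assumptions made for the example.

```python
PENDING_SYNC = []  # readings buffered locally while the hub is offline

def handler(event, context=None):
    """Act on a sensor reading locally, then queue it for cloud sync."""
    # Respond immediately, even with no cloud connectivity
    # (the 80-degree threshold is an illustrative assumption)
    alert = event.get("temperature", 0) > 80
    # Buffer the state so it can be synchronized when the hub reconnects
    PENDING_SYNC.append(event)
    return {"alert": alert, "pending": len(PENDING_SYNC)}
```

Because the handler signature matches the standard Lambda shape, the same function body could run in the hub or in the cloud, which is the point of the single-codebase claim.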
Technically speaking, AWS Greengrass can run on full-blown x86 servers that double up as hubs and gateways. OEMs can bundle Greengrass with their gateways, which will officially mark Amazon’s entry into the IoT appliance world. This technology has the potential to become an alternative to the Fog Computing model advocated by the OpenFog Consortium, led by ARM, Cisco, Dell, Intel, and Microsoft.
Amazon’s storage appliance, Snowball Edge, includes an S3-compatible endpoint and the Lambda runtime. Customers can use the same toolchain that they use to ingest data into Snowball. A simple Lambda function can be triggered when a criterion is met during the ingestion process, and that function can then selectively upload the data to the cloud. In many ways, this is Amazon’s converged infrastructure appliance, which ships with storage, compute, and networking capabilities. The fundamental difference between conventional appliances and Snowball Edge lies in the compute layer. AWS Lambda has become the compute service powering Snowball Edge. Amazon chose Lambda instead of Xen-based virtualization or Docker-based containerization. This is an indication of what AWS wants to do with Lambda in the future.
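A function that filters ingested objects might look like the sketch below. The event shape loosely follows S3 notification records; the size threshold and the exact criterion are assumptions made purely for illustration.

```python
UPLOAD_THRESHOLD = 1024 * 1024  # e.g., only ship objects of 1 MiB or more

def handler(event, context=None):
    """Decide, per ingested object, whether it should go to the cloud."""
    to_upload = []
    for record in event.get("Records", []):
        obj = record["s3"]["object"]
        if obj["size"] >= UPLOAD_THRESHOLD:  # the "criterion" from the text
            to_upload.append(obj["key"])
    return {"upload": to_upload}
```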
AWS X-Ray, the APM service that competes with New Relic, AppDynamics, and other vendors with similar offerings, delivers out-of-the-box tracing, debugging, and performance monitoring of Lambda functions. Developers can visualize the flow of messages across multiple functions and resources to accurately identify performance bottlenecks. This service is another sign that Amazon wants more developers to use Lambda.
Finally, to ease the pain of composing complex applications assembled from multiple Lambda functions, Amazon is shipping AWS Step Functions, a visual canvas to design the workflow. This service will take Lambda to the next level by enabling the deployment of complex serverless workloads. Developers can reuse serverless functions across different applications. They will be able to create sequential, branching, and parallel steps to design a workflow. This service will open multiple avenues for system integrators to create tools that manage and maintain customer deployments. AWS Step Functions is a true microservices environment in the cloud.
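The sequential, branching, and parallel steps mentioned above are expressed in the Amazon States Language, the JSON dialect Step Functions uses, sketched here as a Python dict. The state names, function ARN, and the `$.size` field are placeholder assumptions.

```python
state_machine = {
    "StartAt": "Validate",
    "States": {
        "Validate": {  # sequential step: a Lambda task
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:Validate",
            "Next": "CheckSize",
        },
        "CheckSize": {  # branching step
            "Type": "Choice",
            "Choices": [
                {"Variable": "$.size", "NumericGreaterThan": 100, "Next": "FanOut"}
            ],
            "Default": "Done",
        },
        "FanOut": {  # parallel step: both branches run concurrently
            "Type": "Parallel",
            "Branches": [
                {"StartAt": "Resize", "States": {"Resize": {"Type": "Pass", "End": True}}},
                {"StartAt": "Tag", "States": {"Tag": {"Type": "Pass", "End": True}}},
            ],
            "Next": "Done",
        },
        "Done": {"Type": "Succeed"},
    },
}
```

Because the workflow definition is plain data, separate from the function code, the same Lambda functions can be reused across many such state machines.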
It is evident that Amazon is placing its bets on Lambda. Though it has the option of using VMs and containers, it prefers Lambda for the following reasons:
- Lightweight runtime – AWS Lambda runtime is extremely compact, which is easy to ship and update. Node.js will be the most preferred language and runtime for Lambda.
- No dependence on third parties – By pushing Lambda, Amazon will drive its own agenda with no reliance on third parties. While the entire world is trying to build microservices using Docker or an alternative container environment, Amazon can steer clear of the politics and the dependence on container runtimes.
- Exclusivity and stickiness – Developers using Lambda on-premises or in the cloud will use the same framework and workflow. This enables Amazon to grow and retain developers on its platform. AWS Greengrass and Snowball Edge are just examples of this strategy. Amazon will continue to push Lambda as the preferred compute layer across all its hybrid services.
- Developer tooling and experience – Amazon will continue to write plugins and tools that make the experience of dealing with Lambda smooth and seamless. Recent announcements such as the Serverless Application Model, environment variable support, the .NET Core Lambda CLI, and Visual Studio tooling are examples of this strategy. Going forward, we may see Lambda support for Eclipse, Android Studio, and other IDEs. With the Cloud9 acquisition, Amazon is well-positioned to deliver a cloud-based IDE tightly integrated with Lambda.
The key takeaway is that Amazon wants enterprises to consume EC2 while it is pushing startups and developers towards Lambda. This move from Amazon will fuel the growth of serverless computing in the industry. Though Microsoft and Google have similar offerings, AWS Lambda is way ahead of the game.
This article was written by Janakiram Msv from Forbes and was legally licensed through the NewsCred publisher network.