
Part II: Revisiting re:Invent 2014, Lambda and other AWS updates




This is part two of a two-part series about Amazon Web Services (AWS) re:Invent 2014 and other recent cloud updates; read part one here.


AWS re:Invent announcements

 

Announcements and enhancements made by AWS during re:Invent include:

  • Key Management Service (KMS)
  • Amazon RDS for Aurora
  • Amazon EC2 Container Service
  • AWS Lambda
  • Amazon EBS enhancements
  • Application development, deployment and life-cycle management tools
  • AWS Service Catalog
  • AWS CodeDeploy
  • AWS CodeCommit
  • AWS CodePipeline

AWS Lambda

In addition to announcing new higher-performance Elastic Compute Cloud (EC2) instances along with the container service, another new service is AWS Lambda. Lambda is a service that automatically and quickly runs your application code in response to events, activities, or other triggers. In addition to running your code, the Lambda service is billed in 100 millisecond increments along with corresponding memory use, vs. standard EC2 per-hour billing. What this means is that instead of paying for an hour of time for your code to run, you can choose the Lambda service with more fine-grained consumption billing.

 

The Lambda service can be used to have your code functions staged and ready to execute. AWS Lambda can run your code in response to S3 bucket content (e.g. object) changes, messages arriving via Kinesis streams, or table updates in databases. Some examples include responding to events such as a web-site click, responding to a data upload (photo, image, audio, file or other object), indexing, streaming or analyzing data, receiving output from a connected device (think Internet of Things (IoT) or Internet of Devices (IoD)), or a trigger from an in-app event, among others. The basic idea with Lambda is to pay for only the amount of time needed to run a particular function without having to dedicate an AWS EC2 instance to your application. Initially Lambda supports Node.js (JavaScript) based code that runs in its own isolated environment.
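To make that concrete, here is a minimal sketch of what a Node.js Lambda function wired to S3 object-created events might look like, based on Lambda's original Node.js programming model (a handler plus context callbacks). The function body and log messages are illustrative placeholders, not AWS sample code; the Records, bucket and key fields follow the shape of S3 event notifications.

```javascript
// index.js -- a minimal sketch of a Node.js Lambda function triggered by
// S3 object-created events. The processing logic is a placeholder.
exports.handler = function(event, context) {
    // An S3 event delivers one or more records describing the changed objects
    event.Records.forEach(function(record) {
        var bucket = record.s3.bucket.name;
        var key = record.s3.object.key;
        console.log('New object uploaded: s3://' + bucket + '/' + key);
        // ... do your work here: index, resize, analyze, forward, etc.
    });
    // Signal successful completion; billing stops here (rounded up to 100ms)
    context.succeed('Processed ' + event.Records.length + ' record(s)');
};
```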

Figure: AWS cloud example showing various application code deployment models

 

The Lambda service is pay for what you consume; charges are based on the number of requests for your code function (e.g. application), the amount of memory and the execution time. There is a free tier for Lambda that includes 1 million requests and 400,000 GByte-seconds of time per month. A GByte-second is the amount of memory (e.g. DRAM vs. storage) consumed during a second. For example, if your application runs 100,000 times for 1 second each while consuming 128MB (0.125GB) of memory, that is 100,000 x 1 second x 0.125GB = 12,500 GByte-seconds. View various pricing models here on the AWS Lambda site that show examples for different memory sizes, number of times a function runs, and run times.

 

How much memory you select for your application code determines how far it can run within the AWS free tier, which is available to both existing and new customers. Lambda fees are based on the total across all of your functions, starting when the code runs. Note that you could have from one to thousands or more different functions running in the Lambda service. As of this time, AWS is showing Lambda pricing as free for the first 1 million requests and, beyond that, $0.20 per 1 million requests ($0.0000002 per request) plus a duration charge. Duration is measured from when your code starts running until it ends or otherwise terminates, rounded up to the nearest 100ms. The duration price also depends on the amount of memory you allocate for your code. Once past the 400,000 GByte-second per month free tier, the fee is $0.00001667 for every GByte-second used.
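As a rough sketch of how those rates combine, the following back-of-the-envelope calculation applies the request and GByte-second prices above to a hypothetical workload. The workload figures (3 million requests a month, 200ms average run time, 512MB allocated) are illustrative assumptions, not AWS numbers.

```javascript
// lambda-cost.js -- rough monthly Lambda cost estimate using the pricing
// figures cited above; the example workload is an illustrative assumption.
var FREE_REQUESTS = 1000000;          // free tier: 1M requests per month
var FREE_GB_SECONDS = 400000;         // free tier: 400,000 GByte-seconds/month
var PRICE_PER_REQUEST = 0.0000002;    // $0.20 per 1 million requests
var PRICE_PER_GB_SECOND = 0.00001667; // beyond the free tier

function monthlyCost(requests, avgMs, memoryMB) {
    // Duration is billed rounded up to the nearest 100ms
    var billedSeconds = Math.ceil(avgMs / 100) * 0.1;
    var gbSeconds = requests * billedSeconds * (memoryMB / 1024);
    var requestFee = Math.max(requests - FREE_REQUESTS, 0) * PRICE_PER_REQUEST;
    var durationFee = Math.max(gbSeconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND;
    return requestFee + durationFee;
}

// 3M requests/month, 200ms average run time, 512MB allocated => $0.40
console.log('$' + monthlyCost(3000000, 200, 512).toFixed(2));
```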

Why use AWS Lambda vs. an EC2 instance

Why would you use AWS Lambda vs. provisioning a container or an EC2 instance, or running your application code function on a traditional physical or virtual machine?

If you need control and can leverage an entire physical server with its operating system (O.S.), application and support tools for your piece of code (e.g. JavaScript), that could be an option. If you simply need an isolated image instance (O.S., applications and tools) for your code in a shared virtual on-premises environment, then that can be an option. Likewise, if you need to move your application to an isolated cloud machine (CM) that hosts an O.S. along with your application, paying for those resources on an hourly basis, that could be your option. If you simply need a lighter-weight container to drop your application into, that's where Docker and containers come into play to off-load some of the traditional application dependency overhead.

However, if all you want to do is add some code logic, for example to process activity when an object, file or image is uploaded to AWS S3, without having to stand up an EC2 instance along with the associated server, O.S. and complete application stack, that's where AWS Lambda comes into play. Simply create your code (initially JavaScript), specify how much memory it needs, define what events or activities will trigger or invoke the function, and you have a solution.

View AWS Lambda pricing along with free tier information here.

Amazon EBS Enhancements

AWS is increasing the performance and size of General Purpose SSD and Provisioned IOPS SSD volumes. This means that you can create volumes of up to 16TB and 10,000 IOPS for AWS EBS General Purpose SSD volumes. For EBS Provisioned IOPS SSD volumes you can create up to 16TB and 20,000 IOPS. General Purpose SSD volumes deliver a maximum throughput (bandwidth) of 160 MBps, and Provisioned IOPS SSD volumes have been specified by AWS at 320 MBps when attached to EBS-optimized instances. Learn more about EBS capabilities here. Verify your IO size against AWS sizing information to avoid surprises, as all IO sizes are not considered to be the same. Learn more about Provisioned IOPS, optimized instances, EBS and EC2 fundamentals in this StorageIO AWS primer here.
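As a quick sanity check on why IO size matters, the sketch below multiplies IOPS by IO size to estimate throughput. Notice how 10,000 IOPS at a 16KB IO size lands right around the 160 MBps General Purpose SSD cap, while the same IOPS at larger IO sizes would hit the bandwidth limit first; the function is a simple approximation, not an AWS formula.

```javascript
// ebs-throughput.js -- illustrates why IO size matters when sizing EBS
// volumes: throughput = IOPS x IO size, so a volume rated for 10,000 IOPS
// only delivers them if the resulting bandwidth fits under the volume's cap.
function throughputMBps(iops, ioSizeKB) {
    return (iops * ioSizeKB) / 1024; // KB/s -> MB/s (binary units assumed)
}

console.log(throughputMBps(10000, 16)); // ~156 MBps: near the 160 MBps cap
console.log(throughputMBps(10000, 64)); // 625 MBps: exceeds the cap, so the
                                        // volume would be bandwidth-limited
```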

Application development, deployment and life-cycle management tools

In addition to compute and storage resource enhancements, AWS has also announced several tools to support application development and configuration along with deployment (life-cycle management). These include tools that AWS uses itself as part of building and maintaining the AWS platform services.

AWS Config (Preview, e.g. early access prior to full release)

AWS Config provides management, reporting and monitoring capabilities, including data center infrastructure management (DCIM) style monitoring of your AWS resources, configuration (including history), governance, change management and notifications. AWS Config enables similar capabilities to support DCIM, a Change Management Database (CMDB), troubleshooting and diagnostics, auditing, and resource and configuration analysis among other activities. Learn more about AWS Config here.

AWS Service  Catalog

AWS announced a new service catalog that will be available in early 2015. This new service capability will enable administrators to create and manage catalogs of approved resources that users can access via a personalized portal. Learn more about the AWS Service Catalog here.

AWS CodeDeploy

To support rapid code deployment automation for EC2 instances, AWS has released CodeDeploy. CodeDeploy masks the complexity associated with deployment when adding new features to your applications while reducing error-prone manual operations. As part of the announcement, AWS mentioned that they are using CodeDeploy as part of their own application development, maintenance, change-management and deployment operations. While suited for at-scale deployments across many instances, CodeDeploy works with as few as a single EC2 instance. Learn more about AWS CodeDeploy here.

AWS CodeCommit

 

For application code management, AWS will be making available in early 2015 a new service called CodeCommit. CodeCommit is a highly scalable, secure source control service that hosts private Git repositories. Supporting standard Git functionality, including collaboration, you can store everything from source code to binaries while working with your existing tools. Learn more about AWS CodeCommit here.

AWS CodePipeline

 

To support application delivery and release automation along with associated management tools, AWS is making available CodePipeline. CodePipeline is a tool (service) that supports builds, workflow checking, code staging, testing and release to production, including support for 3rd party tool integration. CodePipeline will be available in early 2015; learn more here.

Additional reading and related items

Learn more about the above and other AWS services by actually trying them hands-on using the free tier (AWS Free Tier). View AWS re:Invent produced breakout session videos here, audio podcasts here, and session slides here (all sessions may not yet be uploaded by AWS re:Invent).

What this all means


 

AWS continues to invest as well as re-invest in its environment, both adding new feature functionality and expanding the extensibility of those features. This means that AWS, like other vendors or service providers, adds new check-box features; however, like some, they also increase the depth and extensibility of those capabilities. Besides adding new features and increasing the extensibility of existing capabilities, AWS is addressing both the data and information infrastructure, including compute (server), storage and database, and networking along with associated management tools, while also adding extra developer tools. Developer tools include life-cycle management supporting code creation, testing, tracking and change management among other management activities.

 

Another observation is that while AWS continues to promote the public cloud, such as the services they offer, as the present and future, they are also talking hybrid cloud. Granted, you have to listen carefully, as you may not simply hear hybrid cloud used the way some toss it around; however, listen for and look into AWS Virtual Private Cloud (VPC), along with what you can do using various technologies via the AWS marketplace. AWS is also speaking the language of enterprise and traditional IT, from applications and development to data and information infrastructure, while also walking the cloud talk. What this means is that AWS realizes they need to help existing environments evolve and make the transition to the cloud, which means speaking their language vs. converting them to cloud conversations before migrating them to the cloud. These steps should make AWS practical for many enterprise environments looking to make the transition to public and hybrid cloud at their own pace, some faster than others. More on these and some related themes in future posts.

 

The AWS re:Invent event continues to grow year over year. I heard a figure of over 12,000 people, though it was not clear if that included exhibiting vendors, AWS people, attendees, analysts, bloggers and media among others. However, a simple validation is that the keynotes and the expo space were in the larger rooms used by events such as EMCworld and VMworld when hosted in Las Vegas, vs. what I saw last year while at re:Invent. Unlike some large events such as VMworld, where at best there is a waiting queue or line to get into sessions or the hands-on lab (HOL), AWS re:Invent, while becoming more crowded, is still easy to get into, with time to use the HOL, which is of course powered by AWS, meaning you can later resume what you started while at re:Invent. Overall a good event and a nice series of enhancements by AWS; looking forward to next year's AWS re:Invent.

 

Ok, nuff said (for now)

Cheers gs

