Tasting Some AWS Lambda Curry!

Lambda Curry!
Serve up your own event-driven recipes from Amazon's Cloud Cooker!

In this post I’ll share my thoughts on AWS Lambda – a new offering from Amazon Web Services that is currently in technical preview. I’ll take you through the good and bad parts and explain why there is so much excitement and hype in the community about this new offering.

I’ve created a small open source project on GitHub to demonstrate how you can use AWS Lambda to synchronize files between an S3 bucket and a personal Dropbox account. I’m sure there will be a whole cookbook of recipes appearing in the community once the service goes to general release.

Lambda is currently in a closed technical preview. I’ve been fortunate enough to get early access, and I’ve spent the past few days writing and deploying some Lambda functions to the EU (Ireland) region.

(above) AWS CTO Werner Vogels announces AWS Lambda at AWS re:Invent 2014

So what is AWS Lambda?

AWS Lambda is a new service that runs your code (node.js functions) in response to events that happen in the AWS cloud. You only pay for the compute time required to run your code, and billing is metered in increments of 100 milliseconds, making it a very economical offering. This is a pure platform-as-a-service play from Amazon – you don’t need to spin up and manage any infrastructure on AWS to run your Lambda functions. To paraphrase the great Nigel Tufnel from Spinal Tap: if platform as a service is a volume dial from one to ten, then AWS Lambda goes to eleven!

(above) The volume knobs of Tufnel’s Marshall amplifier went up to eleven

Event-driven programming is a paradigm that has been around for many years now, in which the flow of the program is determined by external events. Most desktop applications are event driven – a button click or key press triggers a specific behavior in the program. Database triggers are another great example of event-driven programming: the event of a row being updated in a database table can trigger the execution of custom code. AWS Lambda exposes an event-driven programming model to the AWS cloud. For now the number of events on offer is quite small, but this will grow over time. Currently you can tap into events such as:

  • A new object being created in an S3 bucket (PUT / POST / COPY operations)
  • DynamoDB table change tracking – any changes made to the table are tracked on a rolling 24-hour basis. You can retrieve this stream of update records with a single API call.
  • New messages arriving in an AWS Kinesis stream

You can also expose your own custom events and invoke them as you like from your own application code.

Typically Lambda functions will respond to a trigger within milliseconds of the event occurring, and from my experience to date the response times have been in the teens of milliseconds.

So what’s the big deal with Lambda ?

Well, prior to this service offering, if you wanted to respond to an event like synchronising an S3 object with a Dropbox folder you’d need to build something like this:

  • When you add a new file to an S3 bucket, you’d place a message on a dedicated SQS message queue
  • You’d deploy a fleet of EC2 workers to poll the queue and process the messages
  • Finally you’d tweak the auto-scaling settings to ensure the workers can scale accordingly

With Lambda all this goes out the window: your Lambda functions execute in a fully managed, shared compute environment.

How much does it cost?

Amazon has priced the service based on the number of Lambda function requests served and the compute time needed to run your code. The basic formula is that you pay $0.20 per 1 million requests, plus $0.00001667 per GB-second of compute time used. There’s also a free tier that includes 1 million requests and 400,000 GB-seconds of compute per month. Emm, simple eh? Well, just in case you are confused, Amazon has provided a few use cases here to give you a feel for the prices.
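To make the formula concrete, here’s a quick back-of-the-envelope calculator – my own sketch based on the published rates; the function name and the example numbers are purely illustrative:

```javascript
// AWS Lambda pricing sketch (rates as published at preview launch).
var PRICE_PER_MILLION_REQUESTS = 0.20;      // dollars per million requests
var PRICE_PER_GB_SECOND = 0.00001667;       // dollars per GB-second
var FREE_REQUESTS = 1000000;                // free tier: requests per month
var FREE_GB_SECONDS = 400000;               // free tier: GB-seconds per month

function monthlyCost(requests, memoryMb, durationMs) {
    // Compute usage is measured in GB-seconds: memory (GB) x duration (s) x invocations.
    var gbSeconds = requests * (memoryMb / 1024) * (durationMs / 1000);
    var requestCost = Math.max(requests - FREE_REQUESTS, 0) / 1000000 * PRICE_PER_MILLION_REQUESTS;
    var computeCost = Math.max(gbSeconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND;
    return requestCost + computeCost;
}

// 3 million invocations a month, 512 MB allocated, 200 ms per call
console.log(monthlyCost(3000000, 512, 200).toFixed(2)); // → 0.40
```

At that volume the 300,000 GB-seconds of compute stays inside the free tier, so you’d only pay for the 2 million billable requests – roughly 40 cents a month.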

 

Programming Model

The entry point for your Lambda function expects two parameters:

  • The event parameter contains all the data relating to the event – in this case an object being created in an S3 bucket.
  • The context parameter is used to notify Lambda that your function has completed, successfully or otherwise.
exports.handler = function (event, context) {

    var srcBucket = event.Records[0].s3.bucket.name;
    var srcKey    = event.Records[0].s3.object.key;

    getToken().then(function (token) {
        getFile(srcBucket, srcKey)
            .then(function (fileStream) {
                // sync the file to Dropbox here using the token
                context.done(null, "file sync complete");
            })
            .catch(function (error) {
                console.log(error);
                context.done(error); // report the failure back to Lambda
            });
    }).done();
};
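For reference, the S3 event document the handler receives is shaped roughly like this – trimmed down to just the fields used above, with made-up bucket and key names:

```json
{
  "Records": [
    {
      "s3": {
        "bucket": { "name": "my-source-bucket" },
        "object": { "key": "photos/cat.jpg" }
      }
    }
  ]
}
```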

Anatomy of a Deployment Package

A deployment package is a zip file containing your node.js code and any node module dependencies. As with any AWS resource, you have the usual fine-grained access controls via IAM roles and permissions.
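For example, a minimal package for the S3-to-Dropbox sync function might look like this (the file names are illustrative):

```
s3-dropbox-sync.zip
├── index.js          <- your handler file, exporting exports.handler
└── node_modules/     <- bundled dependencies, e.g. a Dropbox client library
```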

Deploying your Code

You can upload your deployment package straight from the AWS console or use the AWS CLI. If you are doing any serious development work, check out this grunt plug-in that lets you build and deploy your packages through the grunt task runner.
Author disclosure: I am a contributor to the grunt-aws-lambda project on github.

 

Monitoring and Health checks

Out of the box, Lambda comes with CloudWatch metrics including request duration, request count and execution error counts. It would be great to see the error handling functionality grow to include things like retry logic and different error notification options.
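In the meantime, nothing stops you rolling simple retry logic yourself inside a function. A minimal sketch – withRetry and flakySync are hypothetical names of my own, not part of the Lambda API:

```javascript
// Retry a promise-returning function up to `attempts` times before giving up.
function withRetry(fn, attempts) {
    return fn().catch(function (error) {
        if (attempts <= 1) {
            throw error; // out of retries – surface the original error
        }
        return withRetry(fn, attempts - 1);
    });
}

// Example: a flaky operation that fails twice before succeeding.
var calls = 0;
function flakySync() {
    calls += 1;
    return calls < 3 ? Promise.reject(new Error("transient failure"))
                     : Promise.resolve("synced");
}

withRetry(flakySync, 5).then(function (result) {
    console.log(result); // prints "synced" after two transient failures
});
```

Inside a real handler you’d call context.done from the final .then / .catch as usual.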

What I would like to see

  • Many more events exposed – this will surely come with time
  • Time based triggering of custom events
  • Reserved Instances for executing your lambda functions. This will satisfy customers that are worried about their code executing in a multi-tenanted environment and guarantee consistent performance and response times.
  • Support for more languages. Don’t get me wrong, I’m a big fan of node.js but AWS Lambda should not be limited to JavaScript only.
  • A great eco-system to share AWS recipes with other AWS customers.
  • Support for NPM at run time – currently you need to zip up and deploy all the node modules that your Lambda function relies on
  • Pre-canned retry and error handling patterns to choose from
  • Lambda is screaming for some great IFTTT integrations !
  • Some nice DevOps integration to enable AWS infrastructure deployments from a vanilla S3 bucket upload

Lambda is still in technical preview. If you’d like to get your hands dirty and have a play, head on over to the AWS Lambda Preview Site and register your details…

http://aws.amazon.com/lambda/preview/

 

Useful resources

Video – AWS Lambda Announcement at Reinvent

Github S3 to Dropbox file synchronization

AWS Official Documentation

AWS Lambda Walkthrough Command Line Companion

Lambdash – AWS Lambda Shell

Grunt AWS Lambda task runner


I’m the first Irishman to contribute to ASP.NET!

So I’m pretty confident that I’m the first Irishman to contribute to the next version of Microsoft’s ASP.NET.

It’s a pretty funny story: I attended NDC London last month, and David Fowler was on stage demonstrating some of the cool features of ASP.NET vNext when he noticed a small bug. I fixed it that night, submitted a pull request, and it’s been accepted into the main build.

It’s really great to see Microsoft embracing open source and the community.

ASP.NET vNext is a complete rewrite from the ground up and will run on both Linux and Windows. It’s a big move in the right direction from Microsoft. Gone are the days of vendor lock-in: you no longer need Windows to run ASP.NET, you no longer need a Microsoft IDE to write your code, and there are already a bunch of open source plug-ins coming for Sublime and your favorite text editors. It’s heavily inspired by frameworks like Sinatra and Express – out of the box it’s a minimalistic web hosting framework, and if you want more bells and whistles you need to pull them in as extra packages. It’s still a work in progress, but hopefully we’ll see ASP.NET vNext in the market in 2015.

Anyway, here’s my pull request in all its glory. Microsoft is definitely open for business!

https://github.com/aspnet/Hosting/pull/125

NDC London 2014 Highlights

Last week I traveled to London to attend the NDC 2014 developers conference. It was an excellent conference – great speakers, a really friendly crowd, well organised and run. I’d highly recommend it to any software developer. All the sessions were recorded and I’d expect them to appear on Vimeo shortly; I already have access to the recordings as a registered conference delegate.

Here’s a rundown of my favorite talks and highlights from the conference, in no particular order.

“Reactive Game Development For The Discerning Hipster” – Bodil Stokke

This was a real breath of fresh air! Bodil took to her keyboard and built out a working game from scratch using the RxJS reactive extensions library for JavaScript. There were ponies jumping around on the screen, avoiding obstacles and catching magic coins. Well done – it was really brave to get up there and code live on stage. We need more live coding at tech conferences. She showed how easy it is to compose an application using the asynchronous reactive library without a single callback in sight. Flying ponies, live coding and reactive extensions – woot!!


“ASM.js, SIMD, and JS as a compiled-language virtual machine” – Brendan Eich.

Firstly, wow – I can’t believe I came face to face with the creator of JavaScript! Brendan took us through a brief history of the language, from the early days back at Netscape all the way through to the present day with the ECMAScript 6 language enhancements and ahead-of-time compilation engines, and then beyond to ASM.js. It’s a subset of JavaScript which provides a model closer to C/C++ by eliminating dynamic type guards, boxed values, and garbage collection. The code can be compiled ahead of time and stored in offline storage, giving you fast start-up times with very good performance characteristics. Brendan shared preliminary benchmarks of C programs compiled to ASM.js that are within a factor of 2 slowdown over native compilation with Clang. Game developers like Unity are working with ASM.js as a way to get their games running on the Web without plug-ins. This also opens the door to many new types of games and mash-ups with everything running in the browser.

Check out Brendan Eich playing a rewrite of Doom with a mash-up inside it where another port of Doom is running in an iFrame. A game within a game – confused? I know I was!

“Practical Considerations for Microservices” – Sam Newman

This talk was very close to my heart – I’ve spent the last 12 months working with a large team on a successful project built from the ground up using microservices. Sam did a great job explaining what microservices are and the pitfalls to avoid when you adopt this style of architecture. Great common sense stuff here; it all resonated with me, and it’s great to see microservices becoming more mainstream. Sam talked about what you should standardise across a project – make sure you use consistent message exchange patterns, monitoring and deployment approaches, but don’t get hung up on how the microservices are built internally. I really enjoyed this session.

“Five (or so) Essential Things to know about ASP.NET vNext “- David Fowler and Damian Edwards

Damian Edwards and David Fowler demonstrated ASP.NET vNext with some fun code samples and slides that explained the brand new stack. The key word here is “new”. Shiny shiny new! It looks like a rewrite from the ground up to support Windows and Linux for the first time. The photo below shows the Windows components in blue and the new Linux stack in orange.

2014-12-04 16.27.25

The guys shared a lot of information in 60 minutes. Web.config files are now gone! In future you’ll be working with project.json files to store all your project dependencies. This is going to make it much easier to author .NET apps outside of Visual Studio. The guys demonstrated the cross-platform support by writing the code once and running it on both a Windows and a Linux VM – this got a great round of applause from the crowd!
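The dependencies end up in a project.json along these lines – this is my own illustrative sketch with beta1-era package names, not the exact file from the demo:

```json
{
  "dependencies": {
    "Microsoft.AspNet.Hosting": "1.0.0-beta1",
    "Microsoft.AspNet.Server.WebListener": "1.0.0-beta1"
  },
  "frameworks": {
    "aspnet50": {},
    "aspnetcore50": {}
  }
}
```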

Next it was onto dynamic recompilation – any changes to a dynamically compiled file will automatically invalidate the file’s cached compiled assembly and trigger a recompilation. The guys changed some C# controller code and the changes appeared with a browser refresh. This is a great step forward; in theory you can now eliminate the whole compile step from your deployment process. This is all possible because ASP.NET is now leveraging the Roslyn compiler-as-a-service.

It looks like there will be an awful lot of breaking changes with the new version. All the web server middleware has been cleanly separated out into separate NuGet packages. I’m guessing the middleware interfaces are very close to the OWIN interfaces. When you create a new ASP.NET project everything is turned off by default; you need to enable and pull down the middleware packages you need for your app. This is a really good thing – your web application will be much more lightweight without any additional bloat. The demos reminded me a lot of the minimalist node.js Express framework. Oh yeah, and Web API is now part of the same codebase!

“Lessons From Large AngularJS Projects” – Scott Allen

Scott Allen delivered an excellent talk on patterns and approaches to consider on your next AngularJS project. Things like error handling – how to set up an error handling service to handle all unhandled errors rather than relying on scope.emit and the evil $rootScope variable. He demonstrated some clean code to manage security tokens using http interceptors and decorators; I’ll definitely be digging into this one further. Finally he covered the $httpBackend mocking service that lets you program expectations for external http calls without having to go over the wire in an automated test.
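AngularJS’s $http interceptors are essentially a pipeline of functions that each get a chance to transform a request before it goes over the wire. Here’s a framework-free sketch of that idea in plain JavaScript – my own simplification of the pattern, not Angular’s actual API:

```javascript
// Each interceptor receives the request config and returns a (possibly modified) config.
var interceptors = [];

// An auth-style interceptor that attaches a (hypothetical) security token header.
interceptors.push(function (config) {
    config.headers = config.headers || {};
    config.headers.Authorization = "Bearer " + config.token;
    return config;
});

// Run the config through every registered interceptor in order.
function applyInterceptors(config) {
    return interceptors.reduce(function (cfg, interceptor) {
        return interceptor(cfg);
    }, config);
}

var request = applyInterceptors({ url: "/api/orders", token: "abc123" });
console.log(request.headers.Authorization); // prints "Bearer abc123"
```

The nice property is that the calling code never knows the token exists – exactly the separation Scott was advocating.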

“Kicking the complexity habit” – Dan North

Dan North gave a really funny talk on how we should all be avoiding unnecessary complexity at every point in the SDLC. Without rigorous care and attention, software quickly becomes messy and unmanageable. Even with the best intentions, entropy and complexity are a fact of life in growing applications. From your IDE to your automated build, from DDD’s ACLs to TDD and other TLAs, from backlogs to burn-ups, we are surrounded by props for coping with complexity. As appealing as these are, they also make us less likely to address the underlying problem of complexity itself. Dan believes you can learn to recognise these coping mechanisms for what they are, and he intends to set you on the path to simplicating your programming life. A great talk and very thought provoking.

Fixing the dastardly NuGet Error : “Unable to connect to remote server”

Package managers are awesome… when they work! Lately I’ve been having a lot of trouble running NuGet package restore commands on my dev machine. It’s all my own doing, but I thought I’d share the pain points with you, along with 5 simple steps to follow whenever you encounter the error that will get you back up and running…

Step 1 – (seriously) close Visual Studio, reboot your machine and try again

Step 2 – disable any HTTP debugging proxies on your machine

If you are running a local proxy debugging tool such as Fiddler or OWASP Zed Attack Proxy, you’ve broken NuGet! You’ll need to turn them off (in Fiddler: open Fiddler Options => Connections => uncheck “act as system proxy on startup”) and ensure that the HTTP_PROXY environment variable is removed from your system settings.

Step 3 – (seriously) close Visual Studio again, reboot your machine and try again

Step 4 – clear the package cache(s) on your machine

NuGet has deep pockets and lots of them !

  • Delete the contents of the folder

C:\Users\{username}\AppData\Local\NuGet\Cache

  • Delete the contents of the “packages” folder that sits alongside your visual studio .sln file

Step 5 – if you are using a custom package source, delete and recreate it

We run an internal NuGet server in the office. The trouble is, I work remotely from the other side of the world, so I need to connect over a VPN to access the server. This is way too slow for me, so I sync the contents of a local folder on my PC with the NuGet packages from the office and use a custom package source that points to that folder.


Occasionally NuGet has trouble dealing with this and I need to delete the Package source and recreate it…

Voilà – you are hopefully back in business !

Don’t get hung up on code coverage statistics

Automated unit tests are a critical asset on every software project. They give you the confidence to constantly refactor and evolve your designs as your code morphs and grows. Exception handling and boundary condition checks are often forgotten unless you take the time to think them through, and writing unit tests gives you the head space to do just that. Then there’s the poor sod who has to maintain your code in the future. The first place I go to understand how some code works is the unit tests.

Code coverage reports and statistics are fraught with danger. Coverage reports should not be used as a management tool to judge the overall quality of a solution. All they really tell you with certainty is how much code hasn’t been tested at all. Getting hung up on the code coverage percentage is self-defeating. If teams feel pressured into achieving a certain magic number, there is a real danger that quantity becomes more important than the quality of the tests, and you’ve missed the whole point. The focus should be on writing valuable unit tests that improve the quality and resilience of the overall solution; the code coverage metric is simply a side effect of this process.

A better way to use a code coverage report is as a conversation starter with your team. If one area of the code has low coverage, find out why: make sure you understand the functionality that lives there – maybe the code is trivial, or it’s better tested with an integration test. As the project iterations unfold, expect the total number of unit tests to climb steadily, but don’t get hung up on the numbers.

Microsoft open sources the .NET compiler platform “Roslyn”

One of the big announcements to come out of this week’s //BUILD conference was that Microsoft have now open sourced a preview of Roslyn – their next generation compiler for C# and VB.NET.

Roslyn exposes the entire source code parser, semantic analysis and syntax tree in high fidelity as an API. This opens the door for language geeks to roll their own code inspection and refactoring tools (like ReSharper). You can also leverage Roslyn to create brand new programming languages.

The original C# compiler must have been getting pretty unwieldy given it’s been around since 1998. I know the code I wrote in 1998 is beyond comprehension! I’m guessing that the primary motivation for rewriting the compiler was an internal one for Microsoft – a new, clean compiler will enable them to innovate on new language features and create new languages more quickly. With this in mind, the move to open source Roslyn is hugely significant: from now on the entire community (and not just Microsoft) will reap the benefits of the compiler as a service.

Roslyn has been a long time in the making – the last CTP was in 2012, and late last year it was announced that the daily builds of Visual Studio were being compiled using Roslyn. Here’s hoping that the final version makes it to the big time soon.

The .NET Compiler Platform can be found here.

What to expect in HTTP/2.0

It’s been almost 15 years since a new version of HTTP was released, but the wait is almost over. According to this Wikipedia link, HTTP/2.0 may be ready by the end of 2014. The good news is that all the semantics remain the same: there are no planned changes to HTTP methods and status codes, so we are all 200-OK with this (pardon the pun).

The most significant change planned is at the transport layer, with the introduction of HTTP multiplexing. A new binary framing layer in HTTP will allow the delivery of multiple requests and responses on a single shared connection without them blocking each other. Currently you need to open multiple connections to a server to enable parallelism, or revert to something like websockets. With multiplexing, all requests for resources will execute in parallel by default. Multiplexing has the potential to make many of the current performance tweaks in today’s web world obsolete. Will you still need to consider image spriting when multiple images can be requested in parallel at load time? Will we still need to concatenate CSS and JavaScript into a single file to reduce the number of page requests? Maybe the real way to speed up your website will be to tweak the priority assigned to each requested resource rather than worrying about requesting too many resources.

Part of me will definitely be sad to see the end of the text based protocol. I love the idea that under the covers something as powerful as a browser and a web server can figure everything out through a simple text based protocol.

Other things coming in HTTP/2.0 include

  • Server push – proactively pushing content into the browser cache ahead of time to speed things up
  • HTTP Header Compression

You can check out a draft of the specification here (warning: strong coffee required!)

http://tools.ietf.org/html/draft-ietf-httpbis-http2-10

Update – Feb 24th

The specification for HTTP/2.0 has just been finalised; more details are available here:

http://thenextweb.com/insider/2015/02/18/http2-first-major-update-http-sixteen-years-finalized/