NDC London 2014 Highlights

Last week I traveled to London to attend the NDC 2014 developers conference. It was an excellent conference – great speakers, a really friendly crowd, well organised and run. I’d highly recommend it to any software developer. All the sessions were recorded and I’d expect them to appear on Vimeo shortly; as a conference delegate I already have access to the recordings.

Here’s a rundown of my favorite talks and highlights from the conference, in no particular order.

“Reactive Game Development For The Discerning Hipster” – Bodil Stokke

This was a real breath of fresh air! Bodil took to her keyboard and built out a working game from scratch using the RxJS reactive extensions library for JavaScript. There were ponies jumping around on the screen, avoiding obstacles and catching magic coins. Well done – it was really brave to get up there and code live on stage. We need more live coding at tech conferences. She showed how easy it is to compose an application using the asynchronous reactive library without a single callback in sight. Flying ponies, live coding and reactive extensions – woot!!
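Bodil’s actual code isn’t reproduced here, but the composition style looks something like this toy sketch. Note that `Stream`, `map` and `filter` below are my own hand-rolled stand-ins for illustration, not the real RxJS API:

```javascript
// Toy event stream illustrating the RxJS composition style.
// A Stream is just a subscribe function; map/filter build new streams
// by transforming the values pushed through the chain.
function Stream(producer) {
  this.subscribe = producer; // producer(onNext) wires up the source
}
Stream.prototype.map = function (fn) {
  var self = this;
  return new Stream(function (onNext) {
    self.subscribe(function (v) { onNext(fn(v)); });
  });
};
Stream.prototype.filter = function (pred) {
  var self = this;
  return new Stream(function (onNext) {
    self.subscribe(function (v) { if (pred(v)) onNext(v); });
  });
};

// Simulated keypress source: pushes a few key codes synchronously.
var keyCodes = new Stream(function (onNext) {
  [37, 38, 39, 40, 13].forEach(onNext);
});

// Declaratively describe the game input: arrow keys mapped to pony moves.
var moves = [];
keyCodes
  .filter(function (code) { return code >= 37 && code <= 40; })
  .map(function (code) { return { 37: 'left', 38: 'jump', 39: 'right', 40: 'duck' }[code]; })
  .subscribe(function (move) { moves.push(move); });

console.log(moves); // [ 'left', 'jump', 'right', 'duck' ]
```

The point of the style is that the game logic reads as a declarative pipeline – nobody hand-wires a callback chain.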


“ASM.js, SIMD, and JS as a compiled-language virtual machine” – Brendan Eich.

Firstly, wow – I can’t believe I came face to face with the creator of JavaScript! Brendan took us through a brief history of the language, from the early days back at Netscape all the way through to the present day with the ECMAScript 6 language enhancements and ahead-of-time compilation engines, and then beyond to ASM.js. ASM.js is a subset of JavaScript which provides a model closer to C/C++ by eliminating dynamic type guards, boxed values, and garbage collection. The code can be compiled ahead of time and stored in offline storage, giving you fast start-up times with very good performance characteristics. Brendan shared preliminary benchmarks of C programs compiled to ASM.js that are within a factor of 2 slowdown over native compilation with Clang. Game developers like Unity are working with ASM.js as a way to get their games running on the web without plug-ins. This also opens the door to many new types of games and mash-ups with everything running in the browser.
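To get a feel for what the subset looks like, here’s a minimal asm.js-flavoured module – my own illustrative example, not one of Brendan’s:

```javascript
// A minimal asm.js-style module (illustrative). The "use asm" prologue and
// the |0 annotations tell a validating engine that the parameters and the
// result are int32, so the function can be compiled ahead of time with no
// dynamic type guards. In a non-validating engine it still runs as plain JS.
function PhysicsModule(stdlib, foreign, heap) {
  "use asm";
  function accelerate(velocity, delta) {
    velocity = velocity | 0;
    delta = delta | 0;
    return (velocity + delta) | 0;
  }
  return { accelerate: accelerate };
}

var physics = PhysicsModule(globalThis, {}, new ArrayBuffer(0x10000));
console.log(physics.accelerate(40, 2)); // 42
```

The same annotated code is what compilers like Emscripten emit when they translate C/C++ to JavaScript.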

Check out Brendan Eich playing a rewrite of Doom with a mash-up inside it where another port of Doom is running in an iframe. A game within a game – confused? I know I was!

“Practical Considerations for Microservices” – Sam Newman

This talk was very close to my heart – I’ve spent the last 12 months working with a large team on a successful project built from the ground up using microservices. Sam did a great job explaining what microservices are and the pitfalls to avoid when you adopt this style of architecture. Great common-sense stuff here; it all resonated with me, and it’s great to see microservices becoming more mainstream. Sam talked about what you should standardise across a project – make sure you use consistent message exchange patterns, monitoring and deployment approaches, but don’t get hung up on how the microservices are built internally. I really enjoyed this session.

“Five (or so) Essential Things to know about ASP.NET vNext” – David Fowler and Damian Edwards

Damian Edwards and David Fowler demonstrated ASP.NET vNext with some fun code samples and slides that explained the brand new stack. The key word here is “new”. Shiny, shiny new! It looks like a rewrite from the ground up to support Windows and Linux for the first time. The photo below shows the Windows components in blue and the new Linux stack in orange.

(photo: 2014-12-04 16.27.25)

The guys shared a lot of information in 60 minutes. Web.config files are now gone! In future you’ll be working with project.json files to store all your project dependencies. This is going to make it much easier to author .NET apps outside of Visual Studio. The guys demonstrated the cross-platform support by writing the code once and running it on both a Windows and a Linux VM – this got a great round of applause from the crowd!

Next it was on to dynamic recompilation – any changes to a dynamically compiled file will automatically invalidate the file’s cached compiled assembly and trigger a recompilation. The guys changed some C# controller code and the changes appeared with a browser refresh. This is a great step forward; in theory you can now eliminate the whole compile step from your deployment process. This is all possible because ASP.NET is now leveraging the Roslyn compiler as a service.

It looks like there will be an awful lot of breaking changes with the new version. All the web server middleware has been cleanly separated out into separate NuGet packages. I’m guessing the middleware interfaces are very close to the OWIN interfaces. When you create a new ASP.NET project everything is turned off by default. You need to enable and pull down the middleware packages you need for your app. This is a really good thing – your web application will be much more lightweight without any additional bloat. The demos reminded me a lot of the minimalist node.js Express framework. Oh, and Web API is now part of the same codebase!

“Lessons From Large AngularJS Projects” – Scott Allen

Scott Allen delivered an excellent talk on patterns and approaches to consider on your next AngularJS project. Things like error handling – how to set up an error handling service to handle all unhandled errors, rather than relying on $scope.$emit and the evil $rootScope variable. He demonstrated some clean code to manage security tokens using http interceptors and decorators – I’ll definitely be digging into this one further. Finally he covered the $httpBackend mocking service, which lets you program expectations for external http calls without having to go over the wire in an automated test.
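Scott’s code was Angular-specific, but the decorator idea behind http interceptors is easy to sketch framework-free. All names below are mine, invented for illustration:

```javascript
// The decorator pattern behind Angular's $http interceptors: wrap the raw
// request function so every outgoing call gets the security token attached
// in one place, instead of sprinkling header code through the whole app.
function withAuthToken(request, getToken) {
  return function (config) {
    config.headers = config.headers || {};
    config.headers['Authorization'] = 'Bearer ' + getToken();
    return request(config);
  };
}

// A fake transport so the sketch is self-contained: it just echoes its config.
function fakeTransport(config) { return config; }

var secureRequest = withAuthToken(fakeTransport, function () { return 'abc123'; });
var sent = secureRequest({ url: '/api/contacts' });
console.log(sent.headers['Authorization']); // Bearer abc123
```

In Angular the same shape is registered once with `$httpProvider.interceptors`, and $httpBackend plays the role of the fake transport in tests.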

“Kicking the complexity habit” – Dan North

Dan North gave a really funny talk on how we should all be avoiding unnecessary complexity at every point in the SDLC. Without rigorous care and attention, software quickly becomes messy and unmanageable. Even with the best intentions, entropy and complexity are a fact of life in growing applications. From your IDE to your automated build, from DDD’s ACLs to TDD and other TLAs, from backlogs to burn-ups, we are surrounded by props for coping with complexity. As appealing as these are, they also make us less likely to address the underlying problem of complexity itself. Dan believes you can learn to recognise these coping mechanisms for what they are, and he set out to put us on the path to simplicating our programming lives. A great talk and very thought-provoking.

Hybrid on-premise / cloud architectures with the Azure Service Bus relay

In this post I’ll take you through the steps involved in exposing your on-premise database and .NET code as a simple RESTful service using the service bus relay binding. I’ve also built out an ASP.NET MVC client to consume the service.

All the source code is available here



Recently I’ve been delving into the world of hybrid on-premise / cloud architecture. When it comes to start-ups and companies building out new products, the decision to build a cloud solution is often a no-brainer. But what about companies that have a significant investment in existing desktop / on-premise solutions? As much as we’d all love to, it’s not always possible to throw everything out and start again from scratch – enter the service bus relay binding!

I work for a large ISV that has a significant investment in an on-premise line-of-business accounting system. The product has taken close to a decade to develop. Even with an aggressive online strategy it will take years to migrate this system to a true cloud solution. The service bus relay binding gives us the ability to quickly expose existing business logic and data to the cloud with only a small development overhead.

“I feel that Microsoft haven’t done themselves justice selling the service bus relay to the community”

Personally, I feel that Microsoft haven’t done themselves justice selling the service bus relay to the community. It’s a really clever piece of technology and I’m surprised it isn’t being adopted more widely. I put the following solution together to demonstrate to the business how they could surface on-premise reports in the cloud.


diagram – wrap existing business logic / stored procedure calls in a RESTful API exposed as a webHttpRelayBinding endpoint.

1. Creating a service namespace on Windows Azure

a) Log on to the Windows Azure Management Portal.

b) Click Service Bus, then Create, and enter the name of your service bus namespace (e.g. “cloudburst”). For the best performance you should ensure your RESTful client is also deployed to the same location – in this case US-West.

c) Once created, click on Access Key and take note of the default issuer (“owner”) and default key (“Rd+I2mw7CaJ4pdJ7faf4yZKzI92PkYKVnE3qAA7QOIc=”). You’ll need to enter these into the app.config file in the OnPremise.ServiceHost project to enable the service host to burst out to the service bus.
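For reference, the relevant app.config section looks something like this – a sketch only; the key names here follow the common relay samples of the era rather than being copied from the actual OnPremise.ServiceHost file:

```xml
<!-- Sketch: service bus relay credentials for the on-premise service host.
     Key names are illustrative, not copied from the real project. -->
<appSettings>
  <add key="ServiceNamespace" value="cloudburst" />
  <add key="IssuerName" value="owner" />
  <add key="IssuerSecret" value="Rd+I2mw7CaJ4pdJ7faf4yZKzI92PkYKVnE3qAA7QOIc=" />
</appSettings>
```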

2. On-premise service host

The Windows Azure Service Bus NuGet package pulls all the service bus dependencies into your project. I’ve used the WebServiceHost class to expose a RESTful service definition.

string serviceNamespace = "cloudburst";
Uri address = ServiceBusEnvironment.CreateServiceUri("https", serviceNamespace, "reports");
WebServiceHost host = new WebServiceHost(typeof(ReportingService), address);
host.Open(); // opens the listener and registers the endpoint with the relay

I’ve configured the relay binding to use the access key when establishing the connection to the service bus.

3. Exposing a RESTful API

I’m exposing an API to query contact information from a legacy database. The RESTful API takes the following format:

GET https://myobconnector.servicebus.windows.net/Contact/ – all contacts
GET https://myobconnector.servicebus.windows.net/Contact/{id} – single contact

The WebGet attribute allows me to configure a JSON response format and to overlay a logical RESTful API over the WCF service contract.

    [ServiceContract(Name = "ContactContract", Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")]
    public interface IContactService
    {
        [WebGet(ResponseFormat = WebMessageFormat.Json, UriTemplate = "/{id}")]
        ContactEntity GetContact(string id);

        [WebGet(ResponseFormat = WebMessageFormat.Json, UriTemplate = "/")]
        List<ContactEntity> GetAllContacts();
    }


4. Beware! There be dragons when running in a secured network!

Once the service host is running, your data is exposed as a RESTful endpoint. For the purposes of this code sample I haven’t secured the client endpoint and I’m using a plain WebHttpBinding. This requires that HTTP ports 80 and 443 are open for outbound traffic on your network. If you are running in any sort of secured corporate environment you’ll likely run into firewall problems. This link will point you on the right path. This is one area where the documentation lets you down slightly. If you just read the brochures you’ll be led to believe that the relay binding can cope with NAT devices and internal firewalls, but if your network administrator is doing their job properly you’ll likely need to get some firewall rules put in place.

5. Consuming the API from a REST client (ASP.NET MVC)

Consuming the RESTful services is pretty straightforward – please refer to Cloud.App for a working solution. Newtonsoft’s free JSON serializer does a pretty reasonable job of hydrating your JSON payloads back into .NET types.

        public ContactEntity Get(int id)
        {
            string url = "https://cloudburst.servicebus.windows.net/contact/" + id.ToString();

            using (WebClient serviceRequest = new WebClient())
            {
                string response = serviceRequest.DownloadString(new Uri(url));
                var data = JsonConvert.DeserializeObject<ContactEntity>(response);
                return data;
            }
        }

6. Benchmarking, performance & latency

Work in progress – I’m working on a simple test harness to benchmark performance and latency when sending different sized payloads over a relay binding. From running the MVC REST client it looks like establishing the channel can be expensive (approx. 1 second) the first time, but subsequent service calls are pretty responsive. I’ll be publishing some test results soon. The plan is to build a simple ping service and instrument the timings.


I’m sure you’ll agree that it’s a pretty painless process to pick up your existing .NET code and start to expose it using the relay binding. There weren’t a lot of examples out there, so I decided to write this post and open-source the code. This hybrid on-premise / cloud architecture has lots of possibilities in the real world.

It offers a pretty compelling alternative for companies that are reluctant to store their data in the cloud because of data sovereignty concerns (remember, the data is still stored on-premise), or for applications that need to surface some of their functionality to the cloud.

May the source be with you!


TechRepublic Podcast

Earlier this week I was a guest on TechRepublic’s “The Upside” podcast, where I talked to Chris Duckett about the Mass Mobile Experiment – an open-source collaboration platform I developed with Simon Raik-Allen from MYOB. We recently showcased the technology at Tech Ed Australia.

“One of the most interesting talks at this year’s Australian TechEd event was the Mass Mobile Experiment. The platform’s Pong implementation was used to entertain attendees before the conference’s keynote.”

We chat about the inspiration for the project – Loren Carpenter’s crowd-swarming Pong experiment from 1991 – and we geek out about TypeScript and node.js.

Here’s a link to the podcast


Enjoy!

MYOB Neo4J Coding Competition

Last week marked the end of the MYOB Neo4J coding competition. This was an internal competition for the development team in the Accountants Division of MYOB: to develop a customer relationship management system for accountants using node.js and Neo4J. MYOB is one of the largest ISVs in Australia, and the team in the Accountants Division is focused on developing line-of-business applications for accounting practices.

A coding competition with a difference!

I wanted to have a level playing field for the competition, so what better to throw at a bunch of Microsoft developers than a Neo4J, Node.js and Heroku challenge! The competition ran for 8 weeks and the challenge was to build an online CRM system that ingested a bunch of text files representing data from a typical accounting practice. The business domain was very familiar to the team but the technologies were all new.

To add another twist, points were awarded to the people within the team who made the biggest community contributions over the 8 weeks (MYOB ‘brown bag’ webinar sessions, Yammer discussion threads and gists on GitHub). I wanted this to be a very open, open-source competition!

Why Neo4J?

When you dig deeper and analyse the data that an accounting practice uses, it’s all based around relationships – an accounting practice has employees, employees manage a bunch of clients, and these clients are often related to each other (husband and wife, family trust etc.). The competition gave the team a chance to dip their toes into the world of graph databases and to see how naturally we could work with the data structures.

And the winner is Safwan Kamarrudin!

I’m pleased to announce that Safwan Kamarrudin is the winner and proud owner of a new iPad! Safwan’s solution, entitled “Smörgåsbord”, pulled together some really cool node.js modules including the awesome node-neo4j, socket.io and async. Safwan made a massive contribution to the competition community through Yammer posts, GitHub gists and brown-bag sessions here in the office.

Accountants Division program manager Graham Edmeads presenting Safwan with his prize!

An interview with the winner!

Qn – So where did you come up with the name “Smörgåsbord”, are you a big fan of cold meat and smelly cheese?

I chose the name because the competition asked contestants to use a smorgasbord of technologies. Plus, I thought it would be cool to have umlauts in the name.

 Qn – Where can we find your solution on GitHub?


 Qn – Complete this sentence – Neo4J is completely awesome because ….

Data is becoming more inter-connected and social nowadays. While “relational” databases can be used to build such systems, they are definitely not the right tool for the job due to their one-size-fits-all nature (despite the name, relational databases are anything but relational). Modelling inter-connected data requires a database that is by nature relational and schema-free, not to mention scalable! And in the land of graph databases, in my opinion there is no database technology that even comes close to Neo4J in terms of its features, community and development model.

 Qn – What in your opinion is the biggest challenge to wrapping your head around Graph database concepts?

For someone who is more used to relational databases, the differences between nodes and tables take some getting used to. In a graph database, all nodes are essentially different and independent of each other. They just happen to belong to some indices or be related to other nodes.

This also relates to the fact that nodes of a similar type may not have a fixed schema, which can be good or bad depending on how you look at it.

Another subject that I had to grapple with was whether it makes sense to denormalize data in Neo4J. In a NoSQL database, normalization has no meaning per se. In some cases, data normalization even negates the benefits of NoSQL. Specifically, many NoSQL databases don’t have the concept of joins, so normalizing data entails making multiple round trips to the database from the application tier, or resorting to some sort of map-reduce routine, which is inefficient and over-engineered. Moreover, normalization assumes that there’s a common schema shared between different types of entities, and having a fixed schema is antithetical to NoSQL.

 Finally a word of thanks!

I’d like to say a huge thanks to Jim Webber, Chief Scientist at Neo Technology, for helping me launch the coding competition. Jim was struck down with chicken pox just hours before the competition was launched, but he still managed to join me online to launch it and take the team through the infamous Dr Who use case. You are a legend, Jim – many thanks!

May the source be with you!


How an architect can build an exceptional software development team

I’ve had the pleasure of hiring and growing an awesome team of developers at MYOB Australia. In this post, I share my ideas for how  an architect can build an exceptional development team.

Hire craftsmen not programmers

Craftsmen take pride in their code down to the very last detail. They watch over the code and fix up the broken windows as they come across them. When you get the opportunity to hire new people, don’t waste it with mediocrity. Dig deep in the interviews, understand how the candidate solves problems and, if possible, watch them code. If you have any doubts then keep on looking.

Hire great communicators

Great communication makes a team succeed. Make sure you hire candidates who prefer open, honest conversations over lengthy email trails. You want developers who are comfortable at the whiteboard and can explain their ideas clearly. Encourage everyone on the team to have a voice and to respect each other’s opinions. As the architect, your role is to communicate the designs to everyone, over and over again.

No big upfront architecture and design

Big upfront architecture fails – end of story. As the architect you need to set the vision and technical direction for the team, but you must allow the designs to evolve and fall out naturally as your guys code out the features. Empower the entire team to make architectural decisions and guide them along the path.

Create a culture of continuous learning

Lunch time brown bag sessions are a fun and social way for your team to learn. Encourage everyone to present a session, don’t stick to the same presenters. We’re in the middle of a “20/20 brown bag series” at MYOB – 20 brown bag sessions in 20 weeks covering a wide range of topics, not just programming.

Try an internal social networking tool

Yammer is a great tool for helping like-minded people connect and share ideas. Use some gentle persuasion to get everyone yammering – once people get it, you can step back and watch the ideas flow.

Organise a coding competition

A coding competition is another fun way to get your team to think outside the square. We are pretty much an all-Microsoft shop, so I threw down the gauntlet and organised a challenge where the guys had to learn a whole bunch of new open-source tools – Neo4J, Node.js and Heroku.

Be approachable – all the time

No matter how busy your day is, if you are at your desk and someone approaches you for help make time for them.

Hope this helps!


AWS S3 and Azure blob storage compared – same same but different

At first glance Amazon’s Simple Storage Service and Windows Azure blob storage appear to offer the same functionality, but there are a few subtle differences between these two storage abstractions in the cloud. In this post I’ll explain what the differences are.

Here’s a quick refresher on the terminology

Amazon Simple Storage Service (S3)

In Amazon speak, every object stored in S3 is contained in a top-level bucket. Bucket names are unique across all of Amazon S3. Within a bucket you can use any names you like for your objects. Although the hierarchy is only two levels deep, you can fake deeper object graphs using naming prefixes. Plenty of folks store static content for their websites in S3 and back the data store with a CDN to ensure fast delivery to the browser. S3 is great for WORM data (write once, read many times).
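The prefix trick is easy to picture – here’s a toy sketch with made-up keys (the real S3 API does the same thing server-side with prefix and delimiter parameters on the list operation):

```javascript
// S3 and blob storage are flat: bucket/container + key. Deep "folders" are
// faked by embedding a path in the key and filtering on a prefix at query
// time, which is what a prefix listing does for you server-side.
var keys = [
  'images/logo.png',
  'images/products/widget.png',
  'images/products/gadget.png',
  'reports/2014/january.pdf'
];

function listByPrefix(allKeys, prefix) {
  return allKeys.filter(function (k) { return k.indexOf(prefix) === 0; });
}

console.log(listByPrefix(keys, 'images/products/'));
// [ 'images/products/widget.png', 'images/products/gadget.png' ]
```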

Azure’s blob storage

In Azure speak, objects are stored in blob storage. Every object stored in blob storage is associated with one top-level container. Container names only need to be unique within your storage account – it’s the storage account name that is unique across all of Azure. Within a container you can use any names for your objects. Again, it’s common to store static content here and back it with a CDN to serve out static data to your website.

What they have in common

  • cheap, durable, reliable storage
  • REST API to get at the data
  • Hierarchy 2 levels deep
  • Versioning
  • ACLs to lock down who can see what
  • Multi-part upload

The main differences

Feature                                          Blob Storage   S3
BitTorrent P2P                                   N              Y
(optional) delete item after a specified time    N              Y
Encrypt data at rest                             N              Y

hope this helps!


Benchmarking node.js on Windows Azure


I’ve been working on a pretty cool side project that I presented at Tech Ed Australia 2012 – “The Mass Mobile Experiment!”

It’s a generic collaboration framework that enables lots of people (say, at a conference) to enjoy a shared experience in real time using their mobile phone, tablet or other internet device. At my Tech Ed session I had over 100 people playing a single game of Pong on the big screen, using their mobile phones to control the paddles in real time! The platform is built using node.js and WebSockets (socket.io), and it supports a plug-in architecture enabling other games / experiences to plug in pretty easily. So far I’ve got a multi-player quiz game, Pong, a political worm and an interactive presentation.

Conceptual Architecture – MME ( Mass Mobile Experiment)

  • Client ( mobile phone) sends data to server over long running websocket
  • Server (node.js) aggregates the data and sends to the playfield over websockets
  • Playfield (browser on a big screen) runs the game loop and processes the aggregated data from the server.
  • Control Panel allows you to change games and throttle the client and server
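The interesting part is the aggregation step on the server. Conceptually it looks something like this toy sketch – the real platform does this over socket.io on a timer, and the names below are mine:

```javascript
// Toy version of the MME aggregation step: many clients send paddle
// positions; the server averages them per team each tick and forwards a
// single value to the playfield, so the playfield's load stays constant
// no matter how many phones join the game.
var inputs = []; // buffered messages: { team: 'left'|'right', position: 0..100 }

function receive(msg) { inputs.push(msg); }

function tick() {
  var totals = {}, counts = {};
  inputs.forEach(function (msg) {
    totals[msg.team] = (totals[msg.team] || 0) + msg.position;
    counts[msg.team] = (counts[msg.team] || 0) + 1;
  });
  inputs = []; // drain the buffer each tick
  var frame = {};
  Object.keys(totals).forEach(function (team) {
    frame[team] = totals[team] / counts[team]; // average paddle position
  });
  return frame; // what gets pushed to the playfield over the websocket
}

receive({ team: 'left', position: 30 });
receive({ team: 'left', position: 50 });
receive({ team: 'right', position: 80 });
console.log(tick()); // { left: 40, right: 80 }
```

Averaging per tick is what made the crowd-controlled paddles feel smooth even with hundreds of phones sending input.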

Benchmarking on Windows Azure

In order to load test the platform I built yet another game! This time I got 200 people in the office to “play the game”. It involved leaving a web page open for 20 minutes while I stepped up the number of websocket connections on each browser and started to send data to the server.

  • The client connects to the server over websockets and sends typical game data on a timer
  • The control panel broadcasts messages to all clients telling them to step up the number of websocket connections to the server and to increase the send frequency. In effect a single browser is then multiplexing several websockets to the server. I found that a single browser can easily multiplex 10 different websocket connections without slowing down the client-side JavaScript code.
  • The server collects interesting metrics such as requests per second and CPU and sends this to the playfield
  • The playfield ( load test app) listens for data over another websocket and plots the data in real time


  • Node.js server running on a medium-sized worker role on Windows Azure: 3.5 GB RAM, allocated bandwidth 200 Mbps
  • 2000 concurrent WebSockets (multiplexed over 200 different laptops in the office)
  • Requests per second: 8500
  • Memory usage on Azure: 76%
  • Message send frequency from each client: 4 messages per second

Check out this screenshot from the Azure management portal – I managed to push the CPU to 89% at 11:40 when the system ramped up to 2000 concurrent users!


  • Node.js running on an Azure worker role scales really nicely. In my case a medium-sized VM scaled to 2000 concurrent WebSockets processing 8500 requests per second. Not a single message was lost between the browser and the server, even when the server was under stress!

Why you should distrust this post!

  • The measurements were taken using node.js v0.8.9 and socket.io v0.9; these technologies are evolving rapidly.
  • For the mass mobile experiment the node server is pretty simplistic, it aggregates data and sends it to the playfield. This may not represent what your application is doing.

All of the results, along with all of the source code, are open-sourced here on GitHub.

May the source be with you!