Guiding Principles for an Evolutionary Architecture



Back in the dark days of waterfall projects we invested heavily in big upfront architecture and design. Systems were well thought out and documented within an inch of their lives, all before anyone cut a single line of code. There was little or no room for change once things were in flight. Twelve months later we’d go through a horrendous regression test cycle and finally release a product, only to find that our clients’ needs had evolved and we’d ended up building something that nobody wanted. But hey, at least it was well architected!

Thankfully, those days are gone, my friend, and big upfront design has no place in Agile. With Agile, teams still complete architectural work, but it’s done very differently. Instead of a big upfront design where decisions are made about the architectural needs of an entire system, Agile teams take an incremental and evolutionary approach. However, designing, building, and maintaining a robust architecture doesn’t come for free. You need to maintain focus on architecture throughout the entire process and stick to some guiding principles.

Evolutionary architecture gives us most of the benefits of enterprise architecture without the problems caused by trying to accurately predict the future. Here are a number of useful techniques and principles that will help you to maintain a clean architecture through the lifetime of your product.

“The best architectures, requirements, and designs emerge from self-organizing teams.” – Agile Manifesto Principles

Last responsible moment

The last responsible moment isn’t about encouraging procrastination; it says that you should delay decisions as long as you can, but no longer. The longer you can afford to wait to make a design decision, the more information you’ll have on hand and the better placed you are to get it right. Decisions made too early in a project are hugely risky. These decisions often result in work that has to be thrown away. Even worse, those early decisions can have crippling and unavoidable consequences for the future of the project.

Early in a project you should make as few binding decisions as you can get away with. Start small with a few critical stories but aim for working end-to-end software. Establish a skeleton architecture or ‘steel thread’ with an end-to-end data flow, and establish your test approach. The driving technical requirements for a system should be identified early to ensure they are properly handled in subsequent designs and implementations. Choices like the programming language and the database technology need to be made at the start of a project, but decisions like the right level of service decomposition will emerge later on. The last responsible moment makes sense for decisions which are costly to reverse, and it will keep you from over-architecting and designing for functionality that may never arise.

Establish lightweight documentation

The Agile Manifesto prefers “working software over comprehensive documentation”. This doesn’t mean that you can ditch documentation entirely. Technical documentation is valuable, but it needs to be kept at the right level if it is to have any chance of being kept up to date and surviving the duration of your project. Documentation becomes much more valuable when it takes on a collaborative nature. Wikis are a great way to socialise designs and to communicate both within your team and externally. The act of writing or sketching out a diagram helps you to think something through properly and increases your own understanding. You should focus on keeping things lightweight and relevant – make sure there is value in what you are documenting and make sure there is an audience that finds it useful. In general, system diagrams, design decisions, operational instructions and requirements should be documented to help your team understand what they are building and to help the others that come along after them.

Establish continuous integration and automation right from the get-go

Nothing builds or destroys agility more than a team’s commitment to continuous integration. Automated tests give you a safety net that lets you refactor and change your code quickly, knowing that you haven’t broken anything.

In order to evolve an architecture over time you need to maintain and grow all your automated tests – unit, integration, contract and end-to-end tests. Don’t get hung up on code coverage metrics, but pay attention to tests that fail frequently; these will point you to the problem areas in your code base that need your attention.
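
As a trivial sketch of the kind of fast, dependency-free unit test that forms the base of that suite (the function and data here are made up for illustration):

var assert = require('assert');

// a pure function with no I/O is cheap to test on every commit
function outstanding(tasks) {
  return tasks.filter(function (t) { return !t.complete; }).length;
}

assert.strictEqual(outstanding([{ complete: false }, { complete: true }]), 1);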

Without adequate automated tests your architecture won’t evolve freely. Making wholesale changes and restructuring existing code becomes too risky, and you’ll end up in a cycle of maintaining and extending poor code that compromises your architecture.

End-to-end tests are the slowest to run and the most brittle to maintain. When choosing the scenarios for an end-to-end test you should focus on the key workflows through your software. As the system grows, don’t be afraid to throw away some of these tests and replace them with more valuable scenarios. Your tests, like your architecture, need to evolve, and sometimes that means they need to be deleted.

Simple code and simple designs are always best

Writing code is easy; writing code that is easy for others to understand is not so simple. Striving for simplicity when building complex systems should always be front of mind. Don’t let team members work alone – pair programming and code reviews help to make code more understandable and readable. When it comes to design, the easier things are to understand the better. Simple code and simple designs take less time to understand, have fewer bugs, and are easier to modify. Beware of shiny new frameworks and abstractions for abstraction’s sake!

Remember Conway’s law

Many years ago, Melvin Conway wrote a paper proposing that the way an organization is structured has a strong impact on how a system is created. He wrote:

“Any organization that designs a system (defined more broadly here than just information systems) will inevitably produce a design whose structure is a copy of the organization’s communication structure.”

It is all too easy to focus purely on the technical challenges when architecting a system, but you need to think about your organisation structure and assign the right work to the right teams. Having a single team work on and own an individual service is far better than having joint ownership across teams. Coordination and communication across teams is hard; all too often people find ways to avoid it altogether, and you end up with large, hard-to-maintain codebases with duplicate functionality added per team.

Finally, a word on robustness

Postel’s law, also known as the robustness principle, is a great guideline when it comes to designing message contracts:

“Be conservative in what you send, be liberal in what you accept”

As soon as you publish an API, either internally or externally, other systems will start to rely on it. Once it is in the wild your APIs become much harder to change. When it comes to exposing data you should expose the bare minimum and no more, and you should version all message contracts. Conversely, when consuming an API it’s best to ignore the parts of the message you aren’t interested in; that way you don’t care if they change over time.
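
To illustrate the consuming side, here’s a minimal JavaScript sketch of a tolerant reader (the payload shape is made up):

// be liberal in what you accept: pick out only the fields we need and
// deliberately ignore the rest, so new or changed extras won't break us
function toCustomer(json) {
  var payload = JSON.parse(json);
  return {
    id: payload.id,
    name: payload.name
  };
}

console.log(toCustomer('{"id": 1, "name": "Jane", "someNewField": true}'));
// -> { id: 1, name: 'Jane' }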

Evolutionary architecture works, but only when you maintain focus and continually work through these guiding principles.

Hope this helps

Blue-Green Deployments with Azure Web Apps

Blue-green deployment is a technique that reduces the downtime and risk of releasing new versions of an application by running two identical production environments called blue and green. At any time only one of these environments is live, with the live environment serving all production traffic. For example, say blue is currently live and green is idle. To release a new version, you deploy and test in the green environment. When testing is complete you switch the routing so all incoming requests now go to green instead of blue. Green is now live and blue is idle. Now wash, rinse and repeat.

There are many benefits to adopting this technique – you can release to production as often as you like with zero downtime, and you can easily roll back your changes and swap from green to blue if need be.

Azure Web Apps provides an out-of-the-box solution for blue-green deployments through the use of deployment slots. When you provision a web application you can create up to 4 additional deployment slots. Deployment slots are actual live web apps with their own host names; you can push code to individual deployment slots, and the content and configuration elements can be swapped through a simple API call. Azure manages all the traffic redirection when you swap your blue-green slots and guarantees that no requests are dropped during the swap.

Azure Resource Manager allows you to define all the resources that make up your deployment environment in a single template file. Once defined, you can deploy, monitor and manage these resources as one atomic group.

By combining web apps, deployment slots and Azure Resource Manager it’s pretty easy to build a continuous blue-green deployment pipeline.

I’ve published a set of scripts and templates to enable blue-green deployments for a WebAPI backed by an Azure SQL database; you can grab all the code here.


Create a new environment

Our stack consists of a Web App and an Azure SQL database. These are defined in the file webapi-deploy.json. All service-specific settings are externalised and defined in a separate parameters file, shown below. To create a brand new stack, invoke the New-AzureResourceGroupDeployment cmdlet (a sketch of the call follows the parameters file).


 "$schema": "",
 "contentVersion": "",
 "parameters": {
 "siteName": {
 "value": "CustomerService"
 "hostingPlanName": {
 "value": "NewServiceAppPlan"
 "siteLocation": {
 "value": "North Europe"
 "sku": {
 "value": "Standard"
 "serverName": {
 "value": "mydatabaseserver"
 "serverLocation": {
 "value": "North Europe"
 "administratorLogin": {
 "value": "dbuser"
 "administratorLoginPassword": {
 "value": "your-password"
 "databaseName": {
 "value": "CustomerDatabase"

Once the web app has been created you can add a second deployment slot:

function Add-StagingSlot([string]$sitename, [string]$location)
{
    New-AzureWebsite -Name $sitename -Location $location -Slot "Stage"
}

Deploy to a staging slot and hot swap

The Publish-AzureWebsiteProject cmdlet lets you deploy a .NET webdeploy package to a named deployment slot on your running Web App. When you are ready to hot swap the blue and green environments, call Switch-AzureWebsiteSlot.

function Publish-Website([string]$name, [string]$package)
{
    Switch-AzureMode AzureServiceManagement
    Write-Host "publishing $name..."
    Publish-AzureWebsiteProject -Name $name -Package $package
}

function Stage-Website([string]$name, [string]$package)
{
    Switch-AzureMode AzureServiceManagement
    Write-Host "publishing $name to the staging slot..."
    Publish-AzureWebsiteProject -Name $name -Package $package -Slot "Stage"
}

function Swap-Website([string]$name)
{
    Switch-AzureWebsiteSlot -Name $name -Force
}
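
Putting those functions together, a release becomes two calls – push the package to the staging slot, smoke test it, then swap (the site and package names are illustrative):

Stage-Website -name "CustomerService" -package ".\CustomerService.zip"
# run smoke tests against the staging slot, then promote it to live
Swap-Website -name "CustomerService"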

A word on database changes

Handling database changes is the most complex part of a blue-green deployment. The simplest approach is to have your blue and green applications share the same database. You need to ensure that all schema changes are backward compatible with both versions of the running application. The simplest way to run the database upgrade scripts is to bootstrap them to the Application_Start method in your WebAPI project. FluentMigrator is a great open source tool which allows you to define your database schema changes in .NET code.
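
For example, a backward-compatible change would add a defaulted column so both the blue and green versions of the app keep working against the same schema. A hypothetical FluentMigrator migration (C#, since FluentMigrator is a .NET library; the table and column names are made up):

using FluentMigrator;

[Migration(201506010)]
public class AddCompletedColumnToTask : Migration
{
    public override void Up()
    {
        // additive and defaulted, so the older version of the app keeps working
        Alter.Table("Task").AddColumn("Completed").AsBoolean().WithDefaultValue(false);
    }

    public override void Down()
    {
        Delete.Column("Completed").FromTable("Task");
    }
}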

That’s pretty much it. One last point about resource groups: when you are done with your stack and you’d like to tear everything down, you can delete all the resources in your resource group by calling Remove-AzureResourceGroup.
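
A sketch, assuming the same illustrative resource group name as above:

Switch-AzureMode AzureResourceManager

# deletes the resource group and every resource inside it
Remove-AzureResourceGroup -Name "CustomerServiceGroup" -Force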

Useful Reading

Martin Fowler on Blue Green Deployments

Quickstart Resource Manager Templates

Hope this helps!

Build a Hybrid Application with the Ionic Framework and Microsoft Azure Mobile Services


In this post I’ll show you how to create a hybrid mobile application using the open source Ionic framework and how to integrate the application securely with a Microsoft Azure Mobile Services backend. By integrating with Azure Mobile Services you get to connect to reliable cloud storage via a simple JavaScript SDK.

The complete source code is available on GitHub – feel free to fork the code or use it however you like.

Ionic Framework

Ionic is an open source JavaScript and CSS framework for building cross-platform HTML5 mobile applications. It’s built with Cordova and AngularJS and it comes with a nifty command line interface that lets you build iOS and Android apps. My experience working with Ionic has been really good. It is very easy to get started with, and there is a strong community behind it with about 200 contributors to the codebase. Work is well underway to remove the dependency on AngularJS, which means you’ll be able to plug the Ionic framework into your JavaScript framework of choice in the near future.

Microsoft Azure Mobile Services

Microsoft Azure Mobile Services is a mobile backend as a service (MBaaS) offering from Microsoft. Almost every mobile application needs to store some data and deal with push notifications, service monitoring and authentication. With Microsoft Azure Mobile Services you get all of this as a “platform as a service” offering. You don’t need to spin up a single server to go live with a mobile app; simply provision yourself a mobile services backend in Azure and you have access to all of these features along with a fully programmable Node.js or .NET backend.

The todo app

I’ll take you through the key steps and code needed to build a simple todo mobile app. Users will authenticate with a Google+ account. Once logged in they’ll be able to create new tasks and maintain a list of outstanding tasks.

Creating the ionic app

First up, install Apache Cordova and the Ionic framework via npm:

$ npm install -g ionic
$ npm install -g cordova

Next, create a new Ionic application using the sidemenu starter template:

$ ionic start Azure-Ionic-App sidemenu

That’s it! If you run the command “ionic serve” a browser will open running your application. The application we are building consists of three simple views and controllers – login, add task, and view all tasks.

Implementing Google Authentication

Firstly, create a new Azure mobile service via the Azure portal. If you don’t have an account you can sign up for a free trial here. You’ll get more than enough credits with a trial account to develop and test an application.

Once your mobile service is provisioned, take note of the mobile service URL on the dashboard page; this is the unique API endpoint for your mobile app.


The process of registering your app for a Google login is already well documented here. You’ll need to log in to the Google developer console, create a new web project and register for “Google+ API” OAuth authentication. Then, in the credentials tab, enter the authorization redirect URL for your application. This will be “your-mobile-services-url/login/google”.

Now to associate your Google application with the mobile service backend: go back to the Azure portal and enter the Google client id and secret on the identity tab.


To integrate a Google login into your Ionic application you’ll need to bundle the mobile services JavaScript client library with your application and add the following AngularJS factory class, which returns a mobile service client object (the service URL and application key below are placeholders for your own values).

angular.module('azure', [])
  .factory('client', [function () {
    // placeholder values: use your own mobile service URL and application key
    var client = new WindowsAzure.MobileServiceClient(
      'https://your-service.azure-mobile.net/',
      'your-application-key');
    return client;
  }]);

Using the client object we can take care of the OAuth flow by simply calling client.login("google"):

// the login controller calls client.login("google") to perform the OAuth dance
angular.module('app.controllers', ['azure'])
  .controller('LoginCtrl', function (client, $scope, $state) {
    $scope.login = function () {
      client.login("google").then(function success(data) {
        console.log('logged in successfully..');
      }, function (error) {
        // login failed
        console.log('login failed: ' + error);
      });
    };
  });

The OAuth flow happens in a separate browser window. You must install the Cordova InAppBrowser plugin to be able to show the login popup on a mobile device. Run the following command to install the plugin locally:

$ ionic plugin add cordova-plugin-inappbrowser

then add the plugin to your config.xml file to bundle it as part of the Ionic build process:

  <feature name="InAppBrowser">
    <param name="ios-package" value="CDVInAppBrowser"/>
    <param name="android-package" value="org.apache.cordova.inappbrowser.InAppBrowser"/>
  </feature>

Once logged in, a user context is set on the client object. You can access it at any time by injecting the client factory class we created earlier into the relevant controller and reading client.currentUser.userId.

We’ll store the unique userId property as part of each todo item in the data store to make the solution multi-tenanted and to ensure you only get to see your own todo list tasks. I know I for one don’t want to complete other people’s tasks!

Storing Data in Azure Mobile Services

When you provisioned a mobile service on Azure you also provisioned a SQL Server backend, but chances are you won’t treat the backend like a relational database. When you provision a new table in the database it gets allocated a dynamic schema by default (very cool, but you can turn this off if you like). As you make API calls to store data in a table, new columns are generated based on the properties of the JSON objects you send to the API.
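
As a quick sketch of what that means in practice (reusing the client factory from earlier; the property names are made up):

// the first insert creates 'text' and 'complete' columns in the table
client.getTable('Task').insert({ text: 'Buy milk', complete: false });

// a later insert with an extra property quietly adds a 'dueDate' column
client.getTable('Task').insert({ text: 'Pay rent', complete: false, dueDate: '2015-06-01' });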

Go to the data tab in mobile services and create a table called “Task” to store the todo items. Set the table permissions for insert, update, delete and read operations to “authenticated users”. You can now read and write data to this table via the mobile services client factory class we created earlier.

It’s worth highlighting that the backend API that writes to the database table is fully programmable, and you can write Node.js scriptlets to validate and transform your data before it gets written to the data store. If you’d prefer to write your data to a MongoDB or DocumentDB data store it’s pretty easy to swap out the database completely – more information here.

For the purposes of this post we won’t create any server-side scriptlets; we’ll let the data go straight through to the tasks table. To keep the Ionic application more modular I created a separate AngularJS factory class to connect to the Azure backend.

There are a few things worth highlighting in the code:

  • when saving data in the addTask() function I tack on a userId property.
  • when reading data in the getAll() function I create a filter expression to return only the tasks for the logged-in user.
// the module name here is illustrative; the original was stripped from the post
angular.module('app.services', ['azure'])
  .factory('azureAPI', ['client', '$q', '$rootScope', function (client, $q, $rootScope) {

    return {
      getAll: function () {
        var deferred = $q.defer();

        var userId = client.currentUser.userId;

        // filter by user id so we only return the logged-in user's tasks
        client.getTable('Task').where({ userId: userId }).read().then(function () {
          deferred.resolve.apply(this, arguments);
        }, function () {
          deferred.reject.apply(this, arguments);
        });

        return deferred.promise;
      },

      addTask: function (task) {
        // tack the unique user id onto every task we store
        task.userId = client.currentUser.userId;
        var deferred = $q.defer();

        client.getTable('Task').insert(task).then(function (data) {
          deferred.resolve.apply(this, arguments);
        }, function (error) {
          deferred.reject.apply(this, arguments);
        });
        return deferred.promise;
      },

      updateTask: function (task) {
        var deferred = $q.defer();
        task.userId = client.currentUser.userId;

        client.getTable('Task').update(task).then(function (data) {
          deferred.resolve.apply(this, arguments);
        }, function (error) {
          deferred.reject.apply(this, arguments);
        });
        return deferred.promise;
      }
    };
  }]);
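
A hypothetical controller showing how the factory might be consumed (the controller name and bindings are illustrative):

angular.module('app.controllers')
  .controller('TasksCtrl', function ($scope, azureAPI) {
    // load the current user's tasks when the view starts
    azureAPI.getAll().then(function (tasks) {
      $scope.tasks = tasks;
    });

    // save a new task, then reflect it in the view
    $scope.add = function (task) {
      azureAPI.addTask(task).then(function () {
        $scope.tasks.push(task);
      });
    };
  });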


That’s it. We’ve now hooked up Google authentication and our mobile app is storing data against a Microsoft Azure Mobile Services backend.

All the code is available here.

Hope this helps

Microservices – Size Doesn’t Matter!

I work as part of a team at MYOB that has been building a new platform for accountants for the past year using a microservices architecture. We’re now running approximately 20 microservices in production and we’ve learned a lot about the right size and granularity of microservices during the project.

“Microservice” is a very loaded word; it immediately evokes connotations about the size of what you are building. What is the right size and granularity for a microservice? If I go too large am I doing it all wrong? It’s gotta be small because it’s a microservice, right?

Well, when it comes to determining the right size for a microservice, unfortunately the answer is “it really depends”.

If you go too small you’ll end up with a very chatty system where a lot of processing time is spent waiting on remote calls between services. If you go too large then there is a danger that your code will become too complex and more difficult to maintain as more and more functionality gets added. Your services need to be the right size to make them composable and reusable.

So when you are determining the appropriate size for a microservice, I think it really depends on three things:

  • the maturity of your team and automation
  • the complexity of your business domain
  • how long your project has been running

Size varies depending on the maturity of your team and your automation

Firstly, you need to acknowledge the fact that you introduce an overhead every time you create a new service. Martin Fowler describes this as a microservice premium, which I think is a pretty appropriate term. This overhead is bigger or smaller depending on your team’s maturity with microservices and how good or bad your automation is. For example, if you have invested in building a template project with the right scaffolding to make a new service immediately deployable, then the overhead to create a new service is much smaller. You need to invest in good automation to avoid paying high microservice premiums. We’ve built a template project for this purpose and we are constantly looking for ways to reduce the friction involved in building and deploying our code.

Size varies depending on the complexity of your business domain

Lines of code isn’t a good way to measure the size of a microservice. Different programming languages are more or less expressive, and even in the same programming language the number of lines of code it takes different developers to solve the same problem varies dramatically. It is more useful to focus on the functionality inside the microservice. If the microservice implementation is doing little more than exposing a basic CRUD-style interface, chances are you’ve gone too small and you will end up building an anaemic microservice. It is better to think in terms of business capabilities: think about the right bounded context that will give you a reusable business capability and take it from there.

Size varies depending on how long your project has been running

If you are in the early stages of a project it’s best to start with a smaller number of services. Chances are you don’t fully understand the problem domain, and whatever you are building will invariably change as your understanding grows. If you are going to implement microservices you need to be prepared to refactor your services, combining them and splitting them from time to time until you get the balance just right, and you need to fully embrace an evolutionary design. Last year, one of our product teams started out with a single microservice and the guys are still happily working away on the same service. I’m pretty sure that the code will split in the future, but it’s not time yet and they are wary of premature optimizations.

So there you have it. In summary, if you plan to adopt microservices remember that size isn’t something that you should get hung up on. It’s better to think in terms of where your team is at and how far along you are in your project.

Hope this helps!

(This article was first published on the UnderTheHood technology blog)

Sharing a folder between Windows VMWare workstation host & Ubuntu guest

So I lost a few hours of my life today trying to set up a folder share between a Windows 7 host machine running VMware Workstation 11.0 and an Ubuntu guest machine.
I went through the following steps to link a newly created shared Windows folder.

After restarting the VM I ran the default vmware-config-tools.pl script, accepting all the defaults,


and ended up with the following compile-time error from the vmhgfs module:

“The filesystem driver (vmhgfs module) is used only for the shared folder
feature. The rest of the software provided by VMware Tools is designed to work
independently of this feature.

If you wish to have the shared folders feature, you can install the driver by
running again after making sure that gcc, binutils, make
and the kernel sources for your running kernel are installed on your machine.
These packages are available on your distribution’s installation CD.
[ Press Enter key to continue ] “

After a lot of hunting around I finally discovered that this was a known problem with older versions of VMware’s tooling. I’m not sure how I managed to recreate it, but for what it’s worth here are the steps to fix the problem.

(1) Ensure that the latest Workstation 11 is installed.
(2) Uninstall open-vm-tools:

$sudo apt-get remove open-vm-tools

(3) Check for any missing package updates:

$sudo apt-get update

(4) Grab the following source:

$sudo git clone

(5) Untar & patch!

$cd vmware-tools-patches
$sudo ./
$sudo ./

Your shared folder should appear under the default shared-folders mount point (typically /mnt/hgfs).


On choosing between Azure Database and Table Storage

How often does this happen – your teammate has just started developing a new service and they need to store some data in the cloud. “I need a database, therefore I’ll use SQL Server.” All too often relational databases are introduced into an architecture without any thought being given to the type of data that is being stored or the way it will be accessed.

When building on Windows Azure you’ve got access to a number of database technologies including SQL Database, Table Storage and DocumentDB. Changing your storage technology can be a very costly exercise, so it’s wise to spend the time upfront to consider where your data should really live.

SQL Database

SQL Database is a relational database-as-a-service offering. It’s great for storing relational data and supports rich, complex query operations. A single SQL Database can store up to 500GB of data. All of the database concepts are very familiar to anyone that has worked with SQL Server. SQL Database also offers features that enable migration, export, and ongoing synchronization of on-premises SQL Server databases with Azure SQL databases (through SQL Data Sync). If you need transactional support for batch operations then SQL Database is the way to go.

But all of this comes at a price! When compared to table storage, SQL Database is far more expensive.

Table Storage

Table storage is a fault-tolerant NoSQL key-value store database. It comes in handy when you need to store very large amounts of non-relational data. Where SQL Database has an upper limit of 500GB, Table storage maxes out at 200TB per table. The querying ability of table storage is much less rich: you store data by partition key and row key, and it’s not possible to join tables, so you always need to know the partition key you are working with.
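
A rough sketch of that key-based access pattern using the legacy azure-storage Node package (the account credentials, table name and key values are placeholders):

var azure = require('azure-storage');
var tableSvc = azure.createTableService('myaccount', 'myaccountkey');
var entGen = azure.TableUtilities.entityGenerator;

// every entity needs a partition key and a row key
var task = {
  PartitionKey: entGen.String('user123'),
  RowKey: entGen.String('task-1'),
  description: entGen.String('take out the trash')
};

tableSvc.insertEntity('tasks', task, function (error) {
  if (!error) {
    // point reads are fast, but you must know both keys
    tableSvc.retrieveEntity('tasks', 'user123', 'task-1', function (err, result) {
      if (!err) { console.log(result.description._); }
    });
  }
});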

Check out this great blog post by Troy Hunt where he explains how he works with 150 million rows of data using table storage.

Technology Selection Considerations

You should first consider using table storage when…

  • you need to store terabytes of data and cost is a factor
  • there are no complex relationships in the data that require server-side joins and secondary indexes
  • you need to store unstructured data, or the structure of every object may be different
  • you plan to scale out without sharding

You should first consider using SQL Database when…

  • you are storing highly structured, relational data
  • you need to store less than 500GB of data
  • you need to access and query the data in many different ways
  • you want to be able to run your application on-premises and in the cloud

Key Feature Comparison

Criteria            | SQL Database                | Table Storage
Relational data     | yes                         | no
Transaction support | yes, full ACID transactions | very limited
Table schema        | fixed                       | each row can have different properties
Maximum storage     | 500GB per database          | 200TB per table
REST client support | yes                         | yes
Availability SLA    | 99.9% per month             | 99.9% per month

Chrome DevTools – console.table() function

It’s the simple things in life that can really make you happy… like the Chrome DevTools console.table() function. It was released in 2013 but I only learned about its awesomeness today!

Logging to the console the old way

Way, way back in the day (i.e. yesterday) I was logging objects straight to the console

var nirvana = [
  { name: "Kurt Cobain", instrument: "guitar" },
  { name: "Krist Novoselic", instrument: "bass" },
  { name: "Dave Grohl", instrument: "drums" }
];

console.log(nirvana);


and expanding out tree views to see what was going on…


Logging to the console with console.table()

Enter console.table()

var nirvana = [
  { name: "Kurt Cobain", instrument: "guitar" },
  { name: "Krist Novoselic", instrument: "bass" },
  { name: "Dave Grohl", instrument: "drums" }
];

console.table(nirvana);


A nicely formatted table with sortable columns.
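
console.table() also takes an optional second argument listing the columns to display:

// show only the name column
console.table(nirvana, ["name"]);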



Thank you, Chrome DevTools, for caring!