Windows Azure – My top 3 feature requests for 2013

So I thought I'd start 2013 off with a wish list of my top three feature requests for Windows Azure. Here goes!

#1 – Custom SSL certificates for Azure Websites.

I'm a big fan of Azure Websites. It's a really nice little product offering which allows you to rapidly deploy ASP.NET, PHP, or Node.js apps to the cloud via FTP, Git, or Team Foundation Server. I expect Azure Websites to really take off this year; the UI is nice and clean and deployment times rival any other cloud provider. However, Azure Websites currently only supports a shared SSL certificate, so you can't serve HTTPS traffic under your own custom domain. This is definitely my number one feature request – it must be keeping lots of serious customers away from making the move to Azure Websites.

#2 – Auto scaling at the platform level. 

One of the tenets of cloud computing is elasticity: you grow and shrink your cloud resources to match current demand. Currently there is no native auto scaling functionality baked into the platform. I'd like to be able to set rules to scale the number of web roles, worker roles and website instances up and down based on pre-canned system metrics (CPU, queue length and so on), custom metrics and time-based rules.

The current solutions in the market include the free Windows Azure Autoscaling Application Block (WASABi) from the Patterns and Practices team, but you need to host and deploy WASABi yourself and it becomes a single point of failure.
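For what it's worth, this is roughly the mechanism WASABi and friends use under the hood: rewrite the <Instances count="N" /> value in your service configuration and push it back through the Service Management API's change-deployment-configuration operation. A rough sketch, not production code – the subscription ID, service name, management certificate and edited .cscfg are all placeholders you'd supply yourself:

    using System;
    using System.IO;
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    using System.Text;

    public static void SetInstanceCount(string subscriptionId, string serviceName,
                                        X509Certificate2 managementCert, string newCscfgXml)
    {
        string uri = string.Format(
            "https://management.core.windows.net/{0}/services/hostedservices/{1}" +
            "/deploymentslots/production/?comp=config", subscriptionId, serviceName);

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ClientCertificates.Add(managementCert);     // the API authenticates with a management certificate
        request.Headers.Add("x-ms-version", "2012-03-01");  // required API version header
        request.ContentType = "application/xml";

        // the edited .cscfg travels base64-encoded inside a ChangeConfiguration envelope
        string body =
            "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
            "<Configuration>" +
            Convert.ToBase64String(Encoding.UTF8.GetBytes(newCscfgXml)) +
            "</Configuration></ChangeConfiguration>";

        byte[] payload = Encoding.UTF8.GetBytes(body);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(payload, 0, payload.Length);
        }
        request.GetResponse().Close();   // 202 Accepted means the scaling change was queued
    }

Having to babysit plumbing like this yourself is exactly why it belongs in the platform.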

There's also AzureWatch, which looks great, but this really should be part of the platform. I'd like to see something baked in with its own API, similar to AWS CloudWatch.

#3 – Changing the instance size of a web role (extra small, medium etc.) without having to redeploy your application

Finally, at number three, I'd love to see the ability to change the instance size of compute resources on Azure without the need to redeploy. You just pause your instance and re-size it dynamically… now you're talking!

So there you have it – what would you like to see added to Windows Azure this year?

Aidan


Hybrid on-premise / cloud architectures with the Azure service bus relay

In this post I'll take you through the steps involved in exposing your on-premise database and .NET code as a simple RESTful service using the service bus relay binding. I've also built an ASP.NET MVC client to consume the service.

All the source code is available here

https://github.com/aidancasey/CloudBurster

Background

Recently I've been delving into the world of hybrid on-premise / cloud architecture. For start-ups and companies building out new products, the decision to build a cloud solution is often a no-brainer. But what about companies that have a significant investment in existing desktop / on-premise solutions? As much as we'd all love to, it's not always possible to throw everything out and start again from scratch. Enter the service bus relay binding!

I work for a large ISV with a significant investment in an on-premise line-of-business accounting system. The product has taken close to a decade to develop. Even with an aggressive online strategy it will take years to migrate this system to a true cloud solution. The service bus relay binding gives us the ability to quickly expose existing business logic and data to the cloud with only a small development overhead.


Personally, I feel that Microsoft haven't done themselves justice selling the service bus relay to the community. It's a really clever piece of technology and I'm surprised it isn't more widely adopted. I put the following solution together to demonstrate to the business how we could surface on-premise reports in the cloud.

Architecture

[Diagram] Wrap existing business logic / stored procedure calls in a RESTful API exposed as a webHttpRelayBinding endpoint.

1. Creating a service namespace on Windows Azure

a) Log on to the Windows Azure Management Portal.

b) Click Service Bus, then Create, and enter the name of your service bus namespace (e.g. "cloudburst"). For the best performance you should ensure your RESTful client is deployed to the same location – in this case US West.

c) Once created, click on Access Key and take note of the default issuer ("owner") and default key ("Rd+I2mw7CaJ4pdJ7faf4yZKzI92PkYKVnE3qAA7QOIc="). You'll need to enter these into the app.config file in the OnPremise.ServiceHost project to enable the service host to burst out to the service bus.

2. On-premise service host

The Windows Azure Service Bus NuGet package pulls all the service bus dependencies into your project. I've used the WebServiceHost to expose a RESTful service definition.

    string serviceNamespace = "cloudburster";

    // build the public relay address, e.g. https://cloudburster.servicebus.windows.net/reports
    Uri address = ServiceBusEnvironment.CreateServiceUri("https", serviceNamespace, "reports");

    // host the WCF service exactly as you would on-premise - the relay does the rest
    WebServiceHost host = new WebServiceHost(typeof(ReportingService), address);
    host.AddDefaultEndpoints();
    host.Open();

I've configured the endpoint to present the issuer name and access key when establishing the relay connection.
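The sample keeps these settings in app.config, but the same thing can be done in code. A minimal sketch, assuming the 1.8-era Service Bus SDK and the issuer/key from step 1:

    // attach the shared secret credentials to every endpoint on the host
    // before calling host.Open() (TransportClientEndpointBehavior and
    // TokenProvider live in Microsoft.ServiceBus; ServiceEndpoint is in
    // System.ServiceModel.Description)
    var credentials = new TransportClientEndpointBehavior
    {
        TokenProvider = TokenProvider.CreateSharedSecretTokenProvider(
            "owner", "Rd+I2mw7CaJ4pdJ7faf4yZKzI92PkYKVnE3qAA7QOIc=")
    };

    foreach (ServiceEndpoint endpoint in host.Description.Endpoints)
    {
        endpoint.Behaviors.Add(credentials);
    }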

3. Exposing a RESTful API

I’m exposing an API to query contact information from a legacy database. The RESTful API takes the following format:

    Resource          URL                                                           Verbs
    All contacts      https://myobconnector.servicebus.windows.net/Contact/         GET
    Single contact    https://myobconnector.servicebus.windows.net/Contact/{id}     GET

The WebGet attribute allows me to configure a JSON response type and to overlay a logical RESTful API over the WCF service contract.

    [ServiceContract(Name = "ContactContract", Namespace = "http://samples.microsoft.com/ServiceModel/Relay/")]
    public interface IContactService
    {
        [OperationContract]
        [WebGet(ResponseFormat = WebMessageFormat.Json, UriTemplate = "/{id}")]
        ContactEntity GetContact(string id);

        [OperationContract]
        [WebGet(ResponseFormat = WebMessageFormat.Json, UriTemplate = "/")]
        List<ContactEntity> GetAllContacts();
    }

4. Beware! There be dragons when running in a secured network!

Once the service host is running, your data is exposed as a RESTful endpoint. For the purposes of this code sample I haven't secured the client endpoint and I'm using a plain WebHttpBinding. This requires that HTTP ports 80/443 are open for outbound traffic on your network. If you are running in any sort of secured corporate environment you'll likely run into firewall problems. This link will point you in the right direction. This is one area where the documentation lets you down slightly: if you just read the brochures you'll be led to believe that the relay binding copes with NAT devices and internal firewalls, but if your network administrator is doing their job properly you'll likely need some firewall rules put in place.
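One escape hatch worth knowing about: the SDK can force the relay to tunnel over HTTP/HTTPS rather than its default outbound TCP ports. Set it once before opening the host:

    // force relay traffic over ports 80/443 when outbound TCP is locked down
    ServiceBusEnvironment.SystemConnectivity.Mode = ConnectivityMode.Http;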

5. Consuming the API from a REST client (ASP.NET MVC)

Consuming the RESTful services is pretty straightforward – please refer to Cloud.App for a working solution. Newtonsoft's free JSON serializer does a pretty reasonable job of hydrating your JSON payloads back into .NET types.

        // requires Newtonsoft.Json and System.Net
        public ContactEntity Get(int Id)
        {
            string url = "https://cloudburst.servicebus.windows.net/contact/" + Id.ToString();

            using (WebClient serviceRequest = new WebClient())
            {
                // call the relayed REST endpoint and hydrate the JSON response
                string response = serviceRequest.DownloadString(new Uri(url));

                var data = JsonConvert.DeserializeObject<ContactEntity>(response);

                return data;
            }
        }

6. Benchmarking, performance & latency

Work in progress – I'm building a simple test harness to benchmark the performance and latency of sending different sized payloads over a relay binding. From running the MVC REST client it looks like establishing the channel is expensive the first time (approx. 1 second), but subsequent service calls are pretty responsive. I'll publish some test results soon. The plan is to build a simple ping service and instrument the timings.
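As a taste of what the harness will measure, here's a hypothetical sketch that times a cold call (which pays the channel set-up cost) against a warm one:

    // hypothetical timing sketch - cold call vs warm call through the relay
    var timer = System.Diagnostics.Stopwatch.StartNew();
    using (var client = new System.Net.WebClient())
    {
        client.DownloadString("https://cloudburst.servicebus.windows.net/contact/1");
        Console.WriteLine("cold call: {0} ms", timer.ElapsedMilliseconds);

        timer.Restart();
        client.DownloadString("https://cloudburst.servicebus.windows.net/contact/1");
        Console.WriteLine("warm call: {0} ms", timer.ElapsedMilliseconds);
    }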

Conclusion

I'm sure you'll agree that it's a pretty painless process to pick up your existing .NET code and start exposing it using the relay binding. There weren't many examples out there, so I decided to write this post and open source the code. This hybrid on-premise / cloud architecture has lots of possibilities in the real world.

It offers a pretty compelling alternative for companies that are reluctant to store their data in the cloud because of data sovereignty issues (remember, the data is still stored on premise), or for applications that need to surface some of their functionality to the cloud.

May the source be with you!

Aidan

TechRepublic Podcast

Earlier this week I was a guest on TechRepublic's "The Upside" podcast, where I talked to Chris Duckett about the Mass Mobile Experiment – an open source collaboration platform I developed with Simon Raik-Allen from MYOB. We recently showcased the technology at Tech Ed Australia.

“One of the most interesting talks at this year’s Australian TechEd event was the Mass Mobile Experiment. The platform’s Pong implementation was used to entertain attendees before the conference’s keynote.”

We chat about the inspiration for the project – Loren Carpenter's crowd-swarming Pong experiment from 1991 – and we geek out about TypeScript and node.js.

Here’s a link to the podcast

http://www.techrepublic.com/blog/australia/the-upside-nodejs-your-own-business/1475

Enjoy!

AWS S3 and Azure blob storage compared – same same but different

At first glance Amazon's Simple Storage Service and Windows Azure blob storage appear to offer the same functionality, but there are a few subtle differences between these two storage abstractions in the cloud. In this post I'll explain what the differences are.

Here’s a quick refresher on the terminology

Amazon Simple Storage Service (S3)

In Amazon speak, every object stored in S3 is contained in a top-level bucket. Bucket names are unique across all of Amazon S3. Within a bucket you can use any names you like for your objects. Although the hierarchy is only two levels deep, you can fake deeper object graphs using naming prefixes – a key like "images/2013/logo.png" behaves like a folder path. Plenty of folks store static content for their websites in S3 and back the data store with a CDN to ensure fast delivery to the browser. S3 is great for WORM data (write once, read many times).

Azure’s blob storage

In Azure speak, objects are stored in blob storage. Every object stored in blob storage is associated with one top-level container, and containers live inside a storage account: the account name is unique across all of Azure, while container names only need to be unique within that account. Within a container you can use any names for your objects, and the same prefix trick applies. Again, it's common to store static content here and back it with a CDN to serve out static data to your website.
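A minimal sketch of the prefix trick on the Azure side, assuming the 2.x storage client library (the connection string, container and blob names are just examples):

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = account.CreateCloudBlobClient();
    CloudBlobContainer container = client.GetContainerReference("assets");
    container.CreateIfNotExists();

    // the "/" in the blob name fakes a folder hierarchy - under the hood
    // it's still just container -> blob
    CloudBlockBlob blob = container.GetBlockBlobReference("images/2013/logo.png");
    using (var file = System.IO.File.OpenRead("logo.png"))
    {
        blob.UploadFromStream(file);
    }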

What they have in common

  • Cheap, durable, reliable storage
  • A REST API to get at the data
  • A hierarchy two levels deep
  • Versioning
  • ACLs to lock down who can see what
  • Multi-part upload (sketched below)
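Azure's flavour of multi-part upload is the block blob: you upload blocks independently (in parallel if you like), then commit the list. A hedged sketch, continuing from the container above and assuming chunks is a List<byte[]> you've already split the file into:

    // upload independent blocks, then commit them in order; the blob only
    // becomes visible once PutBlockList succeeds
    CloudBlockBlob blob = container.GetBlockBlobReference("installers/setup.zip");
    var blockIds = new List<string>();

    for (int i = 0; i < chunks.Count; i++)
    {
        // block IDs must be base64 strings of equal length
        string id = Convert.ToBase64String(BitConverter.GetBytes(i));
        using (var ms = new MemoryStream(chunks[i]))
        {
            blob.PutBlock(id, ms, null);
        }
        blockIds.Add(id);
    }

    blob.PutBlockList(blockIds);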

The main differences

                                                      Blob Storage    S3
    REST API protocols                                HTTP            HTTP & SOAP
    BitTorrent P2P                                    No              Yes
    Delete item after a specified time (optional)     No              Yes
    Encrypt data at rest                              No              Yes

Hope this helps!

Aidan

Choosing the right storage technology on Windows Azure

How do you choose the right storage technology when building on Windows Azure? You have several native storage abstractions at your disposal on the platform, and with the newly released IaaS offerings things just got even more confusing. First you need to take a step back and ask a few questions.

What type of application are you building?

Martin Fowler has a great blog post on polyglot persistence – it's becoming more common for companies to use a variety of different data storage technologies for different kinds of data. If you are designing a financial application with complex reports then the relational Windows Azure SQL Database is a good place to start, but just for the transactional stuff. Offloading auditing and logging data to table storage could save you some cost, and this data is likely to grow over time. Product catalogues and data that changes infrequently are also good candidates for table storage.

How deep are your pockets?

2TB of relational data in an Azure SQL database is likely to cost you approx. $50,000 a year, and that's just for the storage alone. The same amount of data in blob storage or table storage will come in at around $7,000.

How much data are you dealing with?

When looking at sheer volume, table storage is far more scalable than Windows Azure SQL Database. A single storage account (storage accounts hold blobs, queues and tables) can grow to 100TB in size, and in theory a single table could consume all 100TB. Azure SQL databases have a hard limit of 150GB.

Will the data only live in the cloud?

SQL Server 2012 and Windows Azure SQL Database are very close in structure, and the gap will continue to close. There are plenty of migration tools that allow you to move data from an on-premise SQL Server database to one in the cloud. SQL Data Sync takes it a step further and allows you to synchronise changes between an on-premise database and the cloud. Table storage, by contrast, ties you to the cloud.

Blob storage
  Description: unstructured WORM (write once, read many) data
  Uses: images, binaries, files, installers, back-ups
  Size: 100 TB maximum per storage account
  Transaction support: –

Windows Azure Drives
  Description: exposes a volume accessible to code running in your Windows Azure service
  Uses: use NTFS APIs to access a durable drive
  Size: 100 TB maximum per storage account
  Transaction support: –

Table storage
  Description: NoSQL data store – a table is a set of entities; an entity is a set of properties
  Uses: product catalogues, logs, audit trails
  Size: 100 TB maximum per storage account
  Transaction support: transactions for entities in the same table and partition, but not across tables or partitions

Queue storage
  Description: durable message queue, but ordering is not guaranteed
  Uses: passing messages in a distributed system
  Size: messages up to 64K
  Transaction support: not transactional – messages can get picked up more than once

Windows Azure SQL Database
  Description: relational database as a service
  Uses: reports, financial apps
  Size: 150GB per database
  Transaction support: full ACID support
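That partition-scoped transaction support for table storage is worth a quick illustration: entity group transactions commit atomically, but only within a single partition. A minimal sketch, assuming the 2.x storage client library (the entity, table name and account variable are made up for the example):

    using Microsoft.WindowsAzure.Storage.Table;

    public class AuditEntry : TableEntity
    {
        public AuditEntry() { }
        public AuditEntry(string day, string id)
        {
            PartitionKey = day;   // one partition per day of audit data
            RowKey = id;
        }
        public string Message { get; set; }
    }

    var table = account.CreateCloudTableClient().GetTableReference("audit");
    table.CreateIfNotExists();

    // both inserts share a PartitionKey, so the batch succeeds or fails as one
    var batch = new TableBatchOperation();
    batch.Insert(new AuditEntry("2013-01-07", "0001") { Message = "login" });
    batch.Insert(new AuditEntry("2013-01-07", "0002") { Message = "report run" });
    table.ExecuteBatch(batch);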

Benchmarking node.js on Windows Azure

Background

I’ve been working on a pretty cool side project that I presented at Tech Ed Australia 2012 – “The Mass Mobile Experiment!”

It's a generic collaboration framework that enables lots of people (say at a conference) to enjoy a shared experience in real time using their mobile phone, tablet or other internet device. At my Tech Ed session I had over 100 people playing a single game of Pong on the big screen, using their mobile phones to control the paddles in real time! The platform is built using node.js and websockets (socket.io), and it supports a plug-in architecture enabling other games / experiences to plug in pretty easily. So far I've got a multi-player quiz game, Pong, a political worm and an interactive presentation.

Conceptual Architecture – MME (Mass Mobile Experiment)

  • The client (mobile phone) sends data to the server over a long-running websocket
  • The server (node.js) aggregates the data and sends it to the playfield over websockets
  • The playfield (a browser on a big screen) runs the game loop and processes the aggregated data from the server
  • The control panel allows you to change games and throttle the client and server

Benchmarking on Windows Azure

In order to load test the platform I built yet another game! This time I got 200 people in the office to "play the game". It involved leaving a web page open for 20 minutes while I stepped up the number of websocket connections in each browser and started to send data to the server.

  • The client connects to the server over websockets and sends typical game data on a timer
  • The control panel broadcasts messages to all clients telling them to step up the number of websocket connections to the server and to increase the send frequency. In effect a single browser is then multiplexing several websockets to the server. I found that a single browser can easily multiplex 10 different websocket connections without slowing down the client-side JavaScript code.
  • The server collects interesting metrics such as requests per second and CPU usage and sends them to the playfield
  • The playfield (the load test app) listens for data over another websocket and plots it in real time

Results

  • Node.js server running on a medium size worker role on Windows Azure: 3.5 GB RAM, allocated bandwidth 200 Mbps
  • 2000 concurrent websockets (multiplexed over 200 different laptops in the office)
  • Requests per second: 8500
  • Memory usage on Azure: 76%
  • Message send frequency from each client: 4 messages per second

Check out this screenshot from the Azure management portal – I managed to push the CPU to 89% at 11:40 when the system ramped up to 2000 concurrent users!

Conclusions

  • Node.js running on an Azure worker role scales really nicely. In my case a medium sized VM scaled to 2000 concurrent websockets processing 8500 requests per second. Not a single message was lost between the browser and the server, even when the server was under stress!

Why you should distrust this post!

  • The measurements were taken using node.js v0.8.9 and socket.io v0.9; these technologies are evolving rapidly.
  • For the Mass Mobile Experiment the node server is pretty simplistic: it aggregates data and sends it to the playfield. This may not represent what your application is doing.

All of the results, along with all of the source code, are open sourced on GitHub.

May the source be with you!

Aidan