Welcome to the Messaging Party, Google

This week Google announced the public release of Cloud Pub/Sub and we on the Azure Service Bus team would like to welcome them to the cloud messaging space. This is a big and growing market with a diverse set of competitors, technologies, and strategies. We feel that Google’s decision to enter this space validates our investments in Service Bus Messaging (Queues and Topics) and reflects the growing realization in the industry that messaging is a critical component of scalable applications and a vital part of any cloud architecture and of any cloud platform.

While it may at first appear that Google Cloud Pub/Sub and Service Bus Messaging are directly competing with each other, the services are quite different and each has its own strengths. More importantly, the real competition for both our services, and for the other players in the space, is not each other but direct application integration.

There has always been a tendency to wire applications directly to each other in a piecemeal, organic fashion that results in brittle, tightly coupled software. This tendency predates cloud computing and even network computing. Experienced architects know the problems that arise from this design – and know to avoid it. The cloud amplifies these problems. Hopefully Google will now impress on another group of engineers and architects the importance of the well-established architectural principles of loose coupling and separation of concerns – principles that have always been at the center of messaging architecture and guiding points for the Azure Service Bus.
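
As a minimal illustration (in Python, using an in-process queue purely as a stand-in for a real broker such as Service Bus or Pub/Sub), the producer and consumer below share only the queue's contract – neither holds a direct reference to the other, so either side can be replaced or scaled out independently:

```python
import queue
import threading

# A shared queue stands in for a message broker.  The producer knows
# nothing about the consumer -- only the shape of the messages it sends.
orders = queue.Queue()

def producer():
    # Emits messages without any reference to who (or how many) will consume them.
    for order_id in range(3):
        orders.put({"order_id": order_id, "status": "new"})
    orders.put(None)  # sentinel: no more messages

def consumer(results):
    # Can be swapped out or taken offline without touching the producer.
    while True:
        msg = orders.get()
        if msg is None:
            break
        results.append(msg["order_id"])

results = []
t = threading.Thread(target=consumer, args=(results,))
t.start()
producer()
t.join()
print(results)  # [0, 1, 2]
```

The point is not the queue class itself but the shape of the dependency: both sides depend on a message contract, not on each other's code.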

Azure – The Operating System for the 21st Century

Now that I truly live Cloud computing every day as part of the Microsoft Azure product team I thought I’d share a few reflections about the evolution of Cloud computing over the past few years and how I think we’ve really crossed a threshold with the technology of cloud computing.

I recently commented that the cloud really is the operating system of the 21st century and I genuinely mean that.  Here’s why.  When you look at an Operating System (think back to your Concepts of Operating Systems class if you took one) what you’re talking about is a piece of software that manages the hardware of a machine.  Its job is to enable us to use the machine to do our bidding.  This ranges from basic features such as facilitating I/O, storage, and computational capabilities to more complex tasks such as networking, multitasking, and job scheduling.

Over time operating systems evolved into the very rich environments that we know today.  Looking through the current Azure feature set it quickly becomes apparent that Azure really has matured into a true Cloud OS – the Operating System of the 21st century.  Storage and compute are some of the oldest services and also mimic the evolution of operating systems – think way back – when the von Neumann architecture was a cutting-edge concept.  Maybe even in the OS/360 timeframe.  Personal computers followed a similar path: from my Apple IIe, which was really just storage, compute, and I/O, to current operating systems that are truly rich experiences.  The cloud is on the same path – and Azure has progressed in a very short time from the cloud equivalent of DOS to a rich computing experience like nothing the world has ever known before.  This includes many concepts we would recall from Operating Systems: a job scheduler, compute, storage, I/O, and a powerful communications bus (yes, Service Bus).  The most striking part is that this really isn’t a Windows OS – it is an OS unto itself that is based very much on open protocols and can be leveraged by any client, or even server, OS.

It was a big risk for Microsoft to invest so heavily in the cloud – I appreciate that more now that I’m here and can see how all-in the company is.  At first I wasn’t really sure about the bet, but viewed in the context of the cloud being an Operating System for the future, it makes perfect sense.

The Time Value of Data

I am doing more work than ever with the Internet of Things these days and I’ve wanted to write on this topic for some time. A larger article is in the works for publication, but I’ll give the high level here. Over the last few years my work with Smart Grid in particular and Big Data in general has made me acutely aware of a concept I have started calling the Time Value of Data. The name draws its inspiration from my interest in economics and from the Time Value of Money, a concept which dates back nearly 500 years and to a city in Spain that I have always enjoyed visiting.

The theory behind the time value of money is quite straightforward: money today has a future value that is different from its current value. That is, capital has a value that changes over time: in a “normal” environment this means some amount of money today is worth that amount plus some more in the future. This is actually a rather complex topic, but plenty has been written about it.
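
For readers who want the arithmetic behind that statement, here is the standard compound-interest form of the idea – the rate and figures below are purely illustrative, not from any article:

```python
def future_value(present_value, annual_rate, years):
    """Standard compound-interest form of the time value of money:
    FV = PV * (1 + r)^n."""
    return present_value * (1 + annual_rate) ** years

# $100 today at an assumed 5% per year compounds to more in the future.
fv = future_value(100, 0.05, 10)
print(round(fv, 2))  # 162.89
```

The same $100 is "worth" almost $163 ten years out, which is exactly the sense in which money today and money tomorrow have different values.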

What I want to focus on here is the value of data over time. Data generally has a unique value curve that is different from most other commodities – and yes, data is a commodity (or at least is becoming one). When we think about the Internet of Things in particular – devices, appliances, sensors, and telemetry – it becomes quite apparent that some of this data is going to have high immediate value. A fire alarm is a great example. Knowing about a fire is extremely valuable as it starts. This may allow for safe evacuation or even containment. As time passes the value of this information drops. Do I really care that my building had a fire several hours or days ago? Many of the sensors in use today are focused on this immediate-value area.

There is also a secondary data story: historical or collective data. This is where you save data in a raw form long enough to gain value from it. Good examples of this are climate data, defect rates, and energy usage. As more of this data is collected over longer periods of time its value increases dramatically. Although the individual data points may not be as valuable, collectively the data set becomes ever more valuable. This is depicted in the chart below (I said this was a rough draft).

[Chart: The Time Value of Data]

As I mentioned, this is an idea I am still formalizing and will have an article about soon – so I invite any comment or contributions on this. Perhaps this is more of a U than a V shaped curve or maybe the right side doesn’t rise as high, but the concept is fairly robust when examining use cases.
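
In that rough-draft spirit, here is one way to sketch the curve as a toy model – every parameter below is invented purely for illustration: an immediate value that decays exponentially (the fire alarm) plus a collective value that accrues as observations accumulate (the climate data):

```python
def data_value(t, immediate=10.0, half_life=2.0, accrual=0.5):
    """Toy model of the time value of data (all parameters invented).

    immediate * 0.5 ** (t / half_life): the fire-alarm effect --
        high value at t = 0 that decays quickly.
    accrual * t: the historical/collective effect --
        value that accrues as the data set grows.
    """
    return immediate * 0.5 ** (t / half_life) + accrual * t

# High at the start, a dip in the middle, rising again later --
# the U/V shape discussed above.
early, middle, late = data_value(0), data_value(8), data_value(40)
print(early > middle and late > middle)  # True
```

Whether the right side overtakes the left depends entirely on the accrual rate you assume, which is exactly the U-versus-V question raised above.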

More details on this and the implications will follow.

Wayfinding, Simplicity, and Design

Looking back on the last few years and the amount of travel I’ve done I’ve realized that the art and science of Wayfinding is an excellent tool for user experience testing and specifically for testing devices or apps.  According to Wikipedia: “Wayfinding encompasses all of the ways in which people and animals orient themselves in physical space and navigate from place to place”.

I’ve begun testing this theory out after long-haul flights.  I have found that this is a peculiar time in human consciousness when your normal abilities of reason and logic are deeply impaired.  When flying long haul everyone experiences a certain amount of discomfort, even when travelling in style.  It could be the dry recycled air, or the small and highly used lavatories, or the lack of space in the back of the plane, or even the abundance of libations in the front.  After an epic journey (especially transpacific) everyone is out of sorts.  Yet we all find our way through customs and to the train or taxi that we’re looking for.  I recently pulled a 28-hour, 11-time-zone journey that involved four airports, three flights, and two sets of immigration.  At the end I found my rental car shuttle (yes, I am an American, I rent cars), found my car, and then found my way to the hotel.  Believe me, none of this is due to any special abilities I have in navigation or even common sense – it is completely due to the wayfinding design principles that have been used throughout the world to show us where to go.  This idea first came to me after reading one of Garr Reynolds’ books.  I thought his presentation of it was brilliant.  This is design that must work, for a large variety and number of people.

This is what has led me to testing my new apps and devices in this state of mind.  Case in point: I learned on this particular journey that my non-model-specific mobile phone windshield mount has a terrible design flaw with my Nokia Lumia 1020 – or for that matter any Windows Phone: the camera button sits right where the side clamps hold the phone in place.  Result: I’m looking at a live (and small) image of the nighttime road ahead of me instead of my Nokia Drive app.  Fortunately getting back to an app on Windows Phone is easy – even after a 28 hour trip (there’s some good design).

Now whenever I build an app – or my team does – I always try to get that same level of detachment when I review it.  I’ve even begun to extend this to mock-ups, concepts, and presentations.  Sometimes I learn where a user flow is confusing or the next step is unclear.  Since I started writing this I have traversed the Atlantic – twice – and after the first flight I learned that my presentation on Real World Business Activity Monitoring for BizTalk Summit 2014 had a rather strange sequence in it that didn’t flow as well in this reduced-functionality state.  I rearranged some content and dropped some that didn’t fit as well, and then it seemed strong.  The crowd seems to have agreed, thankfully!

I suppose this last part of Wayfinding is sort of the key to it all: remove that which is not completely necessary to convey the message or information.  Anything else is waste or distraction.  Next time you travel anywhere, check out the signage and notice how relatively easy it is to navigate.  This is a good inspiration.  When searching for simplicity, use that long day or that sleepless night to your advantage to review something you’ve been thinking about too much; it will give you a different perspective on the topic.

BizTalk Summit 2014 London

This week I had the pleasure of speaking at the BizTalk Summit 2014 in London, which was sponsored by BizTalk 360.  I have to say it was the best BizTalk event I’ve ever been to.  My presentation is posted here, but the slides aren’t much without the presenter… or the videos showing how I implement BAM on an order processing solution with zero code – fortunately BizTalk 360 will be posting a video if you’re interested in seeing the message.  The sample solution I use, with all artifacts, is located at https://danrosanova.wordpress.com/rwb.

The talk was about Real World Business Activity Monitoring and the message was well received.  In short, I drove home that we owe it to our customers (i.e. business people) to provide BAM so they can see what’s happening in the way they are comfortable with – which is normally Excel or Reporting Services.  I took away two strong points from giving this talk.

First – more BizTalk shops use BAM than I had ever imagined.  Its use is sadly limited in the US and some other markets, but clearly many people use it.  About half the audience said they use it in production – I expected 1-2%.  This is encouraging, as BAM is really worth doing.  It’s easy to implement and delivers high value at both technical and business levels.

Second, I learned that there is still a lot of interest in building these sorts of dashboards and that many people were eager to give BAM a try given how easy my presentation makes it look.  There is also a lot of interest in BizTalk – which I was really glad to see.  It is a great platform and, used correctly, has a strong place in many organizations.  I will be blogging a lot about BAM in the coming weeks and about Big Data, which brings me to my final takeaway.

Big Data is a big topic and it’s in the news a lot.  My next four presentations all focus on Big Data and Advanced Analytics, and I’m going to dedicate some time to bringing these together with BizTalk and with BAM.  My next talk, at the INFORMS Conference on Business Analytics and Operations Research on April 1 in Boston, is focused on Situational Awareness with Big Data tools.  It is very much like BAM with Big Data, with even more dimensions.

Inspiration from Dyson CEO and West Monroe Partners

A little while back I read a great interview with Dyson Chief Executive Max Conze that really made me think about my career and the firm that I work for: West Monroe Partners.  In this article, which really was fascinating to me considering how innovative Dyson is and how high profile their founder is, Mr. Conze states: “The best you can do is hire a lot of smart young people, give them a lot of responsibility and they’re going to grow on it”. 

That is a truly profound statement – and the exact opposite of how most organizations work, but not all.  Nowhere in my fifteen-year career have I seen this done more effectively than at West Monroe Partners, where I have been for nearly three years.  We hire bright young people and very quickly they have a lot of responsibility and a lot of freedom.  This helps them develop extremely fast, and I find myself trusting people ten or more years my junior with tasks I hardly trust myself with.

Dyson is obviously doing amazing and innovative work, and so is West Monroe.  I am really coming to realize that it is because of the way we hire and the people we hire that we are able to be so agile and so innovative.  It can be very tempting not to delegate to junior staff, but it is important for them, for you, and for your company to do so.

Mr. Conze credits his military background with this philosophy and it reminds me of a quote by a military legend: “Never tell people how to do things. Tell them what to do and they will surprise you with their ingenuity” – General George S. Patton Jr.

I really am proud and fortunate to work at an organization that places such value on its people.  At the end of the day it is all a consulting firm has.  We do hire smart young people (and smart older people like me as well).  And we’re hiring now!

Configuring the AS2 Runtime on BizTalk 2013 – one small caveat

I’ve been doing a lot of BizTalk lately, even by my standards, and a lot of BizTalk installations as well.  I’ve blogged and written (in BizTalk 2010 Patterns) about how important it is to change the default configurations that come out of the BizTalk Configuration Wizard.  The most important item is to create separate BizTalk hosts for processing.  I normally do at least four: send, receive, processing (orchestration), and tracking.  In keeping with this I also create handlers for all the adapters for the send and receive hosts.  I also remove the handlers for the default BizTalkServerApplication host.

This has led me to a small but important detail.  Although BizTalk easily accommodates configuring further features later (like EDI or BAM), I did run into one issue.  The AS2 Runtime configuration creates a SQL receive location (classic SQL, not the WCF kind) and it expects, nay demands, the default host to have a receive handler for the SQL adapter.  This wasn’t that hard to track down, but could cause headaches for others so I thought I would share.