A video on Microsoft’s Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick out some of the important bits and go over them.
I remember the time when Microsoft was actually the darling of developers, playing the role that open source plays today, when I entered this profession. I was a young student, eager to learn but unable to afford any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn’t anywhere yet. I had learned Unix, TCP/IP and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have properly evolved MS-DOS into a headless kernel, with windowing and other things stacked on top of it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.
The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they’ve evolved Windows Server.
Reading this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip the GUI out of unattended services for a better experience. That’s refreshing, but to me it has always been mind-boggling that they didn’t do this many years ago.
Things could have been different
In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that was available even before I knew anything about computers. I wanted the ability to pipe one command’s output into the input of another command. Software that requires a human to be present and clicking on buttons should only live on the desktop, not on the server. With Microsoft, there were always windows popping up and buttons to be clicked. I couldn’t see a benefit to the users (systems administrators) in the way Microsoft had built its server solutions. It wasn’t a surprise that Linux continued to spread, scale and adapt to cloud workloads and scenarios, while Windows remained mainly confined to corporate and SMB environments. I use the term confined to contrast the growth in departmental IT with that of Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.
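To make that idea concrete, here is a minimal sketch of that kind of composition, one command’s output feeding the next command’s input with no GUI in the loop. It is written in Python purely as an illustration, and the `ps` and `grep` commands are just stand-ins on a Unix-like system:

```python
import subprocess

# Pipe the output of `ps aux` into `grep sshd`, entirely from a script:
# the sort of headless composition described above, no console session needed.
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "sshd"], stdin=ps.stdout, stdout=subprocess.PIPE)
ps.stdout.close()  # let ps receive SIGPIPE if grep exits first
output, _ = grep.communicate()
print(output.decode())
```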
Times change
When deploying server solutions mainly meant covering collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments grew larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Although still rather inconvenient, software came on CD-ROMs and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: servers have well-known names, locations and functions, and administrators care for each one individually. The jargon term for this is servers as pets. Even with the best effort, data centre resource utilisation remained low (the typical figure is 15% utilisation) compared to the large capacity available.
Increasingly, however, companies grew aware of the gains in operations and scalability that come with adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix and many others, mandate that servers are commodities that are expected to crash and can be easily replaced. It doesn’t matter what a server is called; workloads can be distributed and deployed anywhere, and relocated to any available server. Failing servers are simply replaced, mostly without any downtime. The jargon term for this approach is servers as cattle, implying they exist in large numbers and are anonymous and disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn’t fit.
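The operational difference is easy to sketch. The loop below is a hypothetical, much simplified reconciliation pass over a pool of identical servers; the helpers `list_healthy_servers`, `provision_server` and `decommission_server` are made up for the example, standing in for whatever a given cloud provider offers. The point is that a failing machine is never nursed back to health, it is simply replaced:

```python
DESIRED_COUNT = 10  # how many anonymous, interchangeable servers we want


def reconcile(list_healthy_servers, provision_server, decommission_server):
    """One pass of a cattle-style control loop: count the healthy servers,
    replace any that are missing, and never repair an individual machine."""
    healthy = list_healthy_servers()           # e.g. instances passing a health check
    missing = DESIRED_COUNT - len(healthy)
    for _ in range(max(missing, 0)):
        provision_server()                     # any fresh, identical instance will do
    for extra in healthy[DESIRED_COUNT:]:      # scale back down if we overshoot
        decommission_server(extra)

# In a real deployment this would run continuously, say every 30 seconds,
# against the provider's inventory and provisioning APIs.
```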
So, Microsoft now really needed to rethink their server offerings with a focus on the customer. This is customer-first, driven by user demand, a technology pull, instead of the classic technology-first, build-it-and-they-will-come push model, in which customer needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; instead, it’s an optional component. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.
Additionally, managing servers required either sitting in front of the machines or firing up remote desktop sessions so that you could happily go on clicking all day. This is something that Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. But to be fair, this was an early preview; I don’t think they even have a logo yet. So I would expect that by the time Nano Server eventually ships, authentication will no longer require popup windows. 😉
The rise of containers
Over the last couple of years, the surge in container technologies really helped bring home the message that the days of bloated servers were numbered. This is when servers-as-cattle takes hold, where it’s more about workload balancing and distribution than about servers dedicated to application tiers. Microsoft got the message too.
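To give a feel for how lightweight this model is compared with dedicating a server to each application tier, here is a small sketch using the Docker SDK for Python (the `docker` package); it assumes a local Docker daemon and simply runs a throwaway container, where the workload matters and the host it lands on does not:

```python
import docker  # pip install docker; assumes a local Docker daemon is running

client = docker.from_env()

# Run a short-lived, disposable container: anonymous, schedulable on any host
# with a container runtime, and removed as soon as it exits.
logs = client.containers.run(
    "alpine:3",
    ["echo", "hello from a disposable workload"],
    remove=True,
)
print(logs.decode())
```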
I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that the real blocker was a political decision. Looking at the next slide, I wasn’t too far off.
Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously: they’ve embedded support for Docker container technologies into their Azure portfolio.
Although this is a laudable effort that should bear fruit in time, I still see a long way to go before users, all types of users, truly become the centre of attention for technology vendors. For example, desktop systems must still be architected properly to save people hours of nonsense. There is no reason why a stripped-down core like Nano Server wouldn’t be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen; users should always be able to preempt their system and get immediate feedback. That’s how the computing experience should be. It’s not there yet, and it’s not likely to happen soon, but there is good progress, partly forced by the success of free and open source advocacy.
If you want to get a feel for how radically Microsoft has changed its philosophy, and you are a technically minded person, this video is the best I’ve seen so far. You will see that the stuff is new and was being built as they spoke, but listen carefully to everything being said and watch the demos; you will recognise many things that come straight from free and open source software and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, etc. I didn’t hear them say whether Nano Server would be developed in the open too, but that would have been interesting. Nano Server, cloud optimised Windows server for developers.