OS X Yosemite, why block my view when you should’ve known better?

I should have made this a series: yet another post on OS X annoyances. I now frequently experience several apps freezing for no apparent reason. Yet again, a new behaviour that, eight years after switching from Windows to Mac, I didn't expect to encounter. Standard apps like Finder.app, Preview.app, Mail.app, or Safari.app would simply stop responding.

Normally, if an app stops responding, it shows up in Console.app. In these instances, Console.app reported a clean bill of health: nothing appeared stuck. Yet as a user, I could type any number of keys and move the mouse around while Finder.app didn't respond and Spotlight no longer found answers instantly, as it normally does while you type. I use Spotlight to launch apps, so when it doesn't respond it interrupts my workflow. I immediately turned to Alfred.app, and sure enough, Alfred was working fine and could carry out any task I usually throw at it. What the heck was going on now?

I began to suspect a deadlock, invisible to the regular app monitor. I then looked for what might be hogging resources and saw something interesting.

[Screenshot: Dropbox and the Spotlight indexer maxing out the CPU]

Two processes were occupying 130% of the CPU; effectively, two of the four cores on my machine were fully utilised. The two remaining cores could potentially do work for me, and they did try, only to soon get stuck. The Dropbox app is easy to recognise; the second hungry process, mds, is actually Spotlight's indexer (the metadata server).

Dropbox was clearly working hard synchronising files to the cloud, but what was mds doing? I had recently moved a large number of files around, which may have invalidated the Spotlight index, so it was trying to rebuild it. All fine, but I had always thought that only happened when the machine was idle. Furthermore, I expected the Spotlight indexer not to make the UI unresponsive. I was wrong on both counts.
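
If you want to check what Spotlight is up to on your own machine, the stock mdutil tool reports a volume's indexing state. Here is a minimal sketch, wrapped in Python purely for convenience; querying the root volume "/" is just an example:

```python
import subprocess

# Ask Spotlight for the indexing state of the boot volume.
# mdutil ships with OS X; "-s" prints the status of the given volume.
result = subprocess.run(
    ["mdutil", "-s", "/"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # e.g. "/:  Indexing enabled."
```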

I found out that when Spotlight gets into such aggressive reindexing, Finder.app also stops being responsive. This has some consequences: some apps appear to work fine, and I can launch others that stay snappy, as long as they don't go anywhere near Finder.app. The overall impression is that the Mac is unstable without any app appearing to hang. How is this possible? Then I remembered what I had always chided Windows for: some tasks were unnecessarily channelled through the UI layer stack, making them sluggish and prone to getting stuck. That was exactly the behaviour I was now observing.

[Screenshot: force-quitting the Spotlight indexer to restore responsiveness]

To confirm my hypothesis, I killed the Spotlight indexer: Finder.app, Preview.app, and others immediately became responsive again. I repeated the experiment many times over before writing this post.
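
If you would like to repeat the experiment, something along the following lines should do it. This is a rough sketch assuming an admin account, since mdutil needs elevated rights to toggle indexing; killing mds is not fatal because launchd respawns it, which is effectively what Force Quit in Activity Monitor does:

```python
import subprocess

# Temporarily switch Spotlight indexing off on the boot volume,
# then back on once the machine feels responsive again.
subprocess.run(["sudo", "mdutil", "-i", "off", "/"], check=True)
# ... work normally for a while ...
subprocess.run(["sudo", "mdutil", "-i", "on", "/"], check=True)

# Alternatively, kill the indexer outright; launchd will restart it.
subprocess.run(["sudo", "killall", "mds"], check=True)
```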

I found another sure way to get Preview.app stuck: any attempt to rename a file, move it to a new location, or add tags to it directly from the Preview.app menu will cause both Preview.app and Finder.app to become unresponsive for a long time.

My conclusion from all this was that some of the standard apps that ship with OS X Yosemite contain code that is very old or simply badly designed. Such code, typically the work of UI framework enthusiasts or of design principles from another era, traverses the UI layer stack for tasks like disk and network access, although it shouldn't have to.

Most users would typically get frustrated and conclude that OS X is just bad software; others might think about rebuilding their machine. I only looked into it briefly, and didn't bother digging through the SDKs, APIs, and other kernel debugging tricks to get to the true bottom of it.

Leadership drive: from ‘despises Open Source’ to ‘inspired by Open Source’, Microsoft’s journey

With a change of mind at the top leadership level, Microsoft has shown that even a very large company is able to turn around and adopt a customer-focused approach to running a business. By announcing Nano Server, essentially a micro-kernel architecture, Microsoft is truly joining the large-scale technology players in a fundamental way.

A video on Microsoft's Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick out some of the important bits and go over them.

I remember the time when Microsoft was actually the darling of developers, the open source equivalent of its day, as I entered this profession. I was a young student, eager to learn, but I couldn't afford any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn't anywhere yet. I had learned Unix, TCP/IP, and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have evolved MS-DOS properly as a headless kernel with windowing and other layers stacked upon it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.

The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they've evolved Windows Server.

[Diagram: the Microsoft Windows Server journey]

Reading this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip the GUI out of unattended services for an improved experience. That's refreshing, but to me it has always been mind-boggling that they didn't do this many years ago.

Things could have been different

In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that had been available even before I knew anything about computers. I wanted the ability to pipe one command's output into the input of another command. Software that requires a human to be present and clicking buttons should only exist on the desktop, not on the server. With Microsoft, there were always windows popping up and buttons to be clicked. I couldn't see a benefit to the user (the systems administrator) in the way Microsoft had built its server solutions. It was no surprise that Linux continued to spread, scale, and adapt to cloud workloads and scenarios, while Windows remained mainly confined to corporate and SMB environments. I use the term confined to contrast the growth of departmental IT with that of Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.
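
To make the contrast concrete, here is the kind of composition I had in mind, sketched in Python for the sake of this post; the particular commands are arbitrary examples, and the point is that one process's output streams straight into another's input with no GUI anywhere in the loop:

```python
import subprocess

# Unix-style composition: stream the output of `ps aux` straight
# into the input of `grep`, with no window or button in sight.
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
grep = subprocess.Popen(
    ["grep", "root"],
    stdin=ps.stdout,
    stdout=subprocess.PIPE,
    text=True,
)
ps.stdout.close()  # let ps receive SIGPIPE if grep exits first
print(grep.communicate()[0])
```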

Times change

When deploying server solutions mainly meant covering collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments became larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Software came on CD-ROMs, still rather inconveniently, and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: servers have well-known names, locations, and functions, and administrators care for each one individually. The jargon term for this is server as pet. Even with the best effort, data centre resource utilisation remained low compared with the large capacity available (the typical figure is 15% utilisation).

Increasingly, however, companies grew aware of the gains in operations and scalability that come from adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix, and many others, treat servers as commodities that are expected to crash and can easily be replaced. It doesn't matter what a server is called: workloads can be distributed, deployed anywhere, and relocated to any available server. Failing servers are simply replaced, mostly without any downtime. The jargon term for this approach is server as cattle, implying that servers exist in large numbers and are anonymous and disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn't fit.

[Slide: the voice of the customer]

So, Microsoft now really needed to rethink its server offerings with a focus on the customer. This is customer-first: driven by user demand, a technology pull, instead of the classical technology-first, build-it-and-they-will-come push model, in which customer needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; it is an optional element. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.

Additionally, managing servers used to require either sitting in front of the machines or firing up remote desktop sessions so that you could happily go on clicking all day. This is something that Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. To be fair, this was an early preview; I don't think they even have a logo yet. So I would expect that by the time Nano Server eventually ships, authentication will no longer require popup windows. 😉

[Diagram: the new server application model, where the GUI is no longer baked in and can be skipped]

The rise of containers

Over the last couple of years, the surge in container technologies has really helped bring home the message that the days of bloated servers are numbered. This is the time when servers-as-cattle takes hold, where the emphasis is on workload balancing and distribution rather than on servers dedicated to application tiers. Microsoft got the message too.

[Slide: the Microsoft Nano Server solution]

I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that the real obstacle was a political decision. Looking at the next slide, I wasn't too far off.

[Slide: reverse forwarders, a technical term meaning these are now proper headless components]

Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously: they've embedded them into their Azure portfolio, including Microsoft's support for Docker container technologies.

Although this is a laudable effort that should bear fruit in time, I still see a long way to go before users, all types of users, become truly central to technology vendors. For example, desktop systems must still be architected properly to save us hours of nonsense. There is no reason why a micro-kernel like Nano Server couldn't be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen: the user should always be able to preempt their system and get immediate feedback. That's how the computing experience should be for people. It's not there yet, and it's not likely to happen soon, but there is good progress, partially forced by the success of free and open source advocacy.

If you want to get a feel for how radically Microsoft has changed its philosophy, and you are a technically minded person, this video is the best I've seen so far. You will see that the stuff is new and being built as they speak, but listen carefully to everything being said and watch the demos: you will recognise many things that come straight from free and open source and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, and so on. I didn't hear them say whether Nano Server would be developed in the open too, but that would have been interesting. Nano Server: a cloud-optimised Windows Server for developers.