Martin Fowler’s article is barely a year old, folks have exceeded my expectations

Martin Fowler is a brilliant technologist. Needless to say. This post is a recap of some of my tweets on the subject of micro-services (or "microservices", as it is commonly written). I would have quoted a bunch of other people instead, had I seen many. But that wasn't the case, so I'll have to quote myself.

The first article I read about micro-services was on InfoQ.

Some time later, I saw Martin Fowler's article on the same subject. I immediately thought, as is typically the case, that the developer community was going to go crazy about the concept. I had the following reaction.

Naturally I value the thoughts and the content of the article. I was merely concerned that many would jump straight in and make a total mess of a rather valuable insight. The topic gained popularity quite quickly, faster than I had expected, though I couldn't say I was surprised either. Reputable analysts picked up on it.

Time going by didn't assuage my concerns; rather, I kept getting more and more confirmation. I thought that perhaps nobody was going to adjust perceptions and expectations until disaster stories abounded. I tweeted my thoughts on that.

Soon enough, people started posting thoughts on what was going on.

And, to keep this relatively short, here we are, somewhat full circle, with Martin Fowler calling for some sanity. Martin opens his latest blog post:

As I hear stories about teams using a microservices architecture, I’ve noticed a common pattern.

  • Almost all the successful microservice stories have started with a monolith that got too big and was broken up
  • Almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

Read Martin’s blog post here: Monolith First, by Fowler

OS X Yosemite, why block my view when you should’ve known better?

I should have made this a series on OS X annoyances; here is another one. I now frequently experience several apps freezing for no apparent reason. Yet again, a new behaviour that, 8 years after switching from Windows to Mac, I didn't expect to encounter. Standard apps like Finder.app, Preview.app, Mail.app or Safari.app would just stop responding.

Normally, if an app stops responding, this will show in Console.app. In these instances, Console.app was showing a clean bill of health: nothing was stuck. But as a user, I could type any number of keys and move the mouse around, and Finder.app wouldn't respond; Spotlight wouldn't instantly find any answers, whereas it normally does as you type characters. I use Spotlight to launch apps, so when it doesn't respond, that interrupts my workflow. I immediately turned to Alfred.app, and sure enough Alfred was working fine and could carry out any task I usually throw at it. What the heck was going on now?


I started to guess a deadlock situation, invisible to the regular app monitor. I then looked for what might be hogging up resources and saw something interesting.

Dropbox and Spotlight max out the CPU

Two processes are occupying 130% of the CPU; effectively, 2 out of the 4 CPUs on my machine are fully utilised. I have 2 more CPUs that could do work for me, and they do try, only to soon get stuck. The 'Dropbox' app is easy to recognise; the second hungry process, 'mds', is actually the Spotlight indexer.

Dropbox was clearly working hard on synchronising files to the cloud, but what was mds doing? I did recently move around a large number of files; this may have invalidated the Spotlight index, which it was now trying to rebuild. All fine, but I always thought that only happened when the machine was idle. Furthermore, I expected that the Spotlight indexer wouldn't make the UI unresponsive. I was wrong on both counts.
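For anyone who wants to check the same thing without opening Activity Monitor, here is a minimal sketch that shells out to the ps utility that ships with OS X and lists the top CPU consumers; in my case, mds and Dropbox sat at the top.

```python
#!/usr/bin/env python3
"""List the processes using the most CPU, a rough stand-in for Activity Monitor.

A minimal sketch: it calls the BSD `ps` that ships with OS X and sorts the
output itself, so it makes no assumptions about ps sort flags.
"""
import subprocess

def top_cpu_consumers(count=5):
    # -A: all processes; -o pcpu=,comm=: CPU percentage and command name, no headers
    out = subprocess.run(
        ["ps", "-A", "-o", "pcpu=,comm="],
        capture_output=True, text=True, check=True
    ).stdout
    rows = []
    for line in out.splitlines():
        pcpu, _, comm = line.strip().partition(" ")
        try:
            rows.append((float(pcpu), comm.strip()))
        except ValueError:
            continue  # skip any malformed line
    return sorted(rows, reverse=True)[:count]

if __name__ == "__main__":
    for pcpu, comm in top_cpu_consumers():
        print(f"{pcpu:6.1f}%  {comm}")
```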

I found out that when Spotlight gets into such aggressive reindexing, Finder.app also stops being responsive. This has some consequences: some apps appear to work fine, and I can launch other apps that stay snappy, as long as they don't go anywhere near Finder.app. The overall impression is that the Mac is unstable without any app appearing to hang. How is this possible? Then I remembered what I always chided Windows for: the fact that some tasks were unnecessarily channelled through the UI layer stack, making them sluggish and prone to getting stuck. That's the same behaviour I was now observing.

Force-quit the Spotlight indexer to regain responsiveness

 

To confirm my hypothesis, as soon as I killed the Spotlight indexer, Finder.app, Preview.app and others immediately became responsive again. I repeated the experiment many times over before writing this post.
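Rather than force-quitting mds by hand each time, the same relief can be obtained by temporarily switching Spotlight indexing off with Apple's mdutil utility and turning it back on later. A minimal sketch, assuming the startup volume "/" is the one being reindexed and that the script is run with sudo:

```python
#!/usr/bin/env python3
"""Toggle Spotlight indexing off and back on for a volume.

A sketch around Apple's `mdutil` tool; run it with sudo. Turning indexing off
stops mds from hammering the CPU; turning it back on later lets the index
rebuild at a more convenient time.
"""
import subprocess
import sys

VOLUME = "/"  # assumption: the startup volume is the one being reindexed

def set_indexing(enabled: bool) -> None:
    state = "on" if enabled else "off"
    # mdutil -i on|off <volume>: enable or disable Spotlight indexing for that volume
    subprocess.run(["mdutil", "-i", state, VOLUME], check=True)

if __name__ == "__main__":
    # "sudo python3 spotlight_toggle.py off" disables indexing, "... on" re-enables it
    set_indexing(len(sys.argv) > 1 and sys.argv[1] == "on")
```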

I found another sure way to get Preview.app stuck: any attempt to rename a file, move it to a new location, or add tags to it directly from Preview.app's menu will cause both Preview.app and Finder.app to become unresponsive for a long time.


 

My conclusion was that some of the standard apps that ship with OS X Yosemite contain code that is very old or simply badly designed. Such code, typically the work of UI framework enthusiasts or of design principles from another era, traverses the UI layer stack for tasks like disk and network access, although it shouldn't have to.

Most users would typically get frustrated and decide that OS X is just bad software; others might think about rebuilding their machine. I only looked into it briefly and didn't bother digging into the SDKs, APIs and other kernel debugging tricks to get to the true bottom of it.

 

 

Leadership drive: From ‘despises Open Source’ To ‘inspired by Open Source’, Microsoft’s journey

A video on Microsoft’s Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick some of the important bits and go over them.

I remember the time when Microsoft was actually the darling of developers, the equivalent of open source in its day, as I entered this profession. I was a young student, eager to learn, but couldn't afford to buy any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn't anywhere yet. I had learned Unix, TCP/IP and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have evolved MS-DOS properly as a headless kernel that would get windowing and other things stacked upon it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.

The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they've evolved Windows Server.

Microsoft Windows Server Journey

Considering this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip out the GUI from unattended services for an improved experience. That’s refreshing, but to me, it has always been mind-boggling that they didn’t do this many years ago.

Things could have been different

In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that had been available even before I knew anything about computers. I wanted the ability to pipe one command's output into the input of another command. Software that requires a human present and clicking on buttons should only be present on the desktop, not on the server. With Microsoft, there were always windows popping up and buttons to be clicked. I couldn't see a benefit to the user (systems administrators) in the way Microsoft had built its server solutions. It wasn't a surprise that Linux continued to spread, scale and adapt to Cloud workloads and scenarios, while Windows remained mainly confined to corporate and SMB environments. I use the term confined to contrast the growth in departmental IT with Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.

Times change

When deploying server solutions mainly covered collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments became larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Although still rather inconvenient, software came on CD-ROMs and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: servers have well-known names, locations and functions, and administrators care for each one individually. The jargon term for this is 'servers as pets'. Even with the best effort, data centre resource utilisation remained low (the typical figure is 15% utilisation) compared to the capacity available.

Increasingly, however, companies grew aware of the gains in operations and scalability that come from adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix and many others, mandate that servers are commodities that are expected to crash and can be easily replaced. It doesn't matter what a server is called; workloads can be distributed and deployed anywhere, and relocated onto any available servers. Failing servers are simply replaced, mostly without any downtime. The jargon term for this approach is 'servers as cattle', implying they exist in large numbers and are anonymous and disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn't fit.

The voice of the customer

So, Microsoft now really needed to rethink their server offerings with a focus on the customer. This is customer-first, driven by user demand (a technology pull), instead of the classical technology-first, 'build it and they will come' push model, in which customer needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; instead, it's an optional element. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.

Additionally, managing servers used to require either sitting in front of the machines or firing up remote desktop sessions so that you could happily go on clicking all day. This is something that Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. But, to be fair, this was an early preview; I don't think they even have a logo yet. So I would expect that when Nano Server eventually ships, authentication will no longer require popup windows. 😉

The new server application model: the GUI is no longer baked in, it can be skipped.

The rise of containers

Over the last couple of years, the surge in container technologies has really helped to bring home the message that the days of bloated servers were numbered. This is when servers-as-cattle takes hold, where it's more about workload balancing and distribution than about servers dedicated to application tiers. Microsoft got the message too.

Microsoft Nano Server solution

I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that the real blocker was a political decision rather than a technical one. Looking at the next slide, I wasn't too far off.

Reverse forwarders, a technical term meaning that these are now decent headless components

Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously; they've embedded them into their Azure portfolio: Microsoft support for Docker container technologies.

Although this is a laudable effort that should bear fruit in time, I still see a long way to go before users, all types of users, become truly central for technology vendors. For example, desktop systems must still be architected properly to save hours of nonsense. There is no reason why a micro-kernel like Nano Server couldn't be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen; the user should always be able to preempt his/her system and get immediate feedback. That's how the computing experience should be. It's not there yet, and it's not likely to happen soon, but there is good progress, partly forced by the success of free and open source advocacy.

If you want to get a feel for how radically Microsoft has changed their philosophy, and you are a technically minded person, this video is the best I've seen so far. You will see that the stuff is new and being built as they speak, but listen carefully to everything being said and watch the demos; you will recognise many things that come straight from free and open source and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, etc. I didn't hear them say whether Nano Server would be developed in the open too, but that would have been interesting. The video: Nano Server, cloud optimised Windows server for developers.

OS X Yosemite adaptive networking, a blessing that’s been a curse for my MacBook lately

I have experienced quite some frustration when my computer's networking becomes unstable without ever notifying me of any problem. After some trial & error, I found out that the problem occurs whenever I happen to have more than one eligible network within reach. In the end, I had to manually enforce a fixed connection to regain decent stability.

Apple introduced a neat feature in OS X Yosemite: the computer can automatically switch to the best network it can find without interrupting your programs. They also introduced another feature: OS X randomises the MAC address (the identifier of the physical network adapter), which should make the computer a little more secure. These have been hailed as quite useful updates. The first one comes in handy if a network connection suddenly drops but the computer is able to re-establish it or hop onto another network; while downloading a large file, for example, you'd appreciate that the download just progresses to the end rather than restarting from scratch. That's a nice time saver. The second feature, the MAC address randomiser, helps prevent, for example, coffeeshop Wi-Fi routers (or malware) from identifying someone's machine. I believe I've been on the wrong end of these features lately, however. After several updates and trying various things, I've come to the conclusion that these features are working against me.

I have only used Wi-Fi on my laptops for years now. Over the past weeks, I've had a torrid networking experience whereby my computer would intermittently lose its network connection without notifying me of any problem. I'd check and see a full-strength Wi-Fi signal and that I was still connected to the network, yet none of the applications I was running could reach the outside world on the Internet. Without me doing anything, the Internet connection would come alive again and I could do a few things, then it would drop once more. The pattern repeated many times. At first I didn't think much of it, but I quickly grew annoyed and set out to resolve the issue. After scouring forums, Stack Overflow and other random sources in vain, I was on my own. What eventually brought stability back to my networking was this:

  • Let the computer look for a Wi-Fi network
  • Once it establishes a connection, turn off automatic network detection
  • Delete any other known networks in my vicinity that were already registered by my computer (a scripted version of these steps is sketched below)
MBP OSX Yosemite: don't ask for networks
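The clean-up of remembered networks can also be scripted with the networksetup tool that ships with OS X. A minimal sketch of the same idea, assuming the Wi-Fi interface is en0 (check `networksetup -listallhardwareports` for yours) and run with sudo so it may forget networks:

```python
#!/usr/bin/env python3
"""Forget every remembered Wi-Fi network except the one currently in use.

A sketch around OS X's `networksetup` tool; run it with sudo. It assumes the
Wi-Fi interface is en0 (see `networksetup -listallhardwareports`).
"""
import subprocess

WIFI_IF = "en0"  # assumption: adjust to your machine's Wi-Fi interface

def run(*args):
    return subprocess.run(["networksetup", *args],
                          capture_output=True, text=True, check=True).stdout

def current_network():
    # Output looks like: "Current Wi-Fi Network: MyNetwork"
    out = run("-getairportnetwork", WIFI_IF)
    return out.partition(":")[2].strip() or None

def preferred_networks():
    # First line is a header ("Preferred networks on en0:"), the rest are SSIDs
    lines = run("-listpreferredwirelessnetworks", WIFI_IF).splitlines()
    return [line.strip() for line in lines[1:] if line.strip()]

if __name__ == "__main__":
    keep = current_network()
    for ssid in preferred_networks():
        if ssid != keep:
            print(f"Forgetting {ssid!r}")
            run("-removepreferredwirelessnetwork", WIFI_IF, ssid)
```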

After doing this, I get a stable connection. But at home, it turns out I have one more complication. My UPC (Ziggo) subscription includes a Wi-Fi router, but I also have an Apple AirPort Express. When both are active, my computer detects two known Wi-Fi networks, so it starts hopping back and forth between the two without telling me. I end up in the same situation: unable to get on the Internet, for no apparent reason. To resolve this, I've turned off the Ziggo Wi-Fi router.

What I think may be happening is the following. The computer detects a Wi-Fi network, requests and obtains an address, and thus can get on the network. But shortly afterwards it detects another network with a slightly (and intermittently) better signal strength, and hops onto that new one. Wi-Fi being a radio signal, the fluctuations of the two (or more) signals cause the computer to keep jumping around. When this is combined with the randomised MAC (physical) address allocation, the Wi-Fi routers may temporarily quarantine the computer before allowing it back in. The user (me!) then sees the computer simply lose all network connectivity for fairly long spells, 3 to 5 minutes, then inexplicably regain it, only to loop back into the same game, on and on. This is what I think has been going on with my MBP, and that's why I decided to force a semi-manual network setup, essentially to stop it being too clever.
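To check whether the machine really is hopping between networks, one can simply poll the currently associated network every few seconds and log any change. A quick diagnostic sketch, again assuming en0 as the Wi-Fi interface:

```python
#!/usr/bin/env python3
"""Log every change of the currently associated Wi-Fi network.

A quick diagnostic sketch using `networksetup`; it assumes the Wi-Fi interface
is en0. Leave it running to see whether the machine silently hops between
networks.
"""
import subprocess
import time

WIFI_IF = "en0"  # assumption: adjust to your Wi-Fi interface

def current_network():
    out = subprocess.run(
        ["networksetup", "-getairportnetwork", WIFI_IF],
        capture_output=True, text=True
    ).stdout
    # "Current Wi-Fi Network: MyNetwork", or a "not associated" message
    return out.partition(":")[2].strip() or "<not associated>"

if __name__ == "__main__":
    last = None
    while True:
        now = current_network()
        if now != last:
            print(f"{time.strftime('%H:%M:%S')}  {now}")
            last = now
        time.sleep(5)
```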

Adobe Slate, an attractive tool for publishing nicely laid out content

It is time to revise a past blog post where I talked about a missed opportunity for word processors. With Adobe Slate, it is fair to say that iPad users can now easily publish nicely laid out content. I tried it briefly; it's effortless to start with, all you need is some content. Well, I think this should have always been possible with the word processors that have been on the market all this time.

There is one drawback to Adobe Slate though: it requires an account with Adobe Cloud. I understand the rationale, but this, to me, is an unnecessary barrier to adoption.

Adobe Slate web site.

 

An open source port scanner that is helping some bad guys

Port scanning is one of those annoying activities that the bad guys use when attempting to find back doors into systems. The principle is simple: find out which ports a system has left open and, if you recognise any, try a dictionary-like attack on them. All it takes is a simple bot.
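The idea is easy to illustrate. A plain TCP connect check, pointed at your own host, shows what you are exposing; this is the same test that tools like masscan run massively in parallel across the Internet. A minimal sketch, where the host and port list are placeholders:

```python
#!/usr/bin/env python3
"""Check which of a short list of well-known TCP ports a host accepts connections on.

A minimal sketch of the connect-scan idea; point it at your own host to see
what you are exposing. The host and port list below are placeholders.
"""
import socket

HOST = "example.com"          # placeholder: use your own server
PORTS = [22, 80, 443, 3306]   # ssh, http, https, mysql

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0  # 0 means the connection succeeded

if __name__ == "__main__":
    for port in PORTS:
        state = "open" if is_open(HOST, port) else "closed/filtered"
        print(f"{HOST}:{port} {state}")
```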

Over the last few months, I have noticed multiple port scan attacks on my web sites from a user agent "masscan/1.0". I dug a little and found it to be coming from an open source tool; the project is on GitHub:

robertdavidgraham/masscan

So, it seems that some people have found this tool and are now randomly targeting web sites with it. To what end, I can't tell for sure. It is certainly reprehensible to poke at someone's doors without their consent; everybody knows this.

I've also noticed lots of attempts to run PHP scripts; they seem to be looking for phpMyAdmin. Fortunately, I don't run anything with PHP. If I did, I would harden it significantly and have it permanently monitored for possible attacks.

Most of the attacks on my web sites originate from, in this order: China, Ukraine, Russia, Poland, Romania, and occasionally, the US.

You don't need anything sophisticated to detect these kinds of attacks; your web server log is an obvious place to look. Putting a firewall in place is a no-brainer: just block everything except normal web site HTTP and HTTPS traffic. You can also invest more in tools, but then the question is whether you're not better off just hosting at a well-known provider.
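A few lines of scripting over the access log already go a long way. Here is a sketch, assuming a combined-format log at a hypothetical path, that flags the "masscan/1.0" user agent and the PHP/phpMyAdmin probes mentioned above and counts them per client IP:

```python
#!/usr/bin/env python3
"""Flag suspicious requests in a web server access log.

A sketch, not a security tool: it assumes a combined-format access log at a
hypothetical path and looks for the two patterns mentioned in this post, the
"masscan/1.0" user agent and phpMyAdmin-style PHP probes (any .php request is
suspicious on a site that serves no PHP at all).
"""
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: adjust to your server

ip_re = re.compile(r"^(\S+)")  # client IP is the first field in combined log format
suspicious = re.compile(r"masscan/1\.0|phpmyadmin|\.php", re.IGNORECASE)

def scan(path: str) -> Counter:
    hits = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            if suspicious.search(line):
                match = ip_re.match(line)
                if match:
                    hits[match.group(1)] += 1
    return hits

if __name__ == "__main__":
    for ip, count in scan(LOG_PATH).most_common(10):
        print(f"{count:6d}  {ip}")
```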

This is just one instance, and there are countless more, where even the dumbest criminals are getting their hands on tools to try and break into systems. Cloud hosting is getting cheaper all the time; soon it will cost nothing to host a program that can wander about the Internet unfettered. It is getting ever easier to attack web sites while, at the same time, it is getting an order of magnitude harder to keep sites secure.

I do see a glimmer of light: container technologies provide a perfect throwaway computing experience. Just start a container, keep it stateless, carry out some tasks and, when done, throw it away. Just like that. This may reduce the exposure in some cases, though it won't be sustainable for providing an ongoing, long-running service.

IT security is a never-ending quest that is best left to dedicated professionals. I only casually check the web sites that I run, and at the moment I haven't deployed any sensitive data on them. When I do, I will make sure they are properly hardened and manned, most likely by a SaaS provider, rather than spending my own time dealing with this.