Some believe “Google is evil”. Apple is vocal on privacy. And now Apple is going to use Google Cloud?

Funny. If you’ve followed tech news recently, you couldn’t have missed Apple’s high-profile court battle over user privacy. A lot of people, notably tech-savvy people, are rather vocal in their belief that Google might be too casual with user privacy. The news that Apple is signing up to use Google Cloud should therefore sound kind of ironic. To the fanbois, at least.

This is business, though. As it should always be. Whether this turns out to be true or not, and despite all the fuss made about Steve Jobs’ alleged vindictiveness, Apple has demonstrated pragmatism time and again. I remember the early days of Apple’s iCloud, when some tinkerer found out that it was using Microsoft Azure. Apple never said a thing about it back then.

This kind of news item should also send a message to business decision makers. There are just too many decision makers out there who would rather not think for themselves. Whichever provider you find more trustworthy, at the end of the day, building the capability to leverage any cloud service will turn out to be the winning strategy.

A chosen quote from the news article:

Apple signed a contract reportedly worth as much as $600 million to use Google’s cloud platform.

Source: How Google Just Landed Apple as a Customer–and Beat Amazon to It | Inc.com

Eclipse Che looks promising, the cheese’s moved around

A very quick look at Eclipse Che shows a promising concept, so I thought I’d have a look. When I’m serious about a technology I take the time to read the documentation before diving in. In this case I wanted to follow the typical journey most folks take: just dive in, never bother with the documentation, and at the first hurdle start complaining like a bewitched mad dog with an exaggerated sense of entitlement – ok, minus that last bit of attitude.

I installed Eclipse Che, easy peasy. Then I fired it up. Oops! I couldn’t connect to it. This was the first time ever I couldn’t just use an Eclipse release right after installing it. It was time to look under the bonnet. So I did. I saw it’s deployed on Docker… What!? Why!? Ahem, ok, move on. I stopped it, and also stopped Docker Machine. Then I manually started Docker Machine, readied the environment, and started Che again. This time I tried http://localhost:8080 and I got in. Cool. Everything looks familiar, except it’s all now in one web browser window.
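For the record, the manual sequence looked roughly like the sketch below. Treat it as a reconstruction rather than a recipe: the Docker Machine name (default) and the Che launcher script path are placeholders that depend on the install; the only detail taken straight from the story above is the final check against http://localhost:8080.

```python
# Rough reconstruction of the manual startup sequence (names are placeholders).
import os
import subprocess
import time
import urllib.request

# 1. Start the Docker Machine VM (VirtualBox-backed on OS X); "default" is an assumed name.
subprocess.run(["docker-machine", "start", "default"], check=True)

# 2. "Ready the environment": export the DOCKER_* variables for this process.
env_lines = subprocess.run(["docker-machine", "env", "default"],
                           capture_output=True, text=True, check=True).stdout
for line in env_lines.splitlines():
    if line.startswith("export "):
        key, value = line[len("export "):].split("=", 1)
        os.environ[key] = value.strip('"')

# 3. Start Che (placeholder launcher path – adjust to wherever your install put it).
subprocess.run(["./bin/che.sh", "start"], check=True)

# 4. Poll the dashboard URL from the post until it answers.
for _ in range(30):
    try:
        urllib.request.urlopen("http://localhost:8080", timeout=2)
        print("Eclipse Che is reachable at http://localhost:8080")
        break
    except OSError:
        time.sleep(2)
```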

Time to look back and reflect on what I’ve learned here. The fact that I couldn’t connect the first time might well come down to the RTFM I skipped. Anyway, not a big deal, it only took me a couple of minutes.

Nothing much to it, just an IDE inside a web browser. It’s the same old thing in a new cloak. The most obvious and visible differences I spotted can be depicted in a simple diagram: BEFORE and AFTER.

[Image: before reinvention – the classic Eclipse IDE]

With Eclipse Che,

[Image: after reinvention – Eclipse Che]

I’m oversimplifying, but these are the most visible changes. It seems that when it comes to modernising our software stacks, adding Docker and JavaScript is a passage obligé. So, somehow, people think that deploying a Java app on Docker is a better architectural choice than targeting only the JVM? In my case, since I’m using a Mac running OS X, which requires an extra VM (VirtualBox, in my case) in order to run Docker containers, I actually end up with a more complicated stack just for an IDE. I don’t know where this is going. Now for trying the IDE.

[Image: the Eclipse Che IDE in action]


I haven’t gone further than this. The concept of a developer workstation server can be interesting for pair programming, and the server option is perhaps the more appealing one. I just wonder why this couldn’t simply be a Java app, and why Docker was actually necessary.

A brave, new post open source world, or Fly-by Software License pollution

I just read an interesting article with the title We’re in a brave, new post open source world. The article goes into the evolution of the Open Source movement and the numerous licensing policies. One particularly notable phrase reads as follows:

…if you use someone else’s code revision from Stack Overflow, you would have to add a comment in your code that attributes the code to them.

What this means is that if a developer uses a snippet of code taken from Stack Overflow and fails to add such an attribution, then technically the project might be in breach of Stack Overflow’s license. I am curious how many organisations actually check this.
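For illustration, an attribution of this kind can be as small as a comment above the borrowed snippet; the function, author and URL below are placeholders, not a real Stack Overflow post:

```python
# Adapted from a Stack Overflow answer by <author> (placeholder URL, CC BY-SA):
# https://stackoverflow.com/a/XXXXXXX
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]
```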

The whole article is a good read.

Original Article: We’re in a brave, new post open source world — Medium

Open source is a development methodology; free software is a social movement. — Richard Stallman

I just read a nice essay by Richard Stallman with the title Why Open Source Misses the Point of Free Software – GNU Project – Free Software Foundation. A chosen quote from this essay poses the problem perfectly:

Open source is a development methodology; free software is a social movement.

Most people probably aren’t even aware of this difference. I never understood why and how the term open source came to be applied to hardware, government and many other areas where, in plain English, there isn’t even a notion of source.

The article I refer to is concerned with correct definitions; I want to look at some of the misunderstandings.

There is an angle to this discussion: a lot of people and organisations look to Open Source Software (OSS) in search of cheap (but not cheerful) ways to solve their problems. You can’t blame them for it, but it can raise several issues. I will ignore any moral aspects for now and focus on a few practical implications.

  • Some individuals or organisations release their work as Open Source with the explicit intention of inviting others to contribute to it. This is often an acknowledgement that one’s work can be bettered and perfected if others gain access and are allowed to contribute.
  • Releasing a work as open source carries no implicit or explicit guarantee of quality or absence of defects. It just means: use it at your own risk; your contribution would be appreciated, if only in the form of signalling any defects found or improvements you have been able to add.
  • FOSS neither opposes nor condones gainful use. Statistically, however, there are far fewer people and organisations able to contribute than there are people who actually use OSS. This is well understood and accepted by most. It is astonishing, though, to see some people throwing tantrums and launching into diatribes when they get frustrated by some open source software. This is just plain crazy behaviour: they not only miss the point, they display a preposterous sense of entitlement that deserves to be frowned upon.
  • Increasingly, many organisations are using OSS as a means of attracting and retaining talent. This is an instance that stretches the notions of free and open in an interesting way, as a subtle form of free promotion and marketing.

Article: Why Open Source Misses the Point of Free Software – GNU Project – Free Software Foundation

Qubes OS Project, a secure desktop computing platform

Given that the majority of security annoyances stem from antiquated design decisions, and considering the progress made in computing and the affordability of computing power, this is probably how operating systems should now be built and delivered.

Qubes is a security-oriented, open-source operating system for personal computers.

Source: Qubes OS Project

Martin Fowler’s article is barely a year old, folks have exceeded my expectations

When I first saw Martin Fowler’s blog post on micro-services, I immediately thought that the developer community was going to go crazy about the concept. I wasn’t disappointed. But thankfully, many people caught on to the mania before it got totally out of hand. Martin, in his latest blog post, is among those calling for some sanity. Read Martin’s blog post here: Monolith First, by Fowler

Martin Fowler is a brilliant technologist, needless to say. This post is going to be a recap of some of my tweets on the subject of micro-services (or “microservices”, as I commonly see it written). I would have quoted a bunch of other people instead, had I seen many. But that wasn’t the case, so I’ve got to quote myself.

The first article I read about micro-services was on InfoQ.

Some time later, I saw Martin Fowler’s blog post on the same subject. I immediately thought, as is typically the case, that the developer community was going to go crazy about the concept. I had the following reaction.

Naturally I value the thoughts and the content of the article. I was merely concerned that many would jump straight in and make a total mess of a rather valuable insight. The topic gained popularity quite quickly, faster than I had expected, though I couldn’t say I was surprised either. Reputed analysts picked up on it.

Time going by didn’t assuage my concerns; rather, I only got more and more confirmation. I thought that perhaps nobody was going to adjust perceptions and expectations until disaster stories abounded. I tweeted my thoughts on that.

Soon enough, people started posting thoughts on what was going on.

And, to keep this relatively short, here we are, somewhat full circle, with Martin Fowler calling for some sanity. Martin opens his latest blog post:

As I hear stories about teams using a microservices architecture, I’ve noticed a common pattern.

  • Almost all the successful microservice stories have started with a monolith that got too big and was broken up.
  • Almost all the cases where I’ve heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

Read Martin’s blog post here: Monolith First, by Fowler

OS X Yosemite, why block my view when you should’ve known better?

Several standard apps – Finder.app, Preview.app, Mail.app, Safari.app – had started freezing for no apparent reason. After digging a bit, I found out that when Spotlight gets into aggressive reindexing, Finder.app also stops responding. My conclusion: some of the standard apps that ship with OS X Yosemite contain code that is very old or simply badly designed – typically the work of UI framework enthusiasts, or of design principles from another era – and that code traverses a UI layer stack for tasks like disk and network access, although it shouldn’t have to.

I should have made this a series, another instalment on OS X annoyances. I now frequently experience several apps freezing for no apparent reason – yet another behaviour that, 8 years after switching from Windows to Mac, I didn’t expect to encounter. Standard apps like Finder.app, Preview.app, Mail.app or Safari.app would just stop responding.

[Image: Safari, Preview and Finder not responding]

Normally, if an app stops responding, this shows up in Console.app. In these instances, Console.app was reporting a clean bill of health: nothing was stuck. But as a user, I could type any number of keys and move the mouse around; Finder.app wouldn’t respond, and Spotlight wouldn’t instantly find any answers – whereas it normally does as you type characters. I use Spotlight to launch apps, so when it doesn’t respond, my workflow is interrupted. I then immediately turned to Alfred.app, and sure enough Alfred was working fine and could carry out any task I usually throw at it. What the heck was going on now?


I started to suspect a deadlock, invisible to the regular app monitor. I then looked for what might be hogging resources and saw something interesting.

[Image: Dropbox and Spotlight max out the CPU]

Two processes are occupying 130% of the CPU; effectively, 2 out of the 4 CPUs on my machine are fully utilised. I have 2 more CPUs that could potentially do work for me, and they do try, only to soon get stuck. The Dropbox app is easy to recognise; the second hungry process, ‘mds’, is actually Spotlight’s indexer.

Dropbox was clearly working hard on synchronising files to the cloud, but what was mds doing? I did recently move around a large number of files; this may have invalidated the Spotlight index, which it was now trying to rebuild. All fine, but I always thought that only happened when the machine was not being used. Furthermore, I expected that the Spotlight indexer wouldn’t make the UI unresponsive. I was wrong on both counts.
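For anyone wanting to reproduce this check from a terminal rather than a process monitor, here is a minimal sketch that simply lists the top CPU consumers; it assumes the third-party psutil package is installed and is only one of many ways to do this.

```python
# Minimal sketch: list the processes using the most CPU right now.
# Assumes the third-party psutil package is installed (pip install psutil).
import time
import psutil

procs = list(psutil.process_iter())

# Prime the per-process CPU counters, then sample over one second.
for p in procs:
    try:
        p.cpu_percent(None)
    except psutil.Error:
        pass
time.sleep(1)

usage = []
for p in procs:
    try:
        usage.append((p.cpu_percent(None), p.name(), p.pid))
    except psutil.Error:
        pass

# Print the five hungriest processes – in my case, Dropbox and mds topped the list.
for cpu, name, pid in sorted(usage, key=lambda t: t[0], reverse=True)[:5]:
    print(f"{cpu:6.1f}%  {name} (pid {pid})")
```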

I found out that when Spotlight gets into such aggressive reindexing, Finder.app also stops being responsive. This has some consequences: some apps appear to work fine, and I can launch other apps that may be snappy and all, as long as they don’t go anywhere near Finder.app. The overall impression is that the Mac is unstable without any app appearing to hang. How is this possible? Then I remembered what I always chided Windows for: the fact that some tasks were unnecessarily channelled through a UI layer stack, making them sluggish and prone to getting stuck. That’s the same behaviour I was now observing.

[Image: force-quitting the Spotlight indexer restores responsiveness]


To confirm my hypothesis, as soon as I killed the Spotlight indexer, Finder.app, Preview.app and others immediately became responsive again. I repeated the experiment many times over before writing this post.
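I simply force-quit the mds process, but a gentler way to run the same experiment is to toggle Spotlight indexing off and back on with the standard mdutil tool. A minimal sketch, assuming you run it with admin rights and remember to re-enable indexing afterwards:

```python
# Sketch: pause Spotlight indexing for the experiment, then restore it.
# Uses the standard macOS `mdutil` tool; requires admin rights (sudo).
import subprocess

# Show the current indexing status of the root volume.
subprocess.run(["mdutil", "-s", "/"], check=True)

# Turn indexing off for the root volume, then check whether Finder.app,
# Preview.app and friends become responsive again.
subprocess.run(["sudo", "mdutil", "-i", "off", "/"], check=True)

input("Indexing is off – test the apps, then press Enter to re-enable... ")

# Turn indexing back on; Spotlight will rebuild its index in the background.
subprocess.run(["sudo", "mdutil", "-i", "on", "/"], check=True)
```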

I found another sure way to get Preview.app stuck: any attempt to rename a file, move it to a new location, or add tags to it directly from Preview.app’s menu will cause both Preview.app and Finder.app to become unresponsive for a long time.



My conclusion, from here, was that some of the standard apps that ship with OS X Yosemite contain a certain amount of code that is very old or simply badly designed. Such code, typically the work of UI framework enthusiasts or of design principles from another era, traverses a UI layer stack for tasks like disk and network access, although it shouldn’t have to.

Most users would typically get frustrated and decide that OS X is just bad software; others might think about rebuilding their machine. I only looked into it briefly and didn’t bother digging into the SDKs, APIs and other kernel-debugging tricks to get to the true bottom of it.


Leadership drive: from ‘despises Open Source’ to ‘inspired by Open Source’, Microsoft’s journey

With a change of mind at the top leadership level, Microsoft showed that even a very large company is able to turn around and adopt a customer-focused approach to running a business. By announcing Nano Server, essentially a micro-kernel architecture, Microsoft is truly joining the large-scale technology players in a fundamental way.

A video on Microsoft’s Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick out some of the important bits and go over them.

I remember the time when Microsoft was actually the darling of developers, the open source equivalent of the day, as I entered this profession. I was a young student, eager to learn, but I couldn’t afford to buy any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn’t anywhere yet. I had learned Unix, TCP/IP and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have evolved MS-DOS properly as a headless kernel with windowing and other things stacked on top of it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.

The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they’ve evolved Windows Server.

[Image: the Microsoft Windows Server journey]

Considering this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip out the GUI from unattended services for an improved experience. That’s refreshing, but to me, it has always been mind-boggling that they didn’t do this many years ago.

Things could have been different

In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that had been available even before I knew anything about computers. I wanted the ability to pipe one command’s output into the input of another command. Software that requires a human present and clicking on buttons should only exist on the desktop, not on the server. With Microsoft, there was always some window popping up and buttons to be clicked. I couldn’t see a benefit to the users (systems administrators) in the way Microsoft had built its server solutions. It wasn’t a surprise that Linux continued to spread, scale and adapt to cloud workloads and scenarios, while Windows remained mainly confined to corporate and SMB environments. I use the term confined to contrast the growth of departmental IT with that of Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.

Times change

When deploying server solutions mainly meant covering collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments grew larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Software came on CD-ROMs, still rather inconveniently, and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: servers have well-known names, locations and functions, and administrators care for each one individually. The jargon term for this is servers as pets. Even with the best effort, data centre resource utilisation remained low (the typical figure is 15%) compared to the capacity available.

Increasingly, however, companies grew aware of the gains in operations and scalability that come with adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix and many others, treat servers as commodities that are expected to crash and can be easily replaced. It doesn’t matter what a server is called: workloads can be distributed and deployed anywhere, and relocated to any available server. Failing servers are simply replaced, mostly without any downtime. The jargon term for this approach is servers as cattle, implying that they exist in large numbers and are anonymous and disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn’t fit.

[Image: the voice of the customer]

So, Microsoft now really needed to rethink their server offerings, with a focus on the customer. This is customer-first: driven by user demand, a technology pull, instead of the classical technology-first, “build it and they will come” push model, in which customer needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; instead it’s an optional element. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.

Additionally, managing servers required either sitting in front of the machines or firing up remote desktop sessions so that you could happily go on clicking all day. This is something Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. But, to be fair, this was an early preview – I don’t think they even have a logo yet – so I would expect that when Nano Server eventually ships, authentication will no longer require popup windows. 😉

[Image: the new server application model – the GUI is no longer baked in and can be skipped]

The rise of containers

Over the last couple of years, the surge in container technologies really helped bring home the message that the days of bloated servers were numbered. This is when servers-as-cattle takes hold, where it’s more about workload balancing and distribution than about servers dedicated to application tiers. Microsoft got the message too.

[Image: the Microsoft Nano Server solution]

I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that the real blocker was a political decision rather than a technical one. Looking at the next slide, I wasn’t too far off.

[Image: reverse forwarders – a technical term meaning these are now decent headless components]

Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously: they’ve embedded them into their Azure portfolio, with Microsoft support for Docker container technologies.

Although this is a laudable effort that should bear fruit in time, there is still a long way to go before users – all types of users – truly become the centre of attention for technology vendors. For example, desktop systems must still be architected properly to save hours of nonsense. There is no reason why a micro-kernel like Nano Server couldn’t be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen; the user should always be able to preempt his or her system and get immediate feedback. That’s how the computing experience should be. It’s not there yet, and not likely to be soon, but there is good progress, partially forced by the success of free and open source advocacy.

If you want to get a feel for how radically Microsoft has changed its philosophy, and you are a technically minded person, this video is the best I’ve seen so far. You will see that the stuff is new and being built as they speak, but listen carefully to everything being said, watch the demos, and you will recognise many things that come straight from free and open source and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, etc. I didn’t hear them say whether Nano Server would be developed in the open too, but that would have been interesting. Nano Server: a cloud-optimised Windows Server for developers.

OS X Yosemite adaptive networking, a blessing that’s been a curse for my MacBook lately

When my computer detects several known (or eligible) networks it can connect to, networking becomes unstable without the system ever showing any errors. I resorted to forcing a single network to regain stability.

I experienced quite some frustration when my computer’s networking became unstable without ever notifying me of any problem. After some trial and error, I found out that the problem occurs whenever I have more than one eligible network within reach. In the end, I had to manually enforce a fixed connection to regain decent stability.

Apple introduced a neat feature in OS X Yosemite: the computer can automatically switch to the best network it can find without interrupting your programs. They also introduced another feature: OS X randomises the MAC address (that is, the identifier of the physical network adapter), which should make the computer a little more secure. Both have been hailed as quite useful updates. The first comes in handy when a network connection suddenly drops and the computer is able to re-establish it or hop onto another network; while downloading a large file, for example, you’d appreciate that the download just progresses to the end instead of restarting from scratch. That’s a nice time saver. The second feature, the MAC address randomiser, helps prevent, for example, coffee-shop Wi-Fi routers (or malware) from identifying someone’s machine. I believe I’ve been at the receiving end of these features lately, however. After several updates and trying various things, I’ve come to the conclusion that these features are working against me.

I have been using only Wi-Fi on my laptops for years. Over the past weeks, I’ve had a torrid networking experience whereby my computer would intermittently lose its network connection without notifying me of any problem. I’d check and see a full-strength Wi-Fi signal and that I was still connected to the network, yet none of the applications I was running could reach the outside world on the Internet. Without me doing anything, the Internet connection would come alive again and I could do a few things; then it would drop once more. The pattern repeated many times. At first I didn’t think much of it, but I quickly grew annoyed and set out to resolve the issue. After scouring forums, Stack Overflow and other random sources, in vain, I was on my own. What eventually brought stability back to my networking was this:

  • Let the computer look for a Wi-Fi network.
  • Once it establishes a connection, turn off the automatic network detection.
  • Delete any other known networks in my vicinity that my computer had already registered (a sketch of this last step follows below).
[Image: OS X Yosemite – don’t ask to join new networks]
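The last step can also be scripted. A minimal sketch using the standard macOS networksetup tool; the Wi-Fi interface name (en0) and the network to keep are assumptions you would adjust for your own machine, and forgetting networks may prompt for admin rights:

```python
# Sketch: list the Wi-Fi networks OS X has remembered and forget all but one,
# so the machine stops hopping between them. Interface name and SSID are placeholders.
import subprocess

WIFI_IF = "en0"           # typical Wi-Fi interface on a MacBook (assumption)
KEEP = {"MyHomeNetwork"}  # placeholder: the one SSID you want to stay on

listing = subprocess.run(
    ["networksetup", "-listpreferredwirelessnetworks", WIFI_IF],
    capture_output=True, text=True, check=True,
)

# The output is a header line followed by one indented SSID per line.
ssids = [line.strip() for line in listing.stdout.splitlines()[1:] if line.strip()]

for ssid in ssids:
    if ssid not in KEEP:
        # Forget every other remembered network (may require admin rights).
        subprocess.run(
            ["networksetup", "-removepreferredwirelessnetwork", WIFI_IF, ssid],
            check=True,
        )
```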

After I’d done this, I got a stable connection. But at home, it turns out I have one more complication. My UPC (Ziggo) subscription includes a Wi-Fi router, but I also have an Apple AirPort Express. When both are active, my computer detects two known Wi-Fi networks, so it starts hopping back and forth between the two without telling me. I end up in the same situation: unable to get on the Internet, for no apparent reason. To resolve this, I turned off the Ziggo Wi-Fi router.

What I think may be happening is the following. The computer detects a Wi-Fi network, requests and obtains an address, and thus can get on the network. But shortly afterwards it detects another network with a slightly (and intermittently) better signal strength, and hops onto that one. Wi-Fi being a radio signal, the fluctuations of the two (or more) signals cause the computer to keep jumping around. Combined with the randomised MAC (physical) address allocation, the Wi-Fi routers then temporarily quarantine the computer before allowing it back in. The user (me!) experiences the computer simply losing all network connectivity for reasonably long spells, 3 to 5 minutes, then inexplicably regaining it, only to loop back into the same game, on and on. This is what I think has been going on with my MBP, and that’s why I decided to try forcing a semi-manual network setup – essentially, to stop it being too clever.

Adobe Slate, an attractive tool for publishing nicely laid out content

It should have been possible to easily author and publish polished web content with word processors. It wasn’t, and still largely isn’t. A recently introduced product, Adobe Slate, seems to have solved this for iPad users.

It is time to revisit a past blog post where I talked about a missed opportunity for word processors. With Adobe Slate, it is fair to say that iPad users can now easily publish nicely laid-out content. I tried it briefly: it’s effortless to start with; all you need is some content. I think this should have always been possible with the word processors that have been on the market all this time.

There is one drawback to Adobe Slate, though: it requires an account with Adobe Cloud. I understand the rationale, but this, to me, is an unnecessary barrier to adoption.

Adobe Slate web site.