Programming concepts: who needs to know about Monads? Who doesn't?

Programming language users don't need to know about Monads. No, I won't get into explaining what a Monad is or isn't; instead, I will focus on potential usefulness as an angle on this hot topic.

I drafted this post in February 2015, then forgot about it. I recently stumbled upon it again, so I decided to publish it after some minor editing.

There is a clear chasm in the way people communicate thoughts and concepts within programming language communities. This chasm manifests itself in the multiple layers of conversation going on in these communities, each interacting at a different level. In the mix, some are discussing language design issues while others are just trying to get started with basic concepts.

Let me paraphrase Martin Odersky. I got the following rough definition from him, though I've lost the reference, so I can't provide an exact quote, just the main idea:

There are three categories of people in most programming language communities: language designers, framework or toolkit designers, and language users.

These groups of people have different concerns; although they intermingle on social media, they seldom speak the same language. This becomes quite obvious when you read their tweets.

Recent years have seen a significant focus on functional programming; the cool kids can't rave enough about it. These are typically functional programming advocates, and Monad is a word that comes up quite frequently in their conversations. Wow, it sounds so cool! Nobody wants to be left out, nobody wants to sound old-fashioned or inferior, so everyone thinks they too should be involved with this monad stuff. I think it's misguided to just dive in without clear justification. Most regular developers really belong in the language users group; they need not bother about monads too soon.

A regular developer is someone whose main job is to write software aimed at end users. The work doesn't result in programming artefacts for other software writers; instead, the software is produced for people who will just run it. If this is your main job, then you really don't need to know about monads. If you're feeling some kind of peer pressure, or a sense of being important, and that's what's driving you into this monad business, then you are wasting your time.

Having got that out of the way, let me give a really short take on what Monads are useful for, the one definition for the non-initiated (sorry, this is you, software non-librarian):

In essence, Monad is important because it gives us a principled way to wrap context information around a value, then propagate and evolve that context as the value evolves. Hence, it minimises coupling between the values and contexts while the presence of the Monad wrapper informs the reader of the context’s existence.

Excerpt from the book Programming Scala, by Dean Wampler & Alex Payne. Any typos or mistakes are mine.

This explanation is really good and easy to understand; no PhD in Math required. The word value might be puzzling if your programming language of choice doesn't have such a concept; you can substitute the nearest equivalent your language offers. Ultimately, this is about writing a more general solution, one that can be trusted for soundness and solidity because its architectural concepts are backed by mathematical theory. The notion is popularised by functional programming theories, concepts, programming languages and tools. Monad is one powerful tool from that trove. The programming language, its libraries and the compiler combine to provide such power.
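
To make this concrete, here is a minimal sketch in Scala (my own toy example, not from the book) using the standard library's Option, a simple monad that wraps the "might be absent" context around a value:

```scala
// A toy sketch (hypothetical lookups) using Scala's Option.
// Option is a monad: it wraps the "might be absent" context around
// a value, and flatMap propagates that context as the value evolves.
object OptionContextDemo extends App {
  val emails  = Map("alice" -> "alice@example.com")
  val domains = Map("alice@example.com" -> "example.com")

  def findEmail(user: String): Option[String]   = emails.get(user)
  def findDomain(email: String): Option[String] = domains.get(email)

  // flatMap chains the steps; the Option wrapper tells the reader that
  // absence is possible, with no null checks coupling values to context.
  def domainOf(user: String): Option[String] =
    findEmail(user).flatMap(findDomain)

  println(domainOf("alice")) // Some(example.com)
  println(domainOf("bob"))   // None -- the absence context propagated
}
```

Notice how the caller's logic never mentions absence; the wrapper carries that context along, which is exactly the decoupling the quote describes.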

The discipline emerged from academia; researchers working on formal proofs of algorithm correctness started it all. During my studies at university, we got introductory lectures on language theory and formal proofs, but never went very deep. At the time I thought it was really for people who wanted a career in academia; I wanted to make software for people. In recent years, functional programming has picked up steam and keeps growing in popularity. I won't dive further into the background here; there is plenty of material on that.

In the early stages, when your solution is only starting to emerge, it doesn't make sense to try to reason in terms of monads or not. I think it best to simply write something that works and gives the user what they need. Only after that first goal is achieved would you take the time to contemplate what you wrote and look for opportunities to remove boilerplate, repetition, ambiguity, and so on.

As you engage in this type of exercise, that's when you start to think: wouldn't it be handy if a tool, or the compiler, allowed me to make my algorithms and recipes more general, without me having to invent something? That is the point where it is good to look at what your language, its compiler and the available libraries offer. Chances are, you could soon be using some monads without even knowing to call them that, as the sketch below illustrates.
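
For instance, in Scala, a for-comprehension over Option or Either is precisely this kind of compiler-provided generality. Here is a small sketch (hypothetical validation steps of my own) of code many people write long before anyone tells them it is monadic:

```scala
// A sketch of everyday Scala: a for-comprehension over Either.
// The compiler desugars it into flatMap/map calls -- monadic code,
// whether or not the author ever calls it that.
object ValidationDemo extends App {
  def parseAge(s: String): Either[String, Int] =
    s.toIntOption.toRight(s"'$s' is not a number")

  def checkAdult(age: Int): Either[String, Int] =
    if (age >= 18) Right(age) else Left(s"$age is under 18")

  // Reads like straight-line code; failures short-circuit automatically.
  def registrationAge(input: String): Either[String, Int] =
    for {
      age   <- parseAge(input)
      adult <- checkAdult(age)
    } yield adult

  println(registrationAge("42"))   // Right(42)
  println(registrationAge("7"))    // Left(7 is under 18)
  println(registrationAge("oops")) // Left('oops' is not a number)
}
```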

The further you progress in this journey, the more concise your code becomes: fewer components perform increasingly more work. The process helps you improve quality, reduce the maintenance burden, and even save yourself valuable testing time. If your programming language is built for this, like Scala, Haskell, Idris, OCaml, and so on, then you reap far more rewards; in fact, you'd be expected to build on such concepts. These benefits can be discussed and reasoned about without getting lost in strange cabalistic terminology, which is how the math may sound to the uninitiated.

The process illustrated in the previous paragraphs would probably be trivial for framework designers. Indeed, they need to achieve high levels of reusability without sacrificing performance or legibility. Folks engaged in domain specific language (DSL) design have similar aims. Math-savvy framework designers would probably go the formal way; that's where category theory comes into play. Unfortunately, math-savvy people can behave like doctors, speaking in tongues that only specialists understand. And this is how, although everyone is well meaning, most people get lost when trying to follow conversations or reasoning involving such concepts.

On social media and blogs, the software people who regularly talk about monads and various math-infused concepts are either language designers or framework designers, or are perhaps aspiring to become one. They seem to be either demonstrating proficiency, or trying to informally engage people they acknowledge as peers.

That's it. A programming language user doesn't need to know about monads. It is useful if they do and can actually take advantage of them, but one can be productive with no knowledge of monads. To those wondering what a monad is, and whether they need one or not, I would suggest checking out Dean Wampler's quote above as a way to assess their motivation. If that qualifies their quest, then it is worth reading many different code samples from multiple languages to find out how to take advantage of monads. No need to get mystified.

Martin Fowler’s article is barely a year old, folks have exceeded my expectations

When I first saw Martin Fowler's blog post on micro-services, I immediately thought that the developer community was going to go crazy about the concept. I wasn't disappointed. But thankfully, many people caught on to the mania before it got totally out of hand. Martin, in his latest blog post, is among those calling for some sanity. Read Martin's blog post here: Monolith First, by Fowler

Martin Fowler is a brilliant technologist, needless to say. This post is going to be a recap of some of my tweets on the subject of micro-services (or "microservices", as I commonly see it written). I would have quoted a bunch of other people instead, had I seen many. But that wasn't the case, so I've got to quote myself.

The first article I read about micro-services was on InfoQ.

Some time later, I saw Martin Fowler's article on the same subject. I immediately thought, as is typically the case, that the developer community was going to go crazy about the concept. I had the following reaction.

Naturally I value the thoughts and the content of the article. But I was merely concerned that many would jump straight in and make a total mess of a rather valuable insight. The topic gained popularity quite quickly, faster than I had expected, though I couldn't say I was surprised either. Reputed analysts picked up on it.

Time going by didn't assuage my concerns; rather, I was only getting more and more confirmation. I thought that perhaps nobody was going to adjust perceptions and expectations until disaster stories abounded. I tweeted my thoughts on that.

Soon enough, people started posting thoughts on what was going on.

And, to keep this relatively short, here we are, having come somewhat full circle, with Martin Fowler inviting some sanity. Martin opens his latest blog post:

As I hear stories about teams using a microservices architecture, I’ve noticed a common pattern.

  • Almost all the successful microservice stories have started with a monolith that got too big and was broken up.
  • Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

Read Martin’s blog post here: Monolith First, by Fowler

OS X Yosemite, why block my view when you should’ve known better?


I should have made this a series: another one on OS X annoyances. I now frequently experience several apps freezing for no apparent reason. Yet again, a new behaviour that, until now, 8 years after switching from Windows to Mac, I didn't expect to experience. Standard apps like Finder.app, Preview.app, Mail.app or Safari.app would just stop responding.

Normally, if an app stops responding, this will show in Console.app. In these instances, Console.app was showing a clean bill of health; nothing was stuck. But as a user, I could type any number of keys and move the mouse around: Finder.app wouldn't respond, and Spotlight wouldn't instantly find answers, whereas it normally does as you type characters. I use Spotlight to launch apps, so when it doesn't respond, my workflow is interrupted. I immediately turned to Alfred.app, and sure enough, Alfred was working fine and could carry out any task I usually throw at it. What the heck was going on now?


I started to suspect a deadlock, invisible to the regular app monitor. I then looked for what might be hogging resources and saw something interesting.

Dropbox and Spotlight max out the CPU

Two processes were occupying 130% of the CPU; effectively 2 of the 4 CPUs on my machine were fully utilised. I have 2 more CPUs that could potentially do work for me, and they did try, only to soon get stuck. The Dropbox app is easy to recognise; the second hungry process, mds, is actually Spotlight's indexer.

Dropbox was clearly working hard on synchronising files to the Cloud, but what was mds doing? I had recently moved a large number of files around; this may have invalidated the Spotlight index, which it was now trying to rebuild. All fine, but I had always thought that only happened when the machine was idle. Furthermore, I expected that the Spotlight indexer wouldn't make the UI unresponsive. I was wrong on both counts.

I found out that when Spotlight gets into such aggressive reindexing, Finder.app also stops being responsive. This has some consequences: some apps appear to work fine, and I can launch other apps that may be snappy and all, as long as they don't go anywhere near Finder.app. The overall impression is that the Mac is unstable without any app appearing to hang. How is this possible? Then I remembered what I always chided Windows for: the fact that some tasks were unnecessarily channelled via the UI layer stack, making them sluggish and prone to getting stuck. That's the same behaviour I was now observing.

Force-quit the Spotlight indexer to restore responsiveness

To confirm my hypothesis, I killed the Spotlight indexer; as soon as I did, Finder.app, Preview.app and others immediately became responsive again. I repeated the experiment many times over before writing this post.

I found another sure way to get Preview.app stuck: any attempt to rename a file, move it to a new location, or add tags to it directly from Preview.app's menu will cause both Preview.app and Finder.app to become unresponsive for a long time.


My conclusion from here was that some of the standard apps that ship with OS X Yosemite contain code that is very old or simply badly designed. Such code, typically the work of UI framework enthusiasts or of design principles from another era, traverses a UI layer stack for tasks like disk and network access, although it shouldn't have to.

Most users would typically get frustrated and decide that OS X is just bad software; others might think about rebuilding their machine. I only looked into it briefly and didn't bother digging into the SDKs, APIs and other kernel debugging tricks to get to the true bottom of it.


Leadership drive: from 'despises Open Source' to 'inspired by Open Source', Microsoft's journey

With a change of mind at the top leadership level, Microsoft has shown that even a very large company is able to turn around and adopt a customer-focused approach to running a business. By announcing Nano Server, essentially a micro-kernel architecture, Microsoft is truly joining the large-scale technology players in a fundamental way.

A video on Microsoft's Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick out some of the important bits and go over them.

I remember the time when Microsoft was actually the darling of developers, the open source equivalent of its day, as I entered this profession. I was a young student, eager to learn, but I couldn't afford to buy any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn't anywhere yet. I had learned Unix, TCP/IP and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have evolved MS-DOS properly into a headless kernel that would get windowing and other things stacked upon it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.

The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they've evolved Windows Server.

Microsoft Windows Server Journey

Considering this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip the GUI out of unattended services for an improved experience. That's refreshing, but to me it has always been mind-boggling that they didn't do this many years ago.

Things could have been different

In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that was available even before I knew anything about computers. I wanted the ability to pipe one command's output into the input of another command. Software that requires a human present and clicking on buttons should only be present on the desktop, not on the server. With Microsoft, there were always windows popping up and buttons to be clicked. I couldn't see a benefit to the user (systems administrators) in the way Microsoft had built its server solutions. It was no surprise that Linux continued to spread, scale and adapt to Cloud workloads and scenarios, while Windows remained mainly confined to corporate and SMB environments. I use the term confined to contrast the growth in departmental IT with that of Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.

Times change

When deploying server solutions mainly covered collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments became larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Though still rather inconvenient, software came on CD-ROMs and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: the servers have well-known names, locations and functions, and administrators care for them discriminately. The jargon term for this is servers as pets. Even with best efforts, Data Centre resource utilisation remained low (the typical figure is 15%) relative to the large capacity available.

Increasingly, however, companies grew aware of the gains in operations and scalability that come from adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix and many others, mandate that servers are commodities that are expected to crash and can easily be replaced. It doesn't matter what a server is called; workloads can be distributed and deployed anywhere, and relocated onto any available server. Failing servers are simply replaced, mostly without any downtime. The jargon term for this approach is servers as cattle, implying they exist in large numbers, are anonymous, and are disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn't fit.

the voice of the customer

So, Microsoft now really needed to rethink their server offerings, with a focus on the customer. This is customer-first, driven by user demand: a technology pull, instead of the classical technology-first "build it and they will come" push model, in which customer needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; instead, it's an optional component. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.

Additionally, managing servers used to require either sitting in front of the machines, or firing up remote desktop sessions so that you could happily go on clicking all day. This is something that Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. But, to be fair, this was an early preview; I don't think they even have a logo yet. So I would expect that by the time Nano Server eventually ships, authentication will no longer require popup windows. 😉

The new server application model: the GUI is no longer baked in; it can be skipped.

The rise of containers

Over the last couple of years, the surge in container technologies really helped bring home the message that the days of bloated servers were numbered. This is the time when servers-as-cattle takes hold, where it's more about workload balancing and distribution than about servers dedicated to application tiers. Microsoft got the message too.

Microsoft Nano Server solution

I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that it was more a matter of political decision. Looking at the next slide, I wasn't too far off.

Reverse forwarders: a technical term meaning that these are now decent headless components.

Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously: they've embedded them into their Azure portfolio, with Microsoft support for Docker container technologies.

Although this is a laudable effort that should bear fruit in time, I still see a long way to go before users, all types of users, become truly central for technology vendors. For example, desktop systems must still be architected properly to save hours of nonsense. There is no reason why a micro-kernel like Nano Server couldn't be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen; the user should always be able to preempt his/her system and get immediate feedback. That's how the computing experience should be for people. It's not there yet, and it's not likely to happen soon, but there is good progress, partially forced by the success coming from free and open source advocacy.

If you want to get a feel for how radically Microsoft has changed their philosophy, and you are a technically minded person, this video is the best I've seen so far. You will see that the stuff is new and being built as they speak, but listen carefully to everything being said and watch the demos: you will recognise many things that come straight from free and open source and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, etc. I didn't hear them say whether Nano Server would be developed in the open too, but that would have been interesting. Nano Server, cloud optimised Windows Server for developers.

Trying to oppose Open Web to Native App Technologies is so misguided that it beggars belief

The open web is not in opposition to native apps. There is just one single entity called the Web, and it is open by definition. There are numerous forms of applications that make use of Internet technologies; a subset of those Internet-based technologies serves the web. You get nowhere trying to oppose those two things.

People who ought to know better spend valuable energy dissing native app technologies. I've ignored the fracas for a long time; finally I thought I'd pen a couple of simple thoughts. I was motivated to do this upon seeing Roy Fielding's tweets. I also read @daringfireball's recent blog post explaining, in his usually articulate way, how he sees Apple in the whole picture. Here, I just want to look at a couple of definitions, pure and simple.

To claim that open web technologies are the only safe bet is akin to saying that all you will ever need is a hammer. Good luck with that if you never come across any nails.

I think both are very good and very useful in their own right; one isn't going to push the other into irrelevance anytime soon because they serve different use cases. The signs actually are that the web addressing model, by way of URIs, might be challenged soon once Internet-connected devices start to proliferate (as they've been hyped to do for some time now). So I actually think that the safer bet is Internet technologies; the Web could increasingly be pushed into a niche. But OK, I didn't actually intend to get into the fray.

Incidentally, I only see claims that native platform apps are some kind of conspiracy to lock users in, whereas the open web would apparently be the benevolent gentleman dictator's choice. I am skeptical in the face of such claims, because Mother Teresa wasn't a web advocate, for example. I remember that Apple, who triggered what-you-know, initially only wanted people to write HTML apps for the iPhone when it launched. It was only when they were pressured by developers that they finally released native SDKs. That's a terrible showing for a would-be ill-intentioned user-locker.

There is no such thing as an open web versus anything else. There is just the web, and then there is an ever-growing generation of native platform apps that also take advantage of Internet technologies. That's it. Trying to oppose those two things is just rubbish, plainly put. If something is a web technology and actually adheres to the web's definition, it can only be open. A closed web technology would be an oxymoron; the definition doesn't hold. If something is built using Internet technology, it may borrow web technology for some purposes, but it is not part of the web per se.

In the instances where web technology is the best fit, it will continue to be that way and get better. Conversely, in the cases where native platform apps work best, those will continue to get better and may use the Internet. There is a finite number of web technologies, bound by the standards and specifications implemented. But there is an infinite number of native app technologies, since implementers can write anything they like and get it translated to machine code following any protocol they may devise.

The Web doesn't equate to the Internet. There are open and closed Internet technologies, but there is no such thing for the Web. The Internet contains the Web, not the other way around.

At the outset, there is a platform battle going on in which parties are vying for depleted user attention. In such battles, every party attracting people to its platform is self-serving and can make as much of a moral (or whatever you call it) claim as any other. The only exceptions are those set to gain nothing from getting their platform adopted. There aren't any of those around.

My observation of the ongoing discussions so far is simple. Some individuals grew increasingly insecure about having missed the boat on native platform apps, whatever the reason, including their own choices. Some other individuals, a different group, burned their fingers trying to follow some hype, learned from it, and turned to native platform apps. The two groups started having a go at each other. That got all the rousing started; everyone lost their cool and indulged in mud slinging. But there is no point to any of it.

If you are building software intended to solve a category of problems, then the most important technology selection criterion you should consider is fitness for purpose. Next to that, if the problem is user-bound, then you want to find the most user-friendly way of achieving your aim. If you do this correctly, you will invariably pick whatever best serves the most valuable use cases of your intended audience. Sometimes that will be web technologies, sometimes it won't, and other times you will mix and match as appropriate.

I don't see web technologies as opposed to native app platforms at all. Whatever developers find more attractive and profitable will eventually prevail, and use case is the only metric that truly matters. It is understandable that any vendor should vie for relevance; that's what's at stake, and that is important to everyone. It's only once people and organisations face up to this cold fact and start talking genuinely that they will begin to make some progress.


Scalability is basic hygiene for Internet Services, trade it off at your own risk


Scaling can mean a lot of things, and the way companies address it and make trade-off decisions has a large impact on the user experience. In these days of dwindling attention spans, users expect a snappy experience regardless of the amount of data or the number of people interacting on any platform. I am tempted to think that Apple may have made many software trade-off decisions by sacrificing service scalability, and that is a bad idea for Internet services.

Here are some examples of what I mean:

  • Safari Reading List: it works well when you have just a few items bookmarked. Since it's easy to use, my reading list grew quite fast, and this is causing Safari to become less responsive whenever I try to view the list from one of my devices.
  • iTunes Match: this is quite handy; all your music is on iCloud and you can listen to it on up to 5 iOS devices. However, if you have a large music library and your Internet connection quality fluctuates, you quickly get an unresponsive music-playing experience. It appears that the Music app isn't able to gracefully degrade the iTunes Match service.
  • Using documents from iCloud: this works well with a good Internet connection, but Pages or Numbers tend to get stuck whenever something isn't smooth in the network connection. Furthermore, iCloud documents created on a Mac are not fully supported by the iOS versions, which convert or duplicate them; it's a pity that it works that way.
  • I can't directly share my purchased books and PDFs between iBooks on iPhone and iPad; these need to be manually copied around and synced with iTunes.
  • Apple's App Store is getting slower all the time. In the early days it used to be fast, but nowadays, with everybody putting up app stores, Apple's App Store service or its client applications don't appear to be coping very well.
  • Server Manager is now a standalone app that can be installed on any Mac running Mountain Lion to turn it into a server. That's great, but I discovered a couple of annoying issues with it. First, if you ever touch the embedded RubyGems package, you could be in for a ride. When you dig deep into it, you see that Server Manager ships with its own PostgreSQL and Ruby on Rails distributions, so why not completely sandbox them? The second issue I found is that, as I move my laptop around, it gets assigned new IP addresses that cause problems with the embedded DNS service. Sleep/wake causes Server Manager to start up really slowly and become unresponsive for a while. I know how to work around these issues, but not before hitting the problem.

These are different types of shortcomings that all relate to scaling trade-offs: sometimes the volume of data causes problems, other times it's the way objects are shared that doesn't scale out across devices. If a service has an upper-bound scaling threshold, why not either advertise it or adapt the user experience to reflect it? None of the examples above could have escaped Apple's legendary experience design and iterative refinement. These have to be happening because someone thought they were good trade-offs, but I can't find a good justification for trading these off against anything else. The use cases covered in my examples are all too simple and predictable, if not obvious, for a product used at this scale.

Apple talks a lot about creating the best possible user experience, and it is mostly believable when you use their devices, and also judging by their success so far: haven't they been at the forefront of the current consumerisation of IT? However, several of Apple's Internet-based services just don't seem to scale up to a good user experience. This is surprising to me, because it's impossible to imagine Apple not knowing the consequences of the trade-offs they made in this area. Yes, sure, it's hard to excel everywhere, but with Apple's clout and the abundant supply of talent for a company like that, I don't understand why they're still not plugging the gaps in these Internet services. That was the reasoning behind my tweet around mid-October, where I speculated that an acqui-hire was in order for Apple.

I am basing my examples here on Apple because I experience many of their products every day. But in fact these observations apply to any organisation putting out Internet-based services. For Internet services, designing for scale isn't a luxury, and it certainly must not be an afterthought; it is fundamental to any ambitious endeavour.

Scalability is difficult to get right. Inexperienced teams typically cry foul; the catchphrase "premature optimisation" is bandied about by people who are not sure how to go about it. That's fair: if you don't know how to address scaling, it's best not to try. But large companies that ship products to hundreds of millions of people cannot trade off scalability without paying a heavy price for it. Competition is heating up; Microsoft and Google are getting better at responding to Apple's dominance, and this will force all three to ship products that scale smoothly.

A good example of hands-on Architecture work

A good example of hands-on architecture work is given by the Microsoft Patterns & Practices group in the article titled CQRS Journey.

This article by the Microsoft Patterns & Practices people is a good example of what I often talk about: a software architect needs to be hands-on. The credits show a large number of people; that isn't necessarily what it takes to architect a solution properly, it is probably just how they collaboratively put this one together.

Not understanding enough about all aspects of the architecture is often the cause of things going wrong down the road. Such risk is mitigated when teams dive into all corners of the problem and solution domains. As an architect, if folks try to take your ideas in unintended directions, you would at least be able to spot it and assess any potential issues or risks that may result.

If the architect isn't hands-on, he/she is not well equipped to perform such assessments adequately; he/she would have to take what is being said on blind faith. When that happens, one of the main objectives of architecting a software solution, risk management, becomes weak.

LMAX Disruptor: Mechanical sympathy, a quality of great C/C++ programmers, rediscovered in the Java world?

We hear this all the time: testing is a fundamental part of crafting great software, yet it often seems to be relegated for all sorts of reasons. This oldish article by Martin Fowler on the LMAX Disruptor shines a light on the value of performance testing, and on mechanical sympathy as a crucial skill for crafting great software solutions.

Two interesting takeaways from Martin Fowler's article on LMAX are the importance of performance testing and mechanical sympathy.

Testing is actually one of the most efficient ways of learning modern software systems, so if nothing else it will give people that edge. Mechanical sympathy, on the other hand, is one of the enduring qualities of great C/C++ programmers; in their case, I'd say the expression covers a combined affinity with the C++ language standard, the vendor compiler's idiosyncrasies, and the particular OS being targeted. I could infer this back in the 1990s as I read books by folks like Herb Sutter, Stan Lippman, or Scott Meyers. At the time I had a good understanding of the x86 architecture and could see how mechanical sympathy played out, though I didn't know the expression until I read Martin Fowler's blog.
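
Mechanical sympathy is easy to demonstrate even from the JVM. The toy sketch below (my own illustration, nothing to do with LMAX's actual code) sums the same matrix twice; the only difference is traversal order, and the cache-friendly order is typically several times faster:

```scala
// A toy illustration of mechanical sympathy: identical work, two
// traversal orders. Row-major order walks memory sequentially and is
// kind to CPU caches; column-major order jumps between arrays and
// defeats the prefetcher. Timings are indicative only (no JIT warm-up).
object MechanicalSympathyDemo extends App {
  val n = 2048
  val matrix = Array.ofDim[Long](n, n) // n arrays of n longs

  def time(label: String)(work: => Long): Unit = {
    val start  = System.nanoTime()
    val result = work
    println(f"$label: ${(System.nanoTime() - start) / 1e6}%.1f ms (sum=$result)")
  }

  // Row-major: the inner loop follows each row's memory layout.
  time("row-major") {
    var sum = 0L
    for (i <- 0 until n; j <- 0 until n) sum += matrix(i)(j)
    sum
  }

  // Column-major: each step lands in a different row's array.
  time("column-major") {
    var sum = 0L
    for (j <- 0 until n; i <- 0 until n) sum += matrix(i)(j)
    sum
  }
}
```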

I've been focusing on actor-based programming models for the last year or so, and I think the LMAX Disruptor is definitely a very interesting and exciting concept.

Zachman framework presented as an Ontology, finally a more fitting name!

The Zachman framework is finally presented as an Enterprise Ontology; in my opinion this is the most fitting description ever given to this piece of work. I hope some folks will finally stop creating confusion around it.

It's nice to see that John Zachman is now presenting his framework as an Ontology; this shows that the author didn't want his work to be "sabotaged" by the misleading misreadings that folks throw out daily about this and other Enterprise Architecture bodies of work.

There is lots of confusion around Enterprise Architecture, and that is an understatement. I briefly participated in a discussion thread on LinkedIn about a quote from Zachman, and I realised that I had to stop before I too started getting confused.

I hope that this release finally settles the debates about what the Zachman framework is, ought to be, and may or may not be useful for. From the horse's mouth: it's an Ontology. And that's exactly how I've always seen it; it's not a cookbook like "How to make chicken tikka masala at home in 10 steps". I'm not going to delve into defining what an ontology is or may be useful for; I don't want to give a cheap recipe that someone would run with and perhaps cause "brain damage" to others.

A/B Split testing a major platform: Windows re-imagined

Windows 8: what's not in the name is that the traditional desktop OS will not be loaded by default. In the Microsoft Windows world most features were "opt-out, sometimes", but with Windows 8 it seems that even the OS becomes "opt-in, always". Coming from Microsoft, this is the most significant sign that talk of post-PC isn't exaggerated at all.

This is the most significant sign yet that the IT industry is admitting we are heading into a post-PC era; Microsoft's latest move makes this quite clear. In this blog post by Sinofsky (yes, it's a Steve's World), Microsoft is saying that Windows 8 may run without even loading the Windows OS. The new OS is definitely positioned as a post-Windows OS. Windows+ perhaps? Once marketing settles on a name, I think it may not even include the word "Windows".

This is Microsoft on the offensive, big time. Such a bold move must be aimed at taking the wind out of the sails of Google and Apple. HP's stuttering indicates that they are no longer in this game, certainly not focused enough to be a contender in a post-PC market.

As I read it, the Metro platform (and not just the UI) will be the default boot experience for Windows 8, and it will surely not allow any traditional Windows applications to run. That should relegate the traditional Windows OS experience to a secondary role (if you really insist on having it, you can have it, but we're not pushing it). It doesn't take a pundit to imagine what that means: this is how Internet Explorer trounced Netscape, by being the default browser on the PC. Microsoft could not possibly be doing this lightly.

Where is the A/B split testing then? Well, it's two-phase testing as I see it. By announcing the decision so early in a blog post, Microsoft is asking the community to comment. If there is any significant outcry, then Microsoft will have confirmation that the masses badly want to stick with the Windows experience. If not, then the new OS may launch with Metro as its default experience, at which point a second split test kicks in. If the Metro UI is a runaway success, it's game on in the new era. Microsoft stands to win whatever the outcome.

The only group that may have some hesitation here is the partner ecosystem; folks who have invested their soul in the traditional Windows OS experience might be nervous. But I suppose there is not much choice here: the industry is no longer ruled by the laws that prevailed when vendors decided what users would get.