Nomad Computing as the ultimate scalability concept

Nomad people move their cattle to better grazing areas year in, year out. They are always locating the best resources and have no problem migrating constantly. I see the future of computing following a similar model: swarms of rudimentary computing units self-organise to deliver the best service at their point of consumption. Resources are allocated truly on demand, and scalability becomes transparent.

In my take on Nomad Computing, object-orientation reaches its pinnacle. The traditional separation of database, application, and web tiers will become meaningless. Each of these concepts will become more ‘nomad’, and programmers need not worry about them. Database systems and application servers would have to mutate into entities that can register themselves and join local computing resource hubs. Once they’re in, they would figure out the best ways to self-organise for maximum throughput, spontaneously creating clusters based on usage patterns. Writing applications for Nomad Computing would be easier: no more coding login windows or data access objects; such patterns would be retired. Instead, developers would concentrate on the object graphs they intend to create, which would encourage craft and creativity. Operating system platforms would also become largely irrelevant, pushed further into the background, taking away arguably one of the biggest pains in custom application development: deployment concerns.
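To make the register-then-self-organise idea a little more concrete, here is a toy Python sketch. The `Hub` class, the node dictionaries, and the workload labels are all hypothetical illustrations of the concept, not an existing API.

```python
from collections import defaultdict

class Hub:
    """Hypothetical local resource hub: nodes register themselves,
    then get grouped into clusters by their dominant usage pattern."""

    def __init__(self):
        self.nodes = []

    def register(self, node):
        # A node joins simply by announcing itself; no manual provisioning.
        self.nodes.append(node)

    def clusters(self):
        # Self-organise: nodes serving the same workload cluster together.
        groups = defaultdict(list)
        for node in self.nodes:
            groups[node["workload"]].append(node["name"])
        return dict(groups)

hub = Hub()
hub.register({"name": "n1", "workload": "storage"})
hub.register({"name": "n2", "workload": "compute"})
hub.register({"name": "n3", "workload": "storage"})
print(hub.clusters())  # {'storage': ['n1', 'n3'], 'compute': ['n2']}
```

A real system would of course discover peers over a network and measure live usage; the sketch only shows the shape of the idea, with registration replacing deployment.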

A possible drawback of Nomad Computing would be hunting down bugs and removing malicious software; these problems could become intractable. Governance and safety would also become horrendously complex, since unforeseen outcomes would be commonplace. Perhaps all these disciplines would need to evolve in entirely new ways, with specialised software emerging to provide answers and keep ahead.

I realise this is all far-fetched, but it just seems we might not be too far from it.

In a way we’ve already started to see early implementations of Nomad Computing: Amazon’s S3 and Apache Hadoop are good steps in the right direction. Power grids in the electricity distribution industry are perhaps the closest model I see embodying Nomad Computing. Once we’ve really figured out how to do Nomad Computing properly, we would be in a position to leverage massively multi-core systems as they become available.

Modality obsolescence

Now that we have all manner of multi-core and kernel-mode programming, I think modality should be on its way out. Few things are more irritating than an unresponsive computer; computers should always respond, full stop. With GUI systems, modality is often the cause of computer freezes, regardless of the ‘root cause’ of the issue. It’s the lack of modality in Unix command-line interfaces that makes them mostly manageable and more resilient.

In the early 90s, Microsoft Windows programming involved creating, well… ‘windows’ and ‘dialogs’ in C++. The same thing could be achieved with Visual Basic and various other development platforms. Dialogs could be modal or non-modal, and you relied on the underlying messaging system to orchestrate functionality between modules. The whole concept was fairly simple; the complexity really came from the high number of APIs and libraries to code against. With C++, the other half of the complexity came from the challenge of writing code that truly reflected what programmers had in mind, given everything they really ought to know about the tools and platforms being used: a gigantic ‘expectation’ gap. Writing my first dialogs and seeing ‘hello world’ was an exciting moment. Microsoft Windows and most graphical user interface systems still build on the fundamental concepts of ‘modal’ and ‘non-modal’ dialogs and windows.
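The modal/non-modal distinction can be sketched without Win32 at all. In this toy Python illustration, a plain queue stands in for the Windows message pump; `modal_confirm` and `modeless_loop` are made-up names for the two styles, not real APIs.

```python
import queue

def modal_confirm(events):
    """Modal: stop everything and wait until the user answers.
    Every other event is stalled (here, dropped) behind the dialog."""
    while True:
        ev = events.get()
        if ev in ("yes", "no"):
            return ev

def modeless_loop(events, handled):
    """Modeless: keep processing whatever arrives; a pending question
    is just one more event among the others."""
    while not events.empty():
        handled.append(events.get())

events = queue.Queue()
for ev in ("click", "yes", "resize"):
    events.put(ev)

answer = modal_confirm(events)  # swallows 'click', returns on 'yes'
print(answer)                   # yes

handled = []
modeless_loop(events, handled)  # 'resize' still gets processed
print(handled)                  # ['resize']
```

The modal function is exactly the freeze risk the section describes: if the answer never arrives, the loop never returns and nothing else runs.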

Looking back, I think modality’s raison d’être was, and still is, to preserve the integrity of the data being manipulated: you wanted to be sure the program’s context was in a predictable state before proceeding further. This is an inherently sequential concept that ought to be left behind soon. In a truly parallel computing world I would expect hardware and software modules to be even more self-contained, able to ‘move on’ if some desired state was not reached. That should rid us of computers freezing entirely under certain conditions. It might never happen on silicon-based, Moore’s-Law-abiding platforms; perhaps nanotechnology would help if it departs completely from the ‘old’ models. Off to learn a bit about nanotechnologies then. Who knows.
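A minimal sketch of that ‘move on’ idea in Python, assuming a timeout-based wait stands in for richer self-contained behaviour; `wait_or_move_on` is a hypothetical helper, not an existing API.

```python
import threading

def wait_or_move_on(ready: threading.Event, timeout: float) -> str:
    """Instead of blocking modally until some state is reached,
    give up after 'timeout' seconds and carry on with a degraded result."""
    if ready.wait(timeout):
        return "proceed with full result"
    return "move on with partial result"

ready = threading.Event()
# The awaited state never arrives; a modal design would freeze here.
print(wait_or_move_on(ready, timeout=0.1))  # move on with partial result

ready.set()  # the state is reached in time on a second attempt
print(wait_or_move_on(ready, timeout=0.1))  # proceed with full result
```

The module stays responsive either way; the open question, as the section notes, is what ‘partial result’ should mean without sacrificing data integrity.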