Parallel programming awaiting its John von Neumann

Multi-core architecture is awaiting a decent computing model. This normally takes a generation to come to fruition. So far we’ve followed the John von Neumann model; that’s how we naturally think: in blocks of sequences. This model suited us very well for a long time. No longer.

The big iron companies have been supplying multi-processor systems for ages. These systems were mainly designed for sharing expensive resources, time-sharing and similar models. We’re comfortable with that: we design and develop programs to run as smaller sequences of blocks, all well orchestrated and sharing memory units. The bulk of the algorithms available were conceived to exploit the von Neumann model. Donald Knuth’s classics provided the bedrock for generations to program against.

Recently processing costs have come down significantly and chip manufacturers have been pushing out multi-core systems. We’re now struggling to make good use of these systems. What we’ve found so far is really a smaller and cheaper replica of the big irons’ model: it’s called virtualisation. Data-centre and CIO conferences buzz about it; it’s possibly over-hyped. It’s not surprising that we’re struggling to leverage multi-core architectures: the body of knowledge we live by is not really suitable for parallelism. Everything I’ve seen so far boils down (sooner or later) to arrays, loops and control blocks, plus some data-access synchronisation, but that’s it. All of this is old news, Von Neumann-esque, and not truly parallelism.
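To make that point concrete, here is a minimal sketch of my own (a hypothetical example, not taken from any particular framework) of the "arrays, loops and synchronisation" style I mean: four threads each run the familiar sequential loop over a slice of an array, and a lock guards the shared total. It executes concurrently, but the thinking underneath is still entirely sequential.

```python
# A hypothetical illustration of "Von Neumann-esque" parallelism:
# an array, a loop split across threads, and a lock synchronising
# access to shared state. Concurrent, but sequential at heart.
import threading

data = list(range(1_000))
total = 0
lock = threading.Lock()

def partial_sum(chunk):
    global total
    s = sum(chunk)      # the familiar sequential loop
    with lock:          # data-access synchronisation
        total += s

threads = [
    threading.Thread(target=partial_sum, args=(data[i::4],))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total)  # same answer as plain sum(data): 499500
```

Nothing here requires a new model of computation; it is the old model with extra bookkeeping, which is precisely the limitation.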

Human beings provide a good approximation for illustrating true parallelism. When I take a step there’s a huge amount of activity (potentially a lot of coordination and collaboration) going on inside my body. I can’t begin to imagine how it all works. I’m sure some scientists could step forward and tell me a thing or two about it, but that would be an approximation too (current science as we know it). It’s tempting to think that a large number of tiny steps are triggered and coordinated centrally within my brain. That’s the most logical explanation given what we think we know about the human body and the role we think the brain plays in it. But it remains a guess: a very good one, the best we have. Who is to say that many parts of my body are not activated simultaneously? If they were, we couldn’t possibly fathom the outcome, so it’s simpler to say that it doesn’t work that way. This is how I see the challenge with parallelism: an unknown unknown (not comfortable).

One reason parallelism eludes us is that we can’t figure out how to design and build systems unless we can predict the outcome and future behaviour of those systems. We might be reaching the limits of our current design ability. Where do you begin conceiving systems whose real operation you cannot imagine? Is robotics a suitable field for this? Probably not quite, but it’s certainly a good model for trying to understand the challenges.

We can’t do parallelism very well because we’re missing parallelism’s John von Neumann, its Donald Knuth, its Grady Booch, and so on and so forth. You can’t simply invent all of that quickly; these great folks will slowly emerge, over time. Somebody will wake up one day and have an epiphany, many will flock to hear more about the good news, marketers will learn new buzzwords, and the whole cycle will repeat. That’s the way it is.

PS: I’m not a scientist; I conducted no surveys or lab analysis to come to these conclusions. I’m just conjecturing, just some thoughts.
