Information Systems Architects should write solid code, Part 2: how?

In an earlier post, I stated that an Architect should write solid code. I’ve also tweeted about this, pointing out that an IT architect who doesn’t code cannot be trusted. To illustrate my point, I chose a fictitious test case organisation, Trotters, where Kermit works as an Architect. I described some of the consequences when someone like Kermit is far removed from code. If you are interested in this discussion but haven’t read my earlier post, I invite you to do so here.

In this post I will discuss some of the ways that Kermit could write code to make the architecture practice more effective in his organisation.

To begin with, here are some more (bold) statements; I can always substantiate them at a later stage.

  • If you can draw diagrams, you should also be able to provide a framing code base that realises them
  • If you cannot create code representative of the diagrams that you are drawing, then either:
    • you are unsure how to do this (no further comment on that), or
    • your ideas are not fully fleshed out yet (so you are not done), or
    • you are reluctant to get into the details, for some reason
    • In any case, this is a problem.

Writing wireframe code goes much faster than writing long documents, and it harmonises interpretation much more efficiently. Text and diagrams leave a lot of room for interpretation; every programmer is likely to read them in his or her own singular way.

Code is the most efficient way to convey ideas to programmers. Architecture code should be embryonic, a starting point for a solution. Architecture principles help when they define ground rules for composing solutions out of components, the boundaries and contexts of those compositions, and the special conditions that should be guarded. All of this can be expressed much faster in wireframe code than in complicated text.
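To make this concrete, here is one possible shape such wireframe code could take, as a minimal Java sketch. All the names below (Order, OrderService, OrderRepository, the sales package) are invented for illustration; they are not Trotters' actual design, they only show the kind of skeletal contracts an architect can hand over alongside the diagrams.

```java
// Hypothetical wireframe sketch: all names are invented for illustration only.
package com.trotters.sales;

import java.util.List;

/** Deliberately skeletal domain placeholders, left for the team to flesh out. */
class Order { String id; }
class OrderLine { String productCode; int quantity; }

/** Application-layer entry point: the web tier may only talk to this interface. */
interface OrderService {
    Order placeOrder(String customerId, List<OrderLine> lines);
    Order findOrder(String orderId);
}

/** Persistence contract: implementations live strictly in the data-access layer. */
interface OrderRepository {
    void save(Order order);
    Order load(String orderId);
}
```

The point is not the detail of these types but that the layer boundaries are now stated in a form that both a compiler and a programmer can check.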

How should Kermit go about this?

To simplify, I will focus on web applications as the solution domain for Trotters. Kermit and the team are working with object-oriented (OO) design and development concepts, and they standardise on UML notation. They could be programming on the Java or .NET platform; that doesn't matter here.

Here is some simple guidance for Kermit:

  • UML supports drawing component diagrams. Kermit has probably designed a layered architecture diagram. He can therefore create code equivalents of those UML artefacts by designing classes for the relevant components, using the most appropriate realisation path.
  • Components in a layered architecture will typically interact via interfaces. Kermit can create (mostly one-to-one) interface classes between the connection points in the architecture. OO design skills are necessary to do this right; it is a first test of Kermit’s ability to say it in code. This can be refined further using a concept such as design by contract, for which ample free resources can be found on the Internet.
  • Boundary conditions and constraints: modern programming languages like C# or Java offer an array of features for dealing with constraints (similar concepts are fairly easy to express in other languages). In addition, boundary conditions and constraints can be expressed in test harnesses, which in turn are included in the codebase as part of a continuous integration setup (see the sketch after this list). Such measures are invaluable, and they often teach teams aspects of the solution that are far from obvious and would never be captured in text documents.
  • Enforcing architecture: this becomes much easier with the development environments (IDEs) available to programmers these days. These IDEs ship with, or can be complemented by, code analysis and validation tools. The most recurrent patterns and errors can typically be verified with little or no effort, thanks to the richness of the validation rules available for these tools. An organisation like Trotters, as discussed in part 1, is typically weak in this area, and that is a shame.
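As a hedged illustration of the contract and test-harness points above, here is a minimal Java sketch combining a design-by-contract style guard with a JUnit 4 test that pins the boundary condition into the continuous integration build. The class and method names are invented for this example.

```java
// Hypothetical sketch: a design-by-contract style guard plus a JUnit 4 test
// that pins the boundary condition into the continuous integration build.
// All names are invented for illustration.
import org.junit.Test;

public class OrderServiceContractTest {

    /** Minimal stand-in implementation, used only to exercise the contract. */
    static class DraftOrderService {
        String placeOrder(String customerId, int lineCount) {
            // Boundary conditions expressed in code rather than prose:
            if (customerId == null || customerId.isEmpty()) {
                throw new IllegalArgumentException("customerId is required");
            }
            if (lineCount < 1) {
                throw new IllegalArgumentException("an order needs at least one line");
            }
            return "ORDER-" + customerId; // placeholder result
        }
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsOrdersWithoutLines() {
        new DraftOrderService().placeOrder("CUST-42", 0);
    }

    @Test(expected = IllegalArgumentException.class)
    public void rejectsMissingCustomer() {
        new DraftOrderService().placeOrder("", 3);
    }
}
```

Once such a test lives in the codebase, the constraint is enforced on every build rather than buried in a document.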

Once the architecture is expressed in code, the following become possible:

  • Efficient two-way feedback on the architecture: programmers can immediately spot issues and raise them for resolution with the architect. Kermit’s ability to communicate in code comes in handy here, as he can grasp implementation issues quickly and improve his design in the process
  • Enforcing architecture principles: nearly every modern programming environment offers tools for validating code and testing boundary conditions.
  • Platform constraints are immediately brought to light: with architecture code, architects and programmers are confronted with infrastructure constraints straight away. This makes it possible to analyse the situation and convey feedback quickly to all stakeholders before anything is really built, which helps reduce gaps in expectations across all groups.
  • Guarding against basic mistakes and repetition: some elements are common to every solution in a given business domain. There is no value in painstakingly recreating them, as that would only expose Trotters to unnecessary mistakes (typos, errors or omissions due to human factors, learning the hard way); one possible example follows this list.
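As one possible, invented example of such a common element: a tiny shared exception hierarchy that every Trotters solution could reuse instead of each team rolling its own.

```java
// Hypothetical example of a common element captured once in a shared codebase:
// a small exception hierarchy every Trotters solution reuses instead of each
// team rolling its own. Names are invented for illustration.

/** Base type for business-rule violations across Trotters solutions. */
class BusinessRuleException extends RuntimeException {
    BusinessRuleException(String message) {
        super(message);
    }
}

/** Raised when a caller asks for something that does not exist. */
class NotFoundException extends BusinessRuleException {
    NotFoundException(String what, String id) {
        super(what + " with id '" + id + "' was not found");
    }
}
```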

An easy argument would be to claim that Kermit, as an architect, does not have enough time to delve into programming details. Another would be to claim that if Kermit delved into code he would be overstepping his role (micro-managing?), or that the programmers’ creativity could be hampered. Such arguments are easy to bring up and tempting when one knows no better, but in most cases they are misguided. First of all, I argue that an architect can write solid code much faster than he or she can create useful documents with legible text and diagrams. Secondly, the tools and techniques available today are amazingly powerful and simple to use, and not using them to their true potential is a disservice to the organisation.

To illustrate this last point with a totally unrelated example: when I look over the shoulder of some people using Microsoft Word, I’m often amazed to see them go through the pain of manually formatting documents by inserting spaces and page breaks. These people don’t seem to be aware of features that have been in the product since version 2.0, the first version I used! That is the point. And this example is nothing compared to the inefficiencies caused by programmers and architects enforcing early-90s work processes on a 2010 generation of tools and techniques.

To summarise: if you can express architecture principles in diagrams and text, be sure to also express them in solid code to reduce communication gaps. I call this: say it in code. Users typically don’t relate to design documents (architecture is part of design in this context). More often than not, design documents pass approval gates without users noticing faults or shortcomings in them, and this should be expected, precisely because users hire architects for their solution skills and experience. Architects shouldn’t expect users to validate solution architecture, which is nevertheless what happens a lot (again, I’m using the term users in a broad sense here). On the other hand, once solutions are shown to users, they can provide meaningful feedback far more effectively. So architects should express technical solutions in solution form, and code is the form least prone to deviations induced by human interpretation.

So far I’ve focused on the application architecture aspect of the architect role, in my simplified view. Looking at the broader role, including business analysis and infrastructure design, saying it in code involves other aspects that I won’t analyse much here: this posting is already quite long, and I’m trying to keep it short enough while still complete in its illustration. In future postings, I will tackle these other architect roles.

In the next instalment I will explore the limits of this concept. If I get more time, I will develop the concept further for Trotters by showing specific examples of the kind of code Kermit would be writing to help his organisation (the ‘eat your own dog food’ idea).

Project Volta: Microsoft is quietly redefining web application development

Microsoft might be up to something that would really boost programmer productivity. What took them so long? I suppose the tools are maturing; I bet open source has been of tremendous help, and programmer productivity tools are getting so much better.

Anyway, Microsoft appears to be incubating something that is long overdue: a tool that makes it possible to design and build web applications coherently, ignoring the front-end/back-end chasm, and to deal with component deployment across web tiers much later. The potential productivity gains are obvious; this could herald a small revolution on its own. With all manner of aspect orientation and dynamic scripting languages available, this is clearly the next logical evolutionary step.

At first, specialist front-end developers might look at this with suspicion, but the model is definitely sound. At some point the hype will kick in. Check it out: http://labs.live.com/volta

Nomad Computing as ultimate scalability concept

Nomadic people take their cattle to better grazing areas year in, year out. They are always locating the best resources and have no problem migrating constantly. I see the future of computing following a similar model: swarms of rudimentary computing units self-organise to deliver the best service at their point of consumption. Resources are truly allocated on demand, and scalability becomes transparent.

In my take on Nomad Computing, object orientation reaches its pinnacle. The traditional separation of database, application and web tiers will become meaningless. Each of these concepts will become more ‘nomadic’, and programmers will not need to worry about them. Database systems and application server systems would have to mutate into entities that can register themselves and join local computing resource hubs. Once they’re in, they figure out the best ways to self-organise for maximum throughput, spontaneously creating clusters based on usage patterns. Writing applications for Nomad Computing would be easier: no more coding login windows or data access objects; such usage patterns will be retired. Instead, developers would concentrate on the object graphs they intend to create, which would encourage craft and creativity. Operating system platforms will also become largely irrelevant, pushed further into the background, taking away arguably one of the biggest pains in custom application development: deployment concerns.

A possible drawback of Nomad Computing would be hunting down bugs and removing malicious software; these problems could become intractable. Governance and safety would also become horrendously complex, since unforeseen outcomes would be commonplace. Perhaps all these disciplines would need to evolve in entirely new ways, and specialised software would need to emerge to provide answers and stay ahead.

I realise this is all far-fetched, but it seems we might not be too far from it.

In a way we’ve already started to see early implementations of Nomad Computing; Amazon’s S3 and Apache Hadoop are good steps in the right direction. Power grids in the electricity distribution industry are perhaps the closest model to what I see as Nomad Computing. Once we’ve really figured out how to do Nomad Computing properly, we will be in a position to leverage massively multi-core systems as they become available.

Modality obsolescence

Now that we have all manner of multi-core and kernel-mode programming, I think modality should be on its way out. Few things are more irritating than an unresponsive computer; computers should always respond, full stop. In GUI systems, modality is often the cause of computer freezes, regardless of the ‘root cause’ of the issue. It’s the lack of modality in Unix command-line interfaces that makes them mostly manageable and more resilient.

In the early 90s, Microsoft Windows programming involved creating, well… ‘windows’ and ‘dialogs’ in C++. The same thing could be achieved with Visual Basic and various other development platforms. Dialogs could be modal or non-modal, and you relied on the underlying messaging system to orchestrate functionality between modules. The whole concept was fairly simple; the complexity really came from the high number of APIs and libraries to code against. With C++, the other half of the complexity came from the gap between the code programmers had in mind and what they really ought to know about the tools and platforms they were using: a gigantic ‘expectation’ gap. Writing my first dialogs and seeing ‘hello world’ was an exciting moment. Microsoft Windows and most graphical user interface systems still build on the fundamental concepts of ‘modal’ and ‘non-modal’ dialogs and windows.
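For readers who never wrote against those APIs, here is a minimal Swing sketch of the modal versus modeless distinction; Java is used purely for illustration, not the Win32 C++ of the day.

```java
// A minimal Swing sketch of the modal vs. modeless distinction; illustrative only.
import javax.swing.JDialog;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;

public class ModalityDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame owner = new JFrame("Main window");
            owner.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            owner.setSize(300, 200);
            owner.setVisible(true);

            // Modal: setVisible(true) does not return until the dialog is
            // closed, and input to the owner window is blocked meanwhile.
            JDialog modal = new JDialog(owner, "Modal dialog", true);
            modal.setSize(200, 100);
            modal.setVisible(true);

            // Modeless: shown without blocking; the owner stays responsive.
            // (This line only runs once the modal dialog above is dismissed.)
            JDialog modeless = new JDialog(owner, "Modeless dialog", false);
            modeless.setSize(200, 100);
            modeless.setVisible(true);
        });
    }
}
```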

Looking back, I think modality’s raison d’être was, and still is, to try to preserve the integrity of the data being manipulated: you wanted to be sure that the program’s context was in a predictable state before proceeding further. This is an inherently sequential concept that ought to be left behind soon. In a truly parallel computing world, I would expect hardware and software modules to be even more self-contained, able to ‘move on’ if some desired state was not reached. That should rid us of computers freezing completely under certain conditions. It might never happen with silicon-chip-based, Moore’s-Law-abiding platforms. Perhaps nanotechnology will help, if it departs completely from the ‘old’ models. Off to learn a bit about nanotechnology, then. Who knows.

Voyage in the Agile Memeplex

I think this is a really good article on agile development. Philippe Kruchten writes here for those equipped with academic lenses, with semi-scientific overtones. It’s a nice read for the experienced practitioner.

Read: Voyage in the Agile memeplex here

Often overlooked PC security challenges

A less talked-about fact: PC security challenges often lie with the weakest link, namely the user. Here are some examples of why that might be:

  • [Microsoft Windows] update notifications are often ignored by users. Looking over people’s shoulders, I’ve seen many simply click them away without ever bothering to read the call-to-action message, never letting the software update install
  • Web browser security also relies on people reviewing SSL certificate warnings before visiting a page; users routinely ignore such warnings and carry on anyway. In fact, users will happily follow any URL they are given and rarely check what they’re clicking on
  • Lugging around “garbage”: anything that can be installed gets installed, and is then rarely, if ever, used afterwards. This wastes system resources, PCs become irremediably cluttered, and potentially damaging software is kept around. Only a rebuild will remedy such situations.

Enterprise deployments often address these risks by locking down PCs and forcing users through everlasting roaming-profile uploads and downloads. Let’s get heavy-handed and deprive people of their “liberty”. I’ve seen login and logout processes take up to 10 minutes to complete; that’s insane! It gets even worse with systems management software that jumps in willy-nilly and starts downloading huge software upgrades while you’re trying to get on with your work. Clearly you are working for your PC, not the other way around. If managers calculated the productivity loss caused by such Soviet-style systems, they’d have a fit. The next frontier in the enterprise productivity battle is fighting this clunky systems management software.

It seems as though people are pitting usability against security. Making users responsible for the security of their own PCs is probably as risky as leaving those systems wide open. This is not because people are dumb; it’s mainly because the whole notion of computer security, and the tools of the trade, are esoteric and place totally unreasonable demands on users.

Good computer security starts with good design: if a system is not built to be both usable and secure, it can never be properly usable or secure to use.