Can the battle of rich content delivery platforms determine the future of the web?

The future of web platforms is not just a fight for developer mindshare; it’s also largely a fight for the consumer/producer audience that is the media user. Speaking of content, Google, Microsoft, Facebook and Twitter are in some ways the big contenders. If both Google and Facebook put their weight behind HTML5, the development platform debate could be reduced to HTML5 versus Silverlight, and Flash would be pushed to a niche and legacy corner.

Jeremy Allaire’s article on TechCrunch offers an interesting perspective on the current debate over rich platform technologies. While the article’s coverage was broad, a couple of omissions left me wondering.

I was surprised that Jeremy didn’t have a view on Silverlight, in my opinion a faster-growing alternative to Flash. Given Microsoft’s reach, it doesn’t take a genius to figure out that Silverlight could trump both Flash and HTML5. Windows 7 is proving that Microsoft hasn’t lost its touch.

Apple has always been a niche player, but the iPad could change that fundamentally. It is too early to tell whether the iPad will merely extend the reach of the iPhone and iPod Touch, or whether Apple has the hunger to go for the masses at large.

When starting a new project, folks want to maximise their reach while minimising their costs going forward. This is most likely where the debate heats up in many organisations: which platform should we base a brand-new codebase on? It would have been nice to hear Jeremy’s opinion on this issue, because I think Flash is under serious pressure here.

I doubt that this particular platform debate can really swing the future of the web one way or another, because there is a third party in this dance: the user. This hungry content producer and consumer will have a massive impact on the future of the web.

If Facebook keeps its momentum and actually comes up with its own rich media delivery platform (on the back of HipHop, for example; it makes little sense to me at this point, but I’m speculating), then we’re in for an entirely new debate.

This article also makes me reflect on words attributed to Steve Jobs (that “Adobe is lazy” and “Flash is buggy”; article here). I shared those two views, for two reasons:

  1. I’ve been thinking that Adobe should make a splash amid all the assaults they’re subjected to; I’ve yet to see that splash and was curious why. That doesn’t warrant the label “lazy” though, I have to say.
  2. Whenever I examined application crash dumps on my MacBook, I noticed that the Flash plugin was always listed. I was seeing crashes that included the Flash plugin even when they seemed to have nothing to do with it. I took note, but didn’t dig any further.

So when I came across the article mentioning Steve’s alleged comments, it fit my own observations.

So yes, there is a healthy debate going on, and it has many facets; thinking in terms of “developer mindshare” alone is reductive. If the means of producing and consuming content on the web keep shifting towards the end-user, and cloud computing takes hold and becomes ubiquitous, IT platforms could eventually turn into a kind of “modern-day plumbing”. That aspect also plays a big part in this platform debate.

You can read Jeremy’s article here:

Effective architecture documentation practice on a budget

In documenting software architecture, boxes and lines are the norm, and they are nice; they can even be aesthetic. But unless they shed light on otherwise unclear semantics, boxes and lines can be wasteful. Architecture documentation is more than the diagrams: it concerns everything that bears knowledge of the system and can inform the decisions taken.

Since your resources are scarce, think in terms of good waste-reduction practices.

Use architectural frameworks as guidance to good practice, but don’t bother creating all kinds of artefacts that nobody in your teams will miss.

Regular architecture reviews and code reviews will do much to ensure quality and share knowledge within your teams. Run them effectively.


To be effective, your architecture models must capture and maintain essential knowledge about your system. This includes user stories, the issue and feature databases, and the source code!

Only when you’re clear on what your team should produce and maintain, and on what schedule, should you look for cheap tools that can help.

Here is a take:

You can get away with a few simple documents:

  • screen flows: use a simple drawing tool, or scan sketches into image files
  • user stories (or use cases): your software’s promise to its users
  • design principles: essential Do’s and Don’ts for your technical folks
  • design trade-offs: technical decisions and their limits
  • contracts: internal and external APIs for your modules
  • business rules: the knowledge-bearing decision points in your system

In the list above, one can argue whether business rules really belong in architecture documentation. The same goes for screen flows and user stories. But on a tight budget, decisions tend to be made on the fly and built straight into the code; people rarely go back and update otherwise stale diagrams.

User stories are created in the initiation stages, but over time they are superseded by the issues logged against iterations of the product. Features can change shape as a result of the learning process, to the point where the initial architecture may be significantly altered.

What happens is that, over time, knowledge increasingly migrates to the issue database and the source code, and it stays there. That is the rationale for considering the issue database part of the architecture documentation.

Some tips:

  • Avoid waste: any document that doesn’t hold unique information is waste
  • Be economical with the boxes and lines; focus on contracts and boundaries
  • Use spreadsheets to capture your architecture principles, trade-offs and business rules
  • Maintain an active issue and feature database. You are probably doing this anyway; the point is to treat it as an architecture documentation artefact
  • Encourage in-code documentation, enforced through standard quality checks and code reviews
  • If your tool-kit permits it (think JavaDoc), generate documentation from the source code and include it in your code review sessions
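To make the JavaDoc point concrete, here is a minimal sketch in Python (the `settle_invoice` function and its business rule are invented purely for illustration): the docstring that documents the code in place can be rendered as reference material with the standard pydoc module, so the documentation lives with the source and can be regenerated for each review session.

```python
import pydoc


def settle_invoice(invoice_id: str, amount: float) -> bool:
    """Settle an invoice against the ledger.

    Business rule: partial settlements are rejected; the amount
    must equal the outstanding balance exactly.
    """
    # Hypothetical body; the rule above is what matters for reviewers.
    return amount > 0


# Render the in-code documentation as plain text, e.g. for a
# code-review handout. Sphinx or JavaDoc play the same role at scale.
print(pydoc.render_doc(settle_invoice, renderer=pydoc.plaintext))
```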

There are more tactics for capturing and maintaining knowledge within your organisation, but that is perhaps a topic for another post.

Leverage the best of community and agile approaches for results

It is a good idea to bring the best of community and agile approaches to product development into software architecture practice. Microsoft’s product releases over the last six months indicate that they are applying these concepts, and architects should learn from that.

It seems to me that Microsoft is applying some valuable lessons from open-source and community-driven software processes: get user buy-in as early as possible, and deliver early and frequently, even if the products are incomplete or partially flawed. People will happily participate, and you get a more accurate feel for how the product will be received when it finally ships. I don’t know what the pundits are saying, but to me (as a software architect and user) this strategy pays off handsomely.

Every new Microsoft product I’ve laid my hands on over the last half-year has been nicely thought out and clearly user-oriented by design. You can almost feel that product development received a lot of attention, and that is good news for the 90% of us who use Microsoft software.

I’ve started using the Microsoft Office 2010 Beta on my Windows 7 box. You immediately see the impact of lessons learned from the previous major release: the ribbon concept has been tweaked, and the experience feels more natural yet innovative. I haven’t had to spend any time reading documentation (though skipping it would be a mistake for a final product; reading the documentation is always for the best), and I had no trouble working just as before.

Similarly, when I first learned about the early Microsoft Azure vision, I had a strange feeling of a “lipstick on a pig” treatment. After PDC2009, I saw lots of improvements, and a change of heart on some early ideas that seems, without a doubt, to be the result of extensive community participation.

The early concepts and beta releases of SharePoint 2010 and Office 2010 have stunned me with their clarity of vision; for the first time I’m getting excited about Microsoft’s web-based software. Having spent the best part of the last decade delivering non-Microsoft solutions, though never losing sight of what they were doing, I am seeing a lot of good vibes coming from Redmond these days.

Another way to read this is that Microsoft is engaging users directly, and hence doing away with its former approach in which partners supplied the bulk of feature requirements (I’ve read a lot of Michael Cusumano and Mary Jo Foley over the years; any misreading would be mine).

To me, these are all signs that Microsoft’s products are improving and increasingly addressing unmet user needs. This would be the software-delivery equivalent of a “growth business”, and I buy it.

I see a parallel with the practice of software architecture, whether it’s enterprise architecture or solution architecture on a smaller-scoped project. Software architects can achieve much by adopting some of the same recipes hinted at earlier; by no means a complete list:

  • don’t seek completeness on any significant topic before getting stakeholder communities fully engaged (no, they won’t think you’re daft)
  • don’t think you have all the answers (many thought leaders are saying this); actively seek and incorporate ideas from the receiving parties: they’d have a hard time rejecting their own input, wouldn’t they?
  • delegate major chunks to smaller, dedicated groups, and see to it that inter-group communication is fluid and sustained (I don’t know whether Microsoft does this; many of their products still seem to be large silos).

With this type of approach, the outcome tends to feel much more natural and acceptance will probably come more easily. You see it when, using a product, you guess how something might work and then verify that the feature was implemented almost exactly as you guessed. This happens a lot when I use Apple products. I used to think Microsoft would never match such a feat, but I now see that they are changing their approach for the better.

Twitter can expand its vocabulary a bit

Twitter could be a powerful ontology engine for viewing, creating or manipulating ontologies for interest groups.

I find it strange that advances in technology seem to somehow induce us to learn less and less: we become more and more ignorant as technology improves. Examples abound, semi-randomly: the advent of the pocket calculator means we can’t do mental arithmetic any more; then email, SMS, and now Twitter! Arguably email and SMS have induced people to write badly; just check your last few emails. I bet people wrote beautifully crafted letters before all this appeared. As the Twitter generation comes of age, what is communication going to end up like?

To expand on the Twitter example: hermits aside, Twitter has been all the rage lately. Yet on Twitter, the verbs that can be conjugated are limited to tweet and retweet, both of which suggest that we have become birds. Incidentally, people don’t appreciate being given bird names 🙂. But even birds are known to have a larger vocabulary, as some of David Attenborough’s gems show. So why wouldn’t Twitter expand its vocabulary, or simply open up an expression platform?

There’s a whole range of impressions, feelings and knowledge that could be usefully expressed on Twitter:

  • like, nice: I like it (or, I dislike it)
  • interest: Interesting (or, Not Interesting)
  • buy: I would buy this (or, I wouldn’t)
  • recommend
  • know: I know about this (or, I would love to learn more about this)
  • explain or reference: here is more information on this topic (or references)
  • etc.
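As a toy sketch of the idea (entirely hypothetical: Twitter exposes no such verbs, and the TopicSignals class and its vocabulary are invented for illustration), expressions like these could be tallied per topic, giving a first crude ontology of what a crowd feels and knows about a subject:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical vocabulary of expressions, beyond tweet and retweet.
VERBS = {"like", "dislike", "interest", "buy", "recommend", "know", "explain"}


@dataclass
class TopicSignals:
    """Tally of expressions made about one topic."""
    topic: str
    counts: Counter = field(default_factory=Counter)

    def express(self, verb: str) -> None:
        # Record one expression of a verb against this topic.
        if verb not in VERBS:
            raise ValueError(f"unknown verb: {verb}")
        self.counts[verb] += 1

    def dominant(self) -> str:
        # The verb most often expressed about this topic.
        return self.counts.most_common(1)[0][0]


html5 = TopicSignals("HTML5")
for verb in ("interest", "interest", "recommend"):
    html5.express(verb)
print(html5.dominant())  # prints "interest"
```

Aggregating such tallies across topics and users is where the real-time, on-platform mining discussed below could start.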

I think there’s potential for a large ontology to grow here, making the platform even more powerful. Twitter would be terrific as a core ontology engine: creating, visualising and browsing knowledge and connections. Knowledge would build up organically; there’s an untapped crowd-sourcing opportunity in this. I’m sure there are groups out there harvesting and mining the massive amount of data generated on Twitter, but what if some of that mining also happened in real time, on the platform itself?

The challenge of designing useful ontologies would be entertaining too, but it need not be difficult or even moderated. So, like any upgrade path, I would make such a service an opt-in branch and see if it catches on. Clearly this would only appeal to a small group of users, but who knows what might develop if it were available? It could be fun to watch large crowds playing with ontologies in new ways and in real time. That would be more than just staring at a tag cloud as it evolves.

Note: this is an oldish post that sat as draft for a while, time to publish it.