Code frameworks and ghettos: where creativity gets locked in

An acute issue in programmer productivity is the framework learning curve. More often than not, no framework at all can be more productive than having to learn a new one. A code framework can be a force for good when it solves infrastructural issues and allows developers to concentrate on the specific job at hand. When the job is about addressing a user need, it is well worth investigating the benefits of any framework before diving in head first. But when the job itself is about infrastructure, the story can get more complicated.

I once read a rant by an open source developer named Zed Shaw, in which he argued that Rails is a ghetto. Zed was venting frustration about some members of the Rails (as in Ruby on Rails) community; apparently he couldn't monetise his skills properly due to politics in the community, hence it felt like a ghetto to him. An interesting point I took from his rant was that you could get stuck in a framework and fail to achieve your goals: easy to guess, but also easily overlooked. As I wrote this post, I checked the rant again for reference and found that Zed had withdrawn it and replaced it with a much more toned-down text, which you can read here.

In this post, I am only addressing situations where the job is about addressing user needs.

Many code frameworks are useful for beginners, for creating an initial skeleton of a solution. That is the first phase of bringing ideas to life: you have realised your Hello World and you're all pumped up about how easy it was.

A second phase kicks in as you start learning more about the problem domain and potential solutions: you iterate through your design and code to reflect what you learn. This is a good test of the chosen framework's flexibility. Can you improve your design easily? Can you improve your code easily? The further into development you go, the more acute these questions become. If you are lucky enough to think the way the framework designers do, progress should be swift. If not, a tough reality starts to dawn on you. Either way, your creativity starts to be shaped by the frameworks you're using. This is where the potential pains with frameworks start to surface, and it is a good learning stage. I liken this phase to betting on horse races: it always seems that the next bet will be a hit, so people keep on betting. Invariably, there comes a time when the developer feels cheated somehow: he wasn't told something, or he didn't realise something.

The third phase in this framework journey comes when the coding is done and the application needs to be put into production. In this phase, performance and reliability issues usually start to pop up, and the inevitable questions arise: is there anything wrong in my code? Was the infrastructure set up correctly? If you don't think the way your framework designers want you to think, you can make missteps, and issues compound quickly. And even if you do, you may not be able to help much, since the framework ties you down. At this stage, only an elite few are able to address such issues efficiently. In the unfortunate case where the troubles call for an audit or a project rescue by another party, the business owners tend to feel the bigger pain as their purse is severely tapped, because an entirely different way of thinking and seeing the world is brought in.

Code frameworks are like giant bags of assumptions about technology, people, and problem and solution domains. They help when those assumptions largely hold in the most significant aspects. When choosing a framework, do people realise the extent of the assumptions they are about to make? How often do people bother to (or are equipped to) check those assumptions? These would be interesting questions for a survey.

Creativity lock-in is not a bad thing if it helps people focus on specific and relevant problem and solution domains. When that is not the case, creativity lock-in can have devastating consequences. "Ghetto" may be a bit of a hyperbole, but it is a powerful image for reminding people to consider the broader picture when selecting frameworks. A code framework should help with a project's technical hygiene and open up more opportunities. When the selection is based mostly on product marketing, the assumptions are not being checked; who is at fault then? I wouldn't blame the product vendors.

Buzzing, tweeting: now everybody can get a taste of being stalked

There is something sneaky about all this social media activity: people are turning into stalkers in droves. Google Buzz just made this dawn on me. It can be quite uncomfortable to realise that you are unwittingly turning yourself into a "self strip view broadcaster".

I've started buzzing, having stumbled upon its activation after a browser restart. After a couple of days, I'm reconsidering the whole thing: why am I getting involved in all this?

Having been involved in innovations in social networking and online communities for a number of years, I was always going to try the latest tools to see if they could help in my job. Little did I realise the slippery nature of it all. It is somewhat like "zapping" with your TV remote control: pointless, but you somehow keep doing it until something urgent drags you away from the sofa. Before you know it, you've spent a couple of hours watching TV while seeing absolutely nothing. What a waste that is!

I started tweeting because I was too lazy to blog; it was just an excuse, and I hoped to learn something in the process.

My idea of blogging has always been a measured one. I don't like all the self-broadcasting that goes with it, and I didn't want to write "hello world" posts or regurgitate what others have already said elsewhere.

I tried Google Wave and never really found a reason to stick with it. I could imagine tons of uses, though; I just wasn't ready for any of them, and still am not.

Now that I've joined Google Buzz, it's giving me goose bumps with its eerie stalker feeling. I have never felt stalked like this before. It's not due to my contact list or my friends' activities, but rather the fact that I keep seeing a mirror reflection of my activities in many places in my Google account. Then you start to think: if I click on any link, it's likely being broadcast. For someone who is a private person by nature, this freaks me out.

A few years ago, I had already taken a giant step by going out there and putting pictures, comments and various other things online. I could always do that again when I got the chance, and it felt OK. But the idea that something follows me around and tells the world what I'm doing doesn't feel right; I have no control over it. It is bad enough that this happens at all (street CCTV cameras are an example); it is much worse when you can actually see the footage in real time. That is what is troubling, and Google seems to have thought of none of it. Of course, when you've never known anything else, you just adapt.

There you go, the buck must stop somewhere. I've got nothing to hide, but I don't want to broadcast everything either. This is a case of a feature's default option working to the user's disadvantage. That is a hindrance to user acceptance from my point of view, and Google failed here. The case inspires me to write down an architecture principle: design for acceptance; the default features and options must cater for users' natural preferences and behaviour.

Can the battle of rich content delivery platforms determine the future of the web?

The future of web platforms is not just a fight for developer mindshare; it is also largely a fight for the consumer/producer audience that is the media user. Speaking of content, Google, Microsoft, Facebook and Twitter are, in some ways, the big contenders. If both Google and Facebook were to put their weight behind HTML5, the development platform debate could be reduced to HTML5 versus Silverlight, and Flash would be pushed into a niche and legacy corner.

Jeremy Allaire's article on TechCrunch offers an interesting perspective on the current debate over rich platform technologies. While the article's coverage is broad, a couple of omissions left me wondering.

I was surprised that Jeremy had no view on Silverlight, in my opinion a faster-growing alternative to Flash. Given Microsoft's reach, it doesn't take a genius to figure out that Silverlight could trump both Flash and HTML5. Windows 7 is proving that Microsoft has not lost its touch.

Apple has always been a niche player; the iPad could change that fundamentally. It is too early to tell whether the iPad will merely extend the reach of the iPhone and iPod Touch, or whether Apple will have the hunger to go for the masses at large.

When starting a new project, folks want to maximise their reach while minimising their costs going forward. This is most likely where the debate heats up in many organisations: which platform should we base a brand-new codebase on? It would have been nice to hear Jeremy's opinion on this issue, because I think Flash is under serious pressure here.

I doubt that this particular platform debate can really swing the future of the web one way or the other, because there is a third party in this dance: the user. This hungry producer and consumer of content will have a massive impact on the future of the web.

If Facebook keeps its momentum and actually comes up with its own rich media delivery platform (on the back of HipHop, for example; it makes little sense to me at this point, but I'm speculating), then we're in for an entirely new debate.

The article also makes me reflect on words attributed to Steve Jobs (that "Adobe is lazy" and "Flash is buggy"; article here). I shared those two views, for two reasons:

  1. I've been thinking that Adobe should make a splash amidst all the assaults they're subjected to; I've yet to see that splash and was curious why. That doesn't warrant the label "lazy" though, I have to say.
  2. Whenever I examined application crash dumps on my MacBook, I noticed that the Flash plugin was always there. I kept seeing crash dumps that showed the Flash plugin even when the crashes seemed to have nothing to do with it. I took note, but didn't dig any further.

So when I came across the article mentioning Steve's alleged comments, it fitted my own observations.

So yes, there is a healthy debate going on, and it has many facets; thinking in terms of "developer mindshare" alone is reductive. If the means of producing and consuming content on the web keep shifting towards the end user, and cloud computing takes hold and becomes ubiquitous, IT platforms could eventually turn into a kind of "modern-day plumbing". That, too, plays a big part in this platform debate.

You can read Jeremy’s article here: