Twitter can expand its vocabulary a bit

Twitter could be a powerful ontology engine, letting interest groups view, create and manipulate shared vocabularies.

I find it strange that advances in technology somehow seem to induce us to learn less and less: we become more ignorant as technology improves. Examples abound: the advent of the pocket calculator means we can no longer do mental arithmetic; then came email, SMS, and now Twitter! Arguably email and SMS have induced people to write badly; just check your last few emails. I bet people wrote beautifully crafted letters before all this appeared. As the Twitter generation comes of age, what is communication going to end up like?

To expand on the Twitter example: hermits aside, Twitter has been all the rage lately. Yet on Twitter, the only verbs that can be conjugated are tweet and retweet, both of which suggest that we have become birds. Incidentally, people don't appreciate being given bird names 🙂. But even birds are known to have a larger vocabulary, as some of David Attenborough's gems show. So why wouldn't Twitter expand its vocabulary, or simply open up an expression platform?

There’s a whole range of impressions, feelings and knowledge that could be usefully expressed on Twitter:

  • like, nice: I like it (or, I dislike it)
  • interest: Interesting (or, Not Interesting)
  • buy: I would buy this (or, I wouldn't)
  • recommend
  • know: I know about this (or, I would love to learn more about this)
  • explain or reference: here is more information on this topic (or references)
  • etc.
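The list above can be sketched as data. Here is a minimal, hypothetical model (all type and verb names are my own invention, not anything Twitter offers) of the expanded vocabulary as subject-verb-object assertions that could be queried in real time:

```go
package main

import "fmt"

// Assertion is one statement a user makes about a tweet:
// a subject (the user), a verb from the expanded vocabulary,
// and an object (the tweet being annotated).
type Assertion struct {
	User  string
	Verb  string // e.g. "like", "interest", "buy", "recommend", "know"
	Tweet string
}

// Ontology is simply a growing set of assertions that can be queried.
type Ontology struct {
	facts []Assertion
}

// Add records a new assertion.
func (o *Ontology) Add(a Assertion) { o.facts = append(o.facts, a) }

// ByVerb returns every tweet a given verb was applied to;
// a trivial example of the real-time "mining" discussed below.
func (o *Ontology) ByVerb(verb string) []string {
	var out []string
	for _, f := range o.facts {
		if f.Verb == verb {
			out = append(out, f.Tweet)
		}
	}
	return out
}

func main() {
	o := &Ontology{}
	o.Add(Assertion{"alice", "like", "t1"})
	o.Add(Assertion{"bob", "recommend", "t1"})
	o.Add(Assertion{"alice", "buy", "t2"})
	fmt.Println(o.ByVerb("like")) // the tweets alice liked
}
```

Even a flat store of triples like this would let crowds build up knowledge organically, with richer queries (by user, by verb, by tweet) layered on top.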

I think there's potential for a large ontology to grow here, making the platform even more powerful. Twitter would be terrific as a core ontology engine: create, visualise, and browse knowledge and connections. Knowledge would build up organically; there's an untapped crowd-sourcing opportunity in this. I'm sure there are groups out there harvesting and mining the massive amount of data generated on Twitter, but what if some of the mining also happened in real time, on the platform itself?

The challenge of designing useful ontologies would also be entertaining, but it need not be difficult or even moderated. So, like any upgrade path, I would make such a service an opt-in branch and see if it catches on. Clearly this would only apply to a small group of users, but who knows what might develop if it were available? It could be fun to see large crowds playing with ontologies in new ways and in real time. This would be more than just staring at a tag cloud as it evolves.

Note: this is an oldish post that sat as a draft for a while; time to publish it.

Doing Things Half Right for Better Results, a little story.

In a technical leadership position, it helps to be comfortable giving others some space to grow. This doesn't threaten a leader's position; on the contrary, the leader gains more respect, and others more readily take ownership.

In this post, I intend to share a lesson drawn from my personal experience, as you might recall if you've been following my tweets. I would appreciate any comments you might have; if you would share your own experience, that would be wonderful. I have reopened the blog for comments, the threat of spam having been largely contained by recent WordPress updates.

For a start, I should clarify what the title means: this discussion is really about intentionally half-baking solutions with the purpose of encouraging people to take ownership. It is about leaving room for someone else to blossom. To give an analogy, my local supermarket sells half-baked bread; put it in the oven at home and you get deliciously fresh bread that gives you the illusion that you made it yourself. This post is not about doing a bad job; it is not about hiding behind excuses or avoiding doing one's job properly. It is about a good leadership practice when success depends on people taking ownership of part or parcel of an initiative or a project. I hope this clears up any potential misunderstandings.

Now, I would like to give you a taste of my personal experience, trying my best to keep it anonymous and neutral. I've boiled it down to one example, to keep the text focused and prevent any direct link to specific events or people.

In the early days of my career, I had the impression that I needed to have answers to any technical question that came my way. I couldn't help it. I would feverishly research and prototype technical solutions, working long hours to find answers that I could show to my colleagues (I still research as thoroughly as I can, but I've since learned to work better on the communication side of the equation). I thought that was the only sure way to gain respect and recognition amongst my peers, my reports and my managers alike. I thought it could be damaging to my career if I failed to demonstrate knowledge and proficiency in technical matters. This attitude served me well whenever I was directly responsible for designing or programming a solution; I could show off my abilities quite easily.

At some stage though, as I took on more and more project responsibilities, I was often annoyed to find that I sometimes failed to get sufficient support for my ideas, even though I had checked everything and thought I had good answers. This was even more puzzling and frustrating when, in a leadership position, I would get the impression that people were simply ignoring me or refusing to see the light. Worse, I had the impression that some managers were prepared to steal my ideas but seldom prepared to empower me. Oh, the outrage such feelings gave me!

Then one day, something happened that taught me a lesson: someone higher up in the pecking order than me rephrased and presented several of my ideas, in my presence, without giving me credit (I had presented him my ideas and he had rejected them). I was boiling during the whole presentation, but somehow I managed to stay calm and showed no emotion throughout. At the end of it I simply left the room and made no comment to anyone about the subject. The presentation was well received; the person in question did not dwell on it. I was so upset that I had the worst of thoughts for this person and didn't wish to have anything to do with him again, if I could help it. A couple of weeks later though, as I ran into this person at the coffee machine, he simply congratulated me on my contribution and left. It was clear that he knew how I felt. I took it as an indirect apology, but didn't make much of it as I had moved on.

The episode made me ponder, however. I realised that I would rather be associated with a success story than staunchly defend some bragging rights. This accidental epiphany would serve me again and again later on, as I stepped into leadership roles and often worked with bright people. I tried to remember my story and rein in my instinct for bragging, something that generally goes against the grain for technically inclined folks.

I had spotted a pattern that I have been trying to nurture ever since: to collaborate successfully with people, particularly in a leadership position, it is good to show humility and give others some room to grow. Nobody wants a superman-like figure around, especially when he (or she) is the boss. Appearing to have all the answers (or most of them) can easily breed resistance and cause projects to fail, if only because other people feel overshadowed and have no motivation to take any ownership.

Despite what one might think, it is always beneficial to hear other people's perspectives on issues. Getting humbled the way I did also made me realise that I had always believed people have a lot more potential than they give themselves (or are given) credit for. In my urge to grow, I was often losing sight of my own belief, and I surely would not have been able to lead had I continued on that path. I learned to change through an accidental shock therapy.

If you manage technically savvy people, one of the things you learn is that unless you gain their respect you cannot lead them effectively. To gain their respect, you often have to demonstrate superior technical ability. In any case, you have to allow them to express themselves as freely as possible, and you have to pay attention to egos, as people take pride in what they do. The matter becomes more complicated if you are a consultant trying to demonstrate your abilities to your client's stakeholder groups; conflicts can easily arise with the technical recipients of your work. When the recipients don't buy your solution, it won't add any value, and your endeavour can easily end up in failure.

The lessons I’ve learned:

  • Do not be afraid to see some of your ideas “getting stolen”; at the end of the day, it is better to be part of a success than otherwise
  • Even if, as a leader, you can provide all the answers that are needed, doing so would show poor leadership and a lack of delegation skill
  • In a technical leadership position, it helps to be comfortable giving others some space to grow. This doesn't threaten a leader's position; on the contrary, the leader gains more respect, and others more readily take ownership

Credit & disclaimer:

This blog post was inspired by an article by Peter Bregman that I read on HBR, titled Why Doing Things Half Right Gives You the Best Results. I wouldn't be telling this story had I not read that article.

I tried to keep this discussion as general as possible. Should the reader recognise any event, person, or particular situation in this text, that would be entirely coincidental, and a simple demonstration that nothing is really unique under the sun: it's all been done before; I'm just one guy talking about something that happens regularly everywhere. So there should be no surprise there.

Architect Vendor or Platform independence with care

Vendor independence and platform independence should be carefully considered and weighed against the organisation's strategy. Vendor and/or platform independence often won't make sense if the architecture is to deliver true value efficiently.

When designing software architecture, it is easy to confuse vendor independence with platform independence, or to think that either is absolutely necessary. If your organisation already has a strategic platform, or a strategic alliance with a particular vendor, then this should be core to your architecture design process. If you ignore this aspect, you might design a perfect solution for a non-existent problem. This is the software architecture equivalent of overcapacity, and it can be costly.

There is always a temptation to come up with an architecture that is pure, scalable, flexible, and so on. But to what end? As we know, beauty is in the eye of the beholder, and beauty for its own sake will often not make business sense. Why pursue it then?

One of the benefits of the software architecture practice is delaying implementation decisions until after the business issues have been properly understood and captured; in fact it should be about separating implementation decisions, not merely delaying them. But that thinking can amount to assuming that implementation is not a business concern at all, which may be wrong. There is a risk of designing a perfect architecture that simply cannot be implemented without a significant performance or delivery penalty.

My point can be summarised as follows:

  • A perfectly valid process can yield perfect disasters when it is short-sighted in any significant way.
  • Business domain semantics should be analysed in context, not on theoretical grounds far removed from the specific business reality.
  • Consider the complete lifecycle of the solution being designed: if the vendor or platform lifecycle is likely to outlive your solution, duly include such vendor or platform considerations in your key drivers.

In large enterprise architecture endeavours, there are usually sufficient resources to tackle all aspects of the enterprise architecture requirements. This is often not the case for SMEs and start-ups, the largest group, which operate with tight resource constraints.

It might be tempting, or even self-gratifying, to think that you're designing a vendor- or platform-independent solution, but that might not be the best way to deliver value to your stakeholders. In practice, it is often better to take sides when designing business technology solutions. This is the main reason large businesses often enter a strategic alliance with a prominent vendor and thus commit to its solutions and products. There is nothing wrong with such practices, obviously. Carefully managing the vendor relationship is another issue altogether, which cannot be covered in this post. What is always counter-productive, however, is when software architecture practitioners try to downplay the role of such strategic vendor products and solutions at the design stage.

With Azure, it might be game-on for Microsoft!

At PDC 2009, Microsoft outlined a strong vision which might boost them in the Cloud.

I watched the Microsoft PDC keynotes yesterday and they left quite an impression on me: Microsoft firmly intends to be a strong force in the Cloud market, no doubt.

Ironically, Microsoft might become a big winner here. Thinking about it, the Cloud turns back the clock on the vendor-independence mantra: businesses will rely more than ever on their cloud provider. This works for Microsoft too: it no longer matters what platform runs your business, so long as it performs and you get the expected value from it. Naturally, there remain issues around security, governance and the like, issues that are open for all vendors in the same way.

By delivering quality tools and performant Cloud solutions, Microsoft is in the game. If you can depend on Google or Salesforce to run your business, what other reasons can there be for snubbing Microsoft? Can anyone credibly raise vendor lock-in as a valid argument? The other important questions then revolve around pricing and service terms.

The vision outlined by Microsoft appears sound to me, and they also appear to be engaging customers in this effort. The tooling seems to be coming along nicely too, and Microsoft is wooing celebrity developers. It seems that a larger chunk of Microsoft technology is being made available in the Cloud, something I didn't expect so soon (it could still be just a teaser with no real intention to deliver much; we'll see). But my impression at the moment is: game on for Microsoft, once they start shipping Azure.

As Microsoft starts to deliver Cloud services, the playing field becomes level, and they can leverage their massive momentum to gain significant market share once again. They might actually end up dominating the Cloud in the process!

I'm loving this epic battle in the Clouds. Will it be winner takes all?

User Experience Design is part of Software Architecture

The most prevalent practices of Software Architecture tend to understate the importance of user experience design.

I think the title states the obvious, and User Experience practitioners would agree with me. But in the context of your ICT activities, do you find that Software Architects take the same view? It is not so clear cut in practice; in fact, tensions or misunderstandings are easily found. The situation I am discussing here mostly applies to large IT organisations, where design, development and rollout activities and responsibilities are split across multiple groups. It does not apply to small organisations or tiny start-ups.

In my experience, software architecture tends to be the discipline, or at least the responsibility, given to people with a Software Architect profile. In this text, I am referring to the Software Architect as the profile that is often contrasted with the UI Architect. The former is the title given to the person who knows the inner workings of the software, who can perform back-end coding and is intimate with it; the latter doesn't get credited for technical skills beyond pure front-end coding considerations. Then there is also the IT Architect, the role given to the people who build or look after machines. Incidentally, when the Software Architect and IT Architect roles do not work closely together, problems arise at deployment time, when organisations find that many production staging aspects were not tackled properly. But that is a subject for another talk.

Recently I tweeted a link to an old post by Richard Monson-Haefel about 97 things every software architect should know. @multifarious remarked that he found it interesting that (experience) design was not one of the 97 things mentioned. This blog entry is an attempt to expand on my thoughts a little.

As I understand it, R. Monson-Haefel polled a number of people whom he deemed authoritative. A quick reading of the entries will support @multifarious' suspicion. There was probably no intention to exclude UI experts; whatever R. M-H's intentions, @multifarious might have just singled out one of the shortcomings of software architecture practice. You can get the impression that designing user interface code is taken for granted, provided the “back-end” stuff is sorted out. Too often, discussion of usability boils down to response time in a request/response context. If the actual interaction proves unworkable, blame is easily laid on business analysts or business owners for not clearly stating their intentions.

In my opinion, the total experience should really be the focus of usability at every point of a software architecture crafting process. I think Google got this right when they launched all those years back. It is not a flashy web site, but everyone agrees that you get precisely what you are looking for without hassle. And that is a fundamental point. Perhaps the current maturity level still focuses too much on the inner workings of the underlying software and pays too little attention to user interaction issues.

I grant it to @multifarious: R. M-H's article ought to mention user experience design. Maybe it does in the book, which I haven't read yet. Maybe they'll cover it in a future iteration.

GO, a case for a new programming language

With the advent of Google Go, there is renewed debate on whether we needed a new programming language. Actually, a case can be made for a niche player, or an organisation with the right momentum, to develop and maintain a new programming language.

No programming language will ever be fit for all purposes. Microsoft, among others, understood this and catered for it by creating C# and many variants of CLR-compliant languages. As technology platforms evolve, it should be natural to factor in the lessons learned and try a new approach.

In fact, many organisations already maintain Domain Specific Languages (DSLs) without necessarily thinking of them as programming languages. Martin Fowler, @martinfowler, has talked eloquently about the subject on his blog. In the Java world, for example, template engines give organisations a really powerful way to become even more productive. With the right architecture, a lot of development and testing time can be saved when developers and architects set up a good infrastructure to leverage the power of these tools.

If anyone can come up with a new programming language and be successful with it, Google can. When Chrome came out we wondered the same thing: did we really need a new web browser? And now? Another really interesting thing about Google Go is that its architects made a very clever choice in making it easy to learn for C and C++ programmers. This flattens the learning curve significantly; even good Java developers will be able to pick up Go quickly. Some of Go's concepts seem to be inspired by Objective-C, another big win, because Objective-C has some really elegant constructs.
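To make that familiarity concrete, here is a minimal sketch (my own example, not from the Go announcement): the function below reads like C, braces and all, while the go keyword and channels are the distinctly new part.

```go
package main

import "fmt"

// square looks like a C function; the channel parameter is the new bit.
func square(n int, out chan<- int) {
	out <- n * n
}

func main() {
	out := make(chan int)
	for i := 1; i <= 3; i++ {
		go square(i, out) // concurrency is a keyword, not a library call
	}
	sum := 0
	for i := 0; i < 3; i++ {
		sum += <-out // collect the three results in whatever order they arrive
	}
	fmt.Println(sum) // 1 + 4 + 9 = 14
}
```

A C or Java programmer can read this at first sight, yet built-in concurrency is something neither language offered out of the box at the time.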

So there you Go: building on proven concepts, just as C# did when it came out, Google is bound to deliver the goods. I expect this language to grow fast because it can potentially leverage existing libraries. Adoption will get easier as appropriate tooling starts to emerge. I foresee an approach like Google Web Toolkit as a path to building bridges towards existing language platforms or runtime engines. If the compiler is as efficient as advertised, Go might just give Google a kind of unifying language that helps reduce software engineering effort by taking deployment platform concerns away. Microsoft is busy with a similar approach; their Oslo project and LINQ efforts indicate the sort of goal being sought. I can't wait to start playing with it a bit.