The tablet as the new main computer? Quite possibly.

My tablet is so fast at everything I do with it that I am starting to wonder whether it might indeed be the next portable computer for the nimble professional. A number of (obvious) factors make this plausible:

  • the new iPad has a near-perfect screen, and it’s ridiculously fast at everything I’m doing with it so far
  • ingenious cover solutions give you a Bluetooth keyboard coupled with a stand, so you get a laptop-like experience, though you may want a trackpad or mouse to go with it
  • storing your data in the cloud is now clearly a no-brainer
  • remote desktop makes it possible to reach a powerful machine from any location, and such services are increasingly available in the web browser as well
  • as a software developer, your code is probably on GitHub, you could use a cloud IDE, and you probably already have an automated build system
  • online drawing tools are plentiful and some are really good, so you can create architecture diagrams and other documents comfortably without any software running locally
  • rather than printing, you can email documents, share them via an online share point, or place them on an online collaboration tool
  • to top it off, if you lose your tablet you can wipe it remotely, or use the find-my-device feature to help the police trace it

So, what’s left that would tie someone to a desktop or a laptop? Clearly not very much. Of course, such a transition will come at the price of some inconvenience. Not everything will be perfectly smooth at the start, but people will adapt, much as the “fail whale” (regularly crashing web sites) came to be tolerated for the convenience of being online.

Google’s vision of the Chromebook still intrigues me, though: unless it costs less than $100, why wouldn’t you get a tablet instead? If the new iPad is anything to go by, even Apple may wake up to a time when traditional MacBook buyers are migrating to the iPad. Unless you are running several operating systems on your laptop, you may not need to carry it around if you’ve got an iPad. The implication is that the next PC laptops and MacBooks would have to match 2011 servers in power; they may become more of a niche for really power-hungry IT professionals or media-producing consultants.

I read all the buzz about the new iPad screen and thought, blah blah, sure. But what I’ve found is that the whole experience is astonishingly smooth, and that really gets you thinking.

All apps are bad? ‘Scarenomics’ may be just as harmful as privacy scavenging

Yes, please do educate the public on the privacy issues that current social media services raise. But do so in a measured way; don’t squarely blame every app for trying to steal user information, because that is simply not the case.

Can the media tackle any important issue without resorting to hyperbole?

The WSJ piece “Selling You on Facebook” makes an interesting read on privacy issues for the uninitiated, clearly its main target audience. I was going to agree with it wholesale until I realised that the article sweeps too broadly and makes every app look bad; I mean mobile apps, not Facebook apps, which are clearly something else in my opinion.

Yes, it’s true that the general public doesn’t realise the privacy implications of social media. Yes, it’s true that some apps and some companies are abusing the trust implicitly placed in them and taking more than they should. But I disagree with the way the WSJ article seems to point the finger at every app out there, at the very notion of an app. That’s not a realistic way of painting the true picture of what is going on. By that logic you could say the same of every human creation that might possibly be put to bad use; the list would be long, and folks wouldn’t feel safe anywhere, at any moment.

I agree that people need to be educated on the privacy issues surrounding social media in general. I disagree with trying to scare people into, perhaps, reading your article. If you try to scare people about every possible thing that could go wrong, you blur your message and may defeat its purpose. What really helps is giving people practical clues about what may be happening, and about the implications of the specific actions they take online. This should be measured, paced, and kept up to date, not a broad sweep, because otherwise people are no better off than if they had been told nothing at all.

Google Go is good to go now. Where are all the libraries to go with it?

It seems that Google Go would suit scaling problems that are mainly due to application execution (CPU) bottlenecks, rather than the disk or network bottlenecks that are actually more common. Targeting those who would otherwise have to program in C suggests that Google expects a niche market for this language. Companies like Facebook and Twitter may have good use cases, but they don’t look to be the best of friends with Google nowadays. Would traditional enterprise development groups rush to adopt Google Go? I doubt it.
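To make the CPU-bound angle concrete, here is a minimal sketch (my own illustration, not something from the article) of the kind of parallelism Go makes cheap: goroutines fanned out over all cores, with an arbitrary busy-loop standing in for real work. Note that in Go 1 you still have to raise GOMAXPROCS yourself to actually use every core.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// cpuBound is a stand-in for any compute-heavy task; in a real system
// this would be whatever is actually saturating your CPUs.
func cpuBound(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i % 7
	}
	return sum
}

func main() {
	// In Go 1 this is needed to spread goroutines across all cores.
	runtime.GOMAXPROCS(runtime.NumCPU())

	jobs := []int{10000000, 20000000, 30000000, 40000000}
	results := make([]int, len(jobs))

	var wg sync.WaitGroup
	for i, n := range jobs {
		wg.Add(1)
		go func(i, n int) { // one goroutine per chunk of work
			defer wg.Done()
			results[i] = cpuBound(n)
		}(i, n)
	}
	wg.Wait()

	fmt.Println(results)
}
```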

I read an article on ReadWriteWeb commenting on Google’s announcement that their Go language has reached 1.0. I took a quick look, as I did when it was first publicly announced. Then as now, it looks interesting, but I personally can’t see it fitting into any of the initiatives I am involved in at the moment.

One thing that is constant, and actually infuriating, about these new programming language announcements is the way they are presented. Many showcase a Hello World, a Fibonacci function, a blog web site, or a to-do list application. I don’t know about you, but I’ve rarely come across a real-world problem involving any of these examples. I think of it as a form of escapism.

Another problem with any new programming language is that people have to go through a stage of “brainwashing” before they become really productive. That may be a luxury for a lot of people at the moment. And lastly, even if a language is great, you would be swimming upstream unless you could count on a large number of libraries to tap into. In that department, the recent wave of JVM-based languages is doing well. Even Microsoft, which normally has a massive install base, understood this and is working very hard to bridge its languages with the open source communities out there. I am not yet seeing how Go will help developers get the most out of existing libraries, which also makes me think it is not targeted at the larger developer community.

I am curious to see what the reactions will be like over the next few months.