Should Users Run Their Own Cloud Crash Safety Harness?

It is clearly very hard to eliminate infrastructure outages, even for some of the biggest players in the industry. However, we are heading into an era where cloud infrastructure may be ‘too big to fail’. Are companies making sure they are ready for this? Ultimately, it is a question of the economic value of risk. Those with sound risk management practices in place have less to fear; I am not sure many have them, though.

The cloud is becoming so essential to so many companies that there comes a point where a provider’s infrastructure outage could cause serious liabilities. Every few months now, a large cloud provider experiences a technical incident that takes down many popular startup web sites for several hours. These are not obscure amateur providers; we are talking about Amazon, Microsoft and Google, the biggest there are in this game. Such outages used to be the lot of Facebook or Twitter, but those companies seem to have remarkably improved their infrastructure availability; now it is the turn of the smaller startups, by way of their cloud providers.

It is obviously very hard, if not impossible, to completely eliminate outages, but what surprises me is that these outages take a long time to recover from, for infrastructure serving hundreds of companies (if you consider the ripple effects).

A naive way to look at it would be to imagine that cloud providers run specially crafted test labs that continually exercise failure scenarios, teaching the operations teams how to detect them and, hopefully, leading to remediations that are put in place before the failures are ever experienced in real life. This may sound costly, but it wouldn’t be for companies like Amazon or Google; perhaps they actually do something like this. In this year alone, every time such a cloud incident has occurred and been fully investigated, it turned out that the root cause could have been anticipated, if not prevented. Arguably it is very hard to stress and crash test a large server cluster, but these companies have the resources and know-how to model incident scenarios and run simulations. It may be that their growth rate is much higher than the occurrence rate of serious infrastructure incidents, making it a lower priority for providers to double down on incident prevention. I wonder, then: should it be up to the users to plan for and protect themselves against such incidents?

I don’t want to oversimplify, but I imagine it would be economical for those with high stakes in the game to set up safety harnesses. The issue at hand is really that of the economic value of risk: easily determined for a business that trades by the hour, not so trivial for companies that make no money but are valued on the user traffic they attract. Those with sound risk management practices in place have less to fear; I am not sure many startups have them, though.

If a company’s valuation is determined by the traffic it generates, with no associated monetary transactions, then an infrastructure outage (that can be blamed on someone else) may not have such a high economic impact. However, online advertising is a big source of income for many startups, and some sell goods and services online. For these companies an untimely outage means less visitor traffic, which means missed income, and for them it may be critical to put in place some form of cloud outage safety harness.

Older MacBook Pro Late 2008, a simple performance tip if you have an SSD + HD installed

If you have read this blog for some time, you may have come across my story about upgrading my MacBook Pro Late 2008 by adding an SSD to it. There was another thing I had been doing but didn’t blog about: frequently shuffling programs around to make the most of the SSD. I had been applying this technique for a while, but it didn’t occur to me to write about it until I saw an article on Apple’s newly announced Fusion Drive technology.

That shuffling technique is the story I am about to tell here. In this post I assume that you are familiar with Unix/Linux file naming conventions, and that you have constraints similar to mine, as explained in the disk upgrade story earlier in this blog.

To recap the post I’m referring to: I had deliberately chosen a smaller SSD (60 GB) to keep the costs down, which meant I had to keep a lot of large files on the slower HD. Given the difference in performance between the two drives, I wondered how I could make that SSD work harder for me. So I came up with a simple routine, which I didn’t think much of at the time. There are periods when I am mostly programming, and others when I am using graphical tools or launching virtual machines (Ubuntu, Windows xx), so I would swap those programs and large folders between the two disks in a very simple way.

Mac OS X has a feature that comes in handy here: applications may be installed system-wide by placing them in the folder “/Applications”, or for a single user by placing them in a folder named exactly “Applications” in that user’s home folder, which in Unix notation is “~/Applications”. Knowing this, I laid out my two disks to take advantage of it. For data files, even virtual machine images, you simply have to ‘forget’ any cached references and point the application at the new folder directly. Parallels is an exception: it is sensitive to virtual machine images being moved around and might fail to load them. When loading a moved VM you have to tell Parallels that you moved it, and it will do the right thing; otherwise you may damage your VM. I won’t go into that here.

When I am using Xcode frequently I would move it to “/Developer”, move other large software like Adobe Fireworks from “/Applications” to “~/Applications”, and create symbolic links to mask these changes. When I enter another period where I develop less, or not at all, I swap those large folders back and restore the original links. This may sound convoluted but it isn’t; it’s just a couple of lines of shell script that run for a couple of minutes, and then you’re done. The trigger for starting this folder dance was that when my SSD had less than 10 GB of free space, the machine could run out of virtual memory (remember, I have 8 GB of RAM on this machine), so I had to try to keep that disk above 11 GB of free space at all times.
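For illustration, here is a minimal sketch of what that couple of lines of shell script can look like. The paths in the commented examples are from my setup and are only placeholders; adjust them to your own disk layout, and try it on a throwaway folder first.

```shell
# Sketch of the "folder dance": move a folder to the slower disk and
# leave a symlink at the old path, so Finder and the Dock keep working.
evict() {
  src="$1"; dest_dir="$2"
  mv "$src" "$dest_dir/" && ln -s "$dest_dir/$(basename "$src")" "$src"
}

# Reverse the swap: remove the symlink and move the folder back.
restore() {
  link="$1"
  target="$(readlink "$link")"
  rm "$link" && mv "$target" "$link"
}

# Example usage (commented out; these are my paths, yours will differ):
# evict "/Applications/Adobe Fireworks CS4" "/Volumes/HD/Applications"
# ...later, when a development period starts again:
# restore "/Applications/Adobe Fireworks CS4"
```

That is really all there is to it: two moves and a symbolic link each way.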

The main drawback I ran into was when I tried to do the same with iTunes and iPhoto (my large photo library); I ran into problems and had to stop, but the technique worked fine for frequently used programs. I also used to run “Repair Disk Permissions” after each shuffle of large folders, which made the routine even slower, so I only tended to do it once every other week and sometimes forgot to revert things when my daily work routine changed.

To summarise: if you have a MacBook Pro Late 2008 like mine, or any Mac equipped with both an SSD and an HD, you can make the SSD work harder for you and gain some performance by applying this super simple folder shuffling technique. You can even do it with large data files that you open several times a day or in relatively quick succession. I initially came up with the idea by thinking about the way detachable storage works, nothing special. The technique is most effective if you have a relatively stable work routine over several days or weeks.

I recently saw an article by Ars Technica on Apple’s Fusion Drive, which seems to describe a somewhat similar technique, though obviously a sophisticated and robust one that is also transparent to the user. This technology will be available in their new line of MacBooks; I didn’t really follow that event. I thought of telling this story in case someone lands here while looking for tips. I have to say, though, that I have read lots of stories about SSD failure rates. I experienced a sudden drive failure myself, with the external storage on my older Time Capsule, a LaCie 1 TB drive: one day it simply stopped responding, no warning whatsoever, gone for good without any (noticeable) incident. So, although SSDs are exciting, and I’ve been enjoying a performance boost since I installed mine, there’s a little nervousness that mine may surprise me one day. I wanted to warn you, the reader, about this risk, because you just never know.

How to set effective metrics for Enterprise Architecture

To set effective metrics for Enterprise Architecture, don’t look for a magical list that you can just plug in. Instead, you must develop your own set for the exercise to make any sense. In this post, initially published on Quora, I present one practical technique for achieving this; it starts with a statement of purpose that you should make people feel comfortable with.

A recent article by Michael J. Mauboussin on HBR reminded me of my answer on Quora to this same question, and I realised that the answer should really be published on my own blog, not somewhere else. That is the motivation for this post.

Don’t look for a magical set; you need to develop your own. Here is one practical technique to achieve this. It starts with a statement of purpose that you should make people feel comfortable with.

An effective Enterprise Architecture helps ensure that an organisation spends money wisely, that resource allocation is done in a way that supports your business growth. It should be an instrument for your most powerful decision makers. The scope is massive: it spans every tidbit of information that flows through your way of doing business, your entire chain of information systems (in the widest sense of the term). It follows that you need to know how resource is allocated (and, respectively, how value is created), what the triggers are, and how those triggers can be influenced. For your Enterprise Architecture practice to be effective, it must identify and use the levers that control these events and event triggers.

With the above statements in mind, proceed as follows:

  1. List the metrics that have the most impact on, and visibility in, your organisation’s success, and put them in a priority order that makes the most sense to your best people. This works best if you interview and discuss with a mix of key people: people with a good delivery track record, people most intimate with your business, and your most powerful decision makers. Don’t look outside your organisation for such a list; you might quickly fall into an anti-pattern trap.
  2. Now armed with your prioritised list, benchmark where you are as you start the exercise, then take snapshots of these metrics at regular intervals. Define the intervals to closely match your business activity cycle, from resource allocation to value creation. Start with a high sampling frequency (it must be realistic, though), adjust the frequency as necessary, and compare the measures each time and over varying sampling windows.
  3. Share the intermediate results with the people you worked with in Step 1). Gather their feedback on the measures that are starting to show, and look for trends. Don’t hesitate to change the metric priorities and drop what doesn’t make sense.
  4. As you gain insight into what is driving effectiveness, make small, educated changes to the metrics, and perform Steps 2) and 3) again.

This is rather crude, but done right, solid metrics will rapidly emerge, and the process itself embodies an education and buy-in mechanism, which reinforces the effectiveness of your Enterprise Architecture.

The State of JavaScript, great minds think alike

Contemporary programming language designers seem to be transcending industry politics and showing a more inclusive attitude towards their peers. This is a good thing, especially given the vitriol that some of the most fervent adopters seem inclined to throw at competing camps. If nothing else, self-interest appears to have enough ‘gravitational pull’ to create convergence where synergies are most desirable, and this is good for all users in the longer run.

I watched a presentation by Brendan Eich, the creator of JavaScript, and as he spoke I kept having flashes of other famous programming language designers, namely: Anders Hejlsberg (C#, TypeScript), Rich Hickey (Clojure), Don Syme (F#), and Martin Odersky (Scala). I’m not citing, for example, Matz (Ruby) or Gosling (Java), simply because I haven’t been following them lately as much as the others, despite my long history with Java and respect for Ruby. So this post isn’t about saying who is great; there are clearly many more I could cite, including Rob Pike, the late John McCarthy, Gilad Bracha, and others, but I just don’t know as much about them in the context of this post. This post is simply about what I incidentally realised by closely following a few people over roughly the same period of time.

I noticed that these folks truly embrace the community in a somewhat similar way: they all seem insatiably curious about how other language designers think and what they might want to borrow from them. This is a good sign for the Internet, because it means that the really heavy thinkers and influencers are less concerned about politics; they actually care about their users a lot.

The inclusive attitude of the language designers is in stark contrast with that of some of their most fervent followers. It is fairly common to see articles by, say, a Scala or Ruby developer pouring scorn on JavaScript or PHP developers, and some JavaScript fans can’t thrash Java developers hard enough. Patronising is common, suggesting without any subtlety that some people must be inferior minds simply because of their choice of programming language. I obviously don’t condone such attitudes; I think great work can be delivered in any programming environment that the user is comfortable and proficient with. I didn’t mention C or C++, for example, but arguably they are the foundation upon which the Internet is built: all those Unix, Linux and Windows servers are built by C and C++ programmers. Yet just read what some Ruby or Clojure developers think of C or C++.

Incidentally, the programming languages that receive the most bashing (JavaScript, PHP, Java) appear to be the most successful from an adoption standpoint. C# isn’t thrashed as much, from what I read, except for its origin as a Microsoft platform language, though the folks at Xamarin and their community are hard at work making a good part of the .NET platform a first-class citizen on Unix (and thus Apple OS X and iOS).

It is encouraging to see that the luminaries of the industry have a more inclusive attitude than some of their users. From Brendan Eich’s talk, it looks like JavaScript, in its next two major versions, will gain several key features of the leading functional programming languages. So we should see more blurring of the functional and object characteristics of the most popular programming languages, which is good for developers at large.

Brendan Eich’s talk is available on the InfoQ web site: http://www.infoq.com/presentations/State-JavaScript.

Tech platform curation is good, but let adults take responsibility

In tech, as in many other industries, people need to make trade-offs. Apple chose to curate as much as possible, and they are mostly doing a good job; why else would their devices be such a runaway success? And yes, it is certainly a slippery slope to give so-called power users more choices. But I don’t think the need to curate should be detrimental to good practice; in this case, a backup solution should never force a user down a single path, the way Time Capsule does here.

I frequently run into cases where Apple, in their attempt to totally curate the platform, actually take away vital responsibility from those who could otherwise be more empowered. Most computing devices are good when they function normally and remain responsive to user interactions. But when something goes wrong, that’s when you really appreciate the ability to get yourself out of trouble, and those are the moments when a technically savvy person can get really frustrated with Apple. Take this latest example with my Time Capsule:

Time Capsule wants to delete my backup history
A dangerous Time Capsule dialog

This dialog might look innocent, but it isn’t. Here is what it is saying to me:

Time Capsule has decided that it is going to destroy your backup history from the beginning of time. This means that you are starting anew; there is no way back from this. Now I give you two options: a) do this right now and get it over with, or b) I stop doing any backups until you finally accept this situation.

This is terrible for several reasons. The first is that it defeats the whole purpose of Time Machine. Remember when Steve Jobs famously demoed the feature, saying that you “will never lose a file again”? Well, this dialog clearly contradicts that claim: your history will be gone through no fault of your own. The second reason this isn’t good is that backup rotation schemes have been around forever; Time Capsule has no business deleting my older backups without giving me any alternative. What it should do here is allow me to either add a new disk, take the current backup history offline, or simply suggest deleting the oldest backup (but not the entire history!).

If given more options, since I know how to go about this, I would be able to take appropriate action and be content with the product. But when it treats me like a kid, giving me no other option, that is a bad user experience and it is frustrating. Another example: I noticed that the free disk space on my MacBook Pro had shrunk dramatically, and I suspected something wasn’t quite right. Luckily I had used Target Disk Mode before, so I booted into it and found that the disk needed repairing, which I ran. Before the repair it reported 34 GB of free space; after the repair it reported 124 GB! I have one more example, with Server Manager (OS X Mountain Lion), but I’ll save that for another post, time permitting.

This Time Capsule issue is one example where platform curation falls short. At the moment Apple looks like the biggest culprit in this practice, though Microsoft seem to be marching fast down the same track: I’ve been running the Windows 8 RC for a while, and it’s getting harder to troubleshoot issues, to see what apps are running, for example.

With Windows 8, Microsoft is staging the biggest startup pivot in history

Microsoft is effectively undergoing a startup pivot with the sweeping changes they have been making, culminating in Windows 8. This is an extraordinary effort that deserves very close attention; there is a lot to learn here.

What Microsoft is doing with Windows 8 is effectively a pivot, just as a twelve-month-old startup would define it. I’ve not seen it described that way yet, but that’s all I can think of. Microsoft could hardly be called a startup, or lean, but I’m curious how they think of themselves internally nowadays, in light of the sweeping changes they’re introducing.

Right from the start, Apple went for intimacy and premium products, whilst Microsoft chose cost effectiveness and mass scale. It seems that Apple is continuing on its path, and that Microsoft is now changing strategy. Good analyses abound; I won’t dig any further than this.

If you like this subject, search for “startup pivot” and read the various definitions given of it; it is fascinating to think of Microsoft in that context.

This is a tremendous learning opportunity, and I am excited to see how it all plays out.

Apple doesn’t just do nice device experiences, their SDK and documentation are masterclass too

If you know C and you get properly acquainted with Objective-C, you will realise that Apple has succeeded in making people write large amounts of C code without hating it or getting into trouble. People say, and I agree, that learning functional programming (FP) makes you a better programmer even if you go back to non-FP languages. I think the same can be said of programming for iOS using Apple-provided toolkits. Learning to code iOS applications yields much more than solid applications; it also teaches you how to be a better programmer.

I’ve been using Apple’s SDK regularly for a number of months now; earlier I had only done small proofs of concept (as an architect sounding out and testing gear before making recommendations). I’ve used many developer toolkits over the years, but Apple’s have been an eye-opener for me once I decided to build a sizeable application with them. I would easily give Apple top marks for creating and nurturing a development platform: yes, they do make breaking changes, but they clearly keep the concepts sound and stable throughout.

In the next paragraphs I will mostly be referring to development environments (IDEs); I’ll skim over documentation because it made no big difference, except in the cases of Microsoft Visual Studio and Apple’s Xcode.

I’ve used Eclipse intensively over the years, and before that I used IBM VisualAge on large projects. Eclipse is good, but I find IntelliJ IDEA a much sharper and more productive product. I’ve looked at NetBeans too; for a time its free profiler was a selling point for me, but I never ended up doing any serious work with it. I’ve used Microsoft development tools since the days when Borland shipped better ones (Borland’s were my favourite at the time), and I still use Visual Studio every now and then. For a time PowerBuilder was the most productive development tool for me, and I enjoyed it until I moved on to web applications for good. I’ve had some exposure to IBM Rational software engineering products too; at the time they were often unstable and bloated, so I never got to like them much. I’ve also used, and continue to use, many text editor flavours: Unix ones like vi and Emacs, and various others that I don’t care to mention. So I’ve seen quite a wide variety of developer tools and mostly didn’t come away too jazzed about them.

What all of the above developer toolkits have in common is that they provide a base which you can extend by means of plugins or add-ons. In nearly every case you need to source additional libraries to complete a project; at least that has been my experience. However, in my opinion these products fail to teach you best practices in any meaningful way. If you are looking to learn to write fool-proof, well-performing code, you still have a long trial-and-error learning curve ahead of you; the toolkit won’t provide you any guidance.

Apple’s combination of Xcode, Cocoa, Objective-C and the documentation together provides much more than just an SDK and IDE; it actually teaches the developer how to write code properly. As you write applications for the Apple platform, you become acquainted with patterns in a practical and effective way; you learn one behaviour and become more productive as time goes by.

Microsoft have attempted something similar: the Microsoft patterns & practices web site is where they try to complement what Visual Studio lacks. There are a lot of good examples of how to compose Microsoft tools to produce results, and it’s the best source I’ve come across if you only target Microsoft platforms. But I’ve found patterns & practices to be too heavily biased towards APIs, and those APIs tend to change before you get to master them. So unless you immediately apply the techniques you learn in an active project, preferably close to the articles’ publication dates, the learning may not stick and might become dated. I’ve had my fair share of disappointments learning Microsoft development libraries. For example, I’ve been through numerous Microsoft data access technologies, and every time I had to learn a totally different approach, for no clear reason.

IBM’s developerWorks is huge, and for a time it offered lots of good articles on various development topics. Usually, though, they helped a lot if you were targeting the IBM WebSphere and Lotus product stacks, and required more effort if you weren’t. I haven’t done much with the IBM technology stack for a long time; I look at developerWorks every now and then, and the venerable WebSphere stack is chugging along. I would think that IBM’s WebSphere development toolkits may help in learning some good practices, but unless you can afford the pricey IBM Rational suites you may not reap that learning fast enough, and may end up having to patch things together to become proficient.

I’ve watched the Oracle technology stack grow more and more complex as they acquired companies. I’ve used many of their components at separate times, including when companies like BEA and Sun Microsystems still shipped the products. In recent years I’ve come to believe that the JavaServer Faces (JSF) approach isn’t an efficient one unless you specifically target the Oracle platform. I’m not particularly fond of what I see on Oracle’s sites; I don’t think they help much in teaching developers good practices.

These are the largest technology stacks I have significant experience with, and I haven’t really seen in them the sort of purposeful learning that I see when dealing with Apple’s development toolkits.

In my opinion, Apple has done a better job in this critical area of developer enablement than many others, and what you learn in the process will serve you well beyond the Apple platform. If you know C and you get properly acquainted with Objective-C, you will realise that Apple has succeeded in making people write large amounts of C code without hating it or getting into trouble. Before iOS became popular, and Linux hackers aside, you couldn’t convince a young developer to learn C programming; Apple has somehow pulled off that trick without people realising it. And if you know your design patterns (the Gang of Four book, for example), you will find it a delight to discover how Apple’s Cocoa code hangs together. The way you design and write iOS applications is closely aligned with the way patterns should be recognised and applied, so you are learning patterns as you code for iOS. This choice alone ensures that developers mature quickly when they develop for the iOS platform.

I have not talked about HTML development toolkits so far (meaning HTML, CSS and JavaScript), for a reason: I have yet to see any toolkit in this area that actually teaches good practices. It is actually paradoxical that this kind of web application development has become so popular, because I think it has the steepest learning curve of all. Yes, sure, people clone (via Git) a to-do list application sample, make a few changes and promote that into an application. But there is no good practice in there; actually, that is where the learning begins. The number of concepts and technologies a developer must learn before they can write solid HTML code is astonishing, and if they don’t learn those things, all they tend to do is create more mess and never pick up any good skills in the process. So I’d say there is no useful learning environment there, unless the recently announced Adobe HTML tooling proves me wrong.

Talking about web development specifically, there are many popular development platforms that help people produce applications quickly. However, the most popular ones, such as PHP and Ruby on Rails (RoR), won’t teach you much about good practices. RoR did a good job of helping beginners get into the game with its convention-over-configuration approach, but that is it. Once people have got their first RoR application started, they usually don’t know where to go next and quickly run into the limits of their learning as their application’s usage picks up.

In summary, I think the benefits that iOS developers realise are greater and more durable than the platform alone. People say, and I agree, that learning functional programming (FP) makes you a better programmer even if you go back to non-FP languages. I think the same can be said of programming for iOS using Apple-provided toolkits. Learning to code iOS applications yields much more than solid applications; it also teaches you how to be a better programmer.

Why Microsoft TypeScript is a breath of fresh air in web application front-end development

TypeScript is an elegant solution to a really annoying problem: JavaScript code sprawling into massive spaghetti. If you are a programmer who writes web applications, don’t wait to learn about scaling JavaScript the hard way; start with TypeScript and you won’t regret it.

Anders Hejlsberg is one of the most astute thinkers alive in the programming language world today, so when he comes up with something, I take a careful look at it. TypeScript is his latest creation; I watched his presentation video and I really liked what I saw.

Unless you’ve been involved in web application development at scale, you may not realise what TypeScript brings to the picture. It is a very elegant solution to a really common problem: TypeScript helps you write large JavaScript codebases without ending up with a monster spaghetti that even its authors hate to look at.

Naysayers will snarl: isn’t this yet another Microsoft ‘embrace and extend’ effort? Who needs a new JavaScript-like language?

Microsoft lovers will rave.

But, fanboyism aside, this is a tremendous effort, and it comes from a truly “new Microsoft” that some are still too blind to see. This is a nice case of “embrace and extend” that should be applauded. The team behind it took care of the following essential things:

  • keep the learning curve low and smooth: TypeScript is actually just JavaScript, if you think about it, so there is no new language to learn
  • make it fit the developer’s workflow: developers can keep their beloved tools and still get the benefits of less error-prone (thus less buggy) development
  • keep it future-proof: the TypeScript team appears to have adopted the open standard that governs JavaScript itself (ECMAScript, if you didn’t know), so as the standard matures, TypeScript will already be there, or can easily adjust to changes in the specification
  • make it open to the wider community: the language and its tooling are all available under the well-liked Apache open source licence; anyone can use and extend it, with no fear of vendor lock-in here
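To make the “less error-prone” point concrete, here is a tiny sketch of the kind of annotation TypeScript layers on top of plain JavaScript. The names (`Order`, `invoice`) are made up for this example, not taken from the talk.

```typescript
// A structural type annotation: plain JavaScript objects, checked at compile time.
interface Order {
  id: number;
  total: number;
}

function invoice(order: Order): string {
  return `Order ${order.id}: $${order.total.toFixed(2)}`;
}

const ok: Order = { id: 42, total: 99.5 };
console.log(invoice(ok)); // prints "Order 42: $99.50"

// The next line would be rejected by the TypeScript compiler ("total" is
// missing), yet the JavaScript it emits is just the code above, unchanged:
// invoice({ id: 43 });
```

The compiler flags the mistake before the code ever runs in a browser, which is exactly the sort of error that otherwise only surfaces deep inside a large JavaScript codebase.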

If you are a programmer who writes web applications, don’t wait to learn about scaling JavaScript the hard way; I highly recommend TypeScript to you. If you already know all the horrors of large JavaScript codebases, check it out anyway, and you may learn something that makes you want to switch. I definitely plan to integrate it into my tool chest and the solutions I am building.