Cloud and Mobile, an incredible if not unprecedented opportunity for comeback kids

It cannot have escaped many software company executives that Mobile and Cloud represent possibly the best opportunity to reinvent a flagging business proposition. Surveys and studies by household-name analysts such as McKinsey, Deloitte and others point in the same direction. My post comes from another angle: a simple observation at a micro level that can be scaled up to a bigger picture.

Actually, I believe that Mobile without Cloud, or vice versa, would not have any significant impact. So I contend that whenever Cloud is mentioned, Mobile is an implied companion, and the other way around as well. When the Internet of Things (IoT) finally dawns, ubiquity will become the more appropriate concept, and it could encompass Mobile.

I was recently reminded of one pervasive reality of software: dependencies are the cause of a lot of trouble, if not most of it. I was trying to install a new version of Archi, a software tool I am used to running but hadn’t needed for a while. When it failed to run, I thought I could quickly just rebuild it. I fired up Eclipse and started going through the process of building Archi. I quickly stopped, realising the trap I was slowly being dragged into. What the industry terms legacy is largely due to such dependencies: in many situations, an insurmountable task of rewriting and rewiring code that is large, complex, hard to understand, and better left alone as long as it is working. In the case of Archi, I am not suggesting that any rewrite would be necessary, just that I would have to deal with the dependencies before I could achieve my goal of getting it up and running.

As a quick comparison, I take the recently famous move by Microsoft, who have turned to open source in a serious way.

Full Stack: open source stack vs. Microsoft’s

In the diagram, I have picked an arbitrary number of levels, or layers, just for the sake of this discussion; a small code sketch after the list restates the same layering.

  • Level 0: this is usually provided by hardware vendors, who make the hardware and then bundle it with software.
  • Level 1: the Operating System level; in this comparison, Microsoft provides this element, for example Windows.
  • Level 2: fundamental building-block elements, usually provided by the OS vendor. This is one area where Microsoft has open sourced components, the .NET runtime engine for example.
  • Level 3: combines with Level 2 to make it easy to expose system capabilities to users. Level 2 and Level 3 tend to be closely integrated; quite often, 3rd-party vendors provide the Level 3 elements.
  • Level 4: this is the part that the user will eventually experience.
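
As promised above, here is a minimal sketch of that layering expressed as a simple Java enum; the level names and examples are my own labels for this discussion, not a standard taxonomy.

```java
/** Rough sketch of the stack levels used in this discussion; labels and examples are illustrative only. */
enum StackLevel {
    LEVEL_0_HARDWARE,    // provided by hardware vendors, who bundle it with software
    LEVEL_1_OS,          // the Operating System, e.g. Windows or a Linux distribution
    LEVEL_2_RUNTIME,     // fundamental building blocks, e.g. the .NET runtime
    LEVEL_3_FRAMEWORKS,  // elements exposing system capabilities, often from 3rd-party vendors
    LEVEL_4_APPLICATION  // the part the user eventually experiences
}
```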

In this discussion, I’ve intentionally left out Apple, as they are known to control the end-to-end user experience in most cases. Admittedly, Apple do subcontract for parts and components, and embed 3rd-party and open source software, but they control the packaging process. That makes them a less interesting study for this post, so I focus on the Microsoft stack vs. the open source stack.

I’ve simplified reality in this diagram quite a bit: there are usually multiple layers between Level 2 and Level 3. This is where most of the dependency troubles lie. For a developer, this could be a tool chain, for example, but it could also be libraries, utilities and other 3rd-party software elements. This is where trouble with multiple Windows DLL versions typically occurs. Such dependency issues are not confined to Windows; I’ve seen them with plenty of open source Linux software too. The issue arises when, for example, you are dealing with a Level 3 component and it complains about some Level 2 element that you were not aware of or didn’t expect to be dealing with. That’s the can of worms.
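To make that can of worms a little more tangible, here is a minimal Java sketch of the moment a Level 3 component drags in a Level 2 element you never chose; both class names are hypothetical, only the failure modes are real.

```java
// A minimal sketch of the dependency surprise: you ask for a Level 3 component,
// and discover at runtime that it needs a Level 2 element you never chose.
// Both class names are hypothetical; only the failure modes are real.
public class DependencySurprise {
    public static void main(String[] args) {
        try {
            // The Level 3 component you actually care about...
            Class.forName("com.example.level3.ReportGenerator");
            // ...and the Level 2 element it silently depends on.
            Class.forName("com.example.level2.LegacyXmlBinding");
            System.out.println("All layers present, carry on.");
        } catch (ClassNotFoundException e) {
            // The typical surprise: an error about something you were not aware of.
            System.err.println("A dependency is missing from the classpath: " + e.getMessage());
        } catch (LinkageError e) {
            // Or worse: everything is present, but at versions that do not fit together.
            System.err.println("Incompatible versions on the classpath: " + e);
        }
    }
}
```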

By moving important elements to open source, Microsoft would normally have had to do a lot of work at Level 2 and Level 3 to bring their open source proposition up to the level where Unix/Linux already is. But with Cloud and Mobile, this work is not necessary: a wide variety of platforms and legacy systems can simply be skipped. This is also perhaps the best opportunity for Microsoft to ditch support for a lot of systems that won’t need to be directly exposed to the user.

Cloud providers and infrastructure providers offer solutions that combine Level 0 and Level 1. Lots of players have emerged in recent years, providing everything up to Level 3, either by themselves or by procuring Level 0 and Level 1 from others. Large players like Amazon, Microsoft and Google provide up to Level 3, and in some instances they offer solutions all the way up to the end-user level. Examples abound; I don’t want to extend this discussion too much.

Ditching Level 3 elements and rewiring your Level 2 elements for Mobile and Cloud: that is where the opportunity mainly lies. The necessity of revising and rewiring systems should be a profitable source of business for those who can tap into it.

Unfortunately, companies that cannot figure out exactly what is going on at the core of their business will struggle to even see the opportunity. The situation is akin to old Hollywood blockbuster movies such as Die Hard, where it was important to cut the right cable. The comeback opportunity is about separating what should be dropped from what should be kept, then streamlining the business for Mobile and Cloud. Newcomers, who never even built such Level 2 and Level 3 elements, have the freedom to move swiftly along with the market.

Programming language, one man’s cornucopia is another one’s air horn

Whenever I read a rant about a programming language, I try to see whether the author actually knows the background of that language. In the vast majority of cases, people don’t know the origin of the language they are programming in. One can argue that this may not be necessary, but in my experience it often saves time and frustration. To illustrate the topic, I made a little diagram.

programming language design in context

The situation is this: unless your needs fit mostly within the light red and green areas of the diagram, you are bound to be disappointed with the language you are dealing with. If you find that many of your concerns fall in the brown areas, you really ought to look carefully at what you can trade off before going further. And if the important concerns of your challenge fit mostly within the blue shapes, you might be better off considering an alternative language altogether.

This is just a quick summary of some simple but often overlooked aspects of programming language selection.


Remove unnecessary layers from the stack to simplify a Java solution

An item in my news feed this morning triggered this post. It read “shadow-pgsql: PostgreSQL without JDBC”. It’s a Clojure project on GitHub. Indeed, “without JDBC”. That’s it. This is particularly pertinent in the Java software world, where a flurry of popular initiatives is driving down the adoption of once revered vendor solutions. Some of the arguments I make here apply to other software and programming environments as well, not just Java.

As you start writing a Java application destined to be deployed as part of an Internet-enabled solution, it is important to carefully consider the requirements before selecting the technology to build your solution with. Every day, a new tool pops up promising to make it easier and faster to start writing software. People lose sight of the fact that getting started isn’t the hard part; keeping it running and growing, adapting to change, all at a reasonable cost, is by far the hardest part of software. And to boot, once something is deployed into production it instantly becomes legacy that needs to be looked after. So don’t be lured by “get started in 10 minutes” marketing lines.

I would argue that not knowing the expected characteristics of your target solution environment is a strong indication to choose as little technology as possible. There are lots of options nowadays, so you would expect people to always make a proper trade-off analysis before settling on any particular stack. A case in point is database access. If you are not forced to target an AppServer, then don’t use one. If you have access to a component with native support for your chosen database platform, it might be better to skip the ORMs and JDBC tools altogether and use native database access libraries. This simplifies the implementation and reduces the deployment requirements and complexity.
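As a concrete illustration of removing layers, here is a minimal sketch of querying PostgreSQL through plain java.sql, with no ORM or mapping framework on top; the connection URL, credentials and table are placeholders. The argument above goes a step further still, dropping JDBC itself in favour of a native client such as shadow-pgsql, but even this much already removes a lot of moving parts.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Minimal sketch: one query against PostgreSQL using only the JDBC driver,
// no ORM, no connection-pool framework, no AppServer-managed data source.
// Requires the PostgreSQL JDBC driver on the classpath; URL, credentials and
// table are placeholders for this example.
public class PlainQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/appdb";
        try (Connection conn = DriverManager.getConnection(url, "app_user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id, name FROM customers WHERE name LIKE ?")) {
            stmt.setString(1, "A%");
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%d %s%n", rs.getLong("id"), rs.getString("name"));
                }
            }
        }
    }
}
```

Whether you go further and drop JDBC too is a trade-off between shedding one more layer and keeping portability across database platforms.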

It used to be that, if you set out to write a server-side Java program, certain things were a given. It was going to be deployed on an AppServer. The AppServers promoted the concept of a container in which your application is shielded from its environment. The container took charge of manageability issues such as security, authentication, database access, etc. To use such facilities, you wrote your code against standard interfaces and libraries such as Servlets, JDBC or JavaBeans. If the company could afford to, it would opt for an expensive product from a large vendor, for example IBM WebSphere or BEA WebLogic (nowadays Oracle). If the organisation was more cost-conscious, a bit of a risk taker or an early adopter, it would choose a free open source solution: Apache Tomcat, JBoss, etc. There were always valid reasons for deploying AppServers: they were proven, tested and hardened, properly managed and supported in the enterprise. Such enterprise attributes were perhaps not the most attractive for pushing out new business solutions or tactical marketing services. That was then, the Java AppServer world.

As the demand for Internet scale solutions started to rise, the AppServer model showed cracks that many innovative companies didn’t hesitate to tackle. These companies set aside AppServers, went back to the drawing board and rebuilt many elements and layers. The technology stack had to be reinvented.

Over the last ten years or so, people started writing leaner Java applications for Internet services, mostly doing away with AppServers. This allowed the deployment of more nimble solutions, reduced infrastructure costs and increased the agility of organisations. Phasing out the established AppServers came at a cost, however: one now needed to provision for all the things that the AppServers were good at, such as deployment, instrumentation and security. As it happens, we ended up reinventing the world (not the wheel, as the saying would have it, but a certain well-known world). What used to be an expensive and heavy vendor solution stack morphed into a myriad of mostly free and open source elements, hacked-together parts and parcels, to be composed into lean and agile application stacks. This new, reinvented technology stack typically has a trendy name; Netflix OSS is one that sells rather well. That’s how things go in the technology world, in leaps and bounds, with lots of marketing and hype, but within the noise lies hidden some true value.
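To give a rough sense of what “leaner” can look like in practice, here is a minimal sketch of an HTTP endpoint built with nothing but the JDK’s built-in com.sun.net.httpserver package; no AppServer, no servlet container, no framework. It is a toy, not a recommendation for production as-is.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch: a self-contained HTTP endpoint using only the JDK's built-in
// server, with no AppServer, no servlet container and no web framework.
public class LeanService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // everything the container used to do is now your responsibility
    }
}
```

The flip side, as noted above, is that security, instrumentation and deployment no longer come for free.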

Arguably, a lot of new Java-based solutions are still being implemented and deployed with stacks of unnecessary layers. That is a shame, because there is an unprecedented amount and variety of options for implementing and deploying good solutions in Java, or targeting the JVM. Understanding the characteristics of the envisioned solution is one aspect that keeps driving Java and JVM-based languages and technologies. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high-performance and reliable Java programs. The fact that a lot of people keep doing this is no good justification for adopting the practice. The move towards smaller and leaner specialised applications is a good one because it breaks down and compartmentalises complexity. The complexity that is removed from the application stack is diluted and migrated over to the network of interlinked applications. Not all problems will automagically vanish, but some classes of problems will disappear or be significantly reduced. That same principle should motivate challenging and removing as many layers as is reasonable from individual application stacks. I resist the temptation to mention any specific buzzword here; there are plenty of good examples.

When writing new software, it is natural to simply follow the beaten path. People pick up what they are already used to, or what they are told to use, and then get going. There may or may not be proper direction for choosing the technology, but there is invariably a safe-bet option that will be chosen. This isn’t wrong per se. What is wrong is when no alternatives were ever considered even though there was a chance to do so. Keeping your options open will always prove more profitable than just going with the flow. One characteristic of lagging incumbents is that they often just went with the flow, chose solutions deemed trusted and trustworthy without regard for their own reality, and usually did not challenge the technology stack enough.


DIY: Improve your home Wi-Fi performance and reliability.

If, like me, you eventually got annoyed by how choppy and frankly slow your home Wi-Fi can become, then this post may be for you.

Does your Wi-Fi feel much slower than you expect it to be? Is it typically slow at the times when your neighbours are also at home, just after work for example, or at weekends? Does it perform better when most of your neighbours are asleep? If you answered yes to some of these questions, you may be experiencing Wi-Fi channel overcrowding.

If channel overcrowding is the issue, here is a simple way to solve it:

  1. Identify a channel that is less busy, or not busy at all
  2. Configure your router to use this free channel
  3. Restart it. Et voilà.

Here is what a crowded channel looks like:

wi-fi network channels


In the picture, you see a lot of coloured areas between 1 and 8 (the numbers at the bottom of the picture); these are Wi-Fi channel numbers. If your router’s name shows up in this area, you are likely to have poor reception. The solution is to move to a quieter channel. In the picture, you can see that channels above 108 are completely free. Check your Wi-Fi router’s documentation, look for the word “channel” and find out how to change it. If your router is not able to use the desired channel, it may simply be too old, for example. If you find that all the channels in your area are too busy, then you are out of luck as far as this tip is concerned.

Tools

In my example, I am using a Mac, and I found a program in the App Store called WiFi Explorer. The picture in this post is a screenshot from this app. If you are using a PC, I am sure there is an equivalent available somewhere – be careful though, free software from an obscure or unchecked source may be malware. If this kind of tool isn’t for you, then the good old trial-and-error method might be a way to do it: just set your Wi-Fi router to another channel, restart it, and check how it performs. Repeat the process until you stumble on a channel that works for you.


Background

In residential areas, most of your neighbours also have Wi-Fi at home. More often than not, you may simply be drowning out each other’s Wi-Fi networks with unwanted noise. Think of it this way: try to have a cosy conversation with someone in the middle of a busy shopping street while the two of you are 15 metres apart. At least people have distinct voices that can be told apart; the situation is different with Wi-Fi routers.

People get Wi-Fi at home either by obtaining it directly from their cable Internet provider, or by buying a Wi-Fi router and installing it themselves on top of their wired Internet connection. It’s easy to do. The hope is that it’s going to be super fast, as your provider has been relentlessly touting. At some point, reality slowly starts to sink in: your Internet connection is nowhere near as fast as you were told. Then follow annoyance, frustration, a little bit of guilty denial and ultimately resignation to accept the status quo – a sort of Stockholm Syndrome.

A common cause of sluggish and fluctuating Wi-Fi network performance is the overcrowding of the allotted channels. Since most neighbours are likely to get Wi-Fi from the same providers, they all get the same routers with the exact same default settings. This means that the routers will be constantly interfering with one another, causing noise that slows down and sometimes interrupts your network.

Aside

Although I am talking about possible Wi-Fi interference here, it is by far not the only reason behind a sluggish or unreliable home network. Another typical culprit is deception from cable providers: it is well documented that residential Internet providers frequently throttle their users’ bandwidth, and it’s easy to find out. And naturally, when lots of people are on the Internet, for example following an event, networks can become slower for everybody for a while. It’s all more complicated than we often make of it. This is just a tip that may help if you are lucky enough to be meeting the conditions I mentioned.

Software programmers are not blue-collar workers

Like many skilled workers, software developers are smart, creative people. Leading or managing such people requires finding a way to guide, support and empower them while letting them do their job. This sounds easy, but I have seen many organisations and individuals fail at it, leading to the very issues they were trying to prevent. The recurring mistake people make is to treat skilled workers in the same manner as they would blue-collar workers.

I have built and managed technical teams throughout my career, and I’ve always been aware of the need to balance communication with smart and skilled folks like software developers. I continuously practise software development myself. By staying hands-on on both sides of the equation, I’ve kept tabs on the challenges involved. It’s one thing to be programming, but leading and managing developers is an entirely different game.

Instructions can be crippling

Some believe in providing extremely detailed instructions to developers as a way to manage work. That is misguided for a number of reasons. Every software programming endeavour is as unique as can be. Micro-decisions need to be made at every step of the way, and the conditions that were prevalent in a prior project context will have been largely superseded by new ones. However detailed, instructions will leave gaps that developers will run into. As such speed bumps are encountered, people want them removed swiftly, and they feel better if they can do that themselves. If developers are not properly empowered, such gaps will lead to friction, may cause tension and result in defensive behaviour. While well intended, detailed instructions are not a very effective instrument for managing software development work.

If anything goes, nothing does

The other extreme from too many instructions is having none. Without any guidance, anything goes, and such software development will lead nowhere. For software development to succeed, everyone involved should be able to describe, in a consistent and coherent manner, what the aim is and what the final product should be. Software developers don’t work in isolation from other units in an organisation; they need to know what is intended and what is expected of them. An optimal set of guidance, accepted across teams, is a necessity, not a luxury. Working out exactly what that guidance is and how much of it is needed is part of the job that leadership must facilitate.

Bean counting may be useless

Some painstakingly track developer hours, lines of code and other misleading metrics, in the firm belief that this is the way to manage development resources. For work that doesn’t require thinking or coming up with solutions, typically repetitive tasks, that may yield some results. In software development projects it is a fruitless endeavour: developer productivity does not lie in any of those things. Bean counting is only useful when used anonymously as a learning tool, and perhaps as supporting documentation in the event of a dispute over resource usage. This last concern is typically what software consulting bureaus worry about; unfortunately, they then put so much emphasis on it that it becomes a sword of Damocles over people’s heads. Finding the metrics that matter is a skill.

Perfectly codified work is demotivating

If software development work could be codified to such an extent that it could be flawlessly executed without any thinking, exploring or personal problem-solving, then nobody would ever want to do it. It would be the most boring job ever, for anyone. It makes people feel insecure, a feeling that could be summed up in an imaginary thought quote:

If I am not bringing anything special to this work, I can be easily replaced by someone else, or a robot! I don’t like that. I want to be perceived as being special to this project, I reject any shoddy instruction that would make my job look plain and simple.

This is also something that team managers and leaders should be careful about. When providing guidance turns into hand-holding, people stop thinking for themselves and the group may walk blindly straight into trouble. And when trouble strikes, finger pointing is often what follows. Besides, people learn when they can get themselves stuck and work their way through hurdles. When the conditions are right, the outcome will almost always exceed expectations.

Architecture: just enough guidance

Writing software is inherently a process in which trade-offs and compromises are made continually and iteratively till the end. It is therefore essential that all parties stay as close as possible together so that they can weigh decisions when it matters and actively take part in the game. In my experience, what works well is to provide just enough guidance to developers so as to establish appropriate boundaries of play. When that is well understood, leadership must then work to establish metrics that matter, and closely manage such metrics for the benefit of all those involved.

Treating software developers as blue-collar workers to be told “here, just go and write software that does this and come back when it’s done” is fatally flawed. People who typically make such mistakes are:

  • project managers, who make assumptions that only they espouse and neglect the basics of the creative work process
  • architects, who believe that developers are less capable workers who just need to do as they are told and follow the blueprints
  • technical managers, who have the impression that managing people is about shouting orders and being perceived as having authority
  • people enamoured with processes, tools and frameworks, oblivious of the purely human and social aspects of software development

In my experience, the root causes of such mistakes are relics of obsolete management practices that favoured hierarchy over the social network. Note that my use of the term social network has absolutely nothing to do with the multitude of web sites that have hijacked this vocabulary. By social network, I am referring to situations where a group of humans is engaged in making something happen together: it is social because of the human condition, and it’s a network because of the interdependencies.

This subject is vast and deserves a much longer treatment than I am giving it here. In this post I’ve only touched upon a few of the most visible and recurring aspects; I’ve stopped short of discussing issues such as applying learning, methodologies or tools.

If you recognise some of the shortcomings listed above in your own context, and if you are in a position to influence change, I suggest that you at least initiate a dialogue about the way of working with your fellow workers.

Cyber security white elephant. You are doing it wrong.

The top cause of information security weakness is terrible user experience design. The second is the fallacy of information security itself. Everything else flows from there.

It happened again: a high-profile IT system was hacked, and the wrong information was made available to the wrong people in the wrong places at the wrong time. This is the opposite of what information security infrastructure is meant to provide. Just search the Internet for “Sony hack”; one of the top hits is an article from 2011 titled “Sony Hacked Again; 25 Million Entertainment Users’ Info at Risk”.

This type of headline news always begs some questions:

Don’t we know that any information system can eventually be breached given the right amount of resources (where resources include any amount and combination of motivation, hardware, people, skills, money, time, etc.)?

Is it really so that deep pocketed companies cannot afford to protect their valuable resources a little harder than they appear to be doing?

Do we really believe that hackers wait to be told about potential vulnerabilities before they would attempt to break in? Can we possibly be that naive?

This has happened many times before to many large companies. Realistically, it will continue to happen. A system designed by people will eventually be defeated by another system designed by people. Whether the same would hold true for machine-designed systems versus human-designed systems is an open question. Once machines can design themselves totally independently of any human action, perhaps information security will take on a new shape and life. But that might just be what Professor Stephen Hawking was on about recently?

I suggest that we stop pretending that we can make totally secure systems. We have proven that we can’t. If, however, we accept the fallibility of our designs and take proactive action to prepare, practise drills and fine-tune failure remediation strategies, then we can start getting somewhere. This is not news either; people have been doing it for millennia in various capacities and organisations, war games and military drills being obvious examples. We couldn’t be thinking that IT would be exempt from such proven practices, could we?

We may too often be busy keeping up appearances rather than doing truly useful things. There is way too much distraction around, too many buzzwords and trends to catch up with, too much money thrown at infrastructure security gear and too little time spent getting such gear to actually fit the people and contexts it is supposed to protect. Every freshman (or freshwoman, if that term exists) knows about the weakest link in information security. If we know it, and we’ve known it for long enough, then, by design, it should no longer be an issue.

It’s very easy to sit in a comfortable chair and criticise one’s peers or colleagues. That’s not my remit here; I have profound respect for anyone who’s doing something. It’s equally vain to just shrug it off and pretend that it can only happen to others. One day, you will eventually be the “other”. You feel for all those impacted by such a breach, although there are evidently far more important and painful issues in the world at the moment than this particular breach of confidentiality.

This short essay is just a reminder, perhaps to myself as well, that if certain undesired technology issues don’t appear to be going away, we may not be approaching them the right way. Granted, news reporting isn’t always up to scratch, but we regularly learn that some very simple practices could have prevented the issues that get reported.