DIY: Improve your home Wi-Fi performance and reliability.

If, like me, you eventually got annoyed by how choppy and frankly slow your home Wi-Fi can become, then this post may be for you.

Does your Wi-Fi feel much slower than you expect it to be? Is it typically slow at the times when your neighbours are also at home, just after work for example, or on weekends? Does it perform better when most of your neighbours are asleep? If you answered yes to some of these questions, you may be experiencing Wi-Fi channel overcrowding.

If channel overcrowding is the issue, here is a simple way to solve it:

  1. Identify a channel that is less busy, or ideally not used at all
  2. Configure your router to use this free channel
  3. Restart it. Et voila.
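The "identify a less busy channel" step can be sketched in a few lines of code. The snippet below is only an illustration, not a real scanner: it assumes you have copied a list of (network name, channel) pairs out of a scanner tool, and it simply picks, among the channels your router supports, the one with the fewest networks on it. All the network names and channel numbers are made up.

```python
from collections import Counter

def least_crowded_channel(scan_results, candidate_channels):
    """Return the candidate channel with the fewest networks on it.

    scan_results: list of (ssid, channel) pairs, e.g. copied from a
    Wi-Fi scanner app. candidate_channels: channels your router supports.
    """
    # Count how many networks sit on each channel; unseen channels count as 0.
    load = Counter(channel for _ssid, channel in scan_results)
    return min(candidate_channels, key=lambda ch: load[ch])

# Hypothetical scan: three neighbours crowd channel 6.
scan = [("HomeNet", 1), ("Neighbour-1", 6), ("Neighbour-2", 6),
        ("Neighbour-3", 6), ("CoffeeShop", 11)]
print(least_crowded_channel(scan, [1, 6, 11]))  # → 1
```

In practice a good scanner app also shows signal strength, which matters as much as the raw count: a single strong neighbour can be worse than three faint ones.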

Here is what a crowded channel looks like:

[Screenshot: Wi-Fi network channels]


In the picture, you see a lot of coloured areas between channels 1 and 8 (the numbers at the bottom of the picture are Wi-Fi channel numbers). If your router’s name shows up in this crowded area, you are likely to have poor reception. The solution is to move to a quieter channel. In the picture, you can see that the channels above 108 are totally free. Look through your Wi-Fi router’s documentation for the word “channel” and find out how to change it. If your router is not able to use the desired channel, it may simply be too old. And if all the channels in your area are busy, then you are out of luck, as far as this tip is concerned.


In my example, I am using a Mac, and I found a program on the App Store called WiFi Explorer. The picture in this post is a screenshot from this app. If you are using a PC, I am sure there is an equivalent available somewhere – be careful though, free software from an obscure or unchecked source may be malware. If this kind of tool isn’t something for you, then the good old trial-and-error method might be a way to do it. Just set your Wi-Fi router to another channel, restart it, and check how it performs. Repeat the process until you stumble upon a channel that works for you.



In residential areas, most of your neighbours also have Wi-Fi at home. More often than not, you may just be drowning out each other’s Wi-Fi networks with unwanted noise. Think of it this way: try to have a cosy conversation with someone in the middle of a busy shopping street while the two of you are 15 metres apart. At least people have distinctive voices that can be picked out of a crowd; Wi-Fi routers on the same channel enjoy no such advantage.

People get Wi-Fi at home either by obtaining it directly from their cable Internet provider, or by buying a Wi-Fi router and installing it themselves on top of their wired Internet connection. It’s easy to do. The hope is that it’s going to be super fast, as your provider has relentlessly been touting. At some point, reality slowly starts to sink in: your Internet connection is nowhere near as fast as you were told. Then follows annoyance, frustration, a little bit of guilty denial, and ultimately resignation to accept the status quo – a sort of Stockholm syndrome.

A common cause of sluggish and fluctuating Wi-Fi network performance is the overcrowding of the allotted channels. Since most neighbours are likely to get Wi-Fi from the same providers, they all get the same routers with the exact same default settings. This means that the routers will be constantly interfering with each other, causing noise that slows down and sometimes interrupts your network.


Although I am talking about possible Wi-Fi interference here, it is by far not the only reason behind a sluggish or unreliable home network. Another typical culprit is deception from cable providers: it is well documented that residential Internet providers frequently throttle their users’ bandwidth, and it’s easy to find out. And naturally, when lots of people are on the Internet at once, for example following a big event, networks can become slower for everybody for a while. It’s all more complicated than we often make of it. This is just a tip that may help if you are lucky enough to meet the conditions I mentioned.

Software programmers are not blue-collar workers

Like many skilled workers, software developers are smart, creative people. Leading or managing such people requires finding a way to guide, support, and empower them, while letting them do their job. This sounds easy, but I have seen many organisations and individuals fail at it, leading to the very issues they were trying to prevent. The recurring mistake people make is to treat skilled workers in the same manner they would blue-collar workers.

I have built and managed technical teams throughout my career, and I’ve always been aware of the need to balance communication with smart and skilled folks like software developers. I continuously practise software development myself. By staying hands-on on both sides of the equation, I’ve kept tabs on the challenges involved. It’s one thing to be programming, but leading and managing developers is an entirely different game.

Instructions can be crippling

Some believe in providing extremely detailed instructions to developers as a way to manage work. That is misguided for a number of reasons. Every software programming endeavour is as unique as can be. Micro-decisions need to be made at every step of the way, and the conditions that prevailed in a prior project will have been largely superseded by new ones. However detailed, instructions will leave gaps that developers will run into. As such speed bumps are encountered, people want them removed swiftly, and they feel better if they can do that themselves. If developers are not properly empowered, such gaps will lead to friction, could cause tension, and result in defensive behaviours. While well intended, detailed instructions are not a very effective instrument for managing software development work.

If anything goes, nothing gets done

The other extreme to too much instruction is having none. Without any guidance, anything goes, and such software development will lead nowhere. For software development to succeed, all those involved should be able to describe, in a consistent and coherent manner, what the aim is and what the final product should be. Software developers don’t work in isolation from other units in an organisation; they need to know what is intended and what is expected of them. An optimal set of guidance accepted across teams is a necessity, not a luxury. Working out exactly what such guidance is, and how much of it is needed, is part of the job that leadership must facilitate.

Bean counting may be useless

Some painstakingly track developer hours, lines of code, and other misleading metrics, in the firm belief that that is the way to manage development resources. For work that doesn’t require thinking or coming up with solutions, typically repetitive tasks, that may yield some results. In software development projects it is a fruitless endeavour: developer productivity does not lie in any of those things. Bean counting is only useful if used anonymously as a learning tool, and perhaps as supporting documentation in the event of a dispute over resource usage. That last case is typically what software consulting bureaus worry about; unfortunately, they then put so much emphasis on it that it becomes a sword of Damocles over people’s heads. Finding the metrics that matter is a skill.

Perfectly codified work is demotivating

If software development work could be codified to such an extent that it could be flawlessly executed without any thinking, exploring, or personal problem solving, then nobody would ever want to do it. It would be the most boring job ever. It makes people feel insecure, a feeling that could be summed up in an imaginary quote:

If I am not bringing anything special to this work, I can be easily replaced by someone else, or a robot! I don’t like that. I want to be perceived as being special to this project, I reject any shoddy instruction that would make my job look plain and simple.

This is also something that team managers and leaders should be careful about. When providing guidance turns into hand-holding, people stop thinking for themselves and the group may walk blindly straight into trouble. And when trouble strikes, finger pointing is often what follows. Besides, people learn when they can get themselves stuck and work their way through hurdles. When the conditions are right, the outcome will almost always exceed expectations.

Architecture: just enough guidance

Writing software is inherently a process in which trade-offs and compromises are made continually and iteratively till the end. It is therefore essential that all parties stay as close as possible together so that they can weigh decisions when it matters and actively take part in the game. In my experience, what works well is to provide just enough guidance to developers so as to establish appropriate boundaries of play. When that is well understood, leadership must then work to establish metrics that matter, and closely manage such metrics for the benefit of all those involved.

Treating software developers as blue-collar workers to be told “here, just go and write software that does this and come back when it’s done” is fatally flawed. People who typically make such mistakes are:

  • project managers, who are clearly making assumptions that only they espouse and neglecting the basics of creative work process
  • architects, who believe that developers are less capable workers who just need to do as told and follow the blueprints
  • technical managers, who have the impression that managing people is about shouting orders, being perceived as having authority
  • people enamoured with processes, tools and frameworks, oblivious of the purely human and social aspects of software development

In my experience, the root cause of such mistakes is the relics of obsolete management practices that favoured hierarchy over the social network. Note that my use of the term social network has absolutely nothing to do with the multitude of websites that hijacked this vocabulary. By social network, I am referring to situations where a group of humans is engaged in making something happen together: it is social because of the human condition, and it’s a network because of the interdependencies.

This subject is vast and deserves a much longer text than this one. In this post I’ve only touched upon a few of the most visible and recurring aspects; I’ve stopped short of discussing issues such as applied learning, methodologies, or tools.

If you recognise some of the shortcomings listed above in your own context, and if you are in a position to influence change, I suggest that you at least initiate a dialogue about the way of working with your fellow workers.

Cyber security white elephant. You are doing it wrong.

The top cause of information security weakness is terrible user experience design. The second cause of information security vulnerability is the fallacy of information security. Everything else flows from there.

It happened again: a high-profile IT system was hacked, and the wrong information was made available to the wrong people in the wrong places at the wrong time. This is the opposite of what information security infrastructure is meant to achieve. Just search the Internet for “Sony hack”; one of the top hits is an article from 2011 titled Sony Hacked Again; 25 Million Entertainment Users’ Info at Risk.

This type of headline news always begs some questions:

Don’t we know that any information system can eventually be breached given the right amount of resources (where resources include any amount and combination of motivation, hardware, people, skills, money, time, etc.)?

Is it really so that deep-pocketed companies cannot afford to protect their valuable resources a little harder than they appear to be doing?

Do we really believe that hackers wait to be told about potential vulnerabilities before they would attempt to break in? Can we possibly be that naive?

This has happened many times before to many large companies. Realistically, it will continue to happen. A system designed by people will eventually be defeated by another system designed by people. Whether the same would hold true for machine-designed systems versus people-designed systems is an open question. Once machines can design themselves totally independently of any human action, perhaps then information security will take on a new shape and life. But that might just be what Professor Stephen Hawking was on about recently?

I suggest that we stop pretending that we can make totally secure systems. We have proven that we can’t. If, however, we accept the fallibility of our designs and take proactive steps to prepare, practise drills, and fine-tune failure remediation strategies, then we can start getting somewhere. This is not news either; people have been doing it for millennia in various capacities and organisations, war games and military drills being obvious examples. We couldn’t be thinking that IT is exempt from such proven practices, could we?

We may too often be busy keeping up appearances rather than doing truly useful things. There is way too much distraction around: too many buzzwords and trends to catch up with, too much money thrown at infrastructure security gear, and too little time spent getting such gear to actually fit the people and contexts it is supposed to protect. Every freshman (freshwoman, if that term exists) knows about the weakest link in information security. If we know it, and we’ve known it for long enough, then, by design, it should no longer be an issue.

It’s very easy to sit in a comfortable chair and criticise one’s peers or colleagues. That’s not my remit here; I have profound respect for anyone who’s doing something. It’s equally vain to just shrug it off and pretend that it can only happen to others. One day, you will eventually be the “other”. One feels for all those impacted by such a breach, although there are evidently far more important and painful issues in the world at the moment than this particular breach of confidentiality.

This short essay is just a reminder, perhaps to myself as well, that if certain undesired technology issues don’t appear to be going away, we may not be approaching them the right way. Granted, the news reporting isn’t always up to scratch, but we do regularly learn that some very simple practices could have prevented the issues that get reported.


Great New Yorker article ‘The Group That Rules the Web’

This is a comprehensive overview of the history, organisation, and processes around web technology standardisation. By some coincidence, in my last blog post I mentioned web technology without going into any details. This article is a proper journalistic piece on web technology. It’s a good read for anyone interested in the subject.

New Yorker article: The Group That Rules the Web.

Trying to oppose Open Web to Native App Technologies is so misguided that it beggars belief

People who ought to know better spend valuable energy dissing native app technologies. I ignored the fracas for a long time; finally I thought I’d pen a couple of simple thoughts. I got motivated to do this upon seeing Roy Fielding’s tweets. I also read @daringfireball’s recent blog post explaining, in his usual articulate way, how he sees Apple in the whole picture. Here, I just want to look at a couple of definitions, pure and simple.

To claim that open web technologies would be the only safe bet is akin to saying that all you would ever need is a hammer. Good luck with that if you never come across any nails.

I think both are very good and very useful in their own right; one isn’t going to push the other into irrelevance anytime soon, because they serve different use cases. The signs actually are that the web addressing model, by way of URIs, might be challenged soon once Internet-connected devices start to proliferate (as they’ve been hyped to do for some time now). So I actually think that a safer bet would be Internet technologies; the Web could increasingly be pushed into a niche. But OK, I didn’t actually intend to get into the fray.

Incidentally, I only see claims that native platform apps are some kind of conspiracy to lock users in, while the open web would apparently be the benevolent choice. I am skeptical in the face of such claims; Mother Teresa wasn’t a web advocate, for example. I remember that Apple, who triggered you-know-what, initially only wanted people to write HTML apps for the iPhone when it launched. It was only when they got pressured by developers that they finally released a native SDK. That’s a terrible showing for a would-be mal-intended user-locker.

There is no such thing as an open web versus anything else. There is just the web, and then there is an ever-growing generation of native platform apps that also take advantage of Internet technologies. That’s it. Trying to oppose those two things is just rubbish, plainly put. If something is a web technology and actually adheres to the web definition, it can only be open. A closed web technology would be an oxymoron; the definition doesn’t hold. If something is built using Internet technology, it may borrow web technology for some purposes, but it is not part of the web per se.

In the instances where web technology is the best fit, it will continue to be that way and get better. Conversely, in the cases where native platform apps work best, those will continue to get better and may use the Internet. There is a finite number of web technologies, bound by the standards and specifications implemented. But there is an infinite number of native app technologies, since implementers can write anything they like and get it translated to machine code following any protocol they may devise.

The Web doesn’t equate to the Internet. There are open and closed Internet Technologies, but there isn’t such a thing for the Web. The Internet contains the Web, not the other way around.

At the outset, there is a platform battle going on in which parties are vying for users’ depleted attention. In such battles, every party attracting people to its platform is self-serving and can make as much of a moral claim as any other. The only exception would be those set to gain nothing from getting their platform adopted. There aren’t any of those around.

My observation of the ongoing discussions so far is simple. Some individuals grew increasingly insecure about having missed the boat on native platform apps, whatever the reason, including their own choices. Some other individuals, a different group, burned their fingers trying to follow some hypes, learned from that, and turned to native platform apps. The two groups started having a go at each other. That got all the rousing started; everyone lost their cool and indulged in mud slinging. But there is no point to any of this.

If you are building software intended to solve a category of problems, then the most important technology selection criteria you should consider is ‘fit for purpose’. Next to that, if the problem is user bound, then you want to find the most user friendly way of achieving your aim. If you do this correctly, you will invariably pick what serves best the most valuable use cases of your intended audience. Sometimes this will work well with web technologies, sometimes it won’t, and other times you will mix and match as appropriate.

I don’t see web technologies being opposed to native app platforms at all. Whatever developers find more attractive and profitable will eventually prevail, and fitness for the use case is the only metric that truly matters. It is understandable that any vendor should vie for relevance. That’s what’s at stake, and that is important to everyone. It’s only once people and organisations face up to this cold fact and start talking genuinely that they will begin to make some progress.


Marrying Technology with Liberal Arts, Part II

In an earlier post, I started exploring this notion of marrying technology and liberal arts. If you follow information technology closely, you certainly know who became famous for making such a claim. Even if you do, I invite you to bear with me for a moment and join me in a journey of interpreting and understanding what could lie behind such an idea. It’s a fascinating subject.

This is not about defining anything new, let alone appropriating someone else’s thoughts. This is just an exploration of the notion, to get an idea of the motivations, challenges, and opportunities that such a view would generate.

In the last post, I mentioned that there were challenges and pitfalls when attempting to marry technology with liberal arts. Let’s have a look at some of these challenges and pitfalls.

Let’s imagine that the journey involves two sides, two movements, starting at opposing ends and moving towards each other, converging towards an imaginary centre point. On one side of this journey are the Technologists. On the other side are the Liberal Artists. The imaginary point of convergence is User Needs and Wants; that’s what both sides are striving for. Herein lies an important first chasm:

  • The products generated by Liberal Artists want to occupy our minds and souls, capture our imagination, give us comfort and a feeling of security, entertain our fancies, and give us food for thought. Of essence here are issues such as aesthetics, form, feeling, comfort, feeling secure or insecure, want, etc. Liberal Artists want to make us feel and want things, trigger our imagination, provoke our thoughts. Liberal Artists might not necessarily concern themselves with practicality – not to suggest that they never do, because clearly whole legions of them do, but it might be a lower-priority concern.
  • The products generated by Technologists want to help us carry out some essential activities. The premise here is that something needs to be done, something that is perhaps essential, something that is unavoidable. The technologist has done her/his job if such an activity can be carried out with the tools created and the techniques devised; considerations such as aesthetics and friendliness might come at a later stage, if at all.

By virtue of starting at different places, Technologists and Liberal Artists have different contexts, different sets of values, and different views of the world – not necessarily completely alien to one another, but definitely having their minds occupied in completely different ways. They face different sorts of challenges. Because we are shaped by our environments, we can grow to become utterly different people. Technologists and Liberal Artists often do.

Liberal Artists have their own activities, attributes, and aspirations. In no particular order, and by no stretch of the imagination an exhaustive list:

  • Crafting is the primary activity; the outcome can be personal, intimate, often some form of impersonation or expression of the liberal artist.
  • To be loved, to be adopted and to entertain the patrons.
  • To be perceived to have good aesthetics, aesthetic in this sense could come in the form of something that is pretty, beautiful. Alternatively, this could be something very ugly, repulsive, contrary to accepted beliefs and wisdom. The aesthetics may manifest itself in the look, the feel, and the smell.
  • To make us feel good, or different if not somewhat superior, or otherwise to convey distress, pain, anger, or other high emotions, especially when clothed in artistic expression

Technologists also have their activities, attributes, and aspirations. Again not in any particular order, certainly not exhaustive either:

  • To be perceived to be effective and efficient.
  • Productivity is an important driver, this is more about Taylorism, automation, continuously seeking to make things faster and cheaper.
  • To be making durable products, to be providing effective services
  • To get a job done in an economical way.
  • Attributes such as fast, powerful, high performing, are the typical claims that are made.

It is not necessary to go any deeper before one starts to see some of the challenges that both sides face: the areas of strength for one side automatically represent the perceived or real weak points of the other. This is trivial. People whose muscle memory is sharpened by one kind of activity tend to do poorly when facing an opposing kind of activity, and vice versa.

Liberal Artists are not expected to know much about efficiency, effectiveness, or economy. Conversely, Technologists might not be taken seriously if they start dwelling on aesthetics. Even if it were aspirational for one side to claim the value attributes of the other, they would likely face an uncertain journey of reconversion, adaptation, and reinvention. At an individual level this is hard at best; at an organisational level such an aspirational journey could quickly become daunting, as you have legions of people and habits to convert into totally alien ones.

Liberal Artists and Technologists sometimes compete for the same resources and spaces, but most of the time they do not. In fact, the two sides address complementary wants and needs; they are frequently found collaborating rather than competing. For a wide variety of their activities, Technologists and Liberal Artists rely on each other; one can be found making tools that the other puts to use.

If Technologists and Liberal Artists are collaborating to address user needs, aren’t they already somehow “married” then? Aren’t they solving different but complementary problems? Does it make sense to talk about bringing them closer together?