Qubes OS Project, a secure desktop computing platform

Given that the majority of security annoyances stem from antiquated design decisions, and considering the progress made in computing and the affordability of computing power, this is probably how operating systems should now be built and delivered.

Qubes is a security-oriented, open-source operating system for personal computers.

Source: Qubes OS Project

An open source port scanner that is helping some bad guys

An open source port scanning tool, masscan, is being used to repeatedly attack my web sites. There are certainly lots of such tools widely available that even criminals with no skills can get their hands on and use to randomly try to break into sites. IT security is a never-ending quest that is best left to dedicated professionals.

Port scanning is one of those annoying activities the bad guys use when trying to find back doors into systems. The principle is simple: find out which ports a system has left open and, if you recognise any, try a dictionary-like attack on those ports. All it takes is a simple bot.
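The principle can be sketched in a few lines of Python using a plain TCP connect test. This is only an illustration of the technique, not the masscan tool itself; run anything like it only against hosts you own.

```python
import socket

def is_port_open(host, port, timeout=0.5):
    """Attempt a plain TCP connect; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Return the subset of `ports` that accept a TCP connection."""
    return [p for p in ports if is_port_open(host, p)]
```

A bot needs nothing more than this loop plus a list of target addresses, which is exactly why the barrier to entry is so low.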

Over the last few months, I have noticed multiple port scan attacks on my web sites from a user agent “masscan/1.0”. I dug a little and found this to be coming from an open source tool; the project is on GitHub:

robertdavidgraham/masscan

So, it seems that some people have found this tool and are now randomly targeting web sites with it. To what aim, I can’t tell for sure. It is certainly reprehensible to be poking at someone’s doors without their consent; everybody knows this.

I’ve also noticed lots of attempts to run PHP scripts; they seem to be looking for phpMyAdmin. Fortunately I don’t run anything with PHP. If I did, I would harden it significantly and have it permanently monitored for possible attacks.

Most of the attacks on my web sites originate from, in this order: China, Ukraine, Russia, Poland, Romania, and occasionally, the US.

You don’t need anything sophisticated to detect these kinds of attacks; your web server log is an obvious place to look. Putting a firewall in place is a no-brainer: just block everything except normal HTTP and HTTPS web traffic. You can also invest more in tools, but then the question is whether you’re not better off just hosting at a well-known provider.
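As a sketch of the log-checking approach, here is a small Python routine that counts requests by user agent whenever the agent string contains a given needle. It assumes the common combined log format, where the user agent is the last quoted field of each line:

```python
import re
from collections import Counter

# Matches the final quoted field of a combined-format access log line,
# which is the user agent string.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def suspicious_hits(log_lines, needle="masscan"):
    """Count requests whose user agent contains `needle` (case-insensitive)."""
    hits = Counter()
    for line in log_lines:
        match = UA_PATTERN.search(line)
        if match and needle.lower() in match.group(1).lower():
            hits[match.group(1)] += 1
    return hits
```

Feed it the lines of an access log and it will surface agents such as “masscan/1.0” immediately; a cron job mailing the result once a day is already a workable early-warning system.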

This is just one instance, among infinitely many, of even the dumbest criminals getting their hands on tools to try and break into systems. Cloud hosting is getting cheaper all the time; soon it will cost nothing to host some program that can wander about the Internet unfettered. It is getting exponentially easier to attack web sites, while keeping them secure is getting an order of magnitude harder.

I do see a glimmer of hope: container technologies provide a perfect throwaway computing experience. Just start a container, keep it stateless, carry out some tasks and, when done, throw it away. Just like that. This may reduce the exposure in some cases, though it won’t be sustainable for providing an ongoing, long-running service.

IT security is a never-ending quest that is best left to dedicated professionals. I am just casually checking these web sites that I run. At the moment, I haven’t deployed any sensitive data on these sites. When I do, I will make sure they are properly hardened and manned, likely via a SaaS provider rather than spending my own time dealing with this.

Cyber security white elephant. You are doing it wrong.

The top cause of information security weakness is terrible user experience design. The second cause of information security vulnerability is the fallacy of information security. A system designed by people will eventually be defeated by another system designed by people. This short essay is just a reminder, perhaps to myself as well, that if certain undesired issues don’t appear to be going away, we may not be approaching them the right way.

The top cause of information security weakness is terrible user experience design. The second cause of information security vulnerability is the fallacy of information security. Everything else flows from there.

It happened again: a high-profile IT system was hacked and the wrong information was made available to the wrong people in the wrong places at the wrong time. This is the opposite of what information security infrastructure is meant to achieve. Just search the Internet for “Sony hack”; one of the top hits is an article from 2011 titled “Sony Hacked Again; 25 Million Entertainment Users’ Info at Risk”.

This type of headline news always begs some questions:

Don’t we know that any information system can eventually be breached given the right amount of resources (resources include any amount and combination of motivation, hardware, people, skills, money, time, etc.)?

Is it really so that deep-pocketed companies cannot afford to protect their valuable assets a little better than they appear to be doing?

Do we really believe that hackers wait to be told about potential vulnerabilities before they would attempt to break in? Can we possibly be that naive?

This has happened many times before to many large companies. Realistically, it will continue to happen. A system designed by people will eventually be defeated by another system designed by people. Whether the same would hold true for machine-designed systems versus people-designed systems is an open question. Once machines can design themselves totally independently of any human action, perhaps then information security will take on a new shape and life. But that might just be what Professor Stephen Hawking was warning about recently?

I suggest that we stop pretending that we can make totally secure systems. We have proven that we can’t. If, however, we accept the fallibility of our designs and take proactive action to prepare, practise drills and fine-tune failure remediation strategies, then we can start getting somewhere. This is not news either; people have been doing it for millennia in various capacities and organisations, with war games and military drills the obvious examples. We couldn’t be thinking that IT would be exempt from such proven practices, could we?

We may too often be busy keeping up appearances rather than doing truly useful things. There is way too much distraction around: too many buzzwords and trends to catch up with, too much money thrown at infrastructure security gear and too little time spent getting that gear to actually fit the people and contexts it is supposed to protect. Every freshman (freshwoman, if that term exists) knows about the weakest link in information security. If we know it, and we’ve known it for long enough, then, by design, it should no longer be an issue.

It’s very easy to sit in a comfortable chair and criticise one’s peers or colleagues. That’s not my remit here; I have profound respect for anyone who’s doing something. It’s equally vain to just shrug it off and pretend that it can only happen to others: one day, you will eventually be the “other”. You feel for all those impacted by such a breach, although there are evidently far more important and painful issues in the world at the moment than this particular breach of confidentiality.

This short essay is just a reminder, perhaps to myself as well, that if certain undesired technology issues don’t appear to be going away, we may not be approaching them the right way. Granted, news reporting isn’t always up to scratch, but we do regularly learn that some very simple practices could have prevented the issues that get reported.


Tails, privacy for anyone anywhere. Makes sense to me

I have often felt that virtualisation has the potential to deliver more secure computing than we have seen so far. The Tails project is a good move in that direction, and is in effect a great virtualised secure solution. I am encouraged to see this happening.

I wrote before in this blog that I felt security and privacy were underserved by current technology offerings. This project lays out a vision that is very close to how I imagined it, and it looks promising.


Virtualisation should have helped to reduce Internet security risks by now

Virtualisation could have helped (and still can help) reduce the security threats people face when using their computers on the Internet. I am not sure why the industry is not yet exploring this; at least, it does not appear to be doing so publicly.

For a while now I’ve held the view that virtualisation was (and still is) an effective way of reducing some of the Internet security threats people face all the time. Imagine that the most sensitive computer activities were completely sandboxed. For example, when you start internet banking, the browser would run in a sandbox that communicates only with your bank and, potentially, the hardware token in your possession; anything outside of that would simply stop working: no other network connections, and the sandboxed browser’s access to hardware completely isolated from the rest of your computer, except perhaps for printing. The sandboxed browser would support no plugins or extensions; its only features would be those of a dumbed-down banking terminal.

The protection could go as far as vendors creating special device memory regions that get automatically reserved and wiped for secure computing purposes, with no third-party programs allowed to touch them. Conversely, the banks would only accept terminals that had previously been registered, much the same way they issue hardware tokens to their clients. Such virtual machines would not be patched in the usual ineffective way; instead they could be updated less frequently, with each update coordinated by the VM issuers.
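The core of that sandbox idea is a deny-by-default network policy: the virtual machine consults an allowlist before opening any outbound connection. A minimal sketch in Python, with hypothetical bank hostnames made up for illustration:

```python
# Hypothetical per-task allowlist; in a real sandbox this would be baked
# into the VM image by the issuer (here, the bank), not user-editable.
BANKING_ALLOWLIST = {"www.examplebank.com", "token.examplebank.com"}

def connection_allowed(hostname, allowlist=BANKING_ALLOWLIST):
    """Deny by default: only hosts explicitly on the allowlist may be
    contacted; everything else simply stops working."""
    return hostname in allowlist
```

The point of the design is that the policy lives outside the browser, so a compromised page cannot talk to anything the VM issuer did not pre-approve.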

Something like this might not totally eliminate Internet security risks, but it could rid us of many of the most common threats in a very simple way. This is achievable with virtualisation and it should be cheap to realise.

We know that security and convenience are often at odds. By pushing out security patches all the time, software vendors are causing user fatigue; just look over the shoulder of every other user to see the number of updates pending their approval. The current security patching practice is clearly ineffective, and with BYOD gaining traction, the situation is likely to worsen. I think a radical new approach may be a better answer to the growing pain we are experiencing at the moment.

Recent Ruby on Rails SQL injection vulnerability: lack of developer awareness, type safety

The much-publicised Ruby on Rails SQL injection vulnerability is also down to a lack of developer awareness of secure coding practices. A type-safe programming language would have protected against this vulnerability too: an id is typically an auto-incremented database field, a number, so any attempt to pass a spurious SQL string to such a function would have been rejected by type-safe code. Ruby isn’t type safe.

The recently publicised Ruby on Rails SQL injection vulnerability really comes down to one thing: many developers are not aware that they need to harden their code. The article linked at the bottom of this post showcases both issues.

Lack of awareness of Rails secret token file

This file, as its name implies, is where the token used to further protect user sessions is configured. Inquisitive people would spot such a file quickly and ensure it is not exposed where it shouldn’t be. Apparently many are blissfully ignorant of this file and just push everything to GitHub.

The first rule of SQL injection 101

If you examine how SQL injection is achieved, it comes down to a very basic mistake that has been known forever: unchecked input parameters. That is where SQL injection can occur.

A security-aware developer writing this single line of code would wonder what could happen if the input parameter were manipulated, and whether that were even possible. That is the second kind of awareness that would have caught the issue.
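To make the mistake concrete, here is a sketch in Python with SQLite, chosen for brevity; it illustrates the principle rather than the actual Rails code. The unsafe version splices the input straight into the SQL text, while the safe version binds it as a parameter so the driver treats it strictly as a value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (id, name) VALUES (1, 'alice')")

def find_user_unsafe(user_id):
    # Vulnerable: the input becomes part of the SQL statement itself.
    return conn.execute(
        "SELECT name FROM users WHERE id = %s" % user_id).fetchall()

def find_user_safe(user_id):
    # Parameterised: the placeholder keeps the input out of the SQL text.
    return conn.execute(
        "SELECT name FROM users WHERE id = ?", (user_id,)).fetchall()
```

Passing the string `"0 OR 1=1"` to the unsafe version rewrites the query to match every row, while the parameterised version matches nothing, because the whole string is compared as a single value.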

A type-safe programming language would have protected against this vulnerability too. An id is typically an auto-incremented database field, a number. So any attempt to pass a spurious SQL string to such a function would have been rejected by type-safe code. Ruby isn’t type safe.
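What the type check buys you can be sketched as an explicit runtime validation; the function name here is made up for illustration. In a type-safe language the compiler enforces the equivalent constraint for free:

```python
def parse_id(raw):
    """Accept only a plain non-negative integer (or a string of digits),
    mimicking what a type-safe `id: Integer` signature would enforce."""
    if isinstance(raw, int) and not isinstance(raw, bool):
        return raw
    if isinstance(raw, str) and raw.isdigit():
        return int(raw)
    raise ValueError("invalid id: %r" % (raw,))
```

A spurious SQL string such as `"1 OR 1=1"` never reaches the query layer; it is rejected at the boundary, which is exactly the protection the paragraph above describes.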

Here is the blog that apparently uncovered this recent vulnerability: let me github that for you.