Remove unnecessary layers from the stack to simplify a Java solution

When writing new software, it is natural to simply follow the beaten path. People pick up what they are already used to, or what they are told to use, and then get going. There may or may not be proper guidance for choosing the technology, but there is invariably a safe-bet option that ends up being chosen. This isn't wrong per se. What is wrong is when no alternatives were ever considered, even though there was a chance to do so. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high-performance, reliable Java programs. The fact that a lot of people keep doing this is no good justification for adopting the practice.

An item in my news feed this morning triggered the writing of this post. It read shadow-pgsql: PostgreSQL without JDBC, a Clojure project on GitHub. Indeed, “without JDBC”. That's it. This is particularly pertinent in the Java software world, where a flurry of popular initiatives is driving down the adoption of once-revered vendor solutions. Some of the arguments I make here apply to other software and programming environments as well, not just Java.

As you start writing a Java application destined to be deployed as part of an Internet-enabled solution, it is important to carefully consider the requirements before selecting the technology to build your solution with. Every day, a new tool pops up promising to make it easier and faster to start writing software. People lose sight of the fact that getting started isn't the hard part: keeping the software running and growing, and adapting to change, all at a reasonable cost, is by far the hardest part. And to boot, once something is deployed into production it instantly becomes legacy that needs to be looked after. So don't be lured by “get started in 10 minutes” marketing lines.

I would argue that not knowing the expected characteristics of your target solution environment is a strong indication that you should choose as little technology as possible. There are lots of options nowadays, so you would expect people to always make a proper trade-off analysis before settling on any particular stack. A case in point is database access. If you are not forced to target an AppServer, then don't use one. If you have access to a component with native support for your chosen database platform, it might be better to skip the ORMs and JDBC tools altogether and use the native database access library. This simplifies the implementation and reduces the deployment requirements and complexity.
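To make that concrete, here is a minimal sketch of querying PostgreSQL without JDBC or an ORM, using the Vert.x reactive PostgreSQL client (io.vertx:vertx-pg-client, API as of Vert.x 4.x) as one example of a native, non-JDBC driver. The connection details and the users table are invented for the illustration:

```java
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;
import io.vertx.sqlclient.Row;

public class NoJdbcExample {
    public static void main(String[] args) {
        // Connection details are placeholders for this sketch
        PgConnectOptions connectOptions = new PgConnectOptions()
                .setHost("localhost")
                .setPort(5432)
                .setDatabase("appdb")
                .setUser("app")
                .setPassword("secret");

        // A small pool; no JDBC driver, no ORM, no container-managed DataSource
        PgPool client = PgPool.pool(connectOptions, new PoolOptions().setMaxSize(4));

        client.query("SELECT id, name FROM users")
              .execute()
              .onSuccess(rows -> {
                  for (Row row : rows) {
                      System.out.println(row.getLong("id") + " " + row.getString("name"));
                  }
              })
              .onFailure(Throwable::printStackTrace)
              .onComplete(done -> client.close());
    }
}
```

The point is not this particular library, but the shape of the stack: the application talks to the database through a single native driver, with no container-managed DataSource and no mapping layer in between.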

It used to be that, if you set out to write a server-side Java program, certain things were a given. It was going to be deployed on an AppServer. The AppServers promoted the concept of a container in which your application is shielded from its environment. The container took charge of manageability concerns such as security, authentication, database access, and so on. To use such facilities, you wrote your code against standard interfaces and libraries such as Servlets, JDBC or JavaBeans. If the company could afford to, it would opt for an expensive product from a large vendor, for example IBM WebSphere or BEA WebLogic (nowadays Oracle). And if the organisation were more cost-conscious, a bit of a risk taker or an early adopter, it would choose a free open source solution such as Apache Tomcat or JBoss. There were always valid reasons for deploying AppServers: they were proven, tested and hardened, properly managed and supported in the enterprise. Such enterprise attributes were perhaps not the most attractive for pushing out new business solutions or tactical marketing services. That was then, the Java AppServer world.
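To make the container model concrete, here is a rough sketch of the kind of code that era produced: a servlet written against the standard APIs, with the connection pool owned by the container and handed to the application through a JNDI lookup (the resource name and query are invented for the example):

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public class CustomerCountServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        try {
            // The container owns the connection pool; the application only looks it up
            DataSource ds = (DataSource) new InitialContext()
                    .lookup("java:comp/env/jdbc/CustomerDB");
            try (Connection con = ds.getConnection();
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM customers")) {
                rs.next();
                resp.getWriter().println("customers: " + rs.getLong(1));
            }
        } catch (NamingException | SQLException e) {
            throw new ServletException(e);
        }
    }
}
```

Everything around this class (the pool configuration, the security realm, the deployment descriptor) lived in the AppServer, not in the application.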

As the demand for Internet-scale solutions started to rise, the AppServer model showed cracks that many innovative companies didn't hesitate to tackle. These companies set AppServers aside, went back to the drawing board and rebuilt many elements and layers. The technology stack had to be reinvented.

Over the last ten years, I would say, people started writing leaner Java applications for Internet services, mostly doing away with AppServers. This allowed the deployment of more nimble solutions, reduced infrastructure costs and increased the agility of organisations. Phasing out the established AppServers came at a cost, however: one now needed to provision for all the things that the AppServers were good at, such as deployment, instrumentation and security. As it happens, we ended up reinventing the world (not the wheel, as the saying would have it, but a certain well-known world). What used to be an expensive and heavy vendor solution stack morphed into a myriad of mostly free and open source elements, hacked-together parts and parcels, to be composed into lean and agile application stacks. This new, reinvented technology stack typically has a trendy name; Netflix OSS is one that sells rather well. That's how things go in the technology world, in leaps and bounds, with lots of marketing and hype, but hidden within the noise lies some true value.
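For contrast with the container model above, here is a minimal sketch of the leaner style: a self-contained service that embeds its own HTTP endpoint, using the JDK's built-in com.sun.net.httpserver purely as an illustration (the port and path are arbitrary), started from a plain main method with nothing to deploy into:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LeanService {
    public static void main(String[] args) throws Exception {
        // The application owns its own HTTP endpoint: no AppServer, no WAR, no deployment descriptor
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        System.out.println("Listening on http://localhost:8080/health");
    }
}
```

The same idea applies with an embedded Jetty, Undertow or Netty: the server becomes a library inside the application rather than a container around it.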

Arguably, a lot of new Java-based solutions are still being implemented and deployed with stacks of unnecessary layers. That is a shame, because there is an unprecedented amount and variety of options for implementing and deploying good solutions in Java, or targeting the JVM. Understanding the characteristics of the envisioned solution is one aspect that keeps driving Java and JVM-based languages and technologies. To repeat: AppServers are not necessary for deploying scalable Java solutions, and JDBC and ORMs are not necessary for building high-performance, reliable Java programs. The fact that a lot of people keep doing this is no good justification for adopting the practice. The move towards smaller and leaner specialised applications is a good one, because it breaks down and compartmentalises complexity. The complexity that is removed from the application stack is diluted and migrated over to the network of interlinked applications. Not all problems automagically vanish, but some classes of problems disappear or are significantly reduced. The same principle should motivate challenging and removing as many layers as is reasonable from individual application stacks. I resist the temptation to mention any specific buzzword here; there are plenty of good examples.

When writing new software, it is natural to simply follow the beaten path: pick up what you are already used to, or what you are told to use, settle on the safe-bet option and get going. As I said at the start, this isn't wrong per se; what is wrong is when no alternatives were ever considered when there was a chance to do so. Keeping your options open will always prove more profitable than just going with the flow. One characteristic of lagging incumbents is that they often just went with the flow, choosing solutions deemed trusted and trustworthy without regard for their own reality, and usually not challenging their technology stack enough.


DIY: Improve your home Wi-Fi performance and reliability.


If, like me, you eventually got annoyed by how choppy and frankly slow your home Wi-Fi can become, then this post may be for you.

Does your Wi-Fi feel much slower than you expect it to be? Is it typically slow at times when your neighbours are also at home, just after work for example, or on weekends? Does it perform better when most of your neighbours are asleep? If you answered yes to some of these questions, you may be experiencing Wi-Fi channel overcrowding.

If channel overcrowding is the issue, here is a simple way to solve it:

  1. Identify a channel that is less busy, or not busy at all (one way to automate this step is sketched right after this list)
  2. Configure your router to use this quieter channel
  3. Restart it. Et voilà.
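If you like to tinker, step 1 can even be scripted. The rough Java sketch below shells out to macOS's airport scan utility (its usual, undocumented path is assumed, and the output format varies between macOS versions, so treat the parsing as a starting point) and counts how many networks sit on each channel:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ChannelCount {
    // Usual location of the airport scan utility on macOS (an assumption; it may move or disappear)
    private static final String AIRPORT =
            "/System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport";

    public static void main(String[] args) throws Exception {
        Process scan = new ProcessBuilder(AIRPORT, "-s").redirectErrorStream(true).start();

        // Very rough parsing: look for an "<RSSI> <channel>" pair such as "-67 11" or "-55 44,+1";
        // adjust the pattern if your version of the tool prints something different
        Pattern pair = Pattern.compile("\\s(-\\d{2,3})\\s+(\\d{1,3})(?:,[+-]\\d)?\\s");
        Map<Integer, Integer> networksPerChannel = new TreeMap<>();

        try (BufferedReader reader = new BufferedReader(new InputStreamReader(scan.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                Matcher m = pair.matcher(line);
                if (m.find()) {
                    networksPerChannel.merge(Integer.parseInt(m.group(2)), 1, Integer::sum);
                }
            }
        }

        // Channels with a low count, or missing from the output entirely, are the quieter candidates
        networksPerChannel.forEach((channel, count) ->
                System.out.println("channel " + channel + ": " + count + " network(s)"));
    }
}
```

Channels that show a low count, or do not appear at all, are the ones to try; the WiFi Explorer app mentioned in the Tools section below gives you the same information graphically.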

Here is what a crowded channel looks like:

[Image: Wi-Fi network channels]


In the picture, you see a lot of coloured areas between 1 and 8 (the numbers at the bottom of the picture); these are Wi-Fi channel numbers. If your router's name shows up in this area, you are likely to have poor reception. The solution is to move to a quieter channel. In the picture, you can see that channels above 108 are totally free. Check your Wi-Fi router's documentation, look for the word “channel” and find out how to change it. If your router is not able to use the desired channel, it may simply be too old, for example. If you find that all the channels in your area are too busy, then you are out of luck as far as this tip is concerned.

Tools

In my example, I am using a Mac, and I found a program in the Mac App Store called WiFi Explorer. The picture in this post is a screenshot from this app. If you are using a PC, I am sure there is an equivalent available somewhere – be careful though, free software from an obscure or unchecked source may be malware. If this kind of tool isn't for you, then the good old trial-and-error method might be the way to do it. Just set your Wi-Fi router to another channel, restart it, and check how it performs. Repeat the process until you stumble onto a channel that works for you.


Background

In residential areas, most of your neighbours also have Wi-Fi at home. More often than not, you may simply be drowning out each other's Wi-Fi networks with unwanted noise. Think of it this way: try to have a cozy conversation with someone in the middle of a busy shopping street while the two of you stand 15 meters apart. What's more, people at least have distinct voices that can be told apart; the situation is different with Wi-Fi routers sharing a channel.

People get Wi-Fi at home either by obtaining it directly from their cable Internet provider, or by buying a Wi-Fi router and installing it themselves on top of their wired Internet connection. It's easy to do. The hope is that it's going to be super fast, as your provider has relentlessly been touting. At some point, reality slowly starts to sink in: your Internet connection is nowhere near as fast as you were told. Then follow annoyance, frustration, a little bit of guilty denial and ultimately resignation to accept the status quo – a sort of Stockholm syndrome.

A common cause of sluggish and fluctuating Wi-Fi performance is overcrowding of the allotted channels. Since most neighbours are likely to get Wi-Fi from the same providers, they all get the same routers with the exact same default settings. This means that the routers will constantly interfere with one another, causing noise that slows down and sometimes interrupts your network.

Aside

Although I am talking about possible Wi-Fi interference here, it is by far not the only reason behind a sluggish or unreliable home network. Another typical culprit is deception from cable providers: it is well documented that residential Internet providers frequently throttle their users' bandwidth, and it is easy to find out. And naturally, when lots of people are on the Internet at the same time, for example following an event, networks can become slower for everybody for a while. It is all more complicated than we often make of it. This is just a tip that may help if you are lucky enough to meet the conditions I mentioned.