Programming concepts: who needs to know about Monads? Who doesn’t?

Programming language users don’t need to know about Monads. No, I won’t get into explaining what a Monad is or isn’t; instead I will focus on its potential usefulness as an angle on this hot topic.

I drafted this post in February 2015 and then forgot about it. I recently stumbled upon it again, so I thought I would publish it after some minor editing.

There is a clear chasm in the way people communicate thoughts and concepts within programming language communities. This chasm manifests itself in the multiple layers of conversation in those communities, each interacting at a different level. In the mix, some are discussing language design issues while others are just trying to get started with basic concepts.

Let me paraphrase Martin Odersky – I got the following rough definition from him, though I’ve lost the reference, so I can’t provide an exact quote, just the main idea:

There are three categories of people in most programming language communities: language designers, framework or toolkit designers, and language users.

These groups of people have different concerns; although they intermingle on social media, they seldom speak the same language. This becomes quite obvious when you read their tweets.

Recent years have seen a significant focus on functional programming; the cool kids, typically functional programming advocates, can’t rave enough about it. Monad is a word that comes up quite frequently in those conversations. Wow, it sounds so cool! Nobody wants to be left out, nobody wants to sound old-fashioned or inferior, so everyone thinks they too should be involved with this monad stuff. I think it’s misguided to just dive in without clear justification. Most regular developers really belong in the language users group; they need not bother about monads too soon.

A regular developer is someone whose main job is to write software aimed at end users. The work doesn’t result in programming artefacts for other software writers; instead, the software is produced for people who will just run it. If this is your main job, then you really don’t need to know about monads. If you’re feeling some kind of peer pressure, or a sense of being important, and that is what’s driving you into this monad business, then you are wasting your time.

Having got that out of the way, let me give a really short take on what Monads are useful for, the one definition for the non-initiated (sorry, this is you, the software non-librarian):

In essence, Monad is important because it gives us a principled way to wrap context information around a value, then propagate and evolve that context as the value evolves. Hence, it minimises coupling between the values and contexts while the presence of the Monad wrapper informs the reader of the context’s existence.

Excerpt from the book Programming Scala, by Dean Wampler & Alex Payne. Any typos or mistakes are mine.

This explanation is really good and easy to understand, no PhD in Math required. The word value might be puzzling if your programming language of choice doesn’t have such a concept; you could substitute the closest equivalent your language offers. Ultimately, this is about writing a more general solution, one that can be trusted for soundness and solidity because its architectural concepts are backed by math theories. The notion is popularised by functional programming theories, concepts, programming languages and tools. Monad is one powerful tool from that trove. The programming language, its libraries and the compiler combine to provide such power.
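To make that wrapping-of-context idea concrete, here is a minimal, hedged sketch in Java, the language used for the other code in this post. Optional is not the whole monad story, but its map and flatMap propagate a “the value may be absent” context in the spirit of the quote above; the class, lookup tables and names are all made up for illustration.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class OptionalSketch {

    // Hypothetical lookup tables standing in for real data sources.
    static final Map<String, String> USER_CITY = new HashMap<>();
    static final Map<String, String> CITY_ZIP = new HashMap<>();
    static {
        USER_CITY.put("ada", "London");
        CITY_ZIP.put("London", "EC1A");
    }

    // The "context" is possible absence: Optional wraps the value, and
    // map/flatMap evolve the value while carrying that context along.
    static Optional<String> zipForUser(String userId) {
        return Optional.ofNullable(USER_CITY.get(userId))
                .flatMap(city -> Optional.ofNullable(CITY_ZIP.get(city)));
    }

    public static void main(String[] args) {
        System.out.println(zipForUser("ada"));     // Optional[EC1A]
        System.out.println(zipForUser("nobody"));  // Optional.empty
    }
}

Nothing in the calling code checks for null at each step; the wrapper both signals the context to the reader and takes care of propagating it.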

The discipline emerged from academia; researchers working on formal proofs of algorithm correctness started it all. During my studies at university, we got introductory lectures on language theory and formal proofs, but never went very deep. At the time I thought it was really for people who wanted a career in academia; I wanted to make software for people. In recent years, functional programming picked up steam and is ever growing in popularity. I won’t dive further into the background here, there is plenty of material on that.

In the early stages, as your solution is only starting to emerge, it doesn’t make sense to try to reason in terms of monads or not. I think it best to simply write something that works and gives the user what they need. Only after that first goal is achieved would you take the time to contemplate what you wrote and look for opportunities to remove boilerplate, repetition, ambiguity, and so on.

As you engage in this type of exercise, that’s when you start to think: wouldn’t it be handy if a tool, or the compiler, allowed me to make my algorithms and recipes more general, without me having to invent something? That is the point where it is good to look at what your language, its compiler and the available libraries offer. Chances are, you could soon be using some monads without even knowing to call them so.
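As a small, hypothetical illustration of that progression, here is the same counting logic written twice in Java: once by hand, and once with the Stream pipeline the JDK already offers. Stream is not strictly a monad, but it is exactly the kind of ready-made, general tool the paragraph above points to; the data and method names are invented for the example.

import java.util.Arrays;
import java.util.List;

public class GeneralisingSketch {

    // Hand-rolled version: the intent is buried in loop plumbing.
    static long countLongNamesByHand(List<String> names) {
        long count = 0;
        for (String name : names) {
            if (name.length() > 5) {
                count++;
            }
        }
        return count;
    }

    // The same intent expressed with the pipeline the standard library provides.
    static long countLongNamesWithStreams(List<String> names) {
        return names.stream()
                .filter(name -> name.length() > 5)
                .count();
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Grace", "Barbara", "Margaret");
        System.out.println(countLongNamesByHand(names));      // 2
        System.out.println(countLongNamesWithStreams(names)); // 2
    }
}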

The more you progress in this journey, the more concise your code becomes: fewer components perform increasingly more work. The process helps you improve quality, reduce maintenance burden, and even save yourself valuable testing time. If your programming language is built for this, like Scala, Haskell, Idris, OCaml, and so on, then you reap far more rewards; in fact you would be expected to build on such concepts. These benefits can be discussed and reasoned about without getting lost in strange cabalistic terminology, cabalistic being how the math may sound to the uninitiated.

The process illustrated in the previous paragraph would probably be trivial for framework designers. Indeed, they need to achieve high levels of reusability without sacrificing performance and legibility. Folks engaged in domain specific language (DSL) design have similar aims. Math savvy framework designers would probably go about it formally; that’s where category theory comes into play. Unfortunately, math savvy people can behave like doctors, speaking in tongues that only specialists understand. And this is how, although well meaning, most people get lost when trying to follow conversations or reasoning involving such concepts.

On social media and blogs, software people who regularly talk about monads and various kinds of math infused concepts are either language designers or framework designers, or are perhaps aspiring to become one. They seem to be either demonstrating proficiency, or trying to informally engage people they acknowledge as peers.

That’s it. A programming language user doesn’t need to know about monads. It is useful if they do and can actually take advantage of it, but one can be productive with no knowledge of monads. To those wondering what a monad is and whether they need one, I would suggest checking Dean Wampler’s quote above as a way to assess their motivation. If that qualifies their quest, then it is worth reading many different code samples from multiple languages to find out how to take advantage of it. No need to get mystified.

An abundance paradox: so many opportunities to innovate, yet so little demonstration of it

In his number titled Rat Race, Bob Marley once sang the following lines,

In the abundance of water,
The fool is thirsty.

I think it could apply to the world we live in at the moment.

Many observers have written that the world is producing an unprecedented amount of media at an unprecedented rate. All that information is mostly freely available. Yet it has hardly ever been more difficult to make proper use of such manna. Technology is evolving at such a pace that the barrier to innovation is lowering by the day. Capabilities that would have sounded like science fiction just a few years back are now becoming available at near commodity levels.

It has never been easier or cheaper to publish digital media. At the same time, long established media publishers seem to be relinquishing publishing control to new players such as Facebook, Medium or Google. I won’t even start on horrible terms such as post-truth and similar misnomers; you get the impression they’re on about some World Wrestling Entertainment show, or maybe a soap opera. Quality seems to be missing just when it should have been easier than ever to publish truthful and newsworthy items.

It used to be very difficult and time consuming to design and build even modest web solutions; nowadays the task can be accomplished at a fraction of the cost and in comparatively no time at all. And yet, many businesses are struggling to digitise their processes, taking many months if not years to deploy decent solutions. We might not be learning fast enough, might have our eyes on the wrong priorities, be struggling to streamline our messages and get them across, be falling too quickly for fads, or be finding it difficult to separate wheat from chaff. The reasons are varied and multiple, but it is clear that technological prowess isn’t resulting in proportional, unquestionable progress in a lot of cases.

All this, to me, points to an increasing inability to innovate and deliver value when it should have been relatively easier and cheaper to do so. Why is this happening? How come we can’t seem to do better, despite getting access to ever better and cheaper tools and learning?

I just read an article about a new uncertainty that is going to hit businesses shipping user-facing software solutions built on the Java platform. Many who invested in building IP around user solutions will likely face large bills or tedious legal procedures. This will come as a shock to many who never considered the scenario.

Fluff, stuff and veneer

Products can be categorised into three broad sets, based on the proportion of fluff, stuff, and veneer in their structure. This can be seen in the picture below.

distribution of fluff, stuff, and veneer

What really makes a difference is stuff, that is, the true substance. Fluff is often added to help with margin. Veneer makes the product easier to sell, and may also help with margin. Users naturally look for type (a.) products, but that is often not what they get. Companies that offer type (a.) products usually have more affinity with value, have more empathy, and can learn to adjust to changing times. Companies offering type (b.) products also have decent chances of responding when the environment changes. However, those only dishing out type (c.) products will suffer most in times of change. The latter group builds on thin air and is promptly exposed; they may be prone to fads.

What about infrastructure?

If infrastructure is at the core of what a company sells, then it may well be stuff. Even then, there is value in working out how much fluff and veneer it may contain. In other cases, if infrastructure is determined to be a key part of the differentiator, then it should still be considered stuff. When that is also not the case, then infrastructure may well be part of the fluff and should be given a lot of scrutiny. A key consideration is thus: how do you determine what is fluff, stuff or veneer? On which part are you spending most of your resources: stuff, fluff, or veneer? What do those spending proportions look like? Which one have you become very good at? These are important questions to ask.

What’s fluff for one company may be stuff or veneer for another, and other combinations apply equally; infrastructure for one business may be veneer for another. It is quite likely that the proportions will differ significantly, even between companies serving the same market. Attempts to apply generally accepted recipes should focus on the formulas and the careful gathering of input, but definitely not on pre-cooked outputs that might be off in a different context.

Nimbleness

If a business strongly relies on information technology, an increasingly common case nowadays, it ought to carefully consider applying sharing economy principles. Essentially, the path to nimbleness is paved with pay-as-you-go operating models, requiring less and less ownership of infrastructure. The less infrastructure you own and operate, the less liability you incur, and the more agile your business becomes.

Roughly, the notion of the sharing economy that has taken hold in recent years sees people increasingly ditching ownership of fluff in favour of sharing or renting. Stop owning things that may be fluff or veneer; share them instead. Applied to the context of business computing, sharing amounts to embracing cloud native solutions whereby infrastructure is put to maximum use over its allocation time.

An obvious and immediate defensive reaction I often hear is: what about vendor lock-in? That’s possibly one of the most overused red herrings these days. People are too inclined to worry, afraid of getting stuck, preferring to freeze and starve instead. That’s like worrying about your hairstyle while standing in the pouring rain; what’s the point? I don’t have enough space to discuss lock-in in this post, so I’ll just say that it’s a non-issue for most businesses that find themselves on a burning platform. Even in plain sailing conditions, choice is inevitable. Whatever choice a business makes will imply some kind of lock-in. Being afraid of it doesn’t change things; choosing the right kind of lock-in does. No lock-in often means going for the lowest common denominator, and that should be vetted against the cost of operating such a setup.

When people and companies succumb to fads, when they mostly go with the flow, there is an absence of steering. And that is one possible reason so few innovations are emerging out of the abundance of opportunities created by technological progress. Likewise, the overabundance of information is creating a discernment crisis: everything looks the same to newcomers, experienced folks may have lapses of vigilance, and deafening noise cancels out sound judgment. Sure, we learn by experimenting, by failing as they put it. Fail fast is the motto in hip startup technology circles. But exactly how fast should that be? At what point does it stop being a case of failing fast and become an instance of quitting, of abandoning? How do companies determine that it is time to fail? Some will try to copy recipes they’ve heard worked elsewhere, only to fall into a me-too trap, not realising the benefits they sought.

Leadership drive: from ‘despises Open Source’ to ‘inspired by Open Source’, Microsoft’s journey

With a change of mind at the top leadership level, Microsoft showed that even a very large company is able to turn around and adopt a customer focused approach to running a business. By announcing Nano Server, essentially a micro-kernel architecture, Microsoft is truly joining the large scale technology players in a fundamental way.

A video on Microsoft’s Channel9 gives a great illustration of the way Microsoft is morphing its business to become a true champion of open source. I took some time to pick some of the important bits and go over them.

I remember the time, as I entered this profession, when Microsoft was actually the darling of developers, the open source equivalent of the day. I was a young student, eager to learn but unable to afford any of the really cool stuff. Microsoft technology was the main thing I could lay my hands on; Linux wasn’t anywhere yet. I had learned Unix, TCP/IP and networking, and I loved all of that. Microsoft had the technical talent and a good vision, but they baked everything into Windows, both desktop and server, when they could have evolved MS-DOS properly into a headless kernel that would get windowing and other things stacked upon it. They never did, until now. The biggest fundamental enabler was probably just a change in the leadership mindset.

The video presents Nano Server, described as a Cloud Optimized Windows Server for Developers. In a single diagram, Microsoft shows how they have evolved Windows Server.

Microsoft Windows Server Journey

Reading this diagram from left to right, it is clear that Microsoft became increasingly aware of the need to strip the GUI out of unattended services for an improved experience. That’s refreshing, but to me it has always been mind-boggling that they didn’t do it many years ago.

Things could have been different

In fact, back in the mid-90s, when I had completed my deep dives into Windows NT systems architecture and technology, I was a bit disappointed to see that everything was tied to the GUI. Back then, I wanted a Unix-like architecture, an architecture that was available even before I knew anything about computers. I wanted the ability to pipe one command’s output into the input of another command. Software that requires a human to be present and clicking on buttons should only exist on the desktop, not on the server. With Microsoft, there were always windows popping up and buttons to be clicked. I couldn’t see a benefit to the user (systems administrators) in the way Microsoft had built its server solutions. It wasn’t a surprise that Linux continued to spread, scale and adapt to Cloud workloads and scenarios, while Windows was mainly confined to corporate and SMB environments. I use the term confined to contrast the growth in departmental IT with that of Internet companies, the latter having mushroomed tremendously over the last decade. So, where the serious growth was, Microsoft technology was being relegated.

Times change

When deploying server solutions mainly covered collaboration services and some departmental application needs, people managed a small number of servers. The task could be overseen and manned by a few people, although in practice IT departments became larger and larger. Adding more memory and storage capacity was the most common way of scaling deployments. Though still rather inconvenient, software came on CD-ROMs and someone had to physically go and sit at a console to install and manage applications. This is still the case for lots of companies. In these scenarios, physical server hardware is managed a bit like buildings: servers have well known names, locations and functions, and administrators care for each one individually. The jargon term associated with this is server as pet. Even with the best effort, Data Centre resource utilisation remained low (the typical figure is 15% utilisation) compared to the large capacity available.

Increasingly however, companies grew aware of the gains in operations and scalability that come with adopting cloud scaling techniques. Such techniques, popularised by large Internet companies such as Google, Facebook, Netflix, and many others, treat servers as commodities that are expected to crash and can easily be replaced. It doesn’t matter what a server is called; workloads can be distributed and deployed anywhere, and relocated onto any available servers. Failing servers are simply replaced, mostly without any downtime. The jargon term associated with this approach is server as cattle, implying servers exist in large numbers, are anonymous and disposable. In this new world, Microsoft would have always struggled for relevance because, until recently with Azure and newer offerings, their technology just wouldn’t fit.

the voice of the customer

So, Microsoft now really needed to rethink their server offerings with a focus on the customer. This is customer-first, driven by user demands, a technology pull, instead of the classical model which was technology-first, build it and they will come, a push model in which the customer’s needs come after many other considerations. In this reimagined architecture, the GUI is no longer baked into everything; instead it is an optional element. You can bet that Microsoft had heard these same complaints from legions of open source and Linux advocates many times over.

Additionally, managing servers used to require either sitting in front of the machines, or firing up remote desktop sessions so that you could happily go on clicking all day. This is something that Microsoft appears to be addressing now, although in the demos I did see some authentication windows popping up. But, to be fair, this was an early preview; I don’t think they even have a logo yet. So I would expect that when Nano Server eventually ships, authentication will no longer require popup windows. 😉

The new server application model: the GUI is no longer baked in, it can be skipped.

The rise of containers

Over the last couple of years, the surge in container technologies has really helped to bring home the message that the days of bloated servers were numbered. This is the time when servers-as-cattle takes hold, where it’s more about workload balancing and distribution than about servers dedicated to application tiers. Microsoft got the message too.

Microsoft Nano Server solution

I have long held the view that Microsoft only needed to rebuild a few key components in order to come up with a decent headless version of their technology. I often joked that only the common controls needed rewriting, but I had no doubt that it was more a matter of political decision. Looking at the next slide, I wasn’t too far off.

Reverse forwarders: a technical term meaning that these are now decent headless components.

Now, with Nano Server, Microsoft joins the Linux and Unix container movement in a proper way. You can see that Microsoft takes container technologies very seriously: they have embedded them into their Azure portfolio, with Microsoft support for Docker container technologies.

Although this is a laudable effort that should bear fruit in time, I still see a long way to go before users, all types of users, become truly central for technology vendors. For example, desktop systems must still be architected properly to save hours of nonsense. There is no reason why a micro-kernel like Nano Server couldn’t be the foundation for desktop systems too. Mind you, even on multi-core machines with tons of memory and storage, you still get totally unresponsive desktops when one application hogs everything. This should never be allowed to happen; the user should always be able to preempt his or her system and get immediate feedback. That’s how the computing experience should be for people. It’s not there yet, and it’s not likely to happen soon, but there is good progress, partially forced by the success of free and open source advocacy.

If you want to get a feel for how radically Microsoft has changed its philosophy, and you are a technically minded person, this video is the best I’ve seen so far. You will see that the stuff is new and was being built as they spoke, but listen carefully to everything being said and watch the demos: you will recognise many things that come straight from free and open source and other popular technology practices: continuous integration, continuous delivery, pets vs. cattle, open source, etc. I didn’t hear them say whether Nano Server would be developed in the open too, but that would have been interesting. Nano Server, cloud optimised Windows server for developers.

Remove unnecessary layers from the stack to simplify a Java solution

When writing new software, it is natural to simply follow the beaten path. People pick up what they are already used to, or what they are told to use, and then get going. There may or may not be proper guidance for choosing the technology, but there is invariably a safe bet option that will be chosen. This isn’t wrong per se. What is wrong, though, is when no alternatives were ever considered when there was a chance to do so. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high performance and reliable Java programs. The fact that a lot of people keep doing this is no good justification for adopting the practice.

An item in my news feed this morning triggered the writing of this post. It read shadow-pgsql: PostgreSQL without JDBC. It’s a Clojure project on GitHub. Indeed, “without JDBC”. That’s it. This is particularly pertinent in the Java software world, where a flurry of popular initiatives is driving down the adoption of once revered vendor solutions. Some of the arguments I make here apply to other software and programming environments as well, not just Java.

As you start writing a Java application destined to be deployed as part of an Internet enabled solution, it is important to carefully consider the requirements before selecting the technology to build your solution with. Every day, a new tool pops up promising to make it easier and faster to start writing software. People lose sight of the fact that getting started isn’t the hard part; keeping it running and growing, adapting to change, all at a reasonable cost, are by far the hardest parts with software. And to boot, once something is deployed into production it instantly becomes a legacy that needs to be looked after. So, don’t be lured by get started in 10 minutes marketing lines.

I would argue that not knowing the expected characteristics of your target solution environment is a strong indication to choose as little technology as possible. There are lots of options nowadays, so you would expect people to always make a proper trade-off analysis before settling on any particular stack. A case in point is database access. If you are not forced to target an AppServer, then don’t use one. If you have access to a component with native support for your chosen database platform, it might be better to just skip all the ORMs and JDBC tools and use native database access libraries. This simplifies the implementation, and reduces the deployment requirements and complexity.

It used to be that, if you set out to write a server side Java program, certain things were a given. It was going to be deployed on an AppServer. The AppServers promoted the concept of a container in which your application is shielded from its environment. The container took charge of manageability issues such as security, authentication, database access, etc. To use such facilities, you wrote your code against standard interfaces and libraries such as Servlets, JDBC, or Java beans. If the company could afford it, it would opt for an expensive product from a large vendor, for example IBM WebSphere or BEA WebLogic (nowadays Oracle). And if the organisation was more cost-conscious, a bit of a risk taker or an early adopter for example, it would choose a free open source solution: Apache Tomcat, JBoss, etc. There were always valid reasons for deploying AppServers: they were proven, tested and hardened, properly managed and supported in the enterprise. Such enterprise attributes were perhaps not the most attractive for pushing out new business solutions or tactical marketing services. That was then, the Java AppServer world.

As the demand for Internet scale solutions started to rise, the AppServer model showed cracks that many innovative companies didn’t hesitate to tackle. These companies set aside AppServers, went back to the drawing board and rebuilt many elements and layers. The technology stack had to be reinvented.

Over the last ten years or so, people started writing leaner Java applications for Internet services, mostly doing away with AppServers. This allowed the deployment of more nimble solutions, reduced infrastructure costs, and increased the agility of organisations. Phasing out the established AppServers came at a cost, however: one now needed to provision for all the things the AppServers were good at, such as deployment, instrumentation and security. As it happens, we ended up reinventing the world (not the wheel as the saying would have it, but a certain well known world). What used to be an expensive and heavy vendor solution stack morphed into a myriad of mostly free and open source elements, hacked together parts and parcels, to be composed into lean and agile application stacks. This new, reinvented technology stack typically has a trendy name; Netflix OSS is one that sells rather well. That’s how things go in the technology world, in leaps and bounds, lots of marketing and hype, but within the noise lies hidden some true value.
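To give a feel for how lean that style can be, here is a hedged sketch of a tiny Java service with no AppServer at all, using the JDK’s built-in com.sun.net.httpserver package; the class name, path and port are arbitrary choices for the example.

import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class LeanServiceSketch {
    public static void main(String[] args) throws Exception {
        // One process with an embedded HTTP listener: no container, no deployment descriptor.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // A single endpoint answering with a plain text body.
        server.createContext("/health", exchange -> {
            byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
        System.out.println("Listening on http://localhost:8080/health");
    }
}

Of course, everything the AppServer used to provide, security, instrumentation, deployment and the rest, now has to come from somewhere else, which is exactly the trade-off described above.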

Arguably, there are a lot of new Java based solutions being implemented and deployed with stacks of unnecessary layers. That is a shame, because there is an unprecedented amount and variety of options for implementing and deploying good solutions in Java, or targeting the JVM. Understanding the characteristics of the envisioned solution is one aspect that keeps driving Java and JVM based languages and technologies. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high performance and reliable Java programs. The fact that a lot of people keep doing this is no good justification for adopting the practice. The move towards smaller and leaner specialised applications is a good one because it breaks down and compartmentalises complexity. The complexity that is removed from the application stack is diluted and migrated over to the network of interlinked applications. Not all problems automagically vanish, but some classes of problems disappear or are significantly reduced. That same principle should motivate challenging and removing as many layers as is reasonable from individual application stacks. I resist the temptation to mention any specific buzzword here; there are plenty of good examples.

When writing new software, it is natural to simply follow the beaten path. People pick up what they are already used to, or what they are told to use, and then get going. There may or may not be proper guidance for choosing the technology, but there is invariably a safe bet option that will be chosen. This isn’t wrong per se. What is wrong, though, is when no alternatives were ever considered when there was a chance to do so. Keeping your options open will always prove more profitable than just going with the flow. One characteristic of lagging incumbents is that they often just went with the flow, chose solutions deemed trusted and trustworthy without regard for their own reality, and usually did not challenge the technology stack enough.


Trying to oppose Open Web to Native App Technologies is so misguided that it beggars belief

The open web is not in opposition to native apps. There is just one single entity called the Web, and it is open by definition. There are numerous forms of applications that make use of Internet technologies; a subset of those Internet-based technologies serves the web. You get nowhere trying to oppose those two things.

People who ought to know better spend valuable energy dissing native app technologies. I’ve ignored the fracas for a long time, but finally I thought I’d pen a couple of simple thoughts. I got motivated to do this upon seeing Roy Fielding’s tweets. I also read @daringfireball’s recent blog post explaining, in his usually articulate way, how he sees Apple in the whole picture. Here, I just want to look at a couple of definitions, pure and simple.

To claim that open web technologies would be the only safe bet is akin to saying that all you would ever need is a hammer. Good luck with that if you never come across any nails.

I think both are very good and very useful in their own right; one isn’t going to push the other into irrelevance anytime soon because they serve different use cases. The signs actually are that the web addressing model, by way of URIs, might be challenged soon once Internet connected devices start to proliferate (as they’ve been hyped to do for some time now). So, I actually think that a safer bet would be Internet technologies, with the Web increasingly pushed into a niche. But OK, I didn’t actually intend to get into the fray.

Incidentally, I only see claims that native platform apps would be some kind of conspiracy to lock users in, whereas the open web would apparently be the benevolent dictator’s choice. I am skeptical in the face of such claims; Mother Teresa wasn’t a web advocate, for example. I remember that Apple, who triggered what-you-know, initially only wanted people to write HTML apps for the iPhone when it launched. It was only when they got pressured by developers that they finally released native SDKs. That’s a terrible showing for a would-be mal-intended user-locker.

There is no such thing as an open web versus anything else. There is just the web, and then there is an ever growing generation of native platform apps that also take advantage of Internet technologies. That’s it. Trying to oppose those two things is just rubbish, plainly put. If something is a web technology and actually adheres to the web definition, it can only be open. A closed web technology would be an oxymoron; the definition doesn’t hold. If something is built using Internet technology, it may borrow web technology for some purposes, but it is not part of the web per se.

In the instances where web technology is the best fit, it will continue to be that way and will get better. Conversely, in the cases where native platform apps work best, those will continue to get better and may use the Internet. There is a finite number of web technologies, bound by the standards and specifications implemented. But there is an infinite number of native app technologies, since implementers can write anything they like and get it translated to machine code following any protocol they may devise.

The Web doesn’t equate to the Internet. There are open and closed Internet Technologies, but there isn’t such a thing for the Web. The Internet contains the Web, not the other way around.

At the outset, there is a platform battle going on in which parties are vying for depleted user attention. In such battles, every party attracting people to its platform is self serving and can make as much of a moral (or whatever you call it) claim as any other. The only exception would be those set to gain nothing from getting their platform adopted. There aren’t any of those around.

My observation of the ongoing discussions so far is simple. Some individuals grew increasingly insecure about having missed the boat on native platform apps, whatever the reason, including their own choices. Some other individuals, a different group, burned their fingers trying to follow some hype, learned from that and turned to native platform apps. The two groups started having a go at each other. That got all the rousing started; everyone lost their cool and indulged in mud slinging. But there is no point to any of this.

If you are building software intended to solve a category of problems, then the most important technology selection criterion you should consider is ‘fit for purpose’. Next to that, if the problem is user bound, then you want to find the most user friendly way of achieving your aim. If you do this correctly, you will invariably pick what best serves the most valuable use cases of your intended audience. Sometimes that will be web technologies, sometimes it won’t, and other times you will mix and match as appropriate.

I don’t see web technologies being opposed to native app platforms at all. Whatever developers find more attractive and profitable will eventually prevail, and the use case is the only metric that truly matters. It is understandable that any vendor should vie for relevance. That’s what’s at stake, and that is important to everyone. It is only once people and organisations face up to this cold fact and start talking genuinely that they begin to make some progress.


Marrying Technology with Liberal Arts, Part II

People whose mental memory is sharpened by one kind of activity tend to do poorly when facing an opposing kind of activity, and vice versa. Liberal Artists are not expected to brag about efficiency and effectiveness, or even economy. Conversely, Technologists will not be taken seriously if they start dwelling on aesthetics. Even if it were aspirational for one side to claim the value attributes of the other, they are likely to face an uncertain journey of reconversion, adaptation, reinvention. At an individual level this is hard at best; at an organisational level such an aspirational journey could quickly become daunting, with legions of people and habits to convert into totally alien habits.

In an earlier post, I started exploring this notion of marrying technology and liberal arts. If you follow information technology closely, you certainly know who got famous for making such a claim. Even if you do, I invite you to bear with me for a moment and join me in a journey of interpreting and understanding what could lie behind such an idea. It’s a fascinating subject.

This is not about defining anything new, let alone appropriating someone else’s thoughts. This is just an exploration of the notion, to get an idea of the motivations, challenges and opportunities that such a view would generate.

In the last post, I mentioned that there were challenges and pitfalls when attempting to marry technology with liberal arts. Let’s have a look at some of these challenges and pitfalls.

Let’s imagine that the journey involves two sides, two movements, starting at opposing ends and moving towards each other, converging towards an imaginary centre point. On one side of this journey are The Technologists, hereafter Technologists. On the other side are The Liberal Artists, hereafter called Liberal Artists. The imaginary point of convergence is User Needs and Wants; that is what both sides are striving for. Herein lies an important first chasm:

  • The products generated by Liberal Artists want to occupy our minds and souls, capture our imagination, give us comfort and a feeling of security, entertain our fancies, give us food for thought. Of essence here are issues such as aesthetics, form, feelings, comfort, feeling secure or insecure, want, etc. Liberal Artists want to make us feel and want things, trigger our imagination, provoke our thoughts. Liberal Artists might not necessarily concern themselves with practicality; this is not to suggest that they never do, because clearly whole legions of them do, just that it might be a lower priority concern.
  • The products generated by Technologists want to help us carry out some essential activities. The premise here is that something needs to be done, something that is perhaps essential, something that is unavoidable. The technologist has done her or his job if such an activity can be carried out with the tools created and the techniques devised; considerations such as aesthetics and friendliness might come at a later stage, if at all.

By virtue of starting in different places, Technologists and Liberal Artists have different contexts, different sets of values, different views of the world, not necessarily completely alien to one another but definitely with their minds occupied in completely different ways. They face different sorts of challenges. Because we are shaped by our environments, we can grow to become utterly different people. Technologists and Liberal Artists often grow to become very different people.

Liberal Artists have their own activities, attributes and aspirations. In no particular order, and by no stretch of the imagination an exhaustive list:

  • Crafting is the primary activity; the outcome can be personal, intimate, often some form of expression of the liberal artist.
  • To be loved, to be adopted and to entertain the patrons.
  • To be perceived to have good aesthetics. Aesthetic in this sense could come in the form of something pretty, beautiful; alternatively, it could be something very ugly, repulsive, contrary to accepted beliefs and wisdom. The aesthetics may manifest themselves in the look, the feel, and the smell.
  • To provide a feel-good factor, a feeling of being different if not somewhat superior, or otherwise to convey distress, pain, anger, or other high emotions, especially when clothed in artistic expression.

Technologists also have their activities, attributes, and aspirations. Again in no particular order, and certainly not exhaustive either:

  • To be perceived to be effective and efficient.
  • Productivity is an important driver, this is more about Taylorism, automation, continuously seeking to make things faster and cheaper.
  • To be making durable products, to be providing effective services
  • To get a job done in an economical way.
  • Attributes such as fast, powerful, high performing, are the typical claims that are made.

It is not necessary to go any deeper before one starts to see some of the challenges that both sides face: the areas of strength for one side automatically represent the perceived or real weak points of the other side. This is trivial. People whose muscle memory is sharpened by one kind of activity tend to do poorly when facing an opposing kind of activity, and vice versa.

Liberal Artists are not expected to know much about efficiency and effectiveness, or about economy. Conversely, Technologists might not be taken seriously if they start dwelling on aesthetics. Even if it were aspirational for one side to claim the value attributes of the other, they are likely to face an uncertain journey of reconversion, adaptation, reinvention. At an individual level this is hard at best; at an organisational level such an aspirational journey could quickly become daunting, with legions of people and habits to convert into totally alien habits.

Liberal Artists and Technologists sometimes compete for the same resources and spaces, but most of the time they do not. In fact, the two sides address complementary wants and needs; they are frequently found collaborating rather than competing. For a wide variety of their activities, Technologists and Liberal Artists rely on each other; one can be found making tools that the other puts to use.

If Technologists and Liberal Artists are collaborating to address user needs, aren’t they already somehow “married” then? Aren’t they solving different but complementary problems? Does it even make sense to talk about bringing them closer together?

Tails, privacy for anyone anywhere. Makes sense to me

I have often felt that virtualisation has the potential to deliver more secure computing than we have seen so far. Project Tails is a good move in that direction, and is in effect a great virtualised secure solution. I am encouraged to see this happening.

I have written before on this blog that I felt security and privacy were underserved by current technology offerings. This project lays out a vision that is very close to how I imagined it, and it looks promising.

Project Tails Logo

Django CMS is the simplest and most elegant Web CMS I’ve seen lately

I have recently compared, in some depth, a lot of web content management (CMS) products. I’ve found that Django CMS is the simplest and most elegant way of authoring semantic HTML content on the web.

Recently I have evaluated a lot of web content management solutions, all of them open source and almost exclusively on Unix-like systems, and I haven’t seen anything as simple and elegant as Django CMS. I took it for a spin: it’s polished, really designed to solve fast turn-around web publishing needs. It does that brilliantly, no question about it. I particularly like the way all the authoring tools and widgets disappear when not in use, so that you are always looking at your content exactly as it will appear when published.

There is one thing: in a lot of cases, to truly appreciate any piece of work, one must have a practical understanding of the challenges involved in making it. This is also true for software. Content management, web content management in particular, isn’t a sexy topic. However, without such systems it would be tedious to maintain an online presence, and difficult to consume a lot of the content produced daily by legions of people.

I was just going to use one of the web CMS products I’ve known for a long time to power one of my web sites. In the process, I realised that I had not done a proper re-evaluation of the available products for a while. I wanted to try that first before settling on something, and I thought I should go into it open minded and include a broad selection of products. As time is precious, I decided to focus on Unix/Linux based solutions for now, because that is also my target deployment platform.

For my purpose I went back and implemented a few other web CMS products, namely: Zotonic (Erlang based), Liferay Portal (JCR), Magnolia (JCR), dotCMS (JCR), several static publishing systems such as Pelican (Python), Jekyll and Octopress (Ruby), and of course WordPress (PHP, which powers this blog). Django CMS bests all of these in terms of simplicity and its focus on making semantic HTML content authoring a joy. To be fair, it’s hard to compare the products I just mentioned, since each actually aims to cover a breadth of needs that may go well beyond web CMS (portals, for example). But I had narrowed my scope down to the authoring and publishing processes only, web CMS functionality alone, which makes a fairly reasonable comparison possible.

I am not yet done evaluating web content management systems; I still have on my to-do list Prismic.io (an innovative Content Repository as a Service) and a couple of .NET based web CMS products.

More to come on this topic sometime later. I probably have enough material to publish a small document on this subject; we’ll see if and when I get to that. But for now, I definitely maintain that Django CMS is the most polished solution for folks looking for simple and attractive web CMS software.

With 8, Java is distancing itself from the ‘new Cobol’

Java 8 has introduced a lot of welcome improvements, justifying that I take back my earlier statement that it was ‘the new Cobol’, but I still see room for improvement.

It is time to amend a post I made a while back, in which I supported the idea that Java was the new Cobol. At that time I had given up any hope of seeing a decent modernisation of the language in the foreseeable future. That future is here, and Java has been modernised.

If you look at Oracle’s site, a few points stick out:

  1. Lambda Expressions: this improves some of the building blocks considerably.
  2. Nashorn and JavaScript: this is a nod to the success of NodeJS (a small sketch follows this list).
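As a rough illustration of the second point, here is a minimal, hedged sketch of evaluating a JavaScript snippet from Java 8 through the standard javax.script API and the bundled Nashorn engine; the class name and the script itself are made up for the example.

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornSketch {
    public static void main(String[] args) throws ScriptException {
        // Look up the Nashorn JavaScript engine shipped with Java 8.
        ScriptEngine js = new ScriptEngineManager().getEngineByName("nashorn");

        // Evaluate an arbitrary JavaScript expression and read the result back in Java.
        Object result = js.eval("var n = 2 + 3; n * 10;");
        System.out.println(result); // prints 50 (as a Java number)
    }
}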

To me, those two points justify a new blog post adjusting my earlier statement. I still see opportunities for improvement in the syntax itself; take the following example (from the same Oracle article linked above):

// assumes a static import of java.util.stream.Collectors.toSet
Set<Artist> artists =
        albums.stream()
            .filter(album -> album.getTracks().size() < 8)
            .map(album -> album.getArtist())
            .collect(toSet());

Wouldn’t this read better in the following way?

Set artists =
        albums ->
            stream
                filter tracks.size < 8 
                map artist Set

Notice that a lot of unnecessary characters are removed, because the compiler would be able to infer them. This isn't far off from the language's current standard, yet it could dramatically improve the code's readability. I know, there's a huge legacy to cater for and language design requires serious work. But this type of simplification wasn't out of reach; it is what you typically see in attractive programming languages like Haskell, OCaml and others.

So, Java 8 has introduced a lot of welcome improvements, justifying that I take back my earlier statement that it was 'the new Cobol', but I still see room for improvement.

The very employable leave their mark well beyond their own office

Tim Bray is leaving Google. He is letting it be known in a marvellous example of humility and thoughtfulness, like a scientist penning a dissertation. This is a good way to leave a company.

As a follow-up to my post on people who rant as they quit a job, I stumbled upon a perfect example of how to do it right: Tim Bray’s post on his leaving Google.

If the name doesn’t ring a bell, check out Wikipedia. (Note: do read Wikipedia’s disclaimers; I only know Tim through his great work, which large swathes of the IT industry depend upon daily.)