Remove unnecessary layers from the stack to simplify a Java solution

When writing new software, it is natural to simply follow the beaten path. People pick up what they are already used to, or what they are told to use, and then get going. There may or may not be proper direction for choosing the technology, but there is invariably a safe-bet option that will be chosen. This isn’t wrong per se. What is wrong is never considering the alternatives when there was a chance to do so. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high-performance, reliable Java programs. The fact that a lot of people keep doing this is no justification for adopting the practice.

An item in my news feed this morning triggered the writing of this post. It read shadow-pgsql: PostgreSQL without JDBC. It’s a Clojure project on GitHub. Indeed, “without JDBC”. That’s it. This is particularly pertinent in the Java software world, where a flurry of popular initiatives is driving down the adoption of once revered vendor solutions. Some of the arguments I make here apply to other software and programming environments as well, not just Java.

As you start writing a Java application destined to be deployed as part of an Internet-enabled solution, it is important to carefully consider the requirements before selecting the technology to build your solution with. Every day, a new tool pops up promising to make it easier and faster to start writing software. People lose sight of the fact that getting started isn’t the hard part; keeping the software running and growing, adapting to change, all at a reasonable cost, is by far the hardest part. And to boot, once something is deployed into production it instantly becomes a legacy that needs to be looked after. So don’t be lured by “get started in 10 minutes” marketing lines.

I would argue that not knowing the expected characteristics of your target solution environment is a strong indication that you should choose as little technology as possible. There are lots of options nowadays, so you would expect people to always make a proper trade-off analysis before settling on any particular stack. A case in point is database access. If you are not forced to target an AppServer, then don’t use one. If you have access to a component with native support for your chosen database platform, it might be better to skip all the ORMs and JDBC tools and use native database access libraries. This simplifies the implementation and reduces deployment requirements and complexity.
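
To make this concrete, here is a minimal sketch in Scala of what the JDBC-and-ORM-free shape could look like. NativePgClient is a hypothetical stand-in for a native driver in the spirit of shadow-pgsql – it is not a real library, and the method bodies are deliberately left unimplemented:

// Sketch only: NativePgClient is a hypothetical native (non-JDBC) driver.
// The API shape is illustrative; the ??? bodies throw if actually called.
class NativePgClient(host: String, port: Int, database: String) {
  def query(sql: String, params: Any*): Seq[Map[String, Any]] = ???
  def close(): Unit = ???
}

object OrderReport {
  def main(args: Array[String]): Unit = {
    val client = new NativePgClient("localhost", 5432, "shop")
    try {
      // Talk to the database directly: no DataSource lookup, no ORM session,
      // no mapping annotations, no container-managed transactions.
      val rows = client.query("select id, total from orders where total > $1", 100)
      rows.foreach(row => println(s"${row("id")} -> ${row("total")}"))
    } finally client.close()
  }
}

The point is not this particular API, but how little is left standing between the application and the database once the intermediate layers are gone.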

It used to be that, if you set out to write a server-side Java program, certain things were a given. It was going to be deployed on an AppServer. The AppServers promoted the concept of a container, where your application is shielded from its environment. The container took charge of manageability issues such as security, authentication, database access, and so on. To use such facilities, you wrote your code against standard interfaces and libraries such as Servlets, JDBC, or Java beans. If the company could afford to, it would opt for an expensive product from a large vendor, for example IBM WebSphere or BEA WebLogic (nowadays Oracle). And if the organisation was more cost-conscious, a bit of a risk taker or an early adopter, it would choose a free open source solution such as Apache Tomcat or JBoss. There were always valid reasons for deploying AppServers: they were proven, tested and hardened, properly managed and supported in the enterprise. Such enterprise attributes were perhaps not the most attractive for pushing out new business solutions or tactical marketing services. That was then, the Java AppServer world.

As the demand for Internet-scale solutions started to rise, the AppServer model showed cracks that many innovative companies didn’t hesitate to tackle. These companies set AppServers aside, went back to the drawing board and rebuilt many elements and layers. The technology stack had to be reinvented.

Over the last ten years or so, people started writing leaner Java applications for Internet services, mostly doing away with AppServers. This allowed the deployment of more nimble solutions, reduced infrastructure costs and increased the agility of organisations. Phasing out the established AppServers came at a cost, however: one now needed to provision for all the things that the AppServers were good at, such as deployment, instrumentation and security. As it happens, we ended up reinventing the world (not the wheel, as the saying would have it, but a certain well-known world). What used to be an expensive and heavy vendor solution stack morphed into a myriad of mostly free and open source elements, hacked-together parts and parcels, to be composed into lean and agile application stacks. This new, reinvented technology stack typically has a trendy name; Netflix OSS is one that sells rather well. That’s how things go in the technology world, in leaps and bounds, with lots of marketing and hype, but within the noise lies hidden some true value.
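
To illustrate how lean this can get, here is a minimal sketch of a self-contained service in Scala, using only the JDK’s built-in com.sun.net.httpserver package: one small JVM process, no AppServer, no WAR packaging, no container. The port and endpoint are made up for the example:

import java.net.InetSocketAddress
import com.sun.net.httpserver.{HttpExchange, HttpHandler, HttpServer}

object LeanService {
  def main(args: Array[String]): Unit = {
    // What the AppServer container used to provide is reduced to the
    // few pieces this service actually needs.
    val server = HttpServer.create(new InetSocketAddress(8080), 0)
    server.createContext("/health", new HttpHandler {
      def handle(exchange: HttpExchange): Unit = {
        val body = "ok".getBytes("UTF-8")
        exchange.sendResponseHeaders(200, body.length)
        exchange.getResponseBody.write(body)
        exchange.close()
      }
    })
    server.start()
  }
}

Everything the container once abstracted away – deployment, instrumentation, security – now has to be provisioned deliberately, which is exactly the trade-off described above.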

Arguably, a lot of new Java-based solutions are being implemented and deployed with stacks of unnecessary layers. That is a shame, because there is an unprecedented amount and variety of options for implementing and deploying good solutions in Java, or targeting the JVM. Understanding the characteristics of the envisioned solution is one aspect that keeps driving Java and JVM-based languages and technologies. AppServers are not necessary for deploying scalable Java solutions. JDBC and ORMs are not necessary for building high-performance, reliable Java programs. The move towards smaller and leaner specialised applications is a good one because it breaks down and compartmentalises complexity. The complexity that is removed from the application stack is diluted and migrated over to the network of interlinked applications. Not all problems automagically vanish, but some classes of problems disappear or are significantly reduced. That same principle should motivate us to challenge and remove as many layers as is reasonable from individual application stacks. I resist the temptation of mentioning any specific buzzword here; there are plenty of good examples.

When writing new software, it is natural to follow the beaten path, and as I argued at the outset, that isn’t wrong per se; never considering the alternatives is. Keeping your options open will always prove more profitable than just going with the flow. One characteristic of lagging incumbents is that they often just went with the flow, choosing solutions deemed trusted and trustworthy without regard for their own reality, and rarely challenging the technology stack enough.

 

Software programmers are not blue-collar workers

Like many skilled workers, software developers are smart creative people. Leading or managing such people requires finding a way to guide, support and empower them, while letting them do their job. Writing software is inherently a process in which trade-offs and compromises are made continually and iteratively till the end. It is therefore essential that all parties stay as close as possible together so that they can weigh decisions when it matters and actively take part in the game.

Like many skilled workers, software developers are smart, creative people. Leading or managing such people requires finding a way to guide, support and empower them, while letting them do their job. This sounds easy, but I have seen many organisations and individuals fail at it, leading to the very issues they were trying to prevent. The recurring mistake people commit is treating skilled workers in the same manner they would blue-collar workers.

I have built and managed technical teams throughout my career, and I’ve always been aware of the need to balance communication with smart and skilled folks like software developers. I continuously practice software development myself. By staying hands-on on both sides of the equation, I’ve kept tabs on the challenges involved. It’s one thing to be programming, but leading and managing developers is an entirely different game.

Instructions can be crippling

Some believe in providing extremely detailed instructions to developers as a way to manage work. That is misguided for a number of reasons. Every software programming endeavour is as unique as can be. Micro decisions need to be made at every step of the way, and the conditions that were prevalent in a prior project context will have been largely superseded by new ones. However detailed, instructions will leave gaps that developers will run into. As such speed bumps are encountered, people want them removed swiftly, and they feel better if they can do that themselves. If developers are not properly empowered, such gaps will lead to friction, could cause tensions and result in defensive behaviour. While well intended, detailed instructions are not a very effective instrument for managing software development work.

If everything goes, nothing does

The other extreme to too many instructions is having none. Without any guidance, anything goes, and such software development will lead nowhere. For software development to succeed, all those involved should be able to describe in a consistent and coherent manner what the aim is and what the final product should be. Software developers don’t work in isolation from other units in an organisation; they need to know what is intended and what is expected of them. An optimal set of guidance accepted across teams is a necessity, not a luxury. Working out exactly what such guidance is, and how much of it is needed, is part of the job that leadership must facilitate.

Bean counting may be useless

Some painstakingly track developer hours, lines of code and other misleading metrics, in the firm belief that that is the way to manage development resources. For the kind of work that doesn’t require thinking or coming up with solutions, typically repetitive tasks, that may yield some results. In software development projects it is a fruitless endeavour; developer productivity does not lie in any of those things. Bean counting is only useful if used anonymously as a learning tool, and perhaps as supporting documentation in the event of a dispute over resource usage. This last one is typically what software consulting bureaus worry about; they then unfortunately put so much emphasis on it that it becomes a sword of Damocles over people’s heads. Finding the metrics that matter is a skill.

Perfectly codified work is demotivating

If software development work could be codified to such an extent that it could be flawlessly executed without any thinking, exploring or personal problem-solving, then nobody would ever want to do it. It would be the most boring job ever. It makes people feel insecure, a feeling that could be summed up in an imaginary thought quote:

If I am not bringing anything special to this work, I can be easily replaced by someone else, or a robot! I don’t like that. I want to be perceived as being special to this project, I reject any shoddy instruction that would make my job look plain and simple.

This is also something that team managers and leaders should be careful about. When providing guidance turns into hand-holding, people stop thinking for themselves and the group may walk blindly straight into trouble. And when trouble strikes, finger pointing is often what follows. Besides, people learn when they can get themselves stuck and work their way through hurdles. When the conditions are right, the outcome will almost always exceed expectations.

Architecture: just enough guidance

Writing software is inherently a process in which trade-offs and compromises are made continually and iteratively till the end. It is therefore essential that all parties stay as close as possible together so that they can weigh decisions when it matters and actively take part in the game. In my experience, what works well is to provide just enough guidance to developers so as to establish appropriate boundaries of play. When that is well understood, leadership must then work to establish metrics that matter, and closely manage such metrics for the benefit of all those involved.

Treating software developers as blue-collar workers to be told “here, just go and write software that does this and come back when it’s done” is fatally flawed. People who typically make such mistakes are:

  • project managers, who are clearly making assumptions that only they espouse, neglecting the basics of the creative work process
  • architects, who believe that developers are less capable workers who just need to do as they are told and follow the blueprints
  • technical managers, who have the impression that managing people is about shouting orders and being perceived as having authority
  • people enamoured with processes, tools and frameworks, oblivious of the purely human and social aspects of software development

In my experience, the root causes of such mistakes are relics of obsolete management practices that favoured hierarchy over the social network. Note, my use of the term social network has absolutely nothing to do with the multitude of web sites that hijacked this vocabulary. By social network, I am referring to situations where a group of humans is engaged in making something happen together: it is social because of the human condition, and it’s a network because of the interdependencies.

This subject is vast and deserves a much longer text than I am giving it here. In this post I’ve only touched upon a few of the most visible and recurring aspects; I’ve stopped short of discussing issues such as applying learning, methodologies or tools.

If you recognise some of the shortcomings listed above in your own context, and if you are in a position to influence change, I suggest that you at least initiate a dialogue about the way of working with your fellow workers.

Cyber security white elephant. You are doing it wrong.

The top cause of information security weakness is terrible user experience design. The second cause of information security vulnerability is the fallacy of information security. A system designed by people will eventually be defeated by another system designed by people. This short essay is just a reminder, perhaps to myself as well, that if certain undesired issues don’t appear to be going away, we may not be approaching them the right way.

The top cause of information security weakness is terrible user experience design. The second cause of information security vulnerability is the fallacy of information security. Everything else flows from there.

It happened again: a high-profile IT system was hacked, and the wrong information was made available to the wrong people in the wrong places at the wrong time. This is the opposite of what information security infrastructure is meant to ensure. Just search the Internet for “Sony hack”; one of the top hits is an article from 2011 titled Sony Hacked Again; 25 Million Entertainment Users’ Info at Risk.

This type of headline news always begs some questions:

Don’t we know that any information system can eventually be breached given the right amount of resources (resources include any amount and combination of motivation, hardware, people, skills, money, time, etc.)?

Is it really so that deep pocketed companies cannot afford to protect their valuable resources a little harder than they appear to be doing?

Do we really believe that hackers wait to be told about potential vulnerabilities before they would attempt to break in? Can we possibly be that naive?

This has happened many times before to many large companies. Realistically, it will continue to happen. A system designed by people will eventually be defeated by another system designed by people. Whether the same would hold true for machine-designed systems versus people-designed systems is an open problem. Once machines can design themselves totally independently of any human action, perhaps then information security will take on a new shape and life. But that might just be what Professor Stephen Hawking was on about recently?

I suggest that we stop pretending that we can make totally secure systems. We have proven that we can’t. If, however, we accept the fallibility of our designs and take proactive actions to prepare, practice drills and fine-tune failure remediation strategies, then we can start getting places. This is not news either; people have been doing it for millennia in various capacities and organisations, war games and military drills being obvious examples. We couldn’t be thinking that IT would be exempt from such proven practices, could we?

We may too often be busy keeping up appearances rather than doing truly useful things. There is way too much distraction around, too many buzzwords and trends to catch up with, too much money thrown at infrastructure security gear and too little time spent getting such gear to actually fit the people and contexts it is supposed to protect. Every freshman (freshwoman, if that term exists) knows about the weakest link in information security. If we know it, and we’ve known it for long enough, then, by design, it should no longer be an issue.

It’s very easy to sit in a comfortable chair and criticise one’s peers or colleagues. That’s not my remit here; I have profound respect for anyone who’s doing something. It’s equally vain to just shrug it off and pretend that it can only happen to others. One day, you will eventually be the “other”. You feel for all those impacted by such a breach, although there are evidently far more important and painful issues in the world at the moment than this particular breach of confidentiality.

This short essay is just a reminder, perhaps to myself as well, that if certain undesired technology issues don’t appear to be going away, we may not be approaching them the right way. Granted, the news reporting isn’t always up to scratch, but we do regularly learn that some very simple practices could have prevented the issues that get reported.

 

Idris, a dependent typed functional programming language: install and run on OS X Mavericks

Installing and running the Idris programming language on OS X Mavericks. As is often my experience, package dependency management is rather weak. Fortunately, with a small detour I could work around my issues.

I got rid of the (now deprecated) cabal-dev, updated to the latest version of cabal and thought of installing idris on my machine. I immediately hit a problem: it wouldn’t install. I got the following error message (log cut down):

$ cabal install idris
Resolving dependencies...
Configuring annotated-wl-pprint-0.5.3...
Building annotated-wl-pprint-0.5.3...
Preprocessing library annotated-wl-pprint-0.5.3...
...(cut)...
Registering trifecta-1.5...
Installed trifecta-1.5
cabal: Error: some packages failed to install:
idris-0.9.14.1 depends on language-java-0.2.6 which failed to install.
language-java-0.2.6 failed during the configure step. The exception was:
ExitFailure 1

So, it’s logical to try and install ‘language-java-0.2.6’. I tried that:

$ cabal install language-java-0.2.6
Resolving dependencies...
Configuring language-java-0.2.6...
cabal: The program 'alex' version >=2.3 is required but it could not be found.
Failed to install language-java-0.2.6
cabal: Error: some packages failed to install:
language-java-0.2.6 failed during the configure step. The exception was:
ExitFailure 1

And that fails too. It turns out I could install the package ‘alex’ manually. So I ran this:

$ cabal install alex
Resolving dependencies...
Configuring random-1.0.1.1...
Building random-1.0.1.1...
...(cut)...
Linking dist/dist-sandbox-c9c9694f/build/alex/alex ...
Installing executable(s) in
/Volumes/MacintoshHD/usrlocal/bin/.cabal-sandbox/bin
Installed alex-3.1.3

That worked. I can now attempt to install Idris:

$ cabal install idris
Resolving dependencies...
Configuring language-java-0.2.6...
Building language-java-0.2.6...
Preprocessing library language-java-0.2.6...
...(cut)...
Registering idris-0.9.14.1...
Installed idris-0.9.14.1

Success. Phew! I am now ready to start using Idris:

$ idris
bash: idris: command not found

Oops! It’s not in the search path, and it’s not obvious where it’s gone. That’s easily explained: prior to all this I had installed and initialised a cabal sandbox. So it went into the sandbox, which isn’t in my search path. I went for a sandbox approach because I wanted to isolate my installation of snap and yesod from this Idris playground.

To preserve my sandbox concept, I am going to use an alias to Idris:

$ alias idris="/Volumes/MacintoshHD/usrlocal/bin/.cabal-sandbox/bin/idris"

I typically add such an alias to my ~/.bash_profile so that it’s always available.

$ idris
     ____    __     _
    /  _/___/ /____(_)____
    / // __  / ___/ / ___/     Version 0.9.14.1
  _/ // /_/ / /  / (__  )      http://www.idris-lang.org/
 /___/\__,_/_/  /_/____/       Type :? for help
Idris is free software with ABSOLUTELY NO WARRANTY.
For details type :warranty.
Idris >

And that’s it, I now have Idris available and I can plough on. Ironically, for a language that promises to bring you dependently typed programming, the infrastructure dependency management isn’t stellar. That remains a weak point for me for all things Haskell.

Post Scriptum

  1. This is part of an exploratory journey into dependent types, an exciting concept in computer programming that I wish to use for production code in the near future. To get a glimpse of what this means, read this nice blog post, which, by the way, I think of as a model for introducing advanced programming concepts.
  2. Idris is implemented in Haskell, of which I have a recent version running on my machine. So if you’ve never had any of that kind of stuff on your machine, your mileage might be considerably longer than what I’ve shown here.

Who is set to benefit from the introduction of the Swift programming language

Swift could open up a great opportunity for Apple, getting large numbers of developers to write programs for OSX and iOS. Conversely, Swift could be the opportunity that some have been waiting for to start leveraging their skills on the Apple ecosystem. It could also be a blow to competing ecosystems, as Apple is suddenly more attractive to a rising generation of developers enamoured with functional programming.

One of my first reactions to Swift was that everybody wins. In this post I elaborate a little more on the reasons I hold this belief:

  • Apple, obviously: legions of developers who might have been put off by Objective-C will now give it a second look, and many are likely to write code for Apple platforms.
  • Groovy, Scala and Objective Caml programmers: programmers experienced with these languages can now leverage their skills to build solutions for the Apple platform without having to learn a radically different language. Their main hurdle would be to get acquainted with iOS and OSX platform concepts and building blocks.
  • Scala ecosystem: the introduction of Swift might have the side effect of making some people understand Scala better and quicker, because Swift shares many concepts with Scala but has a more readable syntax (a small sketch of the shared concepts follows this list).
  • A converse effect of the above: Apple developers who make the jump to Swift would also realise the many benefits of functional programming and adopt languages like Groovy, Scala or Erlang.
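
To make the kinship concrete, here is a small Scala sketch of the concepts the two languages share, with rough Swift equivalents noted in the comments. The Swift forms are quoted from memory of the launch material, so treat them as approximate:

object SharedConcepts {
  def main(args: Array[String]): Unit = {
    val greeting = "hello"                  // Swift: let greeting = "hello"
    var count = 1                           // Swift: var count = 1
    count += 1

    // Optional values instead of nulls.
    val user: Option[String] = Some("ada")  // Swift: let user: String? = "ada"
    val name = user.getOrElse("anonymous")  // Swift: user ?? "anonymous"

    // First-class functions and closures.
    val doubled = List(1, 2, 3).map(n => n * 2) // Swift: [1, 2, 3].map { $0 * 2 }

    println(s"$greeting $name, count=$count, doubled=$doubled")
  }
}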

Folks looking for new opportunities should find plenty. From a business opportunity perspective, here are some potentially profitable developments:

  • Groovy on LLVM: a Groovy compiler targeting LLVM could make it possible to write native iOS and OSX code in Groovy. People used to writing web-only code would suddenly be able to port their solutions to the Apple platform.
  • Cross Training Developers: this could be the best time for Groovy or Scala training organisations to tap into the masses of iOS and OSX developers scrambling to learn functional programming. Why leave that money on the table?
  • Apple and Pivotal work together: this is a bit tangential, but if Apple were interested in expanding their Cloud clout, this is a good way to do that because they suddenly would be able to target the data centre too! Just buy Pivotal and leapfrog both Microsoft and Google in one fell swoop!

I don’t see yet how Swift could benefit either Microsoft or Google ecosystems. If anything, this could be a blow to those ecosystems as Apple is suddenly more attractive to a rising generation of developers converting to functional programming.

Swift, a general purpose programming language for a specific platform

Apple releases a new programming language for its platform, called Swift. Swift synthesises Apple’s learning from years of Objective-C, but they also cherry-picked from the vast pool of learning embedded in many high-profile and popular languages such as OCaml, C#, Rust, Lua and many others. Interestingly, Apple seem to have left out concurrency and parallelism: either they consider those to be outside the scope of a modern language, or they are just not ready to release something in that area. Parallelism and concurrency are first and foremost systems concerns, more at home with a systems programming language. Apple is heavily focused on the user experience, and much less on the system experience (TM). This makes me believe that Apple intentionally decided that Swift would not be a systems programming language. At least not now.

Apple announced a new programming language. The inevitable reactions poured in: Apple lovers mostly rejoiced, Apple haters pounded on. Some complained about the fact that it was not open source. Here are some thoughts from an observer’s angle.

A new programming language launching in 2014 has to aim to attract the masses and actually solve problems; if not, nobody will pay attention, regardless of their affinities. There’s been a lot of learning compounded in the software engineering discipline over the years, and it would be foolish to sidestep all that and try to bring up something entirely new. These learnings can be found in the existing languages: C#, OCaml, Erlang, Groovy, Scala, Rust, Go, Lua, and many others I might not even be familiar with. So in this context, if you were in any vendor’s shoes, you would need to embrace what’s available and bring it closer to your environment. That’s what Apple has done.

Concurrency and parallelism

It is surprising that Swift does not seem to address concurrency and parallelism the way Erlang, for example, does, at least not in a visible way so far. It seems they didn’t want to ‘copy’ much from Erlang in fact. One could speculate about what Apple’s rationale might be. My immediate thought, when I saw that omission, was that perhaps it would have pushed the envelope too far for the current generation of Apple developers. If such constructs were included, a-la-Erlang for example, quite likely the first batch of applications delivered with Swift could be utter disasters and permanently cripple the uptake of the language. Apple would first have to create a solid infrastructure for such parallelism and concurrency constructs. That is the most important reason I can think of for leaving out such paradigms.

Another rationale to consider is the fact that unpredictability doesn’t marry well with user interface behaviour. Parallelism and concurrency are first and foremost systems concerns, more at home with a systems programming language. Apple is heavily focused on the user experience, and much less on system experience (TM). This makes me believe that Apple intentionally decided that Swift would not be a systems programming language. At least not now.

Vindication

Swift appears to be, at least from one vendor’s perspective, a vindication that verbose programming languages don’t make developers productive in this day and age. Also, letting people make simple, silly, preventable mistakes is not a profitable business. Every time you looked at Objective-C source code after any amount of time with Clojure, Groovy, Lua, Erlang or OCaml, to name a few, you were left wondering why such a status quo persisted. Apple gave their answer today, but they waited a long time to do so. Microsoft, with C# and particularly F#, has long been delivering the goods in that department. Google with Go has been forging ahead and registering tremendous success. Marrying the best aspects of functional programming with sequential and object-oriented programming delivers the most benefit to users at the moment. With Microsoft, Google, Apple and Mozilla on board, it would be hard to deny that functional programming has become mainstream. The early adopters should rightly feel vindicated, somehow.

Update: a pointer to the reference web site was missing, for those who might not be Apple developers. There is no point in including ‘hello world’ or other trivial sample code here. The Swift web site is: https://developer.apple.com/swift/.

Martin Odersky on the Simple Parts of Scala

Martin sums up perfectly the challenges facing Scala adoption in the wider community: ‘extremists’ at each end of the OO-FP divide can be quick to come out with their pitchforks.
I’ve always perceived Martin as a software anthropologist who examines long and hard where the learning opportunities might be, then comes up with applications where most people get the most benefit. There is no useful reason to be dogmatic about OO vs. FP. If someone doesn’t see the benefits of either, then they should just pick what works for them, or avoid Scala. Denying the merits of a programming language fulfils no useful purpose; whatever someone can be productive with is the right tool for them.

In these slides, Martin sums up perfectly the challenges facing Scala adoption in the wider community: ‘extremists’ at each end of the OO-FP divide can be quick to come out with their pitchforks.
I’ve always perceived Martin as a software anthropologist who examines long and hard where the learning opportunities might be, then comes up with applications where most people get the most benefit. This is just like Anders Hejlsberg or Rich Hickey. What you see is that these people bring tremendous value to the software development practice; there’s much to learn in closely listening to what they have to say.

Another way I found useful to look at Scala’s embrace of both OO and FP goes like this:

  • OO provides the structural facilities, the scaffolding amenities, the bricklaying capability if you wish. This is how you eventually render your product. You want to be able to tear these apart as you wish and throw parts away as you see fit. A building must obey some basic laws of physics, hygiene and citizenry to be acceptable for any housing purpose.
  • FP provides the means for generalising behaviours that one wishes to share and repeatedly reuse. You want predictability, stability and control here. This would be like being able to re-arrange your whole house such that the bedroom, the kitchen, the living area, the computer corner or the dining area could all be shuffled around in unlimited configurations without having to rebuild the whole house – unrealistic in real life, but we’re talking about software here.

By combining the two aspects, you get more ‘bang for the buck’, as they say. With my simplistic analogies, FP would be the contents of your house, and OO would be the building frame. If you do it well, you can take your house contents anywhere with you, but you usually discard the building frame or leave it behind.
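
Here is a tiny Scala sketch of that analogy, with made-up names: the pure functions are the portable contents, and the class is the building frame they are arranged in.

// FP side: pure, reusable behaviours – the "contents" you can take anywhere.
object Pricing {
  val withDiscount: Double => Double = price => price * 0.9
  val withTax: Double => Double = price => price * 1.25
  val quote: Double => Double = withDiscount andThen withTax
}

// OO side: the structural frame the product is eventually rendered through.
class Checkout(catalog: Map[String, Double]) {
  def total(items: Seq[String]): Double =
    items.flatMap(catalog.get).map(Pricing.quote).sum
}

object Demo {
  def main(args: Array[String]): Unit = {
    val checkout = new Checkout(Map("book" -> 20.0, "pen" -> 2.0))
    println(checkout.total(Seq("book", "pen"))) // 24.75
  }
}

Replace the Checkout frame and the Pricing functions move along unchanged – the contents travel, the frame stays behind.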

There is no useful reason to be dogmatic about OO vs. FP. If someone doesn’t see the benefits of either, then they should just pick what works for them, or avoid Scala. Coming from a Java development background, or facing a future with Java, Scala is a great platform to build sustainable solutions on.

Martin’s slides (warning: Slideshare might require Flash – not my wish though ;-( ):

Tails, privacy for anyone anywhere. Makes sense to me

I have often felt that virtualisation has the potential to enable more secure computing than we have seen so far. Project Tails is a good move in that direction, and is in effect a great virtualised secure solution. I am encouraged to see this happening.

I wrote before in this blog that I felt security and privacy were underserved by current technology offerings. This project lays out a vision that is very close to how I imagined it, and it looks promising.

Project Tails Logo

Macaw is just what I wanted

I get inspired when I can visually tinker with experience design. This is the reason that Macaw.co got my attention and I adopted it almost immediately. I finally found a product that allows me to work that way: Macaw.co, a recently launched rapid web UI prototyping tool. It’s got its warts, being brand new, but I like it and I expect it to do well.

I get inspired when I can visually tinker with experience design. This is the reason that Macaw.co got my attention and I adopted it almost immediately.

Macaw user interface
blank canvas in Macaw

I hailed Twitter Bootstrap when it was launched. It helped me and many others with UI prototype work. Some seemed to prefer deploying Bootstrap’s default experience with little change. I don’t like that much; I would often customise the look & feel to make it a bit proprietary. Over time I came to realise that I didn’t particularly enjoy tweaking and tuning the myriad features exposed through the LESS assets. There is nothing wrong or limiting with Bootstrap per se. Web browser development tools are also a good help for quickly trying out UI ideas. There are many online sites, such as codepen, that help you experiment with snippets and small features. These are all fine, but I just wasn’t getting inspired by any of it. What these tools lacked – that would get me excited – was the ability to just draw my idea of an entire page as I envision it, tinker only with the drawing, and then immediately get the code behind it without resorting to any extra artifices. There is an emerging bunch of tools that allow you to do that in a cost-effective manner.

I discovered that my way of tinkering with web design worked better if I could do it visually. As I examine visual elements I get more ideas and I can iterate quicker. Going back and forth to the LESS source files to tune a design was too slow to entice me. I am not a designer by education, so I do lots and lots of tweaking before I like something. As a kid I used to love drawing things. I would typically start by drawing a line or some randomly curved shape, then as I added to it some kind of shape would start to emerge, and I would finalise my drawing that way. If I ended up drawing a horse or a house, that was usually not even in my mind when I started. That was my way of playing with pen and pencil. It seems that is also how I can enjoy doing UI work.

For a long time I relied on Omnigraffle, not really a web design tool, and Pixelmator, also not a real web UI design tool. I’ve used Adobe Fireworks on and off for years. Fireworks is a nice tool but it never got me excited much. It seemed that I couldn’t find any product out there that I would want to stick to. That changed recently.

There’s a new product that allows you to visually design what you want, then get baseline HTML and CSS code that embodies it. It’s called Macaw. I saw a brief video announcing its features and immediately decided to buy a license when it shipped. I got a copy at the beginning of this month, started using it, and it felt right away like this was how I had always wanted to work with web design.

Aside from the immediate feedback, one benefit I particularly appreciate with Macaw is that the authoring assistants are intuitive and help you in just the right way. As a non-negligible bonus, it generates minimal base code that can be directly fed into an application design. Here are two radically different designs I made with it.

macaw – design prototyping example
Web UI Prototyping with Macaw

I created these artefacts just like I would in Omnigraffle, but the whole experience of manipulating the elements fits perfectly with the job: designing a web front-end. Once done, the code that Macaw generates is clean and minimal, though it does link to jQuery. So I could just take the code I get from this and embed it in the application code. This approach is more to the point, and more efficient, than say using a free HTML template downloaded from the web.

macaw generated css
macaw generated source files

Before Macaw I tried several products, many of which are online solutions. I won’t list them since this isn’t really a posting on product comparison. While lots of nice experiences are available, I was often disappointed with the code that got generated. The problem I expect to solve with these tools is to get started with a clean slate and be able to repurpose the code with minimal effort. This means that I didn’t want any dependencies in the code, and that seemed almost impossible to achieve. For this reason I couldn’t settle on anything in particular and kept to a traditional model where the UI design assets were fairly remote from the code produced to implement them. I don’t need to do that anymore; I can rely on Macaw to visually tinker with prototype designs.

Of course, when it comes to serious web crafting like making animations or complex interactions, Macaw wouldn’t fit the bill. Webflow, for example, seems to be another nice one, particularly suited to animation-rich UIs. But the produced code embeds relatively proprietary JavaScript, and the business model seemed geared towards repeat use or even hosted solutions. For my particular needs I saw these as compromises I didn’t really want. And honestly, I would just hire a professional designer if the experience were complex or needed to be very sophisticated. I am an IS&T architect who believes in all-round craftsmanship. I want to deliver the whole solution. If it needs to look spiffy and flashy, then I call out to professional designers.

Having used the product for a little while now, I am convinced that Macaw is the tool that I was missing so far. I can now deliver complete end-to-end experiences to clients with the tools that I am comfortable with.

Django CMS is the simplest and most elegant Web CMS I’ve seen lately

Recently I have done deep-dive comparisons of a lot of web content management (CMS) products. I’ve found that Django CMS is the simplest and most elegant way of authoring semantic HTML content on the web.

Recently I have evaluated a lot of web content management solutions, all of them open source and almost exclusively on Unix-like systems, and I haven’t seen anything as simple and elegant as Django CMS. I took it for a spin: it’s polished and really designed to solve fast-turnaround web publishing needs. It does that brilliantly, no question about it. I particularly like the way all the authoring tools and widgets disappear when not in use, so that you are always looking at your content exactly as it will appear when published.

One thing to note: in a lot of cases, to truly appreciate any piece of work, one must have a practical understanding of the challenges involved in making it. This is also true for software. Content management, web content management in particular, isn’t a sexy topic. However, without such systems it would be tedious to maintain an online presence, and difficult to consume a lot of the content produced daily by legions of people.

I was just going to use one of the web CMS products I’ve known for a long time to power one of my web sites. In the process, I realised that I had not done a proper re-evaluation of the available products for a while. I wanted to try that before settling on something. I thought I should go about it open-minded and include a broad selection of products. As time is precious, I decided to focus on Unix/Linux-based solutions for now, because that is also my target deployment platform.

For my purpose I went back to implementing a few other web CMS products, namely: Zotonic (Erlang based), Liferay Portal (JCR), Magnolia (JCR), dotCMS (JCR), several static publishing systems such as Pelican (Python), Jekyll and Octopress (Ruby), and of course WordPress (PHP, which powers this blog). Django CMS bests all of these in terms of simplicity and its focus on making semantic HTML content authoring a bliss. To be fair, it’s hard to compare the products I just mentioned, since each actually aims to cover a breadth of needs that may go well beyond web CMS (portals, for example). But I had narrowed my scope down to the authoring and publishing processes only, web CMS functionality alone, which makes a fairly reasonable comparison possible.

I am not yet done evaluating web content management systems; I still have on my to-do list Prismic.io (an innovative content-repository-as-a-service) and a couple of .NET-based web CMS products.

More to come on this topic sometime later. I probably have enough material to publish a small document on this subject; we’ll see if/when I get to that. But for now, I definitely maintain that Django CMS is the most polished solution for folks looking for simple and attractive web CMS software.