Robust systems

We are passionate advocates of robust systems, but we have a particularly broad definition of robust. We don’t mean merely error-free or secure or fail-safe. You have to start there, and you could spend a lifetime making something work error-free, but that’s not the full definition. Robustness isn’t just about uptime or predictability. It’s also, and perhaps equally, about whether the system – financial, operational, technological, whatever – generates confidence or erodes it.

And that has almost as much to do with how people are introduced to the system, over what period of time and under what conditions, and whether they come to an understanding of it, as it does with coding, configuration, and business rules.

You can take the same system and deploy it in an infinite number of ways, and in some companies it will work fabulously while in others it will fail miserably.

A system is more than technology or process: it is everyone’s understanding and acceptance of how things work and why they work that way. Users have a deep need to feel comfortable with how things work, particularly if their job depends on the system’s output, and they will amplify any confusion. You can succeed with a system that doesn’t do everything people want. But if it does things that are unexpected or inexplicable – meaning inexplicable to them, not to you – that’s a serious problem. Conversely, anything that builds confidence early on is worth its weight in gold, since people are the most expensive component.

A robust system isn’t one that never deviates from spec. It is one that users have confidence in. They don’t have to like it for it to succeed, but it does need to generate confidence.