A Fix for Software Fixes
The problems I recently mentioned, caused by a faulty BitDefender update, reminded me of a debate that, while tackled plenty of times by plenty of people ever since patches and updates became such a frequent occurrence, never reached a conclusion. I’m referring to whether updates, particularly those that fix existing problems, should be written and released as soon as possible, accepting the risk that they’ll break something else, or whether time should be taken to thoroughly test them, forcing users to wait for a solution to their current problems for days, weeks or, in the case of particularly complex issues, perhaps even months longer than they’d otherwise have had to. In addition, what should be done about software that relies on immediate updates in order to perform properly, by which I mean, for example, antivirus or accounting software?
Yes, I’m going to start from the assumption that such updates will unfortunately be necessary, both to respond to recent developments and to fix the ever-increasing number of bugs that seem to be a given at launch. Of course, it’d be preferable to thoroughly test software during development and not release it in such a state, but the complexity requirements, the number of possible configurations, the deadlines, budgets and price targets tend to conspire against this ideal. For relatively expensive but not overly complex software developed by sufficiently large independent teams that can afford to take all the time they need before launch, bugs would be inexcusable, but that’s rarely the case, so you have to take all these factors into account when determining which flaws can be tolerated at launch and which trade-offs would be preferable in this battle between complexity, speed, cost and reliability.
Personally, I’d sacrifice speed before anything else, at least when it comes to the launch date, and possibly also to adding post-launch enhancements. This quite clearly can’t apply to patching security vulnerabilities or critical flaws that make the product unusable for some or all of its intended users, or to things such as, to stick to the above-mentioned examples, definition updates for antivirus software or the updates accounting software requires in order to keep up with the latest legislative changes, but when it comes to the original launch and to adding any enhancements that aren’t absolutely necessary, it’s certainly better to do it later, but well, than on time, but poorly. If the developer needs to delay the release in order to thoroughly test the software and fix any issues noticed during such tests, they should be allowed to do so instead of being forced to release a product that will, in effect, end up being tested directly on the users.
Past this point, it starts being extremely difficult to choose… which means it can’t be a “one size fits all” situation. No developer, publisher or regulatory body should make a decision regarding these other trade-offs and force everyone to accept it; instead they should offer choices, allowing users to pick what they’re most comfortable with. In terms of complexity and cost, that definitely means making different versions available, at different prices and perhaps also with different release dates, possibly making particularly complex software modular, allowing users to select which components they mean to use and pay only for those. And in terms of speed and reliability, it most notably means clearly specifying which releases, including patches, are thoroughly tested and which are not, and allowing people to choose exactly what they want to install and when, according to what matters most to them. In addition, when it comes to insufficiently tested security updates, the vulnerabilities they fix should be clearly specified, along with any other actions users who prefer to wait for a more reliable patch may take to protect themselves in the meantime.
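To make that a little more concrete, here’s a minimal sketch, in Python, of the kind of metadata each release could carry; the names (UpdateInfo, thoroughly_tested, interim_mitigations and so on) are purely my own invention for the sake of illustration, not something any existing update system actually exposes.

from dataclasses import dataclass, field
from typing import List

# Hypothetical metadata a publisher could attach to every update so that
# users can decide for themselves whether to install it now or wait.
@dataclass
class UpdateInfo:
    version: str
    critical: bool               # fixes a security hole or a blocking flaw
    thoroughly_tested: bool      # went through the full test cycle, not a rushed hotfix
    description: str             # what the update actually changes
    fixed_vulnerabilities: List[str] = field(default_factory=list)  # e.g. advisory identifiers
    interim_mitigations: List[str] = field(default_factory=list)    # what users who wait can do meanwhile

# Example: a rushed security hotfix, published with its risks spelled out.
hotfix = UpdateInfo(
    version="2.4.1-hotfix",
    critical=True,
    thoroughly_tested=False,
    description="Closes a remote code execution hole in the file scanner.",
    fixed_vulnerabilities=["(placeholder advisory identifier)"],
    interim_mitigations=["Disable scanning of e-mailed archives until the fully tested 2.4.2 release is out."],
)

With something like this published alongside every release, a user who values reliability over speed at least knows exactly what they’re postponing and what to do in the meantime.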
Of course, this takes us right back to the matter of choice and the fact that so many users aren’t fit to use a computer, much less to be trusted to make such choices on their own, which currently leads us down this rotten path of dumbing everything down as much as possible, kicking aside any needs or desires expressed by anyone who actually knows what they want. However, even leaving aside the fact that those who’d actually deserve to be offered the proper tools for what they want to do shouldn’t suffer because of those who should get around to learning before actually doing anything, what seems to be missed is that precisely these clueless users are the ones least likely to know the first thing about dealing with such issues when they pop up, as they invariably do, and then they’ll swarm the support staff with confused and often vehement complaints because they don’t know any better, which makes it even more difficult to isolate and ultimately fix the actual problem.
The point is that, after first making sure that developers and publishers don’t promise what they don’t know they can deliver and then, once achievable public goals have been set, that they thoroughly test the software and only release it when it’s actually done and, for lack of a better term, safe to use, users should be offered the necessary information and the right to make choices, including bad ones. Have the default settings install only critical updates and thoroughly tested non-critical ones, to keep the number of potential issues to a minimum, but also tell people exactly what each update does and how thoroughly it has been tested, and allow them to select precisely what they want to install. On top of this, while there should of course be some default settings that ensure the program’s functionality in the most common situations, and the settings that are particularly tricky to handle correctly may perhaps be hidden by default, users should easily be able to unlock as much as possible in order to tailor the experience and functionality as they see fit and, in case of problems, also to apply or perhaps even devise workarounds before thoroughly tested patches become available.
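Sticking with the hypothetical UpdateInfo metadata sketched earlier, the default rule I have in mind would be trivial to express, and just as trivial for users to override; again, this is only an illustration of the idea, not how any particular updater actually works.

def should_install_by_default(update: UpdateInfo) -> bool:
    # Default policy: critical fixes always go in; everything else waits
    # until it has been through the full test cycle.
    return update.critical or update.thoroughly_tested

# Users remain free to swap the default for whatever suits them:
def cautious_policy(update: UpdateInfo) -> bool:
    # Only thoroughly tested updates, even if that means living with a known flaw for a while.
    return update.thoroughly_tested

def eager_policy(update: UpdateInfo) -> bool:
    # Everything, as soon as it’s published, accepted risks and all.
    return True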
Other than those who think they’re far smarter than they really are and would start tweaking and using experimental features despite having no clue what they’re doing, and who should be ignored anyway, I really don’t see how such an approach could cause problems for anyone. It would still allow developers to create complex software and offer new features even after launch while also taking advantage of the skills of their more knowledgeable users, who may point out things they might otherwise have missed, but at the same time it would also give users both the freedom of choice the PC was known for until not so long ago and the option of aiming for the highest possible reliability by easily protecting themselves from potentially harmful updates.



