Dennis Hackethal’s Blog
My blog about philosophy, coding, and anything else that interests me.
Error Prevention: Error Correction’s Forgotten Brother
Policies which led to rising price levels under Alexander the Great have led to rising price levels in America, thousands of years later. Rent control laws have led to a very similar set of consequences in Cairo, Hong Kong, Stockholm, Melbourne, and New York. So have similar agricultural policies in India and in the European Union […].
Correcting an error means fixing it after it’s already happened. That usually costs time or money or both. It takes some effort. It’s better to prevent an error when you see it coming. It’s a bit like correcting the error, but before it even happens.
Critical rationalism emphasizes the importance of error correction. That’s good! But it’s only half the battle. If known errors get repeated, then even if they get corrected each time, that’s not good enough. It’s better to stop them in their tracks. An error should need correcting only once at most – after that, the correction should ‘stick’ so that the error is never repeated as long as it’s still considered an error. And if something is known to be an error from the get-go, it generally shouldn’t even be tried once – not as long as it’s considered an error.
For example, we’ve known for centuries that rent controls are a bad idea because they achieve the opposite of their intended effects: they discourage developers from building new properties, thus amplifying the scarcity and rising costs they were intended to reduce. But politicians keep passing rent-control laws anyway because they get voter support on such platforms. It’s collective amnesia. Politicians may claim not to have known that rent control would be a mistake, but even if true, that’s not a valid excuse because it’s their responsibility to know such things. After several years, emptied renter wallets, and increased homelessness, such errors may get corrected by voting the corresponding policies and politicians out of office – but the errors shouldn’t have been repeated in the first place. Worse, some see rising costs as reason to pass more aggressive rent control rather than remove it, and so the error gets entrenched. (If you personally disagree that rent control is a mistake, this particular example isn’t the point. Consider that if rent control is a known mistake, then it shouldn’t be repeated.)
The problems with this kind of collective amnesia are laid bare when applied to a single individual. Imagine someone who repeatedly makes the same mistakes. Maybe you even know someone like this in real life: they can’t hold down a job or they can’t quit alcohol. They do something they know to be a mistake – or would know, if they were being honest with themselves. This is not what a healthy, properly functioning mind does. There’s something arrested about such people. They ‘learn’ to do this in school, where they ‘learn’ to act against their own judgment and coerce themselves to do things they don’t want to do. They do something a part of them disagrees with. Likewise, we collectively do things all the time which some of us disagree with. That’s not okay if it affects those who disagree. But the government forces us to continue associating anyway. It would be one thing if rent controls only applied to those voting in favor of them and the rest could go their own way, but the government doesn’t allow freedom of association in that sense.
Critical rationalism is right to praise Western democracies for their defining ability to correct political errors without violence. But that isn’t enough. We need institutions that prevent the introduction and repetition of known errors as well. This is another way in which Popper’s criterion of democracy is insufficient. Suing and/or voting can work to correct errors, but those ways are only ‘reactive’, not preventative. They cost time and money we could save. Lacking the ability to prevent known errors is an error in and of itself. Bills should not get passed if they include policies that are already known to be errors. Ideally, there would be a political institution that mechanically blocks such bills.
Consider the fact that the US elected Biden, a senile man who could barely string a coherent sentence together. That our country didn’t collapse after this disastrous voter decision is a testament to how strong its error-correction institutions are – but that he was elected, no, nominated in the first place is a testament to how severely lacking our error-prevention institutions are. (Once again, if you disagree with this particular example, I’m sure you can think of some other American politician who fits the bill.)
You don’t always have to try something out to know it’s an error. You can know that in advance, from theory, without ever trying it.1 We need institutions preventing the repetition of known errors in our personal lives as well. If somebody makes a known error over and over, there’s something wrong with them, even if they correct the error each time.
Of course, we can be mistaken about some idea being an error. If new reasoning is presented or new evidence comes to light in such a way that all known criticisms of an idea are addressed/counter-criticized, then we shouldn’t block it after all. But we shouldn’t just say, ‘trust me, it’s different this time’. That’s not good enough. Veritula already reflects all of this: basically don’t act on ideas that have even a single unaddressed criticism.
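The rule is mechanical enough to sketch in code. The following is only an illustrative sketch of the decision rule as I’ve stated it – the class and function names are hypothetical, not Veritula’s actual data model: an idea is actionable only if every criticism of it has been addressed.

```python
# Hypothetical sketch of the rule 'don't act on ideas that have even a
# single unaddressed criticism'. Names are illustrative, not an actual API.

from dataclasses import dataclass, field

@dataclass
class Criticism:
    text: str
    addressed: bool = False  # True once counter-criticized or otherwise resolved

@dataclass
class Idea:
    text: str
    criticisms: list = field(default_factory=list)

def actionable(idea: Idea) -> bool:
    """An idea with even one unaddressed criticism must not be acted on."""
    return all(c.addressed for c in idea.criticisms)

rent_control = Idea("Pass rent control",
                    [Criticism("Discourages construction of new housing")])
print(actionable(rent_control))   # False: one unaddressed criticism remains

# If new reasoning genuinely addresses the criticism, the idea unblocks:
rent_control.criticisms[0].addressed = True
print(actionable(rent_control))   # True: all known criticisms addressed
```

Note that ‘trust me, it’s different this time’ doesn’t flip the `addressed` flag – only an actual counter-criticism does.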
Don’t confuse error prevention with the precautionary principle. That principle says to avoid everything not known to be safe. Basically, it says not to do anything that might be an error. I’m saying not to make known errors. That’s different. Nor am I saying not to make any errors, period. That would mean never learning. Nor am I saying to offer others unsolicited advice or even get involved in their lives to prevent them from doing something you think is a mistake. If it doesn’t affect you, then making that mistake is their prerogative. I’m merely saying you should not make errors from which you have already learned, because that would generally be pointless.
Some idea may survive all theoretical criticism, ie criticism you throw at it before you act on it, in which case you should act on it. If it’s a risky idea, you may decide it’s worth the risk. But it may not survive practical, ie experimental criticism – new shortcomings you learned as a result of acting on the idea.
In some cases, it’s worth repeating an error on purpose, in isolation and in a controlled environment, as a learning experience. Learning by doing is different from learning from theory, and both have their place. When it comes to certain skills, some errors even have to be repeated as part of the learning experience: learning to ski, say, necessarily involves falling on your butt a few times even though others have already fallen on their butts when learning to ski. But that’s not a universal law applying to all progress. Not all errors must be repeated.
Even in skill-based pursuits like skiing, bad learning methodologies can be replaced with better ones, and then the bad ones don’t need to be repeated. If somebody is learning to ski and keeps repeating the same errors, day after day, week after week, and after one year of practicing he still falls on his butt all the time, something is wrong. He should find a better approach. (Really he should have tried a better approach after only a week; if it takes him a whole year to change his approach with that track record, that is an error in and of itself.) But many politicians are like this skier: they simply forget or ignore that the policy they propose has been tried dozens of times in the past and failed. So they metaphorically keep falling on their butts. They’re stuck and should work on getting unstuck.
In short, error correction is only half the battle. Tons of resources are wasted on correcting errors that shouldn’t have been made in the first place. Critical rationalism is right to emphasize error correction but should place more emphasis on error prevention – though without restricting the ability to try out new things, as long as they survive criticism.
Simply put, don’t do something you do know or should know to be a mistake!
-
Compare similar remarks in David Deutsch’s book The Beginning of Infinity about how scientific experiments usually only happen after a theory survives a long chain of pre-experimental criticisms. Scientists have the proper epistemological stance here: they wouldn’t send an expedition to Mt. Olympus to test the prediction that Zeus lives there. Instead, they reject any theory making this prediction for invoking the supernatural. ↩
What I’m advocating is merely the adoption of this same stance in both politics and our personal lives – and presumably everywhere else. At least I cannot think of any area that wouldn’t benefit from not acting on a theory, ie not running an experiment to test the theory, if it doesn’t survive non-experimental criticisms first. Here, ‘acting on’ is merely a loose generalization of running an experiment (I say ‘loose’ since the mere ‘acting on’ a theory may not always be critical, whereas scientific experiments are critical by design).