My blog about philosophy, coding, and anything else that interests me.
When I was a child, I asked my mother why people make mistakes. Instead of answering the question, she scoffed at it – presumably because she didn’t take it literally and reinterpreted it to mean something like ‘people shouldn’t make mistakes’ or ‘how can people avoid mistakes?’. But that’s not what I meant. I wanted to understand, literally, why people make them. I wanted to understand their origin and the nature of human fallibility.
Little did I know that I was asking a key epistemological question, one that not only explains fallibility, but also our potential for progress. And key epistemological questions are relevant to the field of artificial general intelligence.
To answer the question, let’s take a brief excursion into the animal kingdom. Sharks are said to have retained a roughly identical genetic makeup for a very long time. The same is said of some kinds of bacteria. That must mean the genes of both sharks and those bacteria have discovered far-reaching solutions to past and novel problems alike. They’re well adapted to a wide range of changing circumstances – maybe even certain circumstances they have never encountered before – but of course, they’re not adapted to dealing with all kinds of circumstances.
For example, billions of years from now, the sun will expand into a red giant and scorch the earth. A huge asteroid could strike the earth well before then. Either event would devastate the earth’s surface. Sharks with roughly the same genetic makeup as today will have no way of dealing with either challenge, so they will go extinct. Less dramatically, sharks cannot even survive on land. For sharks never to go extinct without their genes ever creating any new knowledge, no matter how circumstances change, they would have to have found, in advance, the perfect solution to all possible problems.
The same would need to be true of people. To never make any mistakes, they’d need to have already found the perfect solution to all possible problems they might encounter.
It is intuitively clear that no such perfect solution can exist. But why not, exactly? Maybe there’s a simple formula people could follow that would always lead them to success. Maybe we just haven’t discovered that formula yet.
Asking why such a perfect solution cannot exist is essentially the same as asking why people inevitably make mistakes. I believe the answer is this: problems are unpredictable.
To be sure, some problems can be predicted with some confidence. For example, many (most?) married couples will go through a divorce. And even if they stay together till death do them part, they will get into some fights. We may not know in advance what those fights will be about, but we know they’ll get into some.
Other problems have a different character: they’re utterly unpredictable. For example, before the invention of computers, people could not have known that hacking could become a problem. Before the discovery of fossil fuels, people could not have known that tyrants may one day try to make their consumption more difficult. And so on. So, while not every particular problem is unpredictable, problems in general are, and whether some unpredictable problem will one day become predictable is itself unpredictable.
Both kinds of unpredictability of problems follow directly from 1) the deep unpredictability of the growth of knowledge and 2) the fact that all such growth comes with new, better problems. David Deutsch has written about these two core epistemological facts in his book The Beginning of Infinity. In short, if you can’t predict new knowledge, you also can’t predict the new problems it will cause.
A defensive ‘strategy’ against new problems could then be to avoid the creation of new knowledge. (As Deutsch has written, this was the dominant strategy for much of history, and it’s making a comeback in what’s called the precautionary principle, which says to avoid everything that’s not known to be safe.) If the creation of knowledge is unpredictable in principle and leads to unpredictable problems, maybe those problems can be avoided by not creating knowledge – or so it could be argued.
This approach is a recipe for disaster because it leads to stasis. An individual or society bent on never creating new knowledge would be even less able to predict, let alone deal with, new problems, which can still arise because the creation of knowledge is not the only source of new problems. As Deutsch writes in chapter 17 of the referenced book:
There is a saying that an ounce of prevention equals a pound of cure. But that is only when one knows what to prevent. No precautions can avoid problems that we do not yet foresee. To prepare for those, there is nothing we can do but increase our ability to put things right if they go wrong. Trying to rely on the sheer good luck of avoiding bad outcomes indefinitely would simply guarantee that we would eventually fail without the means of recovering.
I think it’s not quite right that “[n]o precautions can avoid problems that we do not yet foresee.” Some precautions may have some limited reach into the unforeseen, to invoke another Deutschian concept (chapter 1). But his general point stands.
Whatever you do, you face only two alternatives: you can either create new knowledge or you can decide not to. There is no third way. In both cases, you will run into new problems, and in both cases it’s due to a lack of knowledge.
In chapter 3 of that same book, Deutsch carves into stone two epistemological insights: that “problems are inevitable” and that “problems are soluble”. In light of the above discussion, I suggest adding a third stone tablet: problems are unpredictable.
Because problems are unpredictable, our knowledge, though always improvable, will forever be less than ideal in some ways and horribly inadequate in others. That’s why any hypothetical ‘formula for success’ cannot exist. And that’s why people make mistakes.