Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

Published · revised (v6, latest) · 9-minute read · 5 revisions

On Deutsch and Naval

A few days ago, on Feb 11, Naval Ravikant interviewed David Deutsch. The interview is titled ‘David Deutsch: Knowledge Creation and The Human Race’, and it’s mostly about his book The Beginning of Infinity (BoI). Here are my thoughts on the interview as I quote from the transcript. I suggest you read (or listen to) the interview in full first because I’ll be skipping some parts.

Early on, Naval says:

There’s a lot of counterintuitive things in [your book The Beginning of Infinity]. You’re skewering a lot of sacred dogmas. Sometimes you do it in passing with a single sentence that takes me weeks to unpack properly.

Others have made similar comments along the lines of ‘there are passages in the book that could have been chapters, and sentences that could have been passages’. I used to think of such comments as compliments – I now think of them as criticisms. If you write something that takes readers “weeks to unpack properly”, that’s not good. If you write a passage about something but should have dedicated a whole chapter to it, then you didn’t do it justice and the reader is going to miss out on a lot of information you could have provided but chose not to.

A little further down, Naval says:

We want to introduce [laymen] to the principles of optimism, The Beginning of Infinity, what sustainability means, and anthropomorphic delusions.

I think he means anthropocentric delusions.

By the way, Naval talks a lot in this interview. Just from eyeballing it, Naval’s and Deutsch’s word counts are roughly equal. Unless an interviewer feels the need to help the interviewee out in some way, I think it’s best to let the interviewee do most of the talking.

Further down, Naval asks:

What are humans, how are they unique, how are they exceptional, and how should we think of the human species relative to the other species that are on this planet?

Now Deutsch finally gets to speak. He says:

Every animal is exceptional in some way. Otherwise, we wouldn’t call them different species. There’s the bird that can fly faster than any other bird, and there’s the bird that can fly higher than any other, and so on.

I’m not a biologist, but I don’t think that every animal is exceptional in some way. Most animals are ~100% like other members of the same species. Nor are species based on exceptionality; they are a way of classifying organisms. According to Wikipedia, a “species is often defined as the largest group of organisms in which any two individuals of the appropriate sexes or mating types can produce fertile offspring, typically by sexual reproduction” (links removed). The article lists other ways of classifying organisms, but they don’t involve exceptionality either.

Deutsch doesn’t think every animal is exceptional either. Consider his derisive remarks about cows being “a ridiculous animal” whereas “you are a valuable person”, both of which are true. So why does he now say every animal is exceptional? To appease animal lovers?

Deutsch then says:

It’s intuitively obvious that we are unique in some way that’s more important than all those other ways.

I disagree that it’s “intuitively obvious”. If it were, Deutsch wouldn’t have needed to dedicate a chapter to it (chapter 3). Lots of people think humans are not unique and are downright irrelevant in the cosmic scheme of things. Anti-anthropocentrism is common. And one thing that, on Deutsch’s view, makes animals unexceptional is that they’re not sentient, but he doesn’t mention that in this interview. That would have been a great, controversial talking point – not only far from obvious, but also a spark for further debate among readers and listeners.

By lying about animals being exceptional, Deutsch muddies the distinction between animals and people, a distinction he makes a significant effort to explain in his book. He’s hurting his own argument.

Then Deutsch says:

As I say in The Beginning of Infinity, in many scientific laboratories around the world, there is a champagne bottle. That bottle and that fridge are physical objects. The people involved are physical objects. They all obey the laws of physics. And yet, in order to understand the behavior of the humans in regard to the champagne bottles stored for long periods in fridges—I’m thinking of aliens looking at humans—they have to understand what those humans are trying to achieve and whether they will or won’t achieve it.

The purpose of the champagne bottle is easily lost on the reader/listener. It’s for the celebration of a major discovery (as opposed to, say, being the object of studies). In BoI ch. 3, Deutsch writes:

[The cork] is going to be removed from the bottle if and when SETI succeeds in its mission to detect radio signals transmitted by an extraterrestrial intelligence.

The cork is therefore considered a proxy for such a discovery, as Deutsch explains in the book. (In fairness to Deutsch, maybe Naval just edited the interview poorly.)

A bit further down, Deutsch says:

[Alien observers] need [general relativity] to explain why this one monkey, Einstein, was taken to Sweden and given some gold.

Einstein received the Nobel Prize for his discovery of the law of the photoelectric effect, not for general relativity.

Later on, Naval says:

[A]s far as we know, there are only two systems that create knowledge. There’s evolution and there are humans.

That’s a fudge. While the creation of all types of knowledge involves evolution, there are three types of evolution that we know of so far. First, there’s biological evolution (what Naval refers to just by “evolution”), the mechanisms of which were first approximately explained by Darwin and then improved upon by the neo-Darwinian synthesis. Second, there’s memetic evolution as discovered by Richard Dawkins, which centers around the idea of cultural replicators. Humans have memes, and some other animals, such as some other apes and cats, have memes, too. Third, there’s the evolution of ideas inside the human mind, as discovered by Popper, which involves conjectures and refutations. (I have written more about its own neo-Darwinian ‘synthesis’ through the introduction of replication here.)

In other words, we know of at least three types of evolution, and they can all create knowledge, albeit different kinds. (Deutsch would argue that only humans can create explanatory knowledge in particular – I’m not sure about that; I think biological evolution sometimes creates explanatory knowledge, too.)

Deutsch says further down:

[B]iological evolution can’t reach places that are not reachable by successive improvements, each of which allows a viable organism to exist.

Again, I’m no biologist, but I don’t think that’s true. Imagine genes that currently code for some organism that optimally functions to help its genes replicate further. That is, the genes have reached some local optimum. I see no reason why any deviation from that optimum must immediately be deleterious. Technically, if every step of the way had to be an improvement, the genes wouldn’t even be allowed to ‘stand still’ for one replication cycle, but they do (eg sharks are said not to have mutated meaningfully in millions of years; it’s not uncommon for evolution to get ‘stuck’ in local optima). I could imagine mutations that aren’t quite as good as the starting point, but still spread a bit. Then they undergo more mutations and end up being better at spreading than the starting point. Not everything has to be successive improvement in biological evolution (or any other kind of evolution).

For example, imagine that the horn of a rhinoceros helps rhinoceros genes spread optimally when its length is five inches (I’m making that number up). Imagine also that the specific interplay of environment and rhinoceros genes is such that any longer horns are immediately deleterious while slightly shorter horns are only slightly below the optimum but not deleterious. That means genes that code for slightly shorter horns – say, four inches – will still be able to spread, albeit slightly less so (all else being equal). Then say that there’s another local optimum at three inches, and that this optimum is greater than the five-inch optimum. Since the four-inch intermediate step isn’t deleterious, the genes can reach the three-inch optimum relatively safely (again, all else being equal), and then those genes will spread better than the original ones. Another example is vestigial appendages, ie now-mutated limbs, say, that used to fulfill some function but don’t anymore. Later they can turn into a new feature that helps the genes spread better than the original function did. In short: viability does not require improvement.
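The local-optimum point can be sketched in a few lines of code. This is a toy, one-dimensional fitness landscape with made-up numbers (echoing the invented horn lengths above – nothing here is biological data): a strict hill climber that only accepts improvements stays stuck at the five-inch local optimum, while a search that is merely required to stay viable at each step can pass through the slightly worse four-inch horn and reach the fitter three-inch optimum.

```python
# Toy one-dimensional fitness landscape over horn length in inches.
# All numbers are invented for illustration; 6" is deleterious (fitness 0).
FITNESS = {2: 0.6, 3: 1.2, 4: 0.8, 5: 1.0, 6: 0.0}

def hill_climb(start):
    """The 'successive improvements only' view: move to a neighboring
    length only if it is strictly fitter; otherwise stop."""
    current = start
    while True:
        neighbors = [n for n in (current - 1, current + 1) if n in FITNESS]
        best = max(neighbors, key=FITNESS.get)
        if FITNESS[best] <= FITNESS[current]:
            return current
        current = best

def best_reachable(start):
    """Allow any *viable* step (fitness > 0), improvement or not, and
    report the fittest length reachable that way."""
    seen, frontier = {start}, [start]
    while frontier:
        current = frontier.pop()
        for n in (current - 1, current + 1):
            if FITNESS.get(n, 0) > 0 and n not in seen:
                seen.add(n)
                frontier.append(n)
    return max(seen, key=FITNESS.get)

hill_climb(5)      # stuck at the 5-inch local optimum
best_reachable(5)  # reaches the fitter 3-inch optimum via viable 4-inch horns
```

The second function just explores everything connected by viable intermediates; the point is not the search algorithm but that relaxing ‘every step must improve’ to ‘every step must be viable’ makes the better optimum reachable.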

Deutsch says that’s only possible for human thought (he offers the following in contrast to biological evolution):

A thinking being can create something that’s a culmination of a whole load of non-viable things.

Deutsch continues:

Out of all the billions and billions of species that have ever existed, none of them has ever made a campfire, even though many of them would’ve been helped by having the genetic capacity to make campfires. The reason it didn’t happen in the biosphere is that there is no such thing as making a partially functional campfire; […].

Why couldn’t some genes spread better by coding for organisms that can rub stones together in such a way that they create sparks (but not a fire) which then, say, deter predators? Why couldn’t some genes code for something else that then also leads to rubbing stones together to create sparks (a phenomenon Deutsch calls reach, see BoI ch. 1)?

I’m open to the idea that biological evolution has some inherent limit that human thought does not have, but I don’t see why that limit lies with campfires. I think this issue requires a more rigorous explanation along the lines of which classes of transformations biological evolution can effect and why vs. those it cannot and why not (so maybe Deutsch’s Constructor Theory, which they talk about later on, has the answer).

Maybe underlying this part of the discussion is the Popperian notion that we humans can let our theories die in our place whereas organisms are the embodiment of some genetic theories and die when those theories are (sufficiently) bad. The difference there is that there is no evolution happening within a single non-human organism, whereas there is lots of evolution happening within a human mind. But using that to argue that every step in biological evolution needs to be an improvement is a fudge: it’s not ‘fair’ to compare a static organism (‘static’ in the sense that all the ‘theories’ in its genes remain unchanged in its lifetime) with a human (whose ideas do change). The more apt comparison is that of the biosphere and a human mind, because those are both pools of replicators. Then you can see that neither has to have only successive improvements.

Naval then says:

Related to that, I had the realization after reading your books that eventually we’re likely as humans to beat viruses in a resounding victory […].

No. This isn’t a valid application of probabilities. We’re not “likely” to do it.

Naval also says:

You define wealth in a beautiful way. You talk about wealth as a set of physical transformations that we can affect.

I think he means ‘effect’. He then says:

So as a society it becomes very clear that knowledge leads directly to wealth creation for everybody.

Not necessarily, but in the right culture, it can. If you live in a society that will kill you for thinking of something new or resisting social dogma (or drug or beat you, as is often still the case with children), your new knowledge is more of a burden on you.

A bit further down, Naval says:

This now gets into the realm of people demanding that if you’re going to claim that new knowledge will be created, you have to name that knowledge now. Otherwise it’s not real.

Yes. This is a serious problem and a manifestation of pessimism. It’s one of the reasons why people are so averse to the idea of stateless societies (or even just societies in which roads are built privately). They will only accept Deutsch’s claim that problems are soluble (BoI ch. 3) if they are shown all the solutions right away, which places an unfair and impossibly hard burden on optimists.

On the topic of AGI (artificial general intelligence), Naval starts by saying:

[…] I liked how in The Beginning of Infinity you laid out good explanations, because that gets to the heart of what creativity is and how we use it.

Some people use creativity to create good explanations some of the time. Creativity enables people to come up with good explanations, but it also enables them to come up with reasons not to do that. I think people should spend more time explaining irrational minds so that they avoid the trap of explaining only rational thought.

But you can use the ability to create good explanations in the negative: if some software does not have this ability, then it can’t be an AGI.

Naval does this a bit further down:

[O]n the other side, I hold up the criteria, “Can it creatively form good explanations for new things going around it?”

Why do so many people have trouble using ‘criterion’ and ‘criteria’ with the proper number? He holds up one criterion. I’ve frequently observed the same problem with ‘phenomenon’ and ‘phenomena’ – people keep getting it wrong.

Deutsch says further down, back to the topic of AGI:

You are not going to program something that has a functionality that you can’t specify.

My tentative solution to this problem is that creating AGI involves writing a conventional computer program which exhibits creativity as an emergent side effect. Like, people still have a regular computer program running on their brains – regular in the sense that 1) it’s a set of instructions the brain, like any other computer, executes and from which it cannot deviate, and that 2) this program has functionality you specify in advance. The knowledge-creation part can only be an emergent phenomenon since, as Deutsch has argued, if you already built into the program the knowledge it was supposed to come up with itself, then you’re the creator of said knowledge, not the program. You can find related articles here (this is the one about creativity as an emergent side effect) and here.

Deutsch then says:

You have to have a very jaundiced view of yourself—let alone other people—to think that what you are doing is executing a predetermined program. We all know that we are not doing that.

If my solution is right, there is no conflict between running a predetermined program (what else could it be?) and being creative (again, if that program gives rise to certain emergent properties at runtime).

Then Deutsch asks:

Has anyone tried to write a program capable of being bored? Has that claim ever been made? Even a false claim?

Many years ago I heard a (presumably false) claim that a computer had become annoyed with something, but I forget the details. Not bored, but I suppose it’s similar.

Naval says further down:

There was a big controversy on Twitter because one of the guys working in AGI who was fired from Google said, “Yes, they’ve actually created AGI and I can attest to it.” People were taking it on his authority that AGI exists. Again, that’s social confirmation. That tells you more about the person claiming there’s AGI and the people believing that there’s AGI as opposed to there actually being AGI.

I agree. The Google guy probably has no idea what AGI is, and I doubt he actually worked on it. The requisite philosophical progress just hasn’t happened yet, and not enough people know about Popper.

And Naval says:

If actual AGI existed, its effects upon reality would be unmistakable and impossible to hide, because our physical landscape and our real social landscape would be transformed in an incredible way.

I mean, I don’t advocate it, but you could disconnect it from the internet and imprison it in a computer that’s locked in some room deep underground in one of Google’s ultra-secret labs or whatever to avoid those effects and keep it a secret. But I doubt that’s happened.

Getting on the topic of education, Deutsch says a bit further down:

A hundred years ago, education of every kind was much more authoritarian than it is now; but still we’ve got a long way to go.

I wonder if social pressures around academia have gotten worse over the past few decades.

On the topic of good explanations, Naval says:

Falsifiability—I know that sounds like a very basic criterion.

Here he suddenly uses the right number for ‘criterion’.

Naval then says further down:

We have narrowed down on a new point here that has not been explicitly made before, which is that it’s the criticize-ability that is important, not necessarily the testability […].

Hans Albert previously made that point by criticizing what he calls ‘immunity against criticism’.

Deutsch says further down:

Those are criteria that come up when trying to think more precisely what testable means.

Here he uses ‘criteria’ correctly.

Then they move on to quantum physics. Although everything up to now was pitched at laymen, they suddenly get into topics the vast majority of readers/listeners won’t understand, without giving any real explanation or background to make it easier for them. Why?

Naval started the interview by saying:

My goal isn’t to do yet another podcast with David Deutsch. There are plenty of those.

Isn’t that what this podcast is? When will Deutsch present something new?

