
Do Explain Episode 11 — A Window on Intelligence, with Dennis Hackethal — Transcript

The following is an automatically created transcript of the 11th episode of the podcast Do Explain, which originally aired here. Since this transcript was created automatically, it contains mistakes and reads like the audio recording.

Introduction: Today I’m speaking with Dennis Hackethal for the second time on “Do Explain.” Dennis is a software engineer and intelligence researcher in Silicon Valley, California, and last time I had him on we talked about the alleged dangers of artificial general intelligence (AGI), among other things. Today, he’s back to talk about his new book “A Window on Intelligence,” an exciting and original read about the philosophy of people that makes a powerful case for why we need a unification of philosophy and software engineering to continue making progress in various fields. In our conversation, we go over many of the book’s central ideas, such as how research programs for developing truly intelligent programs aren’t making progress, and how to fix it; how intelligent beings evolved biologically, and why, despite appearances, animals are not intelligent or conscious. We also discuss how psychotherapy is essentially a software-engineering problem, and how we need to think differently about space travel to eventually colonize other planets. Our conversation only gives a broad overview of the book and Dennis’s thinking, so I strongly encourage you to buy “A Window on Intelligence,” if you want to explore the ideas properly. It is now available for purchase on Amazon and soon on other platforms like Apple Books as well. Talking to Dennis is always fun and mind-expanding, so relax your minds and get ready: here’s Dennis Hackethal.

Christofer Lövgren: Alright, I’m here with Dennis Hackethal. Dennis, welcome back on the podcast.

Dennis Hackethal: Hey, it’s great to be back.

C: Now, I have to say that while I haven’t read all of your book yet, what I’ve seen so far has gotten me really excited. So, I wanted to be the first to pre-order a signed copy of it right after we’re done today, if you don’t mind.

D: [Laughs] That’s great, I’ll happily oblige.

C: So, I thought we could start with something you address in the book that we already touched on slightly in the first episode, namely the… the status of modern AI research. So, if you could outline what is currently going on there, and why it is problematic, especially in the context of creating AGI — artificial general intelligence.

D: Absolutely. So, the underlying theme of the book is that there are many philosophical problems that we won’t solve, or at least will have a hard time solving, unless we investigate them under the lens of software engineering. One example of this is, you know, the question of whether animals are conscious and intelligent, or the mind-body problem. And then there are also problems that… there are many problems in software engineering that I claim we won’t solve unless we use knowledge of philosophy. And one of these problems is the problem of how to build AGI. That’s primarily a philosophical problem right now. So what we find then, is, if we take this approach, and we use knowledge of philosophy to evaluate today’s intelligence research, when we do that, we find a few problems. One problem is… the first overarching problem, I should say, is there is a crucial difference between what I call “narrow AI,” which is what the industry is currently pursuing, and what you’ve pointed out is “AGI,” which is artificial general intelligence. So…

C: Right.

D: There’s a crucial difference there because a narrow AI is not different from any other computer program that we’ve written so far, and it’s not intelligent. And I’ll get to why that is in a moment. So, when I say “narrow AI,” I’m speaking of programs like self-driving cars, or chess-playing programs, text-prediction systems, virtual assistants on our phones and all that kind of stuff. But an AGI would really be intelligent, it would be a person, so it would be creative, it would be conscious, it would be able to think and suffer, be humorous and all that kind of stuff. So, the first problem is just that, current intelligence research has just sort of forgotten about this distinction. Or they’re not aware of it and they’re not pursuing AGI, which was the original goal. The second overarching problem that we find is that, you know, admittedly narrow AI has great reach, and it has more reach than some of the programs we’ve written before, but… closely tied to the distinction between AI and AGI there’s a crucial distinction between what’s called the “inspiration phase” and the “perspiration phase,” and this distinction goes back to the inventor Thomas Edison. So, the inspiration phase, that’s where you come up with new knowledge to solve a problem, and that takes creativity. And the perspiration phase is when you execute that knowledge, and that takes no creative effort. And so the reason this distinction is important is that it gives us a criterion for judging whether a program is intelligent. If it lives in the perspiration phase, like all narrow-AI programs do, then it’s not intelligent. But if it lives in the inspiration phase, if it can create knowledge, then it is intelligent. And…

C: Mhmmm.

D: …we’ve just never built programs that live in the inspiration phase, unfortunately; at most, we’ve maybe built some to an approximation, but that is what’s required to build an AGI, because that’s the phase that AGI would live in. And then the third overarching problem with narrow AI is that the field is just beset with philosophical misconceptions.

C: Right.

D: Some of these, unfortunately, go back to Alan Turing, the father of computer science. As great as the contributions were that he made to computer science as a whole, he led researchers down blind alleys. Uh, for example, he said that what we want is machines “that can learn from experience.” Now, with, with just a little knowledge of philosophy, you can immediately tell that that cannot work, because that’s an empiricist mistake. And, the fields of machine “learning,” and with it “deep learning” or whatever other buzzwords they use, by definition, uh, they make that empiricist mistake, too. And so narrow AIs are said to “learn” something from experience, that is, by ingesting data. But we know from epistemology that you cannot create new knowledge by ingesting data. In fact, we know that an AGI could create knowledge without any inputs, and I think research programs should take that into account as a constraint, because it helps in software engineering to only focus on the thing that is absolutely necessary and not worry about extra stuff. I think one of the main problems with narrow AI research is that the only process that we know of that can create knowledge, which is what an intelligent program would need to do, is evolution. So, any effort that is not spent on understanding evolution must be futile.

C: That’s interesting. Alright, so, I want to get more into evolution as well, but before that I just want to reiterate what you said here. So, basically there is a hard distinction between AI and AGI here, where AI is… the fundamental issue with it is epistemological, because we’re pretending that, like you said, in… what’s it called… in “square quotes,” that these algorithms can learn, but in fact they are just utilizing, uh, knowledge that was already created by the programmer who created the program. So it’s reach of already existing knowledge that, uh, gets us results, rather than the data just streaming in and magically creating something new. Is that right?

D: That’s right. There is that fundamental distinction. It can be a little more difficult to see with narrow AI than programs we’ve written traditionally, such as, say, a chat program or something, where it really just… I mean, a chat program is something with rather limited reach, bec-… and it does something very specific that you can see, happening right in front of you. When we look at narrow-AI programs, they do something with admittedly far greater reach, so for example, if you… there are certain algorithms that don’t need to be what they call “explicitly” programmed, and they will do, for example, through reinforcement, through updating of parameters according to certain logic, they will then “learn” how to do something.

C: Right.

D: But this updating of parameters all follows mechanistically some predefined program that the programmer created. So, yes, there is something happening that updates something and so the behavior of the program changes. And in that sense narrow AI is different from other programs we’ve written in the past, but that doesn’t make it intelligent. And it’s even worse, in a way, because it makes it harder to tell that the program is not intelligent because it… what it’s doing is sophisticated, uhm…

C: Mhmmm.

D: But, I claim that the presence of sophisticated knowledge is not evidence of intelligence.
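
To make this concrete, here is a minimal sketch, in Python, of the kind of mechanistic parameter updating described above. It is an invented illustration, not code from the book or from any particular narrow-AI system; the update rule (a simple perceptron-style error correction) and all numbers are arbitrary.

```python
# A minimal sketch of "learning" by parameter updating. The update rule
# below is fixed in advance by the programmer; the program only executes
# it mechanistically on incoming data. All names and numbers are invented.

def update(weights, inputs, target, learning_rate=0.1):
    """One step of a simple perceptron-style error-correction rule."""
    prediction = sum(w * x for w, x in zip(weights, inputs))
    error = target - prediction
    # The behavior changes, but only as prescribed by this fixed formula:
    return [w + learning_rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
toy_data = [([1, 0], 1), ([0, 1], 0), ([1, 1], 1)]
for _ in range(50):
    for inputs, target in toy_data:
        weights = update(weights, inputs, target)
print(weights)  # parameters have changed, but no new explanation was created
```

The weights end up fitting the toy data, yet every step was prescribed in advance by the programmer; nothing here resembles the creation of new knowledge.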

C: Yeah, so, I mean, I think that is an important point that people get, uhm, let’s say tricked into believing it is intelligent because it can create solutions that we didn’t have before. They are creating something, although mechanistically, by, like you say, updating parameters. For example, as far as I understand, the chess-playing algorithms that can beat human grandmasters at this point, they… do certain moves that, uh, the professionals can’t… they, they can’t understand why they do those moves but they work. So, they’re doing something novel, but it’s, uh, originating in the programmer rather than the program itself. The “fundamental” knowledge, so to speak. Even if the particular move in, say, chess, is something completely new.

D: That’s right. So, for example, in a chess-playing program, if you program the software in such a way that it exhaust-… well, not exhaustively, ‘cause that’s too much — that it tries to, as exhaustively as possible, search the space of all possible chess moves, then, yes, it will find a move that no person has ever thought of, because there’s just so many of them, and because it has that advantage, that it has the resources, the computational resources, and especially because it does it non-creatively that it’s even willing to search that vast space of possible moves. And so, yes, it’s going to come up, occasionally, with moves that no person’s ever thought of, but, of course, the… the way it does that was given by the programmer. So, the programmer can provide simple algorithms, but if these simple algorithms have sufficient reach, then implicit in these algorithms is for example, what to us seems like a novel move, but of course it was already implicit in the reach of the program that the programmer created.
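
As an illustration of how a seemingly novel move can already be implicit in the reach of a search algorithm, here is a sketch that searches a far simpler game than chess: Nim, where players alternate taking one to three sticks and whoever takes the last stick wins. The example is an assumption for illustration, not taken from the book.

```python
# A brute-force game-tree search, sketched on Nim rather than chess for
# brevity. Any winning move it finds was already implicit in this
# exhaustive procedure the programmer wrote down; no knowledge is created.

def best_move(pile):
    """Return a winning number of sticks to take (1-3), or None if every
    move loses against perfect play. Taking the last stick wins."""
    for take in (1, 2, 3):
        if take == pile:
            return take  # taking the rest wins immediately
        if take < pile and best_move(pile - take) is None:
            return take  # leaves the opponent with no winning reply
    return None

print(best_move(7))  # 3: leaves a pile of 4, a lost position for the opponent
```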

C: Right. And, okay, so someone who is not familiar with the Popperian, or Deutschian way of thinking, might say then that, okay, but… but how is that different from humans? We’re also pre-programmed evolutionarily to have any capacity that we have. What would you say to that?

D: I would say that we aren’t. [Laughs]

C: [Laughs]

D: I think that humans, when they’re born, they come out sort of half-baked. They have very limited… they have inborn ideas, but there’s a mechanism through which almost all of these inborn ideas — presumably, that’s what I think — they get sort of “swept away” pretty quickly. So I think these inborn ideas are a starting point, a very minimal starting point, and then sooner or later the, the person, the human himself evolves new ideas on his own. So, the difference between a person playing chess and a computer program playing chess is that the person does it creatively. The person also does it for fun, which the program cannot do yet. So, a human baby is not born with the knowledge of how to play chess. [Laughs] So, if it was inborn, it could only be given genetically. So, if we’re going to argue that, that would mean that there must have been some selection pressure in our ancestors’ past to encode knowledge of how to play chess. But that doesn’t make any sense because it wouldn’t have given our genes any… any advantage to spread through the population of genes to know how to play chess. So, it’s a bad explanation, I think, to say that we have inborn knowledge of chess, or any other sophisticated thing that we do. We don’t have inborn knowledge of how to build space shuttles either, and we don’t have inborn knowledge of how to read. These are all things that we learn on our own. But a chess program doesn’t learn how to play chess on its own. It’s already “born” with that knowledge because that knowledge was created by the programmer.

C: Right. So, you mentioned evolution there, before, so, I know that’s a big part of this perspective here, so… I’d like you to, perhaps, begin by explaining how evolution works, and even how it got started, and then we can go into why this is highly relevant for our quest of building AGI.

D: Yes, so, basically, the theory of evolution is a theory that is supposed to be an answer to the question of how we account for all the complexity we see around us. We see things around us, such as trees and grass and animals, that appear to be designed. But we don’t see anyone who designed them. So it appears that there is design without a designer. And, this was a mystery for the longest time, until Charles Darwin thought of a solution to this problem. If we want to go back to all the… if we want to go back to the origin of life, then it becomes easier to explain how this process may have gotten started, because one of the problems we run into is that, if there’s no designer, there was also a time when there was no life on Earth.

C: Hmmm.

D: And that is because our planet formed under very violent conditions. So, no way there was any life on Earth when the planet was just forming, because it was hot and violent and no life could have survived there. And so, once the… once the Earth was cooling down enough, and the oceans were forming, and the oceans cooled down, that’s when molecules started to form in those oceans spontaneously. Now, we only have rough guesses about how life started on Earth, but… one of the pretty good ones I think is that some of these molecules that formed in the oceans were catalysts. And a catalyst is something that can cause a change somewhere without undergoing any net change itself. So what that means is it can then go on to perform the same change again and again. And most of these catalysts, they just transformed one kind of molecule into another and all that happened pretty haphazardly, but one day what happened is that one such catalyst created some of the components of which it itself was made. And so, if it can create enough of these components, and if they are put together again somehow, well then you get a copy of the original molecule.

C: So that would be the first replicator?

D: So, it wouldn’t be quite targeted enough yet to consider it a replicator, but if that continues, and you get ever more copies, and… initially, that happens only very indirectly and through detours, but if, through some changes, this process becomes more targeted, then those copies get even better at making copies, and at that point we start calling it a replicator. Or rather, I should say, they’re instances of a replicator, because a replicator is an abstraction. But I’ll just call them replicators for short.

C: Right.

D: And so replication is a central concept in the theory of evolution because evolution cannot work without replicators. So, imagine, uh, these self-replicating molecules floating around in those oceans that are called the “primordial soup.” Or, even simpler, just imagine you start out with a single self-replicating molecule in that soup, okay, then you can think about what will happen over time. One thing that will happen is that after a few days there will be more replicators than there were on day one, because they copy themselves.

C: Yes.

D: So these replicators, they spread through the soup, as it were. And then, another thing that happens is that replicators occasionally make mistakes when they copy themselves, so over time you get replicators that look and work slightly differently from how they started out. And so, this alone, this imperfect replication, will lead to something that’s called a drift. In biology, this is called a genetic drift. And so, over time, you get pockets of the population of replicators that all look a little different. But even this isn’t quite enough yet to call it evolution, because we’re still missing a central ingredient, and that’s selection. And so, once you have selection, then the drift becomes… sort of the stage where evolution takes place, because selection is when individual replicators die or break down or whatever you want to call it. And selection is important because it’s the thing that introduces pressure, and acts on the pool of replicators sort of top down, and weeds out those replicators that aren’t as good at replicating as the others. Now, once you have these three components — replication, the introduction of errors during replication, and selection — that’s when you get evolution. And, over time, you get replicators that are adapted to the purpose of spreading through the population. And so, this is when the appearance of design first entered the stage. So… and then also, over time, replicators can become more complex, and so that is how we account for the complexity we see around us. It has the appearance of design, and that’s why we need to explain its presence through the theory of evolution. And so, at that point, you can say that the replicators contain knowledge, and that is knowledge primarily about how to spread themselves, but in order to spread themselves they may also evolve knowledge about the environment and their competitors and so forth, so, I… I should say, they don’t “know” anything in the sense of being conscious beings…

C: Yeah, they encapsulate knowledge.

D: That’s right. Their molecular structure encodes knowledge. And so, that is how those early replicators first evolved RNA, and then DNA, and then at some point they, you know, in square quotes “discovered” that building organisms around themselves is a great way to spread through the population. And so that is how life came on the scene. So life is an evolved strategy of replicators. And evolution still continues to this day, and that’s still how new species evolve.
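
Those three ingredients, replication, copying errors, and selection, fit in a few lines of code. The following toy simulation is an invented illustration with arbitrary parameters: bit strings stand in for molecules, and the count of ones stands in for how good a replicator is at spreading.

```python
import random

# A toy simulation of replication, copying errors, and selection.
# A replicator is a list of bits; the count of 1s stands in for how
# good it is at replicating. All parameters are arbitrary.

LENGTH = 16

def replicate(genome, error_rate=0.05):
    # copying, with occasional errors (variation)
    return [1 - bit if random.random() < error_rate else bit for bit in genome]

population = [[0] * LENGTH]  # start with a single, crude replicator
for generation in range(200):
    offspring = [replicate(g) for g in population for _ in range(2)]
    # selection: only the best replicators survive to copy themselves again
    population = sorted(offspring, key=sum, reverse=True)[:20]

print(sum(population[0]), "of", LENGTH)  # adaptation has accumulated
```

Leave out any one of the three ingredients and the adaptation disappears: without copying errors nothing new arises, and without selection nothing weeds out the failures.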

C: I mean, it’s uhm… it’s fascinating to think about the origin of the whole thing, and I’m curious how… how long… of a time span would we see between the first molecular replicator and actual organisms, complex organisms, like mammals, and things like that. Do we know?

D: I don’t know if we know. We might. I would imagine it took a long time. And if I remember correctly, though nobody quote me on this, but… if I remember correctly, there was a time when the, the “soup” was basically, uh, saturated with replicators, and it took the discovery of life to, to kind of “stir up” the pool so that evolution could take place again. So, there can become… there can come a time where the pool gets sort of “stale,” and then that is perilous to evolution. So, at that point… I think the onset of life was one thing that stirred up the pool, and then the discovery of sexual reproduction was another thing that stirred up the pool.

C: Right. When you think about how com… where we are today, how complex people are, or other animals, and you think about this idea abstractly, of evolution, there’s a long jump there; it’s like, how could that possibly just happen? But, I, I think we forget that it’s been going on for a very, very long time.

D: It has been going on for a long time, so there has been plenty of time for the complexity around us to evolve.

C: Yeah.

D: Several times over, in fact, because there have been mass extinctions.

C: Right.

D: But also, what sometimes may take a little getting used to, is that evolution is a different kind of explanation. Usually, we like to explain things in terms of… in terms of a function, or purpose. Or, we want an explanation that will tell us, “if you start out with this configuration, then you will definitely get to this specific configuration.” Evolution doesn’t tell you that. Evolution doesn’t tell you, “okay, if you start with a soup of replicators, you will definitely get dogs after a billion years.”

C: [Laughs] Yeah.

D: Evolution doesn’t tell you that. Evolution only tells you that something like a dog can evolve. Or something like an elephant. If we could wind the clock back a billion years, or however long it was, to that soup, and start over again, uhm, we should expect vastly different life forms to have emerged.

C: Mhmmm. Yeah, so, I thought we could tie this in with epistemology then, because… what’s the connection between evolution and epistemology here?

D: Right [laughs]. All of this doesn’t sound all that related to AGI research. But…

C: [Laughs]

D: Uhm… I mean, the first connection is that evolution is not only about biology. Even though I just gave the example of biological replicators, evolution is an abstract… or, it’s concerned with abstract entities that replicate. So, it, it explains how knowledge emerges anywhere, and because knowledge is something abstract it doesn’t matter whether it’s instantiated in DNA molecules or in people’s brains. Now, there are differences between genetic knowledge and the knowledge in people’s minds, but what matters here, for the purposes of intelligence research, is that, uhm, they’re both the result of evolution in the broad sense. So, if we’re interested in building intelligent programs, then a couple of things follow from that. One is that intelligence means you can create knowledge to solve novel problems. And, arguably, that is what those replicators in the primordial soup did. And, arguably, that is what people do. And, a program is intelligent when it’s creative, so those things are synonymous. And, the second thing that follows is that whatever happens in an intelligent program must be evolution, because, again, evolution is the only process that we know of that can create knowledge. So, AGI must be an evolutionary algorithm. This also, by the way, allows us to, to refute the vast majority of narrow AIs that are out there, because most of them are not evolutionary algorithms. Most of them are neural networks, or regression-analysis algorithms, or clustering algorithms and all that sort of thing. If it’s not an evolutionary algorithm, it can’t possibly be, uh, intelligent. If it is an evolutionary algorithm… all the evolutionary algorithms we have built so far, unfortunately, I don’t think, are intelligent, either, so it’s not sufficient. But it is a necessary condition: for something to be intelligent, it must contain an evolutionary algorithm.

C: And when you say “evolution in the broad sense,” you’re saying anything that, uh, uses replication through variation and selection?

D: That’s exactly right.
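
For contrast with the non-evolutionary narrow AIs Dennis mentions, here is a sketch of a conventional evolutionary algorithm, written in the spirit of Richard Dawkins’s well-known “weasel” program. It is an invented illustration: it does contain replication, variation, and selection, yet the selection criterion is supplied wholesale by the programmer, which is one way of seeing why being evolutionary is necessary but not sufficient for intelligence.

```python
import random
import string

# A conventional evolutionary algorithm: variation plus selection against
# a fixed criterion. It is evolutionary, but not intelligent: the
# criterion below is the programmer's, not the program's own conjecture.

TARGET = "A WINDOW ON INTELLIGENCE"
ALPHABET = string.ascii_uppercase + " "

def mutate(candidate, rate=0.05):
    # variation: each character has a small chance of being miscopied
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

def fitness(candidate):
    # selection criterion, supplied wholesale by the programmer
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "".join(random.choice(ALPHABET) for _ in TARGET)
while fitness(best) < len(TARGET):
    # replication with variation, then selection of the fittest copy
    best = max([best] + [mutate(best) for _ in range(100)], key=fitness)
print(best)
```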

C: Because I’d say, I mean, the standard way of thinking of evolution is as a strictly biological process, but uhm… this is an important point. So, if we continue a little bit on Popperian epistemology here then, I take it both you and I are persuaded that that’s the best explanation we have of what knowledge is and how it grows in the universe. But… it seems like if Popperian epistemology was fully understood, then, we’d already be capable of building AGIs today. So, I want to ask you about some of the open problems with Popperian epistemology, and how you think we could possibly solve them.

D: Yeah, so, I agree that, uhm, Popperian epistemology is the best theory we have so far of how people work and how knowledge is created. Popper should be given huge credit for making this enormous breakthrough in epistemology, because he’s the one who realized that evolution is not only how we account for knowledge in biological adaptation, but also for the knowledge in humans. So…

C: Mhmm.

D: What he suggested is that people solve problems by making guesses and then criticizing those guesses, and alternating between these two stages. And, again, that is literally an evolutionary process, because guesses are the analog to mutations of genes in this model, and criticism is the selection. So, in a certain sense, I would call Popper the foremost AGI researcher of his time. He wouldn’t have called himself that…

C: [Laughs]

D: …but, that’s what he was, because the study of epistemology and the study of AGI is the same thing, they’re both part of the philosophy of people.

C: It’s funny that you mention that because, uh, as far as I can remember, Popper himself thought that AGI wasn’t possible in principle. So that’s kind of a funny conflict right there.

D: Uhm, it is. I suppose that if he thought this, uhm… well, Popper, as brilliant as his contributions were, he made some mistakes. And, uhm, some of these I’ll mention in a second. So, whether he realized it or not, I think he was the foremost AGI researcher because, again, the study of epistemology and the study of AGI, those are really the same thing. It’s just, in epistemology, you come at it from a philosophical angle, and in AGI, in the study of AGI, you come at it from a software-engineering angle. But they’re two sides of the same coin. So, it speaks to a rather interesting conflict in himself, I think, that, if he wasn’t aware of this, which, I don’t know if he was or not, but, if he wasn’t aware of this, that they’re the same thing, uhm, that says something interesting about how we need to make more of an effort to unify philosophy and software engineering, because, again, one of the underlying themes of the book is that there are many problems we won’t solve unless we perform such a unification.

C: Yeah, and I mean it’s, uhm… First of all, I really hope that that’s correct. I’m pretty sure I’m right about this, but, in any case, you mentioned Turing before, and I mean, he… he was right about the universality of computation, that had great implications for the possibility of AGI, the necessary possibility of AGI, even. And yet, he, like you said, led us down a blind alley with the research, with his Turing Test, with his behaviorism when it came to epistemology, so… uhm…

D: Right. It’s almost… it’s actually, it’s a good example, uhm… of what we’re lacking. I’m not aware of Popper and Turing ever meeting up and talking about these things. Because, it’s almost as if, if Turing only had had Popper’s knowledge of philosophy, and if Popper had only had Turing’s knowledge of computation, neither of them would have made the mistakes that they made.

C: That’s interesting. Yeah. I mean, and it’s also good to remember that all people are fallible, and someone might come up with a theory and not understand all of its implications. That should be expected, even.

D: Yup.

C: So, what are the problems with Popperian epistemology, as you see them right now? And what are your proposed solutions?

D: Yes, so, like you said earlier, there must be open problems with his epistemology because, if there weren’t, then we could just build AGI today. One open problem is that, okay, if Popper says that we reason and we create knowledge through alternating conjectures and criticisms, the first problem that we encounter if we, uh, investigate this process, is that we don’t understand the process that gives rise to conjectures. So, when we are faced with a problem, the solution just sort of appears in our thoughts — if it does — but if it does come up, if we do think of a solution, it just sort of appears. Uhm, we don’t know what happens there, in detail, in our minds, that creates the conjecture in the first place. And, unfortunately, Popper’s epistemology doesn’t address how this underlying process works. And then there are also several infinite regresses that have sort of snuck into his epistemology. One is that, creativity, he says, is a process that alternates between conjecture and criticism; however, the criticism that we come up with is itself a result of this process. It’s… criticism itself…

C: Is a conjecture.

D: Exactly. So, you get an infinite regress. If the process of conjecture and criticism relies on criticism, which is itself [chuckles] a result of the alternating between conjecture and criticism… well, where do you go from there?

C: Mhmmm.

D: So, I think the only way to solve these first two problems is to consider what Popper described, this alternating interplay, as an emergent phenomenon, that emerges from some underlying process that does something else. Okay, and then so then there’s a third problem, also, because… I said earlier that the key ingredient, or, one of the three key ingredients of evolution is the notion of a replicator. Popperian epistemology does not have this notion yet. Now, we might be quick to say that memes provide this notion. So, memes are — it’s a term coined by Richard Dawkins in “The Selfish Gene” — memes are ideas that spread between people by copying themselves from one mind to another. So they are replicators, and they do live in people’s minds. But: memes cannot be sufficient, because a creative mind can conjecture solutions to problems in isolation, completely cut off from contact with others, so, we will need to introduce the notion of a replicator that lives inside people’s minds and powers creativity, without necessarily ever spreading to another mind. Now, I happen to think that all of these problems have the same solution. But, there is another problem, and that is, uhm, a mistake I think Popper made, is that he thought at least some animals were intelligent. And…

C: Mhmmm.

D: I happen to have the unpopular opinion… [laughs]

C: [Laughs]

D: …that that is not the case. Uhm, so, I think one of the mistakes Popper made is that he thought at least some animals were creative and intelligent and conscious, and I think he said — though I can’t quote him on this — but I think I remember him saying we should grant that animals, at least some animals, are intelligent because they have the same neuronal structure. They also have brains and, you know, a nervous system. So, I think this mistake is related to the computational universality… that led him to the mistake that you mentioned.

C: Yeah.

D: But, I should say, or I think that animals are not creative. And, I know this is going to upset a lot of people, so, even though I don’t feel pressure to justify this idea…

C: [Laughs]

D: …I should mention from the outset that I understand the concern for animal wellbeing. I absolutely do. And I should also mention that I was actually vegan for a little while, out of exactly the same concern. But I changed my mind about it. So, before anyone goes haywire… [laughs]

C: [Laughs]

D: I understand the concern. But then again, if animals really aren’t creative, then a lot of people should be relieved. But, anyway, I… so, let me, let me expand on this. So, imagine a thought experiment. Imagine you want to make it your goal to breed dogs that can play chess.

C: Sounds reasonable enough.

D: [Chuckles] This may take a long time, so, this project may not be completed during your lifetime, so let’s say you also persuade your children to continue this tradition and then they persuade their children, and so forth, until, say, 500 years from now, dogs emerge that are pretty good at playing chess. So, they will sit down with you at a table [laughs], and they will, you know, move pieces and what not, and maybe some dogs will even win, occasionally, okay. If that were the case, no doubt we would be impressed by a dog’s ability to play chess. Especially if it manages to win. And so then we would feel… we would feel “justified”, I suppose, in saying that that dog is intelligent. But, here’s the thing. If we want to determine whether or not that dog is intelligent, we have to ask ourselves, where did the trials and errors occur, the conjectures and refutations, the mutations and the selections, where did those occur to create the knowledge in the dog? And, if we answer that question, we find that those trials and errors happened across many generations of dogs. They did not happen in an individual dog’s mind. So, the dog simply inherited the result of this looong string of conjectures and refutations. And now the dog is born with the knowledge of how to play chess, because that knowledge is now genetically encoded, because, after all, we selectively bred the dog, right? So, the presence of sophisticated knowledge, like I said earlier, is not evidence of intelligence. Let me repeat that, because it’s really important and people don’t always appreciate this point: the presence of sophisticated knowledge is not evidence of intelligence. Only the ability to creatively correct errors is. Now, playing chess would be about the most sophisticated behavior that we have ever seen in an animal.

C: [Chuckles] Yeah.

D: I don’t think we’ve ever seen anything as sophisticated as that. So, if that is not evidence of intelligence, then certainly no animal behavior that we’ve ever observed is. And, this idea that the presence of knowledge, or design — ‘cause, again, if something can play chess, it has the appearance of design — thinking that the presence of knowledge implies that there must have been a designer, and in this case that designer is the dog itself, is just sort of a strange remnant of creationism, because we don’t need a designer for design to emerge. Now, I gave the example of chess because there are, supposedly, like we talked about earlier, intelligent narrow AIs that can play chess. So, the problem there is the same. If you want to judge whether the chess-playing program is intelligent, well then you have to ask yourself, where did the trials and errors occur that created the knowledge of how to play chess. Those trials and errors do not occur in the program. They occur in the programmer’s mind. Now, no doubt, these trials and errors occur much faster in a programmer’s mind than they do in biological evolution when we breed dogs, so it wouldn’t take hundreds of years. But… and we’re not sexually reproducing programs, we’re only sexually reproducing dogs, but the analogy applies in terms of the breed being a breed of programs, and the selection happening in the programmer’s mind. So what happens there in the programmer’s mind is, again, literally evolution. So, only this time, it’s not biological evolution, it’s evolution that happens in a mind. So, when evolution happens in a mind, that is evidence of intelligence; not the presence of any particular knowledge, because that knowledge, no matter how sophisticated, could always have originated in evolution that occurred somewhere else: either biologically, before the birth of a chess-playing dog, or, uh, in a programmer’s mind before the “birth”, so to speak, of a chess-playing, narrow-AI program.

C: Yeah, that seems like a very novel way to attack this whole, uh, issue of animal intelligence and sentience. But, I… so, so the whole idea is basically, it’s where the knowledge… how the knowledge is created, rather than if the knowledge is there or not.

D: That’s exactly right. So, I think it’s important to, to draw a distinction between intelligence and what I call “smarts.” So, a program, such as a person, is intelligent, if it can create knowledge. A program is smart, if it contains sophisticated knowledge, but the smarts of a program tell you nothing about the origin of that knowledge. And we would only consider a smart program intelligent if it created that knowledge itself. And there may be intelligent programs that aren’t smart at all.

C: Yeah, let’s hover on this topic a little bit, because like you said, it’s very controversial, and… although I advocate the same kind of thinking that you just presented here, I want to push back a little bit and play with it. So, underlying this perspective, it seems to be that it’s a black and white thing to have the capacity to create knowledge. Is that right?

D: Yes, I believe it is. You either are creative or you are not. That’s right. It’s a binary matter.

C: Right. Okay. Because it seems implicit in the thinking of people who advocate that animals are, in fact, intelligent that this is somehow a scale… creativity is somehow a scale, just like intelligence would be. But, yeah, so the common example that I hear, the common counterargument is that, okay, but look at animals such as crows or, uhm, great apes, that seem to be able to perform novel and complex behaviors, like picking up sticks to pick up things from… I think the famous one is a crow trying to pick something up out of a test tube or something. And I suppose your argument would be the same here, that that is a sign of sophistication, not intelligence. So that would be reach of genetic knowledge.

D: That’s right. So in all these cases, I think the mistake people make is that they correctly identify the knowledge that is at work there as sophisticated, the, the crow is smart. But that does not mean that the knowledge of how to do that originated in the crow. The other day, I saw a video online of a dog playing Jenga with its owner.

C: [Laughs]

D: And it was… it was really cool. And it was offered as a… as evidence of how clearly animals must be intelligent, and it’s rooted in the same mistake. Now, we could say, well, look, the argument I made earlier was about how, if something is inborn, we have to explain it in terms of selection pressures from the past. No way there would have been selection pressure in the past for a dog to know how to play Jenga. Right? So that would be an argument in favor of the opposing view that the dog must have learned how to play Jenga on its own. However, I think that is not the case. Now if you watch that video, you will see… and it’s amazing, really, I mean, the dog is super steady, and really manages to pull out some of the pieces with its mouth without the tower falling.

C: [Chuckles] That’s awesome.

D: And so if you watch that, you will, if you pay close attention, you will notice something, and that is that the dog does not watch the tower as it’s playing, and as it’s pulling… So it has its snout on, you know, it gets its canines on one of the pieces and it pulls it out slowly. And what is the dog watching as it does this? The owner! And that is an important distinction because if I play Jenga, I’m not going to look at the opponent. I’m going to look at the tower to make sure I’m doing it right. So I say the dog has no idea what it’s doing there. What the dog is looking for is clues in the owner’s face as to whether or not it’s getting it right. And that absolutely evolved biologically. So this is a biological adaptation, because people bred dogs that were submissive, and people bred dogs that were willing to do what the owner praised them for. And sure enough, once it successfully pulls out one of the sticks we see the owner praising the dog. So this is how, over a long “learning period,” again in scare quotes because it’s not really learning, this kind of reinforcement can lead to new behavior, which no doubt seems very impressive, like playing Jenga. It’s just not… we can explain it in terms of genetically pre-existing adaptations. And once we can do that, the explanation of intelligence goes out the window, because again, the sophistication of that knowledge that’s already present is not evidence of, of intelligence.

C: That’s really cool, man. I think it’s smart — or maybe intelligent, I’m not sure yet — to make that distinction between being smart and intelligent, because then you’re… I feel like there’s a moral element to the whole idea of claiming that other animals are not intelligent…

D: [Agrees]

C: …which kind of gets a little mitigated here when you, when you still admit that animals can be smart or sophisticated. But, but so… It also reminds me of Clever Hans. Have you heard of the horse from 1904 or something?

D: [Chuckles] Yes, I write about him.

C: [Laughs] Oh, you do! So I haven’t gotten to that part. But yeah, it’s exactly the same thing, that he… he was believed to be able to perform arithmetic and other intellectual things. And it just turned out that he was really good at reading his, his owner.

D: That’s right. And, and eventually they could get him to give any arbitrary answer by raising their eyebrows.

C: [Laughs] I didn’t know that.

D: Yeah. So you could give him a problem like, what is five plus five? And it was a sensation at the time, because people were just… I mean, the, the case made headlines because it was at a time when psychologists started to become interested in the question of whether or not animals are intelligent. And so this guy, I think his name — this was in Germany — I think his name was von Osten, he had this horse Clever Hans. And he claimed that this horse could solve simple problems of arithmetic, like you said, and so you would ask it, what is five plus five, and it would tap its hoof 10 times. No doubt that is impressive and sophisticated. But again, we’re not after “how sophisticated is that?” Now, again, we could ask the same as with the example of the Jenga-playing dog: no way there would have been value for a horse’s ancestors to know how to do basic arithmetic, it wouldn’t make sense for a biological adaptation to arise that could do this. So, the knowledge of how to perform basic arithmetic, presumably, is not encoded genetically. So then why, how does the horse know how to get the right answer? Well, the horse is a domesticated breed. And part of domestication is, again, just like with the dogs, to breed animals that are submissive and cooperative. So over time, the horses evolved, for example, not only facial-recognition algorithms, but facial-feature-recognition algorithms, so that a horse could tell whether the owner was upset or pleased, and then it would reinforce behavior that pleased the owner and avoid behavior that upset the owner. And so that is, again, how we explain how the horse was able to give seemingly the right answer; or it was the right answer, seemingly knowing how to do it. But again, they later found, after performing some experiments, that the clue the horse was looking for was the excitement in the other person’s face, and the clue that it looked for to identify excitement was raised eyebrows. So, even when asked the question, what is five plus five? They could get it to give the answer 20 if they wanted to, just by waiting long enough to raise their eyebrows until it had tapped the hoof 20 times. Now you could ask, isn’t that evidence of intelligence though? That it…

C: …of understanding of some sort…

D: …of understanding, right? It understood that raised eyebrows mean excitement or contentment. And then it combined this — not only that, but it combined that with tapping its hoof and it combined that with a question that was asked. So there was some sequence of events that preceded the whole, the whole experiment.

C: Before you answer that, can I just guess? Would that be analogous to how you talked about the narrow AI simply basically updating parameters, making statistical correlations between different things or?

D: That’s exactly right. Yeah, it… it comes down to updating parameters. And if the, if the logic behind that has reach, then there isn’t necessarily some narrow limit as to what sequence of events a horse can identify.
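
Here is a sketch of what that kind of parameter updating might look like in code. Everything in it (the cues, the actions, the reward rule, the numbers) is an invented toy model, not a claim about how horses actually work: a fixed update rule strengthens whichever cue-action pairs happen to be rewarded, and the “right” behavior emerges without any understanding.

```python
import random

# A toy model of conditioning as mechanical parameter updating. A fixed
# rule strengthens rewarded cue-action pairs; the "right" behavior
# emerges without understanding. Cues, actions, and numbers are invented.

CUES = ["eyebrows_flat", "eyebrows_raised"]
ACTIONS = ["keep_tapping", "stop_tapping"]
value = {(cue, action): 0.0 for cue in CUES for action in ACTIONS}

def reward(cue, action):
    # the trainer is pleased when the horse taps until the eyebrows go up
    wanted = "stop_tapping" if cue == "eyebrows_raised" else "keep_tapping"
    return 1.0 if action == wanted else 0.0

for _ in range(1000):
    cue = random.choice(CUES)
    action = random.choice(ACTIONS)  # trial...
    r = reward(cue, action)          # ...and error, judged externally
    # fixed update rule, moving the estimate toward the observed reward:
    value[(cue, action)] += 0.1 * (r - value[(cue, action)])

best = {cue: max(ACTIONS, key=lambda a: value[(cue, a)]) for cue in CUES}
print(best)  # {'eyebrows_flat': 'keep_tapping', 'eyebrows_raised': 'stop_tapping'}
```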

C: And okay, it would be exactly the same case. Whatever animal case I can bring up here, it seems like that would be the best explanation from this perspective.

D: I think so.

C: Yeah. I mean, I heard an interesting… because I got in some, some deep shit around this with some animal activist on Twitter the other day, and she… I don’t know if you separate these two problems, because I guess you could talk about whether animals are intelligent or not, and also whether they are conscious and can suffer or not; they don’t necessarily entail each other. Do you make that separation? Or do you guess that they are connected?

D: I think they are deeply connected. So yes, I agree that it’s important to distinguish between these two concepts and that it’s not at all obvious, prima facie, that these should be the same thing; that just because an animal is not intelligent, that it couldn’t suffer.

C: Yeah.

D: So I think yes, there is something to be explained there. But I think we can explain it. So the reason I think these things are deeply connected, and that you aren’t conscious if you’re not intelligent, is because it follows from our best explanation of what consciousness is. Unfortunately, that still isn’t a very good explanation. But it seems like consciousness has to do with error correction, because when we turn inward, so to speak, and we simply observe what it is that we’re conscious of, it’s, it seems to always be the thing that we’re looking to understand or that we’re looking to improve or correct. So for example, if you’re a child and you’re learning how to ride a bicycle, you will be acutely aware of all the minute little details: you have to learn how to pedal, and at the same time, you have to learn how to steer and how to keep your balance. This is a very conscious, effortful process. But as you learn, as you correct the errors in your balance and in your pedaling, your awareness is drawn more and more away from this process, until you get so good at it that you don’t even have to think about it anymore.

C: Right.

D: So now when I ride my bike, I’m not aware really that I’m riding my bike, I focus my attention on the road, on the things that I need to identify so that I can correct errors. If I then run over a pothole, I become very aware of that situation. Or, like Popper said, if you go up a flight of stairs, and you reach the end of it thinking that there’s one more step, you also become very aware of the situation. So consciousness seems to have to do with disappointed expectations, and the identification and correction of errors. And all of that is part of the creative process. And that is why I think that consciousness and creativity are deeply intertwined and I don’t think they’re separable neatly. So, that is why I conclude that animals that aren’t… that aren’t creative also aren’t able to be conscious and therefore suffer and so forth.

Announcement: Alright folks, time for the fun stuff. If you enjoy my podcast and you want to support it, you can now become a monthly Patreon supporter at patreon.com/doexplain. Or, if you’d rather make a one-time donation, you can visit ko-fi.com/doexplain instead, that is ko-fi.com/doexplain. Perhaps ask yourself, “What would Jesus do?”, and, surely, Jesus would donate to Do Explain. Another way to make the podcast grow and improve is to tell people you know, who you think would enjoy it, to check it out because with more support and exposure, I’ll be able to improve the podcast continuously and produce more content, which is something that I would love to do. Lastly, thank you so much to all of you who’ve donated so far. It truly means the world to me, and I want to extend my gratitude. Back to enjoying the show.

C: I liked that explanation. It seems plausible to me. And in the same line, then, you could argue that why would something like consciousness and qualia evolve in animals who seem to be fully genetically determined? What selection pressure could possibly create that? And, then again, I guess since people… since we don’t have a fully developed, good explanation of consciousness, then some people would argue that it is just an epiphenomenon, and that it is for us as well. So that’s one way to look at it too, although I’m more, more in line with what you said…

D: Right. So some people… even if we grant that consciousness arises only as a byproduct of creativity, and that, therefore animals aren’t conscious, we could still argue that we should err on the side of caution, and treat animals with compassion and so forth. I take issue with that. And the reason I take issue with it… now, there’s nothing wrong with having compassion. But the reason I take issue with it is because it is just a modern day version of Pascal’s Wager. So, Pascal’s Wager is the idea that if you… if there is a god and you don’t believe in him, you’ll get in really big trouble. If there isn’t a god… so you better believe in him, because, if there isn’t a god, and you do believe in him, well, no big deal. It’s not like anything will happen. So, if you want to avoid, you know, eternal punishment in hell, you better believe in god either way, just to err on the side of caution. But that is a bad explanation, because we could apply that to anything we’re not entirely sure of just to err on the side of caution. And since all our explanations are tentative, we’re never really entirely sure of anything. So that is why I think this… I think we should act on our best explanations without hesitation.

C: Yeah, I would agree with that last part. But, but then couldn’t you turn that around and say, since you’re… you can’t be sure of the fact that… and that we don’t really have a great explanation yet of consciousness, then… then it’s just as foolish to take that for granted and potentially treat animals badly if they can suffer. I feel like that could go both ways, right?

D: Well, we can always turn it around. And I’m the first to admit that our explanation of consciousness is not great. But nonetheless, I claim that it is the best one we have. And so it cannot… there’s always infinite room at the top for more improvement. It’s just that, if we don’t want to be caught up in stasis and not sure what to do, if we don’t want to be paralyzed, as a general rule, we should act without hesitation on our best explanations. So, is it a great explanation of consciousness? No. But can we act on it without hesitation? I think so.

C: Hmm. I mean, I… it’s funny, because I had my… one of my best friends, who is on board with all the other ideas, this is where we differ. I’m more in line with you and he’s more on the other side of this, and I know that he — I think you were in this Twitter thread as well — but he wrote something like, we should be careful because we don’t have a good explanation, and I kind of laid forth what you just said here, that to invoke fallibility, that goes both ways. So I just wanted to turn it around and give his view a spin there.

D: Right.

C: But okay, another thing, just to really exhaust this fascinating area here: the girl who got really mad with me — and I have no… I have nothing against her at all, even though I don’t think she likes me very much — but she said, how do you know that animals aren’t conscious? And to me, it’s just a matter of, no, I can’t know that, I’m not sure of that. I think she even said, how can you be certain, or something? Of course I can’t be, it’s just like you say, I… In this case, I don’t have a good explanation of why they would be. It’s not even a matter of me having a positive explanation of consciousness. It’s just that I have no reason to suspect that they are. All the arguments in favor of that are just bad explanations. And that in and of itself, I feel like, should be enough, in the same way that I don’t exclude the possibility of a god; it’s just that I haven’t heard any persuading argument for why there would be one.

D: Right. Yeah, that’s exactly right. You could… you could make the same argument: well, how do you know? Or, how can you be certain? Well, I can’t be certain that there’s no god. It’s just that he doesn’t play a role in any of our best explanations, and so I tentatively assume that there isn’t. And I live my life based on, you know, around this assumption that there is no god. And I think it’s a rational thing to do. And…

C: Yeah.

D: The same… the same applies to whether or not animals are intelligent. And of course, it’s a universal approach to take that we should act on our best explanations and this, this question, “How do you know?”, it seeks an authoritative answer. And it’s the kind of question that Popper identified as the who-should-rule mistake. But it doesn’t only apply in politics, it applies in many areas. This is one of them. How do you know that animals aren’t conscious? Well, we don’t, not in the sense of “know” that she means there. We can’t be certain of it. But according to our best explanations, they don’t seem to be. So let’s act on our good explanations.

C: But I actually heard — to make a last objection there — I heard, when I spoke to David, and he said that he had watched a documentary on dogs, and some dogs, apparently, some breeds of dogs — [chuckles] “of dogs”; of dog — have the inborn knowledge of what a pointing finger means, which is something that, that we don’t have as, as humans. And the argument was something like: the easiest way to seem conscious, which some breeds of dog arguably do, could be to actually be conscious. So it might not even be… I mean, on your account here, on the explanation that it needs creativity, then that would exclude all animals, but, but… I don’t think that’s cut and dried yet. I think that there might be something to say about the theory that consciousness is something that evolves in social interaction on a complex level, and that humans — sorry, a certain breed of dogs — evolving in close proximity with us humans could put a selection pressure on them to evolve consciousness and in some sense be better able to read our behavior, I suppose. And the Jenga thing could tie into that, but… Yeah, I am… I’m leaving that open, and we’ll see what the future tells us about this.

D: The thing about dogs… some dog breeds understanding the gesture of pointing. So, the gesture of pointing is a meme.

C: Yeah, that’s genetic.

D: Right, right. So, the… so, among people, the gesture of pointing is a meme. I doubt that babies are born with a knowledge of how to interpret finger pointing or something.

C: Yeah.

D: But we can then creatively understand what it means; we can recreate that knowledge ourselves. However, this meme of pointing has been around for a long time. And, presumably, just as long as we’ve been breeding dogs, if not longer. So it would make sense… there was selection pressure for dogs to evolve the ability to understand pointing. So, at that point, well, the knowledge is again provided by the genes and no creativity is required on the part of the dog. So, I wouldn’t consider… it comes back to the sophisticated knowledge thing, intelligence versus smarts. The dog is smart because it can understand pointing, it’s just… I think it’s genetically driven.

C: Absolutely. I wasn’t clear on… I jumped, I merged those two together, but what I meant to say was… The pointing itself is just a fascinating fact. And I think that’s entirely genetic, as you say, but, but I think you can make the argument, the consciousness argument, free from that particular tidbit. So yeah, you mentioned that Popper expressed a similar sentiment, that the similar neurology could be an argument for why it would be reasonable to suspect that they might be conscious as we are. And I think it’s… it’s important to mention that what we’re saying here is that the, the important part is not the biology, is not the hardware. Because once you have enough computational power in form of memory and speed to perform universal computation, then it’s the program, the creative program, that’s what we’re talking about here. And if someone uses that argument again and says, yeah, well, they have a very similar brain, so why wouldn’t they be conscious, then that’s kind of like saying, well, you and I have very similar computers, so just because I can play video in VLC Media Player, you should be able to play video in Word.

D: That’s right. Or, or you could argue, you have a certain game installed. And let’s say you and I have exactly the same computer in terms of hardware, I would then also automatically have the same game installed. [Laughs] Which obviously is not the case; if you decide to download the game, and I don’t, we still have the same hardware, but vastly different programs on it. So yes, we… the same is true for animals and people. So, our closest ancestors and our closest relatives today — the other apes that are still around — have very similar brains to ours. And I would not be surprised if those brains were also universal computers, meaning they could, in principle, run the creative algorithm, but they just don’t contain it. And so they don’t, but they could, you know; there might be other animals where the brain is still universal in principle, but it’s effectively too small and so there’s not enough memory to run it, or something along those lines, or it’s too slow to run it tractably — that can always be the case — but absolutely, in terms of hardware, we’re not very different from… from our ancestors and our closest relatives today. But in terms of software, we’re vastly different.

C: Exactly. It’s funny how you can turn that around when people say, yeah, but we’re so similar, so why would we be different? Well, we are, clearly, extremely different in our capacities. And I think to doubt that is… is just silly, even though we can still acknowledge that animals can be smart, like you said. But… and another argument in the same line is… in regards to genetic determinism, then, there are people who claim that we are also just a result of our pre-programmed genetic knowledge, and that our starting point, our starting ideas, so to speak, are ingrained and can’t be overridden. When you have a computer… I’m not sure the statistics, but I would guess I never use Internet Explorer, or Microsoft Edge maybe it’s called now. That comes as a standard in Windows. And most people just use Chrome or Firefox. Like I never use that ever. And it’s there. So I feel like that’s a good analogy to how we can override genetic starting points.

D: Yeah, it is.

C: Right. I thought you would like that as a software engineer.

D: [Chuckles] Yeah; I do — and one more thing I thought of just now as you said that: even smarts in animals, let alone intelligence, are also a matter of software, not of hardware. So this important distinction between hardware and software matters not only when it comes to intelligence. Take a dog that can play Jenga: it can do so not because its brain has certain hardware. The hardware specifics only matter to the point that they support universal computation. Once you have that, what your hardware looks like basically doesn't matter anymore; the only things that can constrain the functionality are, as you said, the processing power and the memory capacity. So if you have a dog that can play Jenga, then another animal with a completely different neuronal structure, one we've never even seen, might also contain a reinforcement algorithm, and you could train it to play Jenga. Both systems, even though the hardware is vastly different, would then contain approximately the same software. Okay, now, of course, if we deny that animals are intelligent, then that makes it harder for us to explain how people evolved and how people became intelligent, because it means that they must have evolved from ancestors who were not intelligent.

C: Right.

D: But in a way, that is a good thing, because it encourages us to avoid mistakes that would be similar to Lamarckism, say, where we would claim that the knowledge of how to be creative was somehow already present and just needed to be used or expressed. For example, some popular explanations say it was expressed through upright posture, but that's a mistake: it doesn't explain how the knowledge evolved originally. So if we assume that animals are not intelligent and that our ancestors weren't either, then we're setting the bar much higher for ourselves to come up with a good explanation of how intelligence evolved in people.

C: A quick comment on that, to tie it back to what we said about the difference between hardware and software. Many people seem to argue that there was a period in our evolutionary history where our brain size increased tremendously in a short period of time. And I think they got the causality wrong there. It's not that because the brain got bigger and faster and had more memory we were all of a sudden more intelligent. It's that there was a need to better instantiate and replicate memes, and so the brain adapted along with that, perhaps as a result of the ability itself rather than the other way around.

D: Well, so there… I agree that there must have been some selection pressure that led to our brains getting an increased memory capacity, but simply saying that an increased memory capacity led to intelligence, that’s Lamarckism again. That’s just stating that the knowledge of how to be intelligent was already present somehow and it was just lacking resources to be expressed. So…

C: Yeah.

D: …it’s actually creativity denial, because… or, it’s a denial of the evolution of creativity, because it assumes that evolution was somehow already there. And so it’s creationism again. It doesn’t address how it evolved. But I think we do have a good explanation of how it evolved. So consider the knowledge that our ancestors contained, which they inherited genetically. They must have known things like how to walk, how to stalk prey, and so forth. So in the relevant sense, they had ideas; not in the sense that they were conscious or creative, but they had genetically inherited ideas about how to live and and how to hunt and so forth.

C: Is that different from any other genetic knowledge? Or?

D: No, no, it’s not. Well, I mean, there is genetic knowledge that determines your physiology. And then there is genetic knowledge that determines your behavior. So I’m… it’s good that you asked: I am speaking specifically of the knowledge that determines your behavior. And I would classify ideas as being only part of that kind of knowledge. I mean, maybe we could call the knowledge of how to grow an arm an idea. It doesn’t really matter what we call it. But when I say ideas, I mean, specifically the kind of knowledge that leads to behavior. So as I said, in the relevant sense, they had ideas, but not in the sense that they were conscious or intelligent, but they had ideas. And those ideas had been adapted through biological evolution, encoded in genes and passed down over generations. And our ancestors had brains that those ideas were stored in and acted upon. And so since brains are computers, we can think of anything that is stored on a computer, including those ideas, as computer programs. So, whenever their brains invoked these ideas, that is, whenever the brains ran these programs in order to say, eat or stalk their prey or whatever they did, they… they ran these ideas as programs. And this basic mechanism seems to be the same in most animals even today. Now, picture… I want to set the stage here in a… in a single one of our ancestor’s minds [sic]. So picture a single ancestor’s mind and we can think of all the ideas in his mind as a sort of idea pool. So not a gene pool, but a pool of ideas. And over the course of that ancestor’s life the idea pool may change because some ideas affect it and they can change the state of the idea pool and they can make changes to the Idea pool. But importantly, they may be able to make changes to the idea pool without undergoing any net change themselves. And what I conjecture is that in one such ancestor, a genetic mutation was present that altered an idea and caused it to add some of the components of which it itself was made back into that ancestor’s idea pool. And that mutation was just targeted enough so that the same idea would then slowly begin to replicate. So basically, this is the same thing that happened in the primordial soup that I mentioned earlier. Only this time, the replicator is not a molecular one, but it is an idea. And as it replicates in that individual’s mind, it occasionally makes mistakes. And so what you start to see here is that evolution begins to happen; only this time, it happens within a mind, during that individual’s lifetime. And so that is how… that is how this individual is, then able to evolve new knowledge and come up with solutions to problems that his genes didn’t prepare him for. And for reasons I explain in the book, this was a jump to universality that made this ancestor creative. And as we discussed earlier, if it made him creative, that also made him conscious. So this was the first ancestor — through this genetic mutation — that was really able to experience the world around him. And it was also this genetic mutation was the spark that happened in our evolution that made people, people. And we all inherited that genetic mutation from him. And so it, it means basically, that something very similar to the origin of life happens in each of our minds when we are babies. And it makes sense because it shouldn’t be surprising that the thing that kicks off the creation of knowledge in a single mind is very similar to what kicked off the creation of knowledge in the biosphere. 
Because after all, they’re both about creating knowledge and they’re both about evolution.

C: Wow.

D: Once we adopt this explanation, we find the reason why creativity is a binary matter, as you mentioned earlier: that genetic jump to universality either happens in an organism or it does not. And we find why computational universality and explanatory universality are deeply linked. We are also able to solve all of the open problems with Popperian epistemology that I mentioned earlier. So we can now explain, for example, what gives rise to conjectures: the idea that replicates in the idea pool through imperfect replication will slowly explore the space of all possible ideas. So on occasion, it will come up with an idea that happens to be a solution to a problem you're looking to solve, and then that springs to mind. That is how I think the mind produces new conjectures. And this is also how we solve the several infinite regresses that we find in Popper's epistemology, because this underlying process is what is responsible for the alternation between conjectures and refutations that emerges on a higher level.

C: Now, that’s a really nice parallel and a really nice idea. I just want to clarify, I don’t know if it’s a semantic issue I have but replication of an idea in a mind… I mean, would that entail that we have a lot of instances of the same idea in a mind because I see how an idea can replicate to another mind in the sense of being copied, or a gene making a copy of itself. But what does that mean more specifically to replicate an idea in one’s mind? Would I have two instances, more instances, of the same idea to begin with before mutation? Or do you mean that all ideas… a mutation is inevitable because you can’t replicate perfectly?

D: Well, sort of both. I mean, evolution favors high copying fidelity. So if we say that evolution happens within a mind, then that means that yes, there are many copies of the same idea, and then slight variations of it. But remember that this evolution happens very rapidly, so the amount of time during which you have a large number of copies of the same idea might vary; it might not be that long. And competition in evolution is especially fierce between slight variants. So once a slight variant occurs, it will compete fiercely with the originals and with the other slight variants. But I think this fact that you have several copies of the same idea is a feature, not a bug. The reason it's a feature is that it allows us to recover from brain damage, and to correct mistaken ideas about, for example, neuroplasticity: if you have an accident and you forget a large chunk of things, the recovery is usually explained in terms of hardware. But as we've already discussed, really the only thing that matters in terms of hardware is whether or not you have a computationally universal machine.

C: So you’re saying you’re having the same ideas in different areas of your brain.

D: That’s exactly right. If you if you have a copy, so to speak, a redundant copy, that is sort of a backup, or, it will inadvertently function as a backup. And it may also happen to live in another area of the brain, then it may happen to be stored in terms of neuronal structures in one part of the brain and not the one that was damaged. So some of the ideas you may lose because part of your brain was removed through an accident or something. But those ideas that live on in different parts of the brain might well contain copies of those original ideas. And that is how we explain neuroplasticity. It’s not that your brain was able to regenerate hardware or something, and therefore your ideas came back. No, no, it’s that you had redundant copies of those ideas.

C: I love that, man. I'm getting really excited now. That's so slick, that's really cool.

D: It’s also how we explain phenomena like something… when something’s on the tip of your tongue. It means that, I think it means that, you have a variation of an idea that’s mutated slightly away from its original meaning and you cannot find any copies of the original meaning. So now what you have to do is you have to recreate the original meaning from the slight variant, and that may take some time. And so that is the confusion we experience when something’s on the tip of our tongue. And it allows us to to explain other things too. It allows us to explain memory for example. And I mean “memory” not in terms of storing ideas or memory capacity of the brain, I mean “memory” as in “remembering things.” So the phenomenon of forgetting something… you have forgotten something when a pocket of the population of replicators that represented an idea have all replicated away and mutated away from that original meaning. And so it’s not present in your brain anymore.

C: But what decides… because some memories seem to stay very intact. What decides what stays intact and what doesn't, then?

D: Yes, I think… the… so there’s, there’s no authority, so to speak, that decides, but there… I think what you’re asking is, is there a logic to why some memories are longer lived than others.

C: Yeah, yeah.

D: And I think the answer to that is the same as the answer to why some species in the biosphere are longer lived than others. If you have some memories that are strong and vivid, and some that are weak or not present anymore at all, it's because the stronger memory is strong, and easily retrievable, only because it was a good replicator: it managed to outcompete other ideas, replicating and spreading through the idea pool in your mind at its rivals' expense. And that is why you remember this memory vividly and can retrieve it easily, while you don't remember early events from your childhood so easily, say.

C: Huh. Wow. I mean, I really have to read what you write about this in the book and digest it fully, because… it's funny, when I sat and prepared for the interview, I felt like I wanted some type of critique, and one thing I got stuck on was this very thing: what do you mean, replication in a mind? That can't be the right use of that term. And it just goes to show that I completely misunderstood what you meant by that. And now I'm definitely persuaded that this is a great idea that solves a lot of problems. That's fascinating, man. I mean, if I get this excited listening to you telling me about it, I can't imagine how nice it is to actually mutate that idea yourself and see how well it fits.

D: [Chuckles] That’s right, yeah. So, I should mention that, although we can explain a lot of things with this approach, there are still several open problems with it that I lay out in the book. So, because of course, if this was the… the solution, so to speak, then we could just build AGI tomorrow. And unfortunately, there are still open problems with this. But I lay them out in the book and my hope is that it gives us a research program for building truly intelligent programs.

C: Right. So yeah, that was gonna be my next question: whether this insight has helped you at all in how to approach writing an AGI program.

D: To a degree it has. I think we're still not quite at the point of actually writing code, but we now have a general idea of what the components of the AGI algorithm are. So we know several things now. One thing we know, even without this particular model, is that AGI will work without any inputs; no data to ingest is needed. That is one thing we know. The other thing we know — and this now does relate to this particular model — is that it must have an evolutionary component: part of the mind is an evolutionary algorithm. As I said, there's a pool of ideas. And we know how to encode these ideas in terms of functions, which are just very simple programs. We then need to instruct the computer to replicate these functions imperfectly so that new functions emerge, which is how people evolve knowledge that was not genetically given. We also know that there must be a top-down force operating on the idea pool in order for selection to happen. I lay this out in the book; I think this operates very similarly, actually, to how animals operate. There are basic failsafes built into animals. For example, if you teach a dog a trick and you then command the dog to perform the trick over and over again, it will stop performing the trick at some point. Maybe it'll do it 10 times or 20 times, but at some point it will stop. And my conjecture is that there is a sort of failsafe built in that detects loops of a particular kind and then stops, because it seems as though the animal is stuck in a rut that it can't get out of, which would be detrimental out in the wild; it could kill you. So it would make sense for very basic error-correction mechanisms to have evolved biologically. Now, I should be careful to point out that when I say "error-correction mechanism" in this sense, I do not mean error correction in the creative sense; I only mean a statically pre-programmed policy to detect when the animal is doing something that might be detrimental. However — and this is where it makes the jump to a creative, sentient mind — once an idea in the idea pool mutates into a form that this policy, this meta algorithm as I call it, can use, then error correction comes online. Now it's error correction in the creative sense. So that is one other component in this model that we need to build. I think the evolutionary component is now not so mysterious anymore. We've introduced the notion of a replicator; we can solve a number of problems that way. And people have written evolutionary algorithms before, and the question was always: why aren't they creative or conscious? One reason is that they just update some parameters, and that is not at all what creating knowledge is about. But the other reason is that even if they were mutating ideas in terms of these simple programs that I call functions, there wasn't this sort of policy or meta algorithm that performed runtime error correction and stirred up the idea pool again and again. It is this mechanism that stirs up the idea pool in each of us, that allows us to jump from one thought to another, to address one problem after another, to change our minds and so on, which is something present-day evolutionary algorithms can't yet do. So these are the kinds of things that we need to work on.
And it is especially this meta algorithm that we may be able to backwards engineer by observing animals. So this is where animal research comes in, and can be really promising…
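
As a concrete illustration of these two components, consider the following minimal sketch in Python. It is not code from the book: encoding ideas as lists of numeric adjustments, and using a stagnation check as a stand-in for the meta algorithm, are simplifying assumptions made purely for illustration.

```python
import random

# A toy "idea": a list of numeric adjustments; running it applies them to 0.
def run_idea(idea):
    return sum(idea)

def replicate(idea, mutation_rate=0.1):
    """Copy an idea imperfectly: steps may mutate, drop out, or be added."""
    copy = []
    for step in idea:
        r = random.random()
        if r < mutation_rate:
            copy.append(step + random.choice([-1, 1]))  # point mutation
        elif r < 1.5 * mutation_rate:
            continue                                    # deletion
        else:
            copy.append(step)                           # faithful copy
    if random.random() < mutation_rate:
        copy.append(random.choice([-1, 1]))             # insertion
    return copy

def evolve(target, generations=500, pool_size=50):
    """Evolution over a pool of ideas, plus a crude stand-in for the
    'meta algorithm': detect a rut (the best output stuck in a loop)
    and stir the pool with fresh variation."""
    pool = [[random.choice([-1, 1])] for _ in range(pool_size)]
    recent_best = []
    for gen in range(generations):
        # Selection pressure: ideas closer to solving the problem rank higher.
        pool.sort(key=lambda idea: abs(run_idea(idea) - target))
        best = run_idea(pool[0])
        if best == target:
            return pool[0], gen
        recent_best.append(best)
        # Meta algorithm: ten generations with no change means we're stuck.
        if len(recent_best) >= 10 and len(set(recent_best[-10:])) == 1:
            pool[pool_size // 2:] = [[random.choice([-2, -1, 1, 2])]
                                     for _ in range(pool_size - pool_size // 2)]
            recent_best.clear()  # give the fresh ideas time to compete
            continue
        # Imperfect replication of the fitter half replaces the rest.
        pool = pool[: pool_size // 2]
        pool += [replicate(idea) for idea in pool]
    return pool[0], generations

solution, gens = evolve(target=7)
print(f"idea {solution} solves the problem after {gens} generations")
```

The stagnation check is only there to show the division of labor: the evolutionary loop supplies variation and selection, while a separate policy watches for ruts and stirs the pool.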

C: Right.

D: …just not in the sense of asking how animals are creative; but there may be a component… since, after all, we evolved from non-creative ancestors, the underlying structure should still be similar. And so hopefully we can backwards engineer that by studying animals closely.

C: Two quick follow-ups to that. The first one is: do you think this way of thinking — let's say we can finally implement it in a program and make progress there — would shine any light on the so-called hard problem of consciousness? That is, how and why it feels like something to have a certain instantiation of knowledge, or a certain program. And the second thing is: is implementing this something you're working on at a practical level yourself? Or are you mainly working on the philosophical front?

D: So I’ll answer the second question first. We don’t know enough about the meta algorithm yet, this policy that seems to operate in animals and in us to, to replicate and to implement it. So first, we will need to do more philosophical work. But yes, evolutionary algorithms… people have written and I’ve written and I think that part should be relatively clear to do the this, this policy and this, this meta algorithm is really the main thing to focus on now before we can implement it. As to your first question, will this solve the hard problem of consciousness? My hope is that it can contribute to it. I don’t know if it will solve it. But this is this is another instance where software engineering might hopefully help solve philosophical problems. I conjecture that consciousness is a piece of software. And so there’s… there’s really nothing that mysterious about it. And I think once we know the explanation, we will say, well, it was just too simple to see it, I don’t think it’s anything mysterious or heavily complex that’s going on. It may require a new mode of explanation, we don’t know. But…

C: Right.

D: If we truly build a creative program, it will be conscious. And we could, in principle, investigate its source code to see what it is that it runs that makes it conscious. However, I have some moral quibbles with that, related to privacy. And that is because, well, you've now instantiated a person, and this person will need to be… I think it deserves some… he or she, or, I say "it," because it won't have a sex, but…

C: [Chuckles] Yeah.

D: …this person will require some protection, at least initially, because it will be vulnerable in the sense that you could inspect its source code at any moment you like and simply try to reverse engineer all the ideas to see what's going on in its mind. And that is just a violation of its privacy; that's why I have some moral quibbles with it. Now, what I suggest we do once we build this thing is to raise it, to help it learn — and to respect its privacy and not look into the evolving idea pool. And once it has learned enough to communicate with us, we can ask it: hey, are you okay with opening yourself up in this way? May we inspect your code? If it agrees, then we should proceed. But I wouldn't want to violate its privacy.

C: That’s very respectful and gentleman of you, Dennis. But so [chuckles] well, first of all, I want to make the comment that it’s a very interesting time to be alive. And I hope I get to see when we make this breakthrough in my lifetime, that’d be cool.

D: Me, too.

C: But I have another question there. Earlier, I think you were talking about animal minds; I think you used that term. And I'm just curious, because I know that David makes a distinction: he says that animals don't have minds, and he uses "mind" exclusively for the creative program. Do you use it in a wider sense, then? And what does "mind" entail in your usage of the term?

D: Ah, yes, I think I do use it in a different sense. I speak of animals having minds, just not creative minds. So for a mind in an animal, you can just think of all the software that runs on the animal's brain.

C: Right. Okay, yeah.

D: And I call it a mind because that software is still an emergent property of the brain, and it is substrate independent. We could call it something else; we could reserve the term "mind" just for the creative mind. But yeah, I do consider animals to have minds, just not creative minds. And then with people, through this genetic mutation, their minds became creative.

C: So creativity would, on such a view, be more of a function of the mind rather than the entirety of the mind. Which is similar to how we use it in cognitive science, which I study.

D: That’s right. I mean, I think it is the distinguishing factor between animals and people and so forth. And it is the most important function of a person’s mind. But yes, to be precise, I suppose we could make that distinction; we could say, yes, the mind is the creative algorithm and then some, it’s just that the creative algorithm, because it explores this possible space of ideas is so all-encompassing and universal, that it makes up basically all of the mind.

C: Right. Okay. And it’s always the… it has the last word in any instance. So… but, let’s see here. Yeah, so I thought maybe we could end on this… this remarkable philosophical discovery that David Deutsch made in The Beginning of Infinity, that people are these programs, they are not their physiology. We are bootstrapped to our physiology, but we are essentially our creative minds. And you write in this, this book, that it’s highly relevant for things like helping people with mental ailments, making progress in anti-aging research, and venturing out into space, and so on. So this is a very novel perspective that psychotherapy would mainly be a matter of software engineering. So I’d like you to just explicate that a little bit.

D: That’s right. So part of the book is, is basically just taking David’s proposal that people are software seriously and exploring the ramifications of that. And these ramifications extend across several fields that you just mentioned, effective psychotherapy, anti-aging, space travel. So in terms of psychotherapy, if we take the notion that people are programs seriously, and that programs are substrate independent, that means they are independent of the hardware that they run on — they are an emergent phenomena — then that means that if you give someone a pill to help with a mental ailment, whatever happens there, it cannot be the entire solution. I think, if anything, it’s only a very small part to the solution. But whatever happens there can only work on a physical level. And that is the wrong level of emergence to explain how to cure mental ailments. So, I suggest that if we develop this technology, we will, in the future, once we have it, we will look back on present-day psychotherapy and psychology research and deem it unimaginably crude because what we’re dealing with when we have a mental ailment is… it must be a rogue idea. It must be software that is wrong. It’s not hardware that is wrong. If it is hardware that is wrong, it can only affect you in terms of memory capacity or processing power. Because our computers are universal, there is no other way. So when you pop a pill, and it makes you happy, say, that is an interpretation of sense data. Just like you interpret the sense data that’s streaming in through your eyes, you also interpret the sense data and the sensations you get from a pill that you take. But the interpretation you make of that is up to you. That is an idea that has to evolve in your mind. And so I think psychotherapy is and should be seen as a branch of software engineering, because that’s really what it is. The brain is a computer and people are programs and any ideas we have are, therefore, also programs. Now if you have, let’s say, a hangup, or a loop or you had a terrible experience, and you simply cannot rid yourself of that memory, say. That means there is a… an idea that replicates in your mind at the expense of good memories and healthy memories that you prefer. So there’s something in that pool of ideas that allows it to keep spreading and popping pills is not going to help that. [Chuckles] You have to explain it on the right level of emergence again. So this ties into what I said earlier about respecting someone’s privacy in terms of opening up their… their function pool and investigating it. And it also requires a slightly different… or, I should say, a very different technology, which ends up with the same result because to our AGI children, this effective psychotherapy in the form of software engineering will be available out of the box because they will run not on human bodies but on computer hardware. So they would just inherit this ability by default. And so then, you know, presumably there will be professional psychotherapists that are really just a specialized kind of software engineer that can help an AGI with mental ailments.

C: [Chuckles] No, so it’s basically just being really good at debugging.

D: Yes, that’s exactly right. It’s… it’s a debugging issue, there’s a bug in the program. And if we want to make the same technology available for humans, that requires a different avenue. And I don’t know much about this avenue, and I don’t think we do generally, when it comes to a sort of brain interface. And I don’t know how much this has been explored. And I know there’s some efforts around this. But it would basically require porting your mind from the brain onto some other piece of hardware so that you could then start inspecting the code and modifying the code, otherwise you won’t be able to perform this effective psychotherapy. So I focus in the book on just performing this effective psychotherapy or allowing it to be performed with the AGI’s approval on AGIs because, it will, like I said, it will be available to them by default.

C: That’s really cool. I, fundamentally, I think you’re absolutely right about that, that it is an epistemological… software issue. But since we are bootstrapped to our physiology, as it were, seemingly crude interventions like… let’s say dampening pain or increasing serotonin in the synapse can still have an impact, obviously can still have an impact and help people in their creative problem solving if only as a short term mediator to find an actual long-lasting solution. I don’t think you’re… you were disputing that per se. Or did I get you wrong?

D: No, I wasn’t. So if you… let’s say you have to go in for an outpatient surgery. And you have to, you know, somebody has to cut, I don’t know, part of your… your hand open or your foot or whatever it may be, and they give you a local anesthetic. Well, then, of course, the… the signals that would travel… that would normally travel from that part of your body, the nerve signals that would normally travel from that part of the body to your brain are now inhibited, and they won’t travel. So there is nothing for you to interpret there. And that’s why you don’t experience the suffering that results from that pain. But, if… if that sense data does stream in, then it is up to interpretation. Now we could ask, why is it that all… most if not all people avoid pain? How come, if this… you know, if it’s up if it’s up to us, and it’s all a matter of interpretation, how is it that some people enjoy… I mean, some people do enjoy pain to a certain level. And we do enjoy pain if we go to the gym, for example. Sam Harris has given this example if you were to wake up in the morning with the same kind of exhaustion and, and pain that you feel in your muscles after a good workout session at the gym, you would be [chuckles] very worried that something terrible is happening. We see here that it is a matter of interpretation.

C: Yes.

D: But of course, if you put your hand on a hot stove, it's going to hurt no matter what. So why is that seemingly consistent across people? Well, what I suggest is that we can still explain this in terms of the evolution that occurs in someone's mind. Oftentimes, evolutionary algorithms converge on the same solutions to the same problems. And so, even if we observe behavior and interpretations of sense data that are consistent across many people, that is still perfectly compatible with the idea that sense data is subject to interpretation, and with the idea that every person is unique in the sense that their idea pool is evolving differently from other people's.

C: But so the idea there wouldn't be that we're just genetically programmed to avoid pain and that that's immutable. It's that the idea, the interpretation, that pain is to be avoided, that pain is bad, is the most useful one; hence selection pressure makes it continue in most of our minds.

D: Yes… I think both. I would not be surprised if we had an inborn, genetically given idea to avoid pain. And then we can override it, for example by going to the gym and enjoying the limited amount of pain that we experience there. But we can also take it seriously if we choose to, for example when we put our hand on a hot stove.

C: Mmm.

D: So another consequence of this fact that people are software, which we can take seriously in addition to effective psychotherapy, is the implications it has for venturing out into space. I'm of the opinion that if we stay on Earth forever, then we're putting all our eggs in one basket, and we shouldn't be doing that; we should hedge our bets. And unfortunately, space travel is currently still a very dangerous undertaking, and a very expensive one at that.

C: Mmm.

D: The… it’s always the… the most precious load of any space shuttle is the people on that space shuttle. And that is also what makes the mission dangerous. So if we send a rover to Mars, and there’s no people on board, that is very expensive still, but it’s not very dangerous. There’s no lives being lost. Worst case, you would lose the rover which then you have to build it again, which is a bummer, but at least nobody dies. Now, if we take the notion seriously that people are programs, well, that means we also have to take seriously the notion that programs generally, as I said, are substrate independent. So, they are are not tethered to their hardware. And we routinely move programs from one computer to another. Like you said earlier, if you download a web browser, then it’s… that software is sitting on some server somewhere and you download it. So if we take seriously, again, the notion that people have software that means that all the technologies we’ve built around software, such as transferring software from… from one computer to another, will automatically be available to people, as long as they are not tethered to their bodies. Unfortunately, humans are tethered to their bodies, we don’t yet know how to untether them; that would require the same brain interface that I mentioned earlier for… that’s required for effective psychotherapy. But for AGIs, this technology will again be available out of the box because, again, it’s trivial to move a program from one computer to another. That includes computers that are miles apart and millions of miles apart. So what does this mean for space exploration. So if you want to send somebody to Mars, if you… if you want to send them physically, depending on the constellation of the Earth and Mars around the sun, it takes at least half a year to get there. It can take up to 300 days, so almost a year. However, if you send a person in terms of uploading a program, well, that can travel over radio waves, and those travel at the speed of light. So there is this notion that people couldn’t possibly travel at the speed of light. And the reason people say this is because well, you’d have to accelerate their bodies. And as you do this, and you accelerate their bodies and you approach the speed of light, their mass increases, and at the speed of light, their mass would be infinite. So you would require an infinite amount of energy to push that mass through space, so therefore, people can never travel at the speed of light. This is a mistake. This is a misconception that is rooted in the reductionist mistake that people are their physiology, their bodies. They’re not. People are programs. So people can, in fact, travel at the speed of light. Because all you would need to do is untether — [chuckles] I say “all you need to do,” but in principle, it is not forbidden by the laws of physics to untether yourself from your body, and then travel at the speed of light through space through radio waves to whatever destination in space you want to get to. So this makes space travel not only faster, but inherently safer, too, and cheaper, because now what you do is, if you want to colonize Mars, you don’t have to terraform it, you don’t even have to send people there physically. All you need to do is, you send a computer to Mars, like the rovers we already have there, you establish a signal with it, and then you upload a person to it. 
And, in fact, we could already do this on Mars, because we already have rovers present there and we communicate with them; they send pictures back to us and we send commands to them. Now, if we travel to Mars at the speed of light, it takes about three minutes at closest approach. So effectively, you're reducing the time it takes to travel through space to the time it takes to send a piece of hardware, a computer, there once — which takes, again, half a year or something — and once it is there, you can go back and forth at the speed of light. I think untethering is a challenge that we will need to master, but this is another technology that will be available to our AGI children out of the box; they will just be able to do this. Given the huge distances we're facing, I don't really see another way to tractably colonize the solar system and beyond, because everything else is too slow and too dangerous. So I hope that this will increase our odds of success.
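
For the software half of this claim, the transfer step really is routine. The following sketch (hypothetical names, and obviously ignoring the unsolved untethering problem discussed above) just serializes a "person," modeled as program state, on one machine and reconstitutes it on another.

```python
import json

# A deliberately simple sketch of substrate independence: program state
# serialized on one machine and reconstituted on another. For software,
# the transfer step itself is ordinary engineering.

earth_computer = {
    "person": {"name": "explorer-1", "idea_pool": ["curiosity", "caution"]}
}

# Serialize the program state to bytes -- the part that could travel
# to Mars over radio at the speed of light.
signal = json.dumps(earth_computer["person"]).encode("utf-8")

# ...minutes later, the Mars rover's computer receives the signal...
mars_rover = {}
mars_rover["person"] = json.loads(signal.decode("utf-8"))

assert mars_rover["person"] == earth_computer["person"]
print(f'{mars_rover["person"]["name"]} is now running on the Mars rover')
```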

C: Well, yeah, that’s a very novel way of looking at it. And yeah, I feel like I really have to adjust my worldview today. I’m gonna have to… I’m gonna have weird dreams tonight, Dennis, I think.

D: [Chuckles]

C: [Chuckles] But yeah, it's been a pleasure. I would love to have you back, and I will encourage everyone to go buy your book. Where can they get it?

D: The book is called “A Window on Intelligence,” and it’s available on Amazon as a paperback and ebook. It’ll be available on other platforms like Apple Books soon.

C: And if people want to know more about you, where can they find you?

D: You can find me on Twitter; just google Dennis Hackethal, or look for Dennis Hackethal on Twitter. And there’s also a website for the book, windowonintelligence.com.

C: Alright, that’s perfect. Thanks again for coming on. And hopefully, it won’t be the last time.

D: It was a great pleasure to be here.

If you enjoyed this article, follow me on Twitter for more content like it.

