Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

History of post ‘Animal-Sentience FAQ’

Versions are sorted from most recent to oldest with the original version at the bottom. Changes are highlighted relative to the next, i.e. older, version underneath. Only changed lines and their surrounding lines are shown, except for the original version, which is shown in full.

Revision 3 · View this (the most recent) version (v4)

Link to post on sleepwalking

@@ -212,7 +212,7 @@ if (incoming_electric_signal > 1000) {
}
```

Generally speaking, most overestimate what consciousness is required for – not just for animals but also for people (but, to be clear, I do think people *are* conscious). We do tons of things unconsciously all the time. Also, [sleepwalkers](/posts/sleepwalking) can navigate their surroundings, pour drinks and prepare food. People talk in their sleep. If I recall correctly, Popper once wrote somewhere that children sometimes *hold conversations* in their sleep. Conversely, people underestimate how powerful genetic preprogramming is in animals and what it can account for (while ironically <em>over</em>estimating the power of genetic preprogramming in people, where, following Deutsch, I believe it has little power).

<h3 id="many-animals-live-in-herds-and-lead-complex-social">
  <a href="#many-animals-live-in-herds-and-lead-complex-social">Many animals live in herds and lead complex social lives, coordinate to hunt, etc.</a>
</h3>

Revision 2 · View this version (v3)

@@ -214,7 +214,6 @@ if (incoming_electric_signal > 1000) {

Generally speaking, most overestimate what consciousness is required for – not just for animals but also for people (but, to be clear, I do think people *are* conscious). We do tons of things unconsciously all the time. Also, sleepwalkers can navigate their surroundings, pour drinks and prepare food. People talk in their sleep. If I recall correctly, Popper once wrote somewhere that children sometimes *hold conversations* in their sleep. Conversely, people underestimate how powerful genetic preprogramming is in animals and what it can account for (while ironically <em>over</em>estimating the power of genetic preprogramming in people, where, following Deutsch, I believe it has little power).


<h3 id="many-animals-live-in-herds-and-lead-complex-social">
  <a href="#many-animals-live-in-herds-and-lead-complex-social">Many animals live in herds and lead complex social lives, coordinate to hunt, etc.</a>
</h3>

Revision 1 · View this version (v2)

@@ -1,245 +1,245 @@
# Animal-Sentience FAQ

Some responses to the most common criticisms of and questions about the view that animals are not sentient. To reference, click on a heading. Feel free to share.

My views on animal sentience are heavily influenced by David Deutsch and Elliot Temple, plus background knowledge from Karl Popper. Thanks to Logan Chipkin for commenting on a draft of this post.

<h3 id="do-you-think-non-human-animals-can-suffer">
  <a href="#do-you-think-non-human-animals-can-suffer">Do you think non-human animals can suffer?</a>
</h3>

No.

<h3 id="why-can-t-non-human-animals-suffer">
  <a href="#why-can-t-non-human-animals-suffer">Why can’t non-human animals suffer?</a>
</h3>

Because all they do is mindlessly execute inborn algorithms which are the result of biological evolution. (This is Deutsch's view in my own words.)

<h3 id="but-you-don-t-doubt-that-humans-can-suffer">
  <a href="#but-you-don-t-doubt-that-humans-can-suffer">But you don’t doubt that humans can suffer?</a>
</h3>

No, I do not.

<h3 id="we-are-closely-related-to-many-other-species-human">
  <a href="#we-are-closely-related-to-many-other-species-human">We are closely related to many other species. Humans are animals, too. Why don’t animals suffer if we do? The genetic difference is minor.</a>
</h3>

The genetic difference is indeed minor, but we also share many genes with plants, which are not conscious. Hardware differences in general are small enough that many animals’ hardware *could* be programmed to be conscious.

The real difference is one of *software*, not hardware (compare p. 414 of Deutsch's book *The Beginning of Infinity*). A small subset of our DNA codes for a self-replicating idea. Once invoked, this idea evolves into many different ideas during a person’s lifetime. This is the *genetic jump* to creativity. That body of knowledge can grow to be much larger than the genetic knowledge we inherit. This explains how humans can learn so much that isn’t genetically baked in. This ability to learn *is* what makes people conscious. Read my article [‘The Neo-Darwinian Theory of the Mind’](/posts/the-neo-darwinian-theory-of-the-mind) for more information on this.

<h3 id="but-animals-can-learn-too">
  <a href="#but-animals-can-learn-too">But animals can learn, too!</a>
</h3>

They can certainly change their behavior in useful ways, yes. (Compare Deutsch's [remark](https://www.artbrain.org/image-gallery/journal-neuroaesthetics-6/hans-ulrich-obrist-interview-with-david-deutsch/#:~:text=Learning%2C%20perhaps%2C%20in%20that%20any%20useful%20change%20can%20be%20considered%20learning.) "Learning, perhaps, in that any useful change can be considered learning.") How they do this is akin to present-day artificial-'intelligence' algorithms such as reinforcement ‘learning’. But those algorithms aren’t conscious. People have a completely different learning algorithm – which we don’t fully understand yet – which makes them conscious and constitutes *real* learning. If animals had that same algorithm, you wouldn’t need to, say, train a dog to sit via reinforcement. You’d *explain* to the dog why it should sit, and then it might decide to do so. (But a creative being generally won’t like being told to sit on command.)
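
To make the comparison concrete, here is a minimal, hypothetical sketch of the kind of reinforcement-‘learning’ update such training amounts to. The action names, numbers, and update rule are all invented for illustration; they are not taken from any real training method:

```js
// Hypothetical sketch: reinforcement-style training of 'sit'.
// Action names, numbers, and the update rule are all invented.

// The dog's inborn 'policy': a propensity for each available action.
const propensities = { sit: 0.1, bark: 0.5, wander: 0.4 };

// Reward feedback mindlessly shifts propensities. No understanding of
// *why* sitting is rewarded appears anywhere in this update.
function reinforce(action, reward, learningRate = 0.2) {
  propensities[action] = Math.max(0, propensities[action] + learningRate * reward);
}

// Training loop: reward 'sit', mildly punish everything else.
for (let trial = 0; trial < 50; trial++) {
  for (const action of Object.keys(propensities)) {
    reinforce(action, action === 'sit' ? 1 : -0.05);
  }
}

// After enough trials, 'sit' dominates -- yet no new knowledge was
// created anywhere; existing numbers were merely adjusted.
```

Nothing in this loop creates new knowledge – it only nudges numbers the programmer (or, in an animal’s case, evolution) already put there.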

<h3 id="if-you-don-t-understand-consciousness-how-can-you-">
  <a href="#if-you-don-t-understand-consciousness-how-can-you-">If you don’t understand consciousness, how can you possibly claim that animals aren’t conscious?</a>
</h3>

You don’t need to be a trained pianist to tell whether somebody can play the piano reasonably well. The audience generally doesn’t have the knowledge to play the piano well enough themselves, but they can still tell because they know what good piano-playing *isn’t*.

Similarly, we have good explanations for what consciousness *isn’t*. Again, the mindless execution of inborn algorithms (except for the creative one) isn’t consciousness.

You can compare animals to present-day computers in this regard. Like animals, our computers contain highly sophisticated knowledge. The question is: where does this knowledge come from? In the case of our computers, it comes from the programmers. Who’s the animals’ programmer? Biological evolution.

In both cases, the knowledge was ‘inherited’ from an outside source. Neither the computers nor the animals are the *creators* of the knowledge they contain. But they’d *need* to be the creators to be conscious (see *The Beginning of Infinity* ch. 7). All they’re concerned with is, again, the mindless execution of algorithms they already contain. *Something that merely executes pre-existing algorithms is not and cannot be conscious – it is mindless.* The only alternative left is that consciousness has to do with the *creation* of knowledge. What other aspect of information processing could there be?

<h3 id="isn-t-consciousness-a-matter-of-how-complex-or-sop">
  <a href="#isn-t-consciousness-a-matter-of-how-complex-or-sop">Isn’t consciousness a matter of how complex or sophisticated the animal is?</a>
</h3>

This is a common view: humans are said to be more complex than other apes and so on down some hierarchy. The chain of complexity is usually pictured akin to this (in descending order): humans > other apes > fish > insects > single-celled organisms. Here we encounter a problem already: people underestimate how complex even a single cell is, and if they saw an animal doing the things some of the single cell’s components do, they’d attribute consciousness based on the ‘sufficient-complexity criterion’. But they don’t attribute consciousness to the cell’s components.

Two people are more complex than one person. But two people are not any more conscious than each person individually, nor is there any shared consciousness between them (other than in some woo-woo sense of them having shared ideas or compassion for each other or something, which isn’t what consciousness is). Even if animals are conscious, the biosphere as a whole isn’t, although it’s much more complex than any animal by itself.

More deeply, the reason complexity/sophistication (I prefer to think in terms of sophistication) cannot determine consciousness is this: sophistication and consciousness are completely orthogonal. Again, no matter how sophisticated an inborn algorithm is, since it can be executed *mindlessly*, in computer fashion, that sophistication cannot be evidence of consciousness. Executing a pre-existing algorithm does not require consciousness, no matter how complex the algorithm may be. Many of the algorithms our computers execute mindlessly are highly sophisticated – more sophisticated than much of what the biosphere has ever managed to create.

To distinguish clearly between sophistication and consciousness, I have introduced the distinction between *smart* and *intelligent* in [my book](https://www.amazon.com/Window-Intelligence-Philosophy-Evolution-Implications/dp/1734696133/):

An entity is *smart* if it *contains* sophisticated knowledge. Where that knowledge came from – whether the entity created the knowledge itself or not – is not relevant to determine whether it is smart. Many animals and many computers are smart because they contain sophisticated knowledge.

An entity is *intelligent* if it can *create* knowledge. (To be clear, it needs to be a *coherent* entity. The biosphere can create knowledge through biological evolution but is not coherent like a person.)

Do you see how these two qualities are unrelated? Our computers containing sophisticated programs are smart but not intelligent. A human baby is intelligent but not very smart yet because it hasn’t had much time to learn. All people are intelligent, but only some of them are smart. Intelligence, as David Deutsch argues, is a *binary* matter: you either have it or you don’t. In other words, [the genetic jump to creativity](/posts/the-neo-darwinian-theory-of-the-mind) either happens or it doesn’t. But smarts can exist in degrees and can get ever better.

Thinking animals are conscious is often due to fudging the difference between smart and intelligent.

By the way, thinking that complexity/sophistication (in other words: design) is evidence of the existence of a conscious, intelligent being is *creationism*. When applied to animals, it is creationism in disguise. Instead of attributing intelligence to a supernatural being, it’s attributed to individual animals, while the real creator – biological evolution – isn’t mentioned. This is ironic because animal-rights activists often reject creationism and instead follow and appreciate science (e.g. they’ll cite neurobiology to argue that animals are conscious, see below).

<h3 id="what-does-intelligence-have-to-do-with-consciousne">
  <a href="#what-does-intelligence-have-to-do-with-consciousne">What does intelligence have to do with consciousness?</a>
</h3>

Most people think of intelligence as the degree of sophistication of someone’s knowledge. As I’ve just explained in my answer to the previous question, that isn’t the case. Intelligence is the ability to *create new* knowledge, sophisticated or not. (Compare Deutsch's definition of creativity (I use the term 'creativity' synonymously with 'intelligence'), which says it is the ability to create new *explanations* in particular. *The Beginning of Infinity*, ch. 1 glossary.) If something *isn’t* intelligent, all it can do (if that!) is mindlessly execute pre-existing knowledge. It’s utterly algorithmic. So consciousness must live somewhere in intelligence.

<h3 id="do-you-have-any-evidence-of-animals-being-algorith">
  <a href="#do-you-have-any-evidence-of-animals-being-algorith">Do you have any evidence of animals being algorithmic?</a>
</h3>

Yes, see:

- My post ['Buggy Dogs'](/posts/buggy-dogs)
- Elliot Temple's post ['Algorithmic Animal Behavior'](https://direct.curi.us/272-algorithmic-animal-behavior)

You can be more successful with animals if you treat them as algorithmic than if you treat them as sentient beings with free will, as evidenced by [this video](https://twitter.com/Rainmaker1973/status/1438457641471291399) vs [this video](https://twitter.com/_Islamicat/status/1438480429833666562).

That’s because they *are* algorithmic.

<h3 id="but-sometimes-humans-are-algorithmic-too">
  <a href="#but-sometimes-humans-are-algorithmic-too">But sometimes humans are algorithmic too.</a>
</h3>

Yes, but the difference is in how they deal with that. They can reflect on the situation and recognize when they’re stuck. They can correct the error through creative means.

The point is not that humans never make mistakes while animals do. Both make mistakes. The difference is, again, in how they deal with mistakes. Consider dogs pointlessly ‘swimming’ in mid-air when held above water (see my post ['Buggy Dogs'](/posts/buggy-dogs)). Should they stop and ‘correct’ the error, it’s because other inborn algorithms take over – e.g. because the dog’s energy is too depleted to keep ‘swimming’ – not because the dog understands its mistake and makes a conscious decision to correct it. You can tell by the dog trying to ‘swim’ again (I imagine) under the same conditions after regaining energy.

Sometimes humans *are* algorithmic in strikingly similar ways, such as when they repeat pointless religious rituals over and over. Here the difference lies, among other things, in how to get them out of that. With a dog, you have to use reinforcement learning: whenever the dog tries to ‘swim’ mid-air, you have to yell ‘no’ at it, use electric shocks, or something of that nature. Over dozens of iterations, the dog’s swimming behavior may gradually fade until it disappears completely. But with humans, yelling ‘no’ when they enact a ritual won’t do much. In fact, it might be counterproductive: they may continue the practice out of spite, because they dislike the person yelling at them, because they think others are ‘too dumb to see the obvious’ (like animal-rights activists think, see below), or for any number of reasons they themselves come up with. To stop enacting the ritual, they need to be *persuaded* that it is pointless. Only intelligent beings can persuade and be persuaded; dogs cannot.

In addition, when humans behave algorithmically, it's *because* they're not being creative. Such as when mindlessly enacting a ritual.

<h3 id="maybe-animals-are-just-less-conscious-than-humans">
  <a href="#maybe-animals-are-just-less-conscious-than-humans">Maybe animals are just <em>less</em> conscious than humans.</a>
</h3>

Although we humans are sometimes more or less aware of certain things – e.g., I am currently more aware of my computer screen and keyboard than my desk – the *ability* to be conscious is the ability to suffer, and that ability is something you either have or don’t. Animals don’t. Again, the genetic jump to creativity – and with it, consciousness – either happens or it doesn’t.

<h3 id="our-nervous-systems-and-those-of-other-mammals-are">
  <a href="#our-nervous-systems-and-those-of-other-mammals-are">Our nervous systems and those of other mammals are so similar. Surely they can feel pain.</a>
</h3>

Feeling pain – i.e., suffering – is not a matter of hardware but of *software*. A nervous system by itself doesn’t give you the requisite software. Conversely, you could program a computer to be able to suffer (we don’t currently know how to do that) even though computers *don’t* have nervous systems. In other words, nervous systems are neither necessary nor sufficient for suffering. *Some* physical substrate is needed to instantiate consciousness, but it need not be a nervous system.

At most, nervous systems can implement the infrastructure for pain signals to travel to the brain, where (in conscious beings!) pain is then (sometimes, but not always) interpreted and experienced as suffering.

Whether animals are sentient and can suffer is an *epistemological* question, not a neurobiological one.

<h3 id="when-you-cut-off-a-dog-s-paw-it-cries-out-in-pain-">
  <a href="#when-you-cut-off-a-dog-s-paw-it-cries-out-in-pain-">When you cut off a dog’s paw, it cries out in pain. <em>Obviously</em> it’s conscious.</a>
</h3>

You mean like how people used to point up at the sun as ‘obvious’ evidence that it revolves around the earth?

The truth is hard to come by, not easy. Also, [evidence is ambiguous](/posts/evidence-is-ambiguous). When you say something is obvious, you are not describing an objective property of truth, but the sensation of effortlessness you have when invoking an existing explanation to interpret some evidence. We need to be *critical* of our existing explanations. That can be difficult but it can help us get closer to the truth.

<h3 id="shouldn-t-we-treat-animals-as-conscious-beings-whi">
  <a href="#shouldn-t-we-treat-animals-as-conscious-beings-whi">Shouldn’t we treat animals as conscious beings which can suffer <em>just in case</em> they can?</a>
</h3>

This is a modern-day version of Pascal’s wager and invalid for the same reason. While our explanations are always tentative, our actions need not be. The best thing we can ever do is act on our best explanations. We might be wrong, but then we can always course correct. And sometimes our best action is to just deliberate in peace.

<h3 id="animals-often-exhibit-behavior-very-similar-to-hum">
  <a href="#animals-often-exhibit-behavior-very-similar-to-hum">Animals often exhibit behavior very similar to humans who <em>do</em> suffer.</a>
</h3>

You can’t infer internal states from behavior. A robot programmed to scream ‘ouch’ when you hit it also exhibits behavior very similar to humans who are hit and suffer as a result. So behavior alone doesn’t tell us much.

David Deutsch’s constructor theory may one day tell us how and whether the kinds of transformations animals or biological evolution can cause differ from those that people can cause. That way, if you pointed your telescope at a distant planet and saw evidence of a particular transformation, you would know that, say, people live on or have visited that planet, and that animals *could not* have caused that transformation. That’s why I said ‘behavior alone doesn’t tell us *much*’ – it could tell us *something*, but currently it’s not the most important factor.

<h3 id="it-s-impossible-to-prove-or-disprove-consciousness">
  <a href="#it-s-impossible-to-prove-or-disprove-consciousness">It’s impossible to prove or disprove consciousness in both humans <em>and</em> animals.</a>
</h3>

Correct, but we’re not after proof, we’re after good explanations. Proof/certainty is epistemologically uninteresting and unnecessary. All our knowledge is *conjectural*, as Popper says.

<h3 id="have-you-no-heart">
  <a href="#have-you-no-heart">Have you no heart?</a>
</h3>

A few years ago I was vegan for about four months out of concern for animals until I quit for health reasons. I used to think animals are conscious and can suffer. I still think that *if* they can suffer it’s immoral to kill them, and I have more in common with animal-rights activists on this point than most meat eaters do. Indeed, *if* animals can suffer, industrial meat production may be one of the worst crimes ever committed. Meat eaters who think animals can suffer have a hopelessly self-contradictory moral stance and should make up their minds.

In short, I understand where animal-rights activists come from. I used to think animals are conscious. But I changed my mind.

<h3 id="why-did-you-change-your-mind">
  <a href="#why-did-you-change-your-mind">Why did you change your mind?</a>
</h3>

I heard about Descartes’ argument that animals are robots in middle school. I think this was the first time I encountered the view that animals aren’t conscious. There we also discussed that Pascal’s wager isn’t a valid response to concerns about animal consciousness (or anything, really). Later on, philosopher Elliot Temple offered his views on animal consciousness (or rather, lack thereof) to me, showed me the connection to epistemology, and explained various animal behaviors through the use of inborn algorithms. I then re-read *The Beginning of Infinity* and found several arguments in favor of the view that animals aren’t conscious (they’re not very explicit – [they’re in there](/posts/views-on-animal-sentience-in-the-beginning-of-i) but you kind of need to know how to look for and how to read them).

These arguments weren’t quite enough to convince me. I continued to think animals may be conscious, that consciousness may be orthogonal to creativity, and that it could come in degrees. But I think those arguments did the necessary prep work; importantly, I now understood that whether animals are conscious is an *epistemological* question. What *did* eventually convince me was my [neo-Darwinian approach to the mind](/posts/the-neo-darwinian-theory-of-the-mind), which explains the evolution of creativity, and with it, consciousness, through a genetic jump. This jump has a binary nature: it either happens or it doesn’t. From there, it followed that creativity, and with it consciousness, is binary (which is also Deutsch’s argument) and that humans are conscious while animals are not because they haven’t undergone the same genetic jump.

<h3 id="but-thinking-that-animals-can-t-suffer-is-just-pla">
  <a href="#but-thinking-that-animals-can-t-suffer-is-just-pla">But thinking that animals can’t suffer is just plain cruel!</a>
</h3>

Not if they really can’t suffer.

As I wrote [here](/posts/the-animal-rights-community-is-based-on-fear-a), many are pressured into caring for animals because they don’t want to seem cruel. You shouldn’t intimidate others into submission to spread your ideas, and you shouldn’t adopt ideas because you’re pressured into it.

A related problem with many animal studies is that they’re done by people who love animals. After all, that’s why they’re interested in studying them. Their love for animals casts serious doubt on how objective their studies can be; on how much they can contribute to the body of knowledge about animal sentience. They *can* still make objective progress in that area, but I’d guess it’s harder for them.

<h3 id="explain-animal-behavior-x">
  <a href="#explain-animal-behavior-x">Explain animal behavior X.</a>
</h3>

You’re welcome to throw some animal behavior at me which you think is evidence of consciousness and I’ll do my best to explain it through the mindless execution of inborn algorithms plus the logic of the situation. Leave a comment at the bottom of the page. But keep in mind that I’ve already explained above that, *no matter how sophisticated*, behavior can *always* be the result of the mindless execution of inborn algorithms. So if I can’t think of a particular way, that’s not a refutation of my views. But if I can, it may be illuminating.

As an example, somebody told me he tried tricking his dog into getting a bath by offering treats. Once the dog came close enough to the bathroom to hear running bathwater, it decided to back away and ignore the treats. The dog owner interpreted this as evidence that the dog was conscious and asked me to explain how it could be otherwise. In other words: what genetic programming could have led to this behavior? This programming, for example:

```js
let wanting_treat = true;

while (wanting_treat) {
  move_toward_treat();

  if (hear_bathwater()) {
    wanting_treat = false;
    back_away();
  }
}

// see any consciousness in this code??
```

I don’t find the dog’s behavior any more mysterious or in need of consciousness than a Roomba backing away from the top of a staircase so it doesn’t fall.
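
The Roomba comparison can be sketched with the same if-this-then-that shape as the dog code above. Everything here is invented for illustration – `cliffSensor` and the other names are not a real robot API:

```js
// Hypothetical sketch of a Roomba-style cliff check -- the same
// mindless rule shape as the dog/bathwater code above.
const CLIFF_THRESHOLD = 100;

// A toy robot on a one-dimensional floor that drops away at position 3.
function makeRobot() {
  return {
    position: 0,
    cliffSensor() { return this.position >= 3 ? 200 : 0; },
    moveForward() { this.position += 1; },
    backAway() { this.position -= 1; },
  };
}

// One mindless step: back away from a detected drop, else keep going.
function step(robot) {
  if (robot.cliffSensor() > CLIFF_THRESHOLD) {
    robot.backAway();
  } else {
    robot.moveForward();
  }
}

const roomba = makeRobot();
for (let i = 0; i < 10; i++) step(roomba);
// The robot hovers at the edge and never goes over -- no
// consciousness required.
```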

I’m also happy to explain how a robot could do what animals do without being conscious (which is exactly the same as asking how an animal could do something without being conscious but from what feels like a slightly different point of view). For example, I was asked how a robot could respond to pain without being conscious:

```js
let incoming_electric_signal = get_electric_signal();

// The electric signal could represent the temperature of whatever
// the robot just touched, say. The higher the temperature, the
// greater the number returned by `get_electric_signal`.

// If some threshold is passed:
if (incoming_electric_signal > 1000) {
  // Detected pain!
  say('OUCH');
}
```

Generally speaking, most overestimate what consciousness is required for – not just for animals but also for people (but, to be clear, I do think people *are* conscious). We do tons of things unconsciously all the time. Also, sleepwalkers can navigate their surroundings, pour drinks and prepare food. People talk in their sleep. If I recall correctly, Popper once wrote somewhere that children sometimes *hold conversations* in their sleep. Conversely, people underestimate how powerful genetic preprogramming is in animals and what it can account for (while ironically <em>over</em>estimating the power of genetic preprogramming in people, where, following Deutsch, I believe it has little power).


<h3 id="many-animals-live-in-herds-and-lead-complex-social">
  <a href="#many-animals-live-in-herds-and-lead-complex-social">Many animals live in herds and lead complex social lives, coordinate to hunt, etc.</a>
</h3>

See my remark [above](#isn-t-consciousness-a-matter-of-how-complex-or-sop) on complexity for why it doesn’t require consciousness. Also, social interactions with other animals, as well as hunting behavior and strategies, can all be genetically preprogrammed.

<h3 id="do-you-have-any-credentials-proving-your-expertise">
  <a href="#do-you-have-any-credentials-proving-your-expertise">Do you have any credentials proving your expertise in fields related to animal consciousness?</a>
</h3>

Maybe. Maybe not. Who cares? We shouldn’t judge ideas by source or the source’s credentials but by content only.

<h3 id="how-could-one-change-your-mind">
  <a href="#how-could-one-change-your-mind">How could one change your mind?</a>
</h3>

I will change my mind and conclude that animals are conscious if you do any one of the following:

- Explain why, above some threshold, increased complexity requires consciousness and *could not* have been preprogrammed, and show that animals exist whose complexity in behavior is already above that threshold
- Show an error in my understanding of epistemology that affects my conclusions about animals (whether this actually changes my mind depends on the error you find)
- Convince me that present-day robots doing relatively complex things like walking around and jumping over obstacles are already conscious and that we just didn’t realize it. Better yet, convince me that robots doing rather *simple* things are conscious
- Convince me that animals are intelligent (the reason that will work is that I think consciousness arises from intelligence and *only* from intelligence) while keeping in mind the distinction between smart and intelligent I laid out above
- Convince me that consciousness could arise through something other than intelligence, and that animals have that other thing
- Offer a better explanation of the mind than my [neo-Darwinian one](/posts/the-neo-darwinian-theory-of-the-mind). Animal consciousness must follow from your explanation. In particular, replace the genetic jump to creativity with something else that occurred during the evolution of animals *and* humans

In addition, say one day we have a working explanation of how the mind works and we can program it on a computer to make artificial general intelligence. We know how consciousness works and what gives rise to it. Using that explanation, we build a device that can be pointed at some object and displays `true` or `false` to indicate whether the object is conscious. If I am convinced by the explanation of how the mind works as well as by the explanation of how the device works, and the device repeatedly displays `true` when pointed at various animals, I will change my mind as well.

If the device instead used a *continuous* dial to indicate consciousness rather than a boolean one, that would change my mind about consciousness being a binary thing. If, when pointed at a rock, the device reads ‘10%’ or something (what exactly that would mean would depend on our best explanations of how the device works), I may even embrace panpsychism.

Original · View this version (v1)

# Animal-Sentience FAQ

Some responses to the most common criticisms of and questions about the view that animals are not sentient. To reference, click on a heading. Feel free to share.

My views on animal sentience are heavily influenced by David Deutsch and Elliot Temple, plus background knowledge from Karl Popper. Thanks to Logan Chipkin for commenting on a draft of this post.

<h3 id="do-you-think-non-human-animals-can-suffer">
  <a href="#do-you-think-non-human-animals-can-suffer">Do you think non-human animals can suffer?</a>
</h3>

No.

<h3 id="why-can-t-non-human-animals-suffer">
  <a href="#why-can-t-non-human-animals-suffer">Why can’t non-human animals suffer?</a>
</h3>

Because all they do is mindlessly execute inborn algorithms which are the result of biological evolution. (This is Deutsch's view in my own words.)

<h3 id="but-you-don-t-doubt-that-humans-can-suffer">
  <a href="#but-you-don-t-doubt-that-humans-can-suffer">But you don’t doubt that humans can suffer?</a>
</h3>

No, I do not.

<h3 id="we-are-closely-related-to-many-other-species-human">
  <a href="#we-are-closely-related-to-many-other-species-human">We are closely related to many other species. Humans are animals, too. Why don’t animals suffer if we do? The genetic difference is minor.</a>
</h3>

The genetic difference is indeed minor, but we also share many genes with plants, which are not conscious. Hardware differences in general are small enough that many animals’ hardware *could* be programmed to be conscious.

The real difference is one of *software*, not hardware (compare p. 414 of Deutsch's book *The Beginning of Infinity*). A small subset of our DNA codes for a self-replicating idea. Once invoked, this idea evolves into many different ideas during a person’s lifetime. This is the *genetic jump* to creativity. That body of knowledge can grow to be much larger than the genetic knowledge we inherit. This explains how humans can learn so much that isn’t genetically baked in. This ability to learn *is* what makes people conscious. Read my article [‘The Neo-Darwinian Theory of the Mind’](/posts/the-neo-darwinian-theory-of-the-mind) for more information on this.

<h3 id="but-animals-can-learn-too">
  <a href="#but-animals-can-learn-too">But animals can learn, too!</a>
</h3>

They can certainly change their behavior in useful ways, yes. (Compare Deutsch's [remark](https://www.artbrain.org/image-gallery/journal-neuroaesthetics-6/hans-ulrich-obrist-interview-with-david-deutsch/#:~:text=Learning%2C%20perhaps%2C%20in%20that%20any%20useful%20change%20can%20be%20considered%20learning.) "Learning, perhaps, in that any useful change can be considered learning.") How they do this is akin to present-day artificial-'intelligence' algorithms such as reinforcement ‘learning’. But those algorithms aren’t conscious. People have a completely different learning algorithm – which we don’t fully understand yet – which makes them conscious and constitutes *real* learning. If animals had that same algorithm, you wouldn’t need to, say, train a dog to sit via reinforcement. You’d *explain* to the dog why it should sit, and then it might decide to do so. (But a creative being generally won’t like being told to sit on command.)

<h3 id="if-you-don-t-understand-consciousness-how-can-you-">
  <a href="#if-you-don-t-understand-consciousness-how-can-you-">If you don’t understand consciousness, how can you possibly claim that animals aren’t conscious?</a>
</h3>

You don’t need to be a trained pianist to tell whether somebody can play the piano reasonably well. The audience generally doesn’t have the knowledge to play the piano well enough themselves, but they can still tell because they know what good piano-playing *isn’t*.

Similarly, we have good explanations for what consciousness *isn’t*. Again, the mindless execution of inborn algorithms (except for the creative one) isn’t consciousness.

You can compare animals to present-day computers in this regard. Like animals, our computers contain highly sophisticated knowledge. The question is: where does this knowledge come from? In the case of our computers, it comes from the programmers. Who’s the animals’ programmer? Biological evolution.

In both cases, the knowledge was ‘inherited’ from an outside source. Neither the computers nor the animals are the *creators* of the knowledge they contain. But they’d *need* to be the creators to be conscious (see *The Beginning of Infinity* ch. 7). All they’re concerned with is, again, the mindless execution of algorithms they already contain. *Something that merely executes pre-existing algorithms is not and cannot be conscious – it is mindless.* The only alternative that is left is that consciousness has to do with the *creation* of knowledge. What other aspect of information processing could there be?

<h3 id="isn-t-consciousness-a-matter-of-how-complex-or-sop">
  <a href="#isn-t-consciousness-a-matter-of-how-complex-or-sop">Isn’t consciousness a matter of how complex or sophisticated the animal is?</a>
</h3>

This is a common view: humans are said to be more complex than other apes and so on down some hierarchy. The chain of complexity is usually pictured akin to this (in descending order): humans > other apes > fish > insects > single-celled organisms. Here we encounter a problem already: people underestimate how complex even a single cell is, and if they saw an animal doing the things some of the single cell’s components do, they’d attribute consciousness based on the ‘sufficient-complexity criterion’. But they don’t attribute consciousness to the cell’s components.

Two people are more complex than one person. But two people are not any more conscious than each person individually, nor is there any shared consciousness between them (other than in some woo-woo sense of them having shared ideas or compassion for each other or something, which isn’t what consciousness is). Even if animals are conscious, the biosphere as a whole isn’t, although it’s much more complex than any animal by itself.

More deeply, the reason complexity/sophistication (I prefer to think in terms of sophistication) cannot determine consciousness is this: sophistication and consciousness are completely orthogonal. Again, no matter how sophisticated an inborn algorithm is, since it can be executed *mindlessly*, in computer fashion, that sophistication cannot be evidence of consciousness. Executing a pre-existing algorithm does not require consciousness, no matter how complex the algorithm may be. Many of the algorithms our computers execute mindlessly are highly sophisticated – more sophisticated than much of what the biosphere has ever managed to create.
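
To make that concrete, here is a standard textbook algorithm – edit distance via dynamic programming – sketched in the style of the code examples below. It is fairly sophisticated, and a computer executes it entirely mindlessly:

```js
// Levenshtein edit distance via dynamic programming: the minimum number
// of single-character insertions, deletions and substitutions needed
// to turn one string into another.
function editDistance(a, b) {
  // d[i][j] = distance between the first i chars of `a` and first j of `b`.
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1, // deletion
        d[i][j - 1] + 1, // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return d[a.length][b.length];
}

editDistance('kitten', 'sitting'); // 3 – see any consciousness in this code??
```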

To distinguish clearly between sophistication and consciousness, I have introduced the distinction between *smart* and *intelligent* in [my book](https://www.amazon.com/Window-Intelligence-Philosophy-Evolution-Implications/dp/1734696133/):

An entity is *smart* if it *contains* sophisticated knowledge. Where that knowledge came from – whether the entity created the knowledge itself or not – is not relevant to determine whether it is smart. Many animals and many computers are smart because they contain sophisticated knowledge.

An entity is *intelligent* if it can *create* knowledge. (To be clear, it needs to be a *coherent* entity. The biosphere can create knowledge through biological evolution but is not coherent like a person.)

Do you see how these two qualities are unrelated? Our computers containing sophisticated programs are smart but not intelligent. A human baby is intelligent but not very smart yet because it hasn’t had much time to learn. All people are intelligent, but only some of them are smart. Intelligence, as David Deutsch argues, is a *binary* matter: you either have it or you don’t. In other words, [the genetic jump to creativity](/posts/the-neo-darwinian-theory-of-the-mind) either happens or it doesn’t. But smarts can exist in degrees and can get ever better.

Thinking that animals are conscious is often the result of fudging the difference between smart and intelligent.

By the way, thinking that complexity/sophistication (in other words: design) is evidence of the existence of a conscious, intelligent being is *creationism*. When applied to animals, it is creationism in disguise: instead of attributing intelligence to a supernatural being, it’s attributed to individual animals, while the real creator – biological evolution – isn't mentioned. This is ironic because animal-rights activists often reject creationism and instead follow and appreciate science (e.g. they’ll cite neurobiology to argue that animals are conscious, see below).

<h3 id="what-does-intelligence-have-to-do-with-consciousne">
  <a href="#what-does-intelligence-have-to-do-with-consciousne">What does intelligence have to do with consciousness?</a>
</h3>

Most people think of intelligence as the degree of sophistication of someone’s knowledge. As I’ve just explained in my answer to the previous question, that isn’t the case. Intelligence is the ability to *create new* knowledge, sophisticated or not. (Compare Deutsch's definition of creativity (I use the term 'creativity' synonymously with 'intelligence'), which says it is the ability to create new *explanations* in particular. *The Beginning of Infinity*, ch. 1 glossary.) If something *isn’t* intelligent, all it can do (if that!) is mindlessly execute pre-existing knowledge. It’s utterly algorithmic. So consciousness must live somewhere in intelligence.

<h3 id="do-you-have-any-evidence-of-animals-being-algorith">
  <a href="#do-you-have-any-evidence-of-animals-being-algorith">Do you have any evidence of animals being algorithmic?</a>
</h3>

Yes, see:

- My post ['Buggy Dogs'](/posts/buggy-dogs)
- Elliot Temple's post ['Algorithmic Animal Behavior'](https://direct.curi.us/272-algorithmic-animal-behavior)

You can be more successful with animals if you treat them as algorithmic than if you treat them as sentient beings with free will, as evidenced by [this video](https://twitter.com/Rainmaker1973/status/1438457641471291399) vs [this video](https://twitter.com/_Islamicat/status/1438480429833666562).

That’s because they *are* algorithmic.

<h3 id="but-sometimes-humans-are-algorithmic-too">
  <a href="#but-sometimes-humans-are-algorithmic-too">But sometimes humans are algorithmic too.</a>
</h3>

Yes, but the difference is in how they deal with that. They can reflect on the situation and recognize when they’re stuck. They can correct the error through creative means.

The point is not that humans never make mistakes while animals do – both make mistakes. The difference is, again, in how they deal with them. Take dogs pointlessly ‘swimming’ in mid-air when held above water (see my post ['Buggy Dogs'](/posts/buggy-dogs)). When they do stop and ‘correct’ the error, it’s because other inborn algorithms take over, e.g. because the dog’s energy is too depleted to keep ‘swimming’ – not because the dog understands its mistake and makes a conscious decision to correct it. You can tell by the dog trying to ‘swim’ again (I imagine) under the same conditions after regaining energy.
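
In the spirit of the code sketches further down, the ‘swimming’ reflex just described might be caricatured like this (the function and its fields are hypothetical, purely illustrative):

```js
// A conceivable inborn algorithm: 'swim' whenever no ground is felt,
// stop only when energy runs out – no insight anywhere.
function dogBehavior(state) {
  if (!state.feels_ground && state.energy > 0) {
    state.energy -= 1;
    return 'swim';
  }
  return 'rest';
}

const s = { feels_ground: false, energy: 2 }; // held above water
dogBehavior(s); // 'swim'
dogBehavior(s); // 'swim'
dogBehavior(s); // 'rest' – energy depleted, not an understood mistake
s.energy = 2;   // after resting and regaining energy…
dogBehavior(s); // 'swim' again, under the same conditions
```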

Sometimes humans *are* algorithmic in strikingly similar ways, such as when they repeat pointless religious rituals over and over. Here the difference lies, among other things, in how to get them out of that. With a dog, you have to use reinforcement learning: whenever the dog tries to ‘swim’ mid-air, you have to yell ‘no’ at it, use electric shocks, or something of that nature. Over dozens of iterations, the dog’s swimming behavior may gradually fade until it disappears completely. But with humans, yelling ‘no’ when they enact a ritual won’t do much. In fact, it might be counterproductive: they may continue the practice out of spite, because they dislike the person yelling at them, because they think others are ‘too dumb to see the obvious’ (like animal-rights activists think, see below), or for any number of reasons they themselves come up with. To stop enacting the ritual, they need to be *persuaded* that it is pointless. Only intelligent beings can persuade and be persuaded; dogs cannot.

EDIT 2021-10-23: In addition, when humans behave algorithmically, it's *because* they're not being creative. Such as when mindlessly enacting a ritual.

<h3 id="maybe-animals-are-just-less-conscious-than-humans">
  <a href="#maybe-animals-are-just-less-conscious-than-humans">Maybe animals are just <em>less</em> conscious than humans.</a>
</h3>

Although we humans are sometimes more or less aware of certain things – e.g., I am currently more aware of my computer screen and keyboard than my desk – the *ability* to be conscious is the ability to suffer, and that ability is something you either have or don’t. Animals don’t. Again, the genetic jump to creativity – and with it, consciousness – either happens or it doesn’t.

<h3 id="our-nervous-systems-and-those-of-other-mammals-are">
  <a href="#our-nervous-systems-and-those-of-other-mammals-are">Our nervous systems and those of other mammals are so similar. Surely they can feel pain.</a>
</h3>

Feeling pain – i.e., suffering – is not a matter of hardware but of *software*. A nervous system by itself doesn’t give you the requisite software. Conversely, you could program a computer to be able to suffer (we don’t currently know how to do that) even though computers *don’t* have nervous systems. In other words, nervous systems are neither necessary nor sufficient for suffering. *Some* physical substrate is needed to instantiate consciousness, but it need not be a nervous system.

At most, nervous systems can implement the infrastructure for pain signals to travel to the brain, where (in conscious beings!) pain is then (sometimes, but not always) interpreted and experienced as suffering.

Whether animals are sentient and can suffer is an *epistemological* question, not a neurobiological one.

<h3 id="when-you-cut-off-a-dog-s-paw-it-cries-out-in-pain-">
  <a href="#when-you-cut-off-a-dog-s-paw-it-cries-out-in-pain-">When you cut off a dog’s paw, it cries out in pain. <em>Obviously</em> it’s conscious.</a>
</h3>

You mean like how people used to point up at the sun as ‘obvious’ evidence that it revolves around the earth?

The truth is hard to come by, not easy. Also, [evidence is ambiguous](/posts/evidence-is-ambiguous). When you say something is obvious, you are not describing an objective property of truth, but the sensation of effortlessness you have when invoking an existing explanation to interpret some evidence. We need to be *critical* of our existing explanations. That can be difficult but it can help us get closer to the truth.

<h3 id="shouldn-t-we-treat-animals-as-conscious-beings-whi">
  <a href="#shouldn-t-we-treat-animals-as-conscious-beings-whi">Shouldn’t we treat animals as conscious beings which can suffer <em>just in case</em> they can?</a>
</h3>

This is a modern-day version of Pascal’s wager and invalid for the same reason. While our explanations are always tentative, our actions need not be. The best thing we can ever do is act on our best explanations. We might be wrong, but then we can always course correct. And sometimes our best action is to just deliberate in peace.

<h3 id="animals-often-exhibit-behavior-very-similar-to-hum">
  <a href="#animals-often-exhibit-behavior-very-similar-to-hum">Animals often exhibit behavior very similar to humans who <em>do</em> suffer.</a>
</h3>

You can’t infer internal states from behavior. A robot programmed to scream ‘ouch’ when you hit it also exhibits behavior very similar to humans who are hit and suffer as a result. So behavior alone doesn’t tell us much.

David Deutsch’s constructor theory may one day tell us how and whether the kinds of transformations animals or biological evolution can cause differ from those that people can cause. That way, if you pointed your telescope at a distant planet and saw evidence of a particular transformation, you would know that, say, people live on or have visited that planet, and that animals *could not* have caused that transformation. That’s why I said ‘behavior alone doesn’t tell us *much*’ – it could tell us *something*, but currently it’s not the most important factor.

<h3 id="it-s-impossible-to-prove-or-disprove-consciousness">
  <a href="#it-s-impossible-to-prove-or-disprove-consciousness">It’s impossible to prove or disprove consciousness in both humans <em>and</em> animals.</a>
</h3>

Correct, but we’re not after proof, we’re after good explanations. Proof/certainty is epistemologically uninteresting and unnecessary. All our knowledge is *conjectural*, as Popper says.

<h3 id="have-you-no-heart">
  <a href="#have-you-no-heart">Have you no heart?</a>
</h3>

A few years ago I was vegan for about four months out of concern for animals until I quit for health reasons. I used to think animals are conscious and can suffer. I still think that *if* they can suffer it’s immoral to kill them, and I have more in common with animal-rights activists on this point than most meat eaters do. Indeed, *if* animals can suffer, industrial meat production may be one of the worst crimes ever committed. Meat eaters who think animals can suffer have a hopelessly self-contradictory moral stance and should make up their minds.

In short, I understand where animal-rights activists come from. I used to think animals are conscious. But I changed my mind.

<h3 id="why-did-you-change-your-mind">
  <a href="#why-did-you-change-your-mind">Why did you change your mind?</a>
</h3>

In middle school, I heard about Descartes' argument that animals are robots. I think this was the first time I encountered the view that animals aren’t conscious. There we also discussed that Pascal’s wager isn’t a valid response to concerns about animal consciousness (or anything, really). Later on, philosopher Elliot Temple offered his views on animal consciousness (or rather, lack thereof) to me, showed me the connection to epistemology, and explained various animal behaviors through the use of inborn algorithms. I then re-read *The Beginning of Infinity* and found several arguments in favor of the view that animals aren’t conscious (they’re not very explicit – they’re in there, but you kind of need to know how to look for them and how to read them).

These arguments weren’t quite enough to convince me. I continued to think animals may be conscious, that consciousness may be orthogonal to creativity, and that it could come in degrees. But I think those arguments did the necessary prep work; importantly, I now understood that whether animals are conscious is an *epistemological* question. What *did* eventually convince me was my [neo-Darwinian approach to the mind](/posts/the-neo-darwinian-theory-of-the-mind), which explains the evolution of creativity, and with it, consciousness, through a genetic jump. This jump has a binary nature: it either happens or it doesn’t. From there, it followed that creativity, and with it consciousness, is binary (which is also Deutsch’s argument) and that humans are conscious while animals are not because they haven’t undergone the same genetic jump.

<h3 id="but-thinking-that-animals-can-t-suffer-is-just-pla">
  <a href="#but-thinking-that-animals-can-t-suffer-is-just-pla">But thinking that animals can’t suffer is just plain cruel!</a>
</h3>

Not if they really can’t suffer.

As I wrote [here](/posts/the-animal-rights-community-is-based-on-fear-a), many are pressured into caring for animals because they don’t want to seem cruel. You shouldn’t intimidate others into submission to spread your ideas, and you shouldn't adopt ideas because you're pressured into it.

A related problem with many animal studies is that they’re done by people who love animals. After all, that’s why they’re interested in studying them. Their love for animals casts serious doubt on how objective their studies can be; on how much they can contribute to the body of knowledge about animal sentience. They *can* still make objective progress in that area, but I’d guess it’s harder for them.

<h3 id="explain-animal-behavior-x">
  <a href="#explain-animal-behavior-x">Explain animal behavior X.</a>
</h3>

You’re welcome to throw some animal behavior at me which you think is evidence of consciousness and I’ll do my best to explain it through the mindless execution of inborn algorithms plus the logic of the situation. Leave a comment at the bottom of the page. But keep in mind that I’ve already explained above that *no matter how sophisticated*, behavior can *always* be the result of the mindless execution of inborn algorithms. So if I can't think of a particular way, that's not a refutation of my views. But if I can, it may be illuminating.

As an example, somebody told me he tried tricking his dog into getting a bath by offering treats. Once the dog came close enough to the bathroom to hear running bathwater, it decided to back away and ignore the treats. The dog owner interpreted this as evidence that the dog was conscious and asked me to explain how it could be otherwise. In other words: what genetic programming could have led to this behavior? This programming, for example:

```js
let wanting_treat = true;

while (wanting_treat) {
  move_toward_treat();
  
  if (hear_bathwater()) {
    wanting_treat = false;
    back_away();
  }
}

// see any consciousness in this code??
```

I don’t find the dog’s behavior any more mysterious or in need of consciousness than a Roomba backing away from the top of a staircase so it doesn’t fall.

I’m also happy to explain how a robot could do what animals do without being conscious (which is exactly the same as asking how an animal could do something without being conscious but from what feels like a slightly different point of view). For example, I was asked how a robot could respond to pain without being conscious:

```js
let incoming_electric_signal = get_electric_signal();

// The electric signal could represent the temperature of whatever
// the robot just touched, say. The higher the temperature, the
// greater the number returned by `get_electric_signal`.

// If some threshold is passed:
if (incoming_electric_signal > 1000) {
  // Detected pain!
  say('OUCH');
}
```

Generally speaking, most overestimate what consciousness is required for – not just for animals but also for people (but, to be clear, I do think people *are* conscious). We do tons of things unconsciously all the time. Also, [sleepwalkers](/posts/sleepwalking) can navigate their surroundings, pour drinks and prepare food. People talk in their sleep. If I recall correctly, Popper once wrote somewhere that children sometimes *hold conversations* in their sleep. Conversely, people underestimate how powerful genetic preprogramming is in animals and what it can account for (while ironically <em>over</em>estimating the power of genetic preprogramming in people, where, following Deutsch, I believe it has little power).

<h3 id="many-animals-live-in-herds-and-lead-complex-social">
  <a href="#many-animals-live-in-herds-and-lead-complex-social">Many animals live in herds and lead complex social lives, coordinate to hunt, etc.</a>
</h3>

See my remark [above](#isn-t-consciousness-a-matter-of-how-complex-or-sop) on complexity to see why it doesn’t require consciousness. Also, social interactions with other animals, hunting behavior and strategies, can all be genetically preprogrammed.

<h3 id="do-you-have-any-credentials-proving-your-expertise">
  <a href="#do-you-have-any-credentials-proving-your-expertise">Do you have any credentials proving your expertise in fields related to animal consciousness?</a>
</h3>

Maybe. Maybe not. Who cares? We shouldn’t judge ideas by source or the source’s credentials but by content only.

<h3 id="how-could-one-change-your-mind">
  <a href="#how-could-one-change-your-mind">How could one change your mind?</a>
</h3>

I will change my mind and conclude that animals are conscious if you do any one of the following:

- Explain why, above some threshold, increased complexity requires consciousness and *could not* have been preprogrammed, and show that animals exist whose complexity in behavior is already above that threshold
- Show an error in my understanding of epistemology that affects my conclusions about animals (whether this actually changes my mind depends on the error you find)
- Convince me that present-day robots doing relatively complex things like walking around and jumping over obstacles are already conscious and that we just didn’t realize it. Better yet, convince me that robots doing rather *simple* things are conscious
- Convince me that animals are intelligent (the reason that will work is that I think consciousness arises from intelligence and *only* from intelligence) while keeping in mind the distinction between smart and intelligent I laid out above
- Convince me that consciousness could arise through something other than intelligence, and that animals have that other thing
- Offer a better explanation of the mind than my [neo-Darwinian one](/posts/the-neo-darwinian-theory-of-the-mind). Animal consciousness must follow from your explanation. In particular, replace the genetic jump to creativity with something else that occurred during the evolution of animals *and* humans

In addition, say one day we have a working explanation of how the mind works and we can program it on a computer to make artificial general intelligence. We know how consciousness works and what gives rise to it. Using that explanation, we build a device that can be pointed at some object and displays `true` or `false` to indicate whether the object is conscious. If I am convinced by the explanation of how the mind works as well as by the explanation of how the device works, and the device repeatedly displays `true` when pointed at various animals, I will change my mind as well.

If the device instead used a *continuous* dial to indicate consciousness rather than a boolean one, that would change my mind about consciousness being a binary thing. If, when pointed at a rock, the device reads ‘10%’ or something (what exactly that would mean would depend on our best explanations of how the device works), I may even embrace panpsychism.