Animal-Sentience FAQ
Some responses to the most common criticisms of and questions about the view that animals are not sentient. To reference, click on a heading. Feel free to share.
My views on animal sentience are heavily influenced by David Deutsch and Elliot Temple, plus background knowledge from Karl Popper. Thanks to Logan Chipkin for commenting on a draft of this post.
Do you think non-human animals can suffer?
No.
Why can’t non-human animals suffer?
Because all they do is mindlessly execute inborn algorithms which are the result of biological evolution. (This is Deutsch's view in my own words.)
But you don’t doubt that humans can suffer?
No, I do not.
We are closely related to many other species. Humans are animals, too. Why don’t animals suffer if we do? The genetic difference is minor.
The genetic difference is indeed minor, but we also share many genes with plants, which are not conscious. In general, hardware differences are small enough that many animals’ hardware could be programmed to be conscious.
The real difference is one of software, not hardware (compare p. 414 of Deutsch's book The Beginning of Infinity). A small subset of our DNA codes for a self-replicating idea. Once invoked, this idea evolves into many different ideas during a person’s lifetime. This is the genetic jump to creativity. That body of knowledge can grow to be much larger than the genetic knowledge we inherit. This explains how humans can learn so much that isn’t genetically baked in. This ability to learn is what makes people conscious. Read my article ‘The Neo-Darwinian Theory of the Mind’ for more information on this.
But animals can learn, too!
They can certainly change their behavior in useful ways, yes. (Compare Deutsch's remark "Learning, perhaps, in that any useful change can be considered learning.") How they do this is akin to present-day artificial-'intelligence' algorithms such as reinforcement ‘learning’. But those algorithms aren’t conscious. People have a completely different learning algorithm – which we don’t fully understand yet – which makes them conscious and constitutes real learning. If animals had that same algorithm, you wouldn’t need to, say, train a dog to sit via reinforcement. You’d explain to the dog why it should sit, and then it might decide to do so. (But a creative being generally won’t like being told to sit on command.)
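To illustrate, here is a minimal, hedged sketch of what reinforcement ‘learning’ amounts to computationally. All names and numbers are made up for illustration – this is not a real training API, just the general shape of the technique:

let sit_tendency = 0.1; // inborn baseline chance of sitting on command

function sit() { console.log('dog sits'); }
function give_treat() { console.log('trainer gives treat'); }

function on_command_sit() {
  if (Math.random() < sit_tendency) {
    sit();
    give_treat();
    sit_tendency = Math.min(1, sit_tendency + 0.05); // mindless weight update
  }
}

for (let i = 0; i < 100; i++) on_command_sit();
// After many repetitions, sit_tendency approaches 1 – the 'trained'
// behavior. At no point does anything here understand *why* it should sit.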
If you don’t understand consciousness, how can you possibly claim that animals aren’t conscious?
You don’t need to be a trained pianist to tell whether somebody can play the piano reasonably well. The audience generally doesn’t have the knowledge to play the piano well enough themselves, but they can still tell because they know what good piano-playing isn’t.
Similarly, we have good explanations for what consciousness isn’t. Again, the mindless execution of inborn algorithms (except for the creative one) isn’t consciousness.
You can compare animals to present-day computers in this regard. Like animals, our computers contain highly sophisticated knowledge. The question is: where does this knowledge come from? In the case of our computers, it comes from the programmers. Who’s the animals’ programmer? Biological evolution.
In both cases, the knowledge was ‘inherited’ from an outside source. Neither the computers nor the animals are the creators of the knowledge they contain. But they’d need to be the creators to be conscious (see The Beginning of Infinity ch. 7). All they’re concerned with is, again, the mindless execution of algorithms they already contain. Something that merely executes pre-existing algorithms is not and cannot be conscious – it is mindless. The only alternative left is that consciousness has to do with the creation of knowledge. What other aspect of information processing could there be?
Isn’t consciousness a matter of how complex or sophisticated the animal is?
This is a common view: humans are said to be more complex than other apes and so on down some hierarchy. The chain of complexity is usually pictured something like this (in descending order): humans > other apes > fish > insects > single-celled organisms. Here we encounter a problem already: people underestimate how complex even a single cell is. If they saw an animal doing the things some of a cell’s components do, they’d attribute consciousness to it based on the ‘sufficient-complexity criterion’ – yet they don’t attribute consciousness to the cell’s components.
Two people are more complex than one person. But two people are not any more conscious than each person individually, nor is there any shared consciousness between them (other than in some woo-woo sense of them having shared ideas or compassion for each other or something, which isn’t what consciousness is). Even if animals are conscious, the biosphere as a whole isn’t, although it’s much more complex than any animal by itself.
More deeply, the reason complexity/sophistication (I prefer to think in terms of sophistication) cannot determine consciousness is this: sophistication and consciousness are completely orthogonal. Again, no matter how sophisticated an inborn algorithm is, since it can be executed mindlessly, in computer fashion, that sophistication cannot be evidence of consciousness. Executing a pre-existing algorithm does not require consciousness, no matter how complex the algorithm may be. Many of the algorithms our computers execute mindlessly are highly sophisticated – more sophisticated than much of what the biosphere has ever managed to create.
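To make this concrete, here is a sketch of a sophisticated algorithm – binary search – whose sophistication was created by programmers. The computer executing it does so entirely mindlessly, step by step:

// Binary search: sophisticated knowledge (created by programmers, not by
// the computer) executed entirely mindlessly.
function binary_search(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // not found
}

console.log(binary_search([1, 3, 5, 8, 13], 8)); // 3

The sophistication lives in the algorithm; the machine running it contributes nothing but mindless execution.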
To distinguish clearly between sophistication and consciousness, I have introduced the distinction between smart and intelligent in my book:
An entity is smart if it contains sophisticated knowledge. Where that knowledge came from – whether the entity created the knowledge itself or not – is not relevant to determine whether it is smart. Many animals and many computers are smart because they contain sophisticated knowledge.
An entity is intelligent if it can create knowledge. (To be clear, it needs to be a coherent entity. The biosphere can create knowledge through biological evolution but is not coherent like a person.)
Do you see how these two qualities are unrelated? Our computers containing sophisticated programs are smart but not intelligent. A human baby is intelligent but not very smart yet because it hasn’t had much time to learn. All people are intelligent, but only some of them are smart. Intelligence, as David Deutsch argues, is a binary matter: you either have it or you don’t. In other words, the genetic jump to creativity either happens or it doesn’t. But smarts can exist in degrees and can get ever better.
Thinking that animals are conscious is often due to fudging the difference between smart and intelligent.
By the way, thinking that complexity/sophistication (in other words: design) is evidence of the existence of a conscious, intelligent being is creationism. When applied to animals, it is creationism in disguise. Instead of attributing intelligence to a supernatural being, it’s attributed to individual animals, while the real creator – biological evolution – isn’t mentioned. This is ironic because animal-rights activists often reject creationism and instead follow and appreciate science (e.g. they’ll cite neurobiology to argue that animals are conscious; see below).
What does intelligence have to do with consciousness?
Most people think of intelligence as the degree of sophistication of someone’s knowledge. As I’ve just explained in my answer to the previous question, that isn’t the case. Intelligence is the ability to create new knowledge, sophisticated or not. (Compare Deutsch's definition of creativity – I use the term 'creativity' synonymously with 'intelligence' – as the ability to create new explanations in particular; see The Beginning of Infinity, ch. 1 glossary.) If something isn’t intelligent, all it can do (if that!) is mindlessly execute pre-existing knowledge. It’s utterly algorithmic. So consciousness must live somewhere in intelligence.
Aren’t you fudging sentience and sapience?
The popular conception says sentience (roughly, the ability to feel) and sapience (in this context, the ability to think) are separate/orthogonal. Following Deutsch, I don’t think they are. I think they’re a package deal and cannot be achieved independently:
It is conceivable that there are other levels of universality between AI and ‘universal explainer/constructor’, and perhaps separate levels for those associated attributes like consciousness. But those attributes all seem to have arrived in one jump to universality in humans, and, although we have little explanation of any of them, I know of no plausible argument that they are at different levels or can be achieved independently of each other. So I tentatively assume that they cannot.
To be sure, Deutsch writes this about artificial intelligence, but the same argument applies to animals as well (and any other entity one might be investigating re sentience).
For a specific idea about how this “jump to universality in humans” might have occurred in our evolutionary history, read my neo-Darwinian approach.
Building on Deutsch, I think it’s the ability to be critical that is necessary, possibly sufficient, for sentience and sapience. Once an entity has this ability, it is both sentient and sapient. There’s lots of evidence of animals being uncritical and unaware of their mistakes as a result. The lack of critical attitude causes a lack of sentience (and a lack of sapience).
Do you have any evidence of animals being algorithmic?
Yes, see:
- This list of many videos
- My post 'Buggy Dogs'
- Elliot Temple's post 'Algorithmic Animal Behavior'
You can be more successful with animals if you treat them as algorithmic than if you treat them as sentient beings with free will, as evidenced by this video vs this video.
That’s because they are algorithmic.
But sometimes humans are algorithmic too.
Yes, but the difference is in how they deal with that. They can reflect on the situation and recognize when they’re stuck. They can correct the error through creative means.
The point is not that humans never make mistakes while animals do. Both make mistakes. The difference is, again, in how they deal with mistakes. Take dogs pointlessly ‘swimming’ in mid-air when held above water (see my post 'Buggy Dogs'). Should they stop and ‘correct’ the error, it’s because other inborn algorithms take over – e.g. because the dog’s energy is too depleted to keep ‘swimming’ – not because the dog understands its mistake and makes a conscious decision to correct it. You can tell by the dog trying to ‘swim’ again (I imagine) under the same conditions after regaining energy.
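For illustration, here is a hedged sketch of what such an inborn algorithm might look like. The names, thresholds, and the energy mechanism are all made up:

// Hypothetical sketch of the mid-air 'swimming' reflex: an inborn
// algorithm gated by energy, not by any understanding of the mistake.
let energy = 100;

function held_over_water() { return true; } // the owner keeps holding the dog
function paddle() { energy -= 1; }          // each stroke costs energy

while (held_over_water() && energy > 20) {
  paddle(); // keep 'swimming' in mid-air
}

// The reflex stopped because energy fell below a threshold, not because
// any error was understood. Once energy recovers, the loop runs again.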
Sometimes humans are algorithmic in strikingly similar ways, such as when they repeat pointless religious rituals over and over. Here the difference lies, among other things, in how to get them out of that. With a dog, you have to use reinforcement learning: whenever the dog tries to ‘swim’ mid-air, you have to yell ‘no’ at it, use electric shocks, or something of that nature. Over dozens of iterations, the dog’s swimming behavior may gradually fade until it disappears completely. But with humans, yelling ‘no’ when they enact a ritual won’t do much. In fact, it might be counterproductive: they may continue the practice out of spite, because they dislike the person yelling at them, because they think others are ‘too dumb to see the obvious’ (like animal-rights activists think, see below), or for any number of reasons they themselves come up with. To stop enacting the ritual, they need to be persuaded that it is pointless. Only intelligent beings can persuade and be persuaded; dogs cannot.
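And here is a similarly hedged sketch of extinction through punishment (the numbers are invented): the behavior’s weight decays a little on each punished attempt until the behavior effectively disappears.

// Hypothetical extinction via punishment: each 'no!' or shock mindlessly
// weakens the behavior's weight; no persuasion is involved.
let swim_weight = 0.9; // strength of the mid-air 'swimming' behavior

for (let iteration = 0; iteration < 30; iteration++) {
  if (Math.random() < swim_weight) {
    // the dog tries to 'swim' mid-air...
    swim_weight *= 0.8; // ...and the punishment weakens the weight
  }
}

console.log(swim_weight); // typically near 0 after dozens of iterations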
In addition, when humans behave algorithmically, it's because they're not being creative. Such as when mindlessly enacting a ritual.
Maybe animals are just less conscious than humans.
Although we humans are sometimes more or less aware of certain things – e.g., I am currently more aware of my computer screen and keyboard than of my desk – the ability to be conscious is the ability to suffer, and that ability is something you either have or don’t. Animals don’t. Again, the genetic jump to creativity – and with it, consciousness – either happens or it doesn’t.
Our nervous systems and those of other mammals are so similar. Surely they can feel pain.
Feeling pain – i.e., suffering – is not a matter of hardware but of software. A nervous system by itself doesn’t give you the requisite software. Conversely, you could program a computer to be able to suffer (we don’t currently know how to do that) even though computers don’t have nervous systems. In other words, nervous systems are neither necessary nor sufficient for suffering. Some physical substrate is needed to instantiate consciousness, but it need not be a nervous system.
At most, nervous systems can implement the infrastructure for pain signals to travel to the brain, where (in conscious beings!) pain is then (sometimes, but not always) interpreted and experienced as suffering.
Whether animals are sentient and can suffer is an epistemological question, not a neurobiological one.
When you cut off a dog’s paw, it cries out in pain. Obviously it’s conscious.
You mean like how people used to point up at the sun as ‘obvious’ evidence that it revolves around the earth?
The truth is hard to come by, not easy. Also, evidence is ambiguous. When you say something is obvious, you are not describing an objective property of truth, but the sensation of effortlessness you have when invoking an existing explanation to interpret some evidence. We need to be critical of our existing explanations. That can be difficult but it can help us get closer to the truth.
Shouldn’t we treat animals as conscious beings which can suffer just in case they can?
This is a modern-day version of Pascal’s wager and invalid for the same reason. While our explanations are always tentative, our actions need not be. The best thing we can ever do is act on our best explanations. We might be wrong, but then we can always course correct. And sometimes our best action is to just deliberate in peace.
Animals often exhibit behavior very similar to humans who do suffer.
You can’t infer internal states from behavior. A robot programmed to scream ‘ouch’ when you hit it also exhibits behavior very similar to humans who are hit and suffer as a result. So behavior alone doesn’t tell us much.
David Deutsch’s constructor theory may one day tell us how and whether the kinds of transformations animals or biological evolution can cause differ from those that people can cause. That way, if you pointed your telescope at a distant planet and saw evidence of a particular transformation, you would know that, say, people live on or have visited that planet, and that animals could not have caused that transformation. That’s why I said ‘behavior alone doesn’t tell us much’ – it could tell us something, but currently it’s not the most important factor.
It’s impossible to prove or disprove consciousness in both humans and animals.
Correct, but we’re not after proof, we’re after good explanations. Proof/certainty is epistemologically uninteresting and unnecessary. All our knowledge is conjectural, as Popper says.
Have you no heart?
A few years ago I was vegan for about four months out of concern for animals until I quit for health reasons. I used to think animals are conscious and can suffer. I still think that if they can suffer it’s immoral to kill them, and I have more in common with animal-rights activists on this point than most meat eaters do. Indeed, if animals can suffer, industrial meat production may be one of the worst crimes ever committed. Meat eaters who think animals can suffer have a hopelessly self-contradictory moral stance and should make up their minds.
In short, I understand where animal-rights activists come from. I used to think animals are conscious. But I changed my mind.
Why did you change your mind?
In middle school, I heard about Descartes' argument that animals are robots. I think this was the first time I encountered the view that animals aren’t conscious. There we also discussed that Pascal’s wager isn’t a valid response to concerns about animal consciousness (or anything, really). Later on, philosopher Elliot Temple offered his views on animal consciousness (or rather, the lack thereof) to me, showed me the connection to epistemology, and explained various animal behaviors through the use of inborn algorithms. I then re-read The Beginning of Infinity and found several arguments in favor of the view that animals aren’t conscious (they’re not very explicit – they’re in there, but you kind of need to know how to look for them and how to read them).
These arguments weren’t quite enough to convince me. I continued to think animals may be conscious, that consciousness may be orthogonal to creativity, and that it could come in degrees. But I think those arguments did the necessary prep work; importantly, I now understood that whether animals are conscious is an epistemological question. What did eventually convince me was my neo-Darwinian approach to the mind, which explains the evolution of creativity, and with it, consciousness, through a genetic jump. This jump has a binary nature: it either happens or it doesn’t. From there, it followed that creativity, and with it consciousness, is binary (which is also Deutsch’s argument) and that humans are conscious while animals are not because they haven’t undergone the same genetic jump.
But thinking that animals can’t suffer is just plain cruel!
Not if they really can’t suffer.
As I wrote here, many are pressured into caring for animals because they don’t want to seem cruel. You shouldn’t intimidate others into submission to spread your ideas, and you shouldn't adopt ideas because you're pressured into it.
A related problem with many animal studies is that they’re done by people who love animals. After all, that’s why they’re interested in studying them. Their love for animals casts serious doubt on how objective their studies can be; on how much they can contribute to the body of knowledge about animal sentience. They can still make objective progress in that area, but I’d guess it’s harder for them.
Explain animal behavior X.
You’re welcome to throw some animal behavior at me which you think is evidence of consciousness, and I’ll do my best to explain it through the mindless execution of inborn algorithms plus the logic of the situation. Leave a comment at the bottom of the page. But keep in mind that I’ve already explained above that, no matter how sophisticated, behavior can always be the result of the mindless execution of inborn algorithms. So if I can’t think of a particular way, that’s not a refutation of my views. But if I can, it may be illuminating.
As an example, somebody told me he tried tricking his dog into getting a bath by offering treats. Once the dog came close enough to the bathroom to hear running bathwater, it decided to back away and ignore the treats. The dog owner interpreted this as evidence that the dog was conscious and asked me to explain how it could be otherwise. In other words: what genetic programming could have led to this behavior? This programming, for example:
let wanting_treat = true;

while (wanting_treat) {
  move_toward_treat();
  if (hear_bathwater()) {
    wanting_treat = false;
    back_away();
  }
}

// see any consciousness in this code??
I don’t find the dog’s behavior any more mysterious or in need of consciousness than a Roomba backing away from the top of a staircase so it doesn’t fall.
I’m also happy to explain how a robot could do what animals do without being conscious (which is exactly the same as asking how an animal could do something without being conscious but from what feels like a slightly different point of view). For example, I was asked how a robot could respond to pain without being conscious:
let incoming_electric_signal = get_electric_signal();

// The electric signal could represent the temperature of whatever
// the robot just touched, say. The higher the temperature, the
// greater the number returned by `get_electric_signal`.

// If some threshold is passed:
if (incoming_electric_signal > 1000) {
  // Detected pain!
  say('OUCH');
}
Generally speaking, most people overestimate what consciousness is required for – not just in animals but also in people (though, to be clear, I do think people are conscious). We do tons of things unconsciously all the time. Also, sleepwalkers can navigate their surroundings, pour drinks, and prepare food. People talk in their sleep. If I recall correctly, Popper once wrote somewhere that children sometimes hold conversations in their sleep. Conversely, people underestimate how powerful genetic preprogramming is in animals and what it can account for (while ironically overestimating the power of genetic preprogramming in people, where, following Deutsch, I believe it has little power).
Many animals live in herds and lead complex social lives, coordinate to hunt, etc.
See my remark above on complexity for why it doesn’t require consciousness. Also, social interactions with other animals, as well as hunting behavior and strategies, can all be genetically preprogrammed.
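As one hedged illustration, herd-like coordination can emerge from simple preprogrammed rules. Here is a simplified sketch of a single rule (cohesion) from the classic ‘boids’ flocking algorithm, with made-up parameters:

// Simplified 'boids'-style sketch: flock-like motion emerges from one
// mindless, preprogrammed rule (steer toward the flock's center).
const birds = Array.from({ length: 50 }, () => ({
  x: Math.random() * 100, y: Math.random() * 100,
  vx: Math.random() - 0.5, vy: Math.random() - 0.5,
}));

function step() {
  let cx = 0, cy = 0;
  for (const b of birds) { cx += b.x; cy += b.y; }
  cx /= birds.length; cy /= birds.length; // center of the flock

  for (const b of birds) {
    b.vx += (cx - b.x) * 0.001; // rule: nudge velocity toward the center
    b.vy += (cy - b.y) * 0.001;
    b.x += b.vx;
    b.y += b.vy;
  }
}

for (let t = 0; t < 100; t++) step(); // coordinated motion, zero understanding

The full boids algorithm adds separation and alignment rules, but even this stripped-down version produces coordinated, ‘social’-looking movement with zero understanding anywhere in the system.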
Do you have any credentials proving your expertise in fields related to animal consciousness?
Maybe. Maybe not. Who cares? We shouldn’t judge ideas by source or the source’s credentials but by content only.
How could one change your mind?
I will change my mind and conclude that animals are conscious if you do any one of the following:
- Explain why, above some threshold, increased complexity requires consciousness and could not have been preprogrammed, and show that animals exist whose complexity in behavior is already above that threshold
- Show an error in my understanding of epistemology that affects my conclusions about animals (whether this actually changes my mind depends on the error you find)
- Convince me that present-day robots doing relatively complex things like walking around and jumping over obstacles are already conscious and that we just didn’t realize it. Better yet, convince me that robots doing rather simple things are conscious
- Convince me that animals are intelligent (the reason that will work is that I think consciousness arises from intelligence and only from intelligence) while keeping in mind the distinction between smart and intelligent I laid out above
- Convince me that consciousness could arise through something other than intelligence, and that animals have that other thing
- Offer a better explanation of the mind than my neo-Darwinian one. Animal consciousness must follow from your explanation. In particular, replace the genetic jump to creativity with something else that occurred during the evolution of animals and humans
In addition, say one day we have a working explanation of how the mind works and we can program it on a computer to make artificial general intelligence. We know how consciousness works and what gives rise to it. Using that explanation, we build a device that can be pointed at some object and displays true or false to indicate whether the object is conscious. If I am convinced by the explanation of how the mind works as well as by the explanation of how the device works, and the device repeatedly displays true when pointed at various animals, I will change my mind as well.
If the device instead used a continuous dial to indicate consciousness rather than a boolean display, that would change my mind about consciousness being a binary thing. If, when pointed at a rock, the device reads ‘10%’ or something (what exactly that would mean would depend on our best explanations of how the device works), I may even embrace panpsychism.
References
This post makes 7 references to:
- Post ‘Buggy Dogs’
- Post ‘Evidence Is Ambiguous’
- Post ‘Evidence of Animal Insentience’
- Post ‘Sleepwalking’
- Post ‘The Neo-Darwinian Theory of the Mind’
- Post ‘The ‘Animal-Rights’ Community Is Based on Fear and Intimidation’
- Post ‘Views on Animal Sentience in The Beginning of Infinity’
What people are saying
For me, the persuasiveness of this argument depended on answering this:
Here I see a couple of approaches.
1. We don’t understand whether consciousness is needed for creativity.
We haven’t programmed creativity yet. We have programmed enough uncreative algorithms (including imitations of particular animals’ algorithms, say in robots) to understand that consciousness has no role to play there.
2. Human consciousness is (apparently) coincident with human creativity.
We are conscious, but not of everything that is the execution of some algorithm in our brains. In fact, sometimes (e.g. I think during some stages of sleep, or under anaesthesia) we are completely unconscious.
So why are we conscious—to the degree that we are—when we are conscious? Conversely, why are we unconscious—to the degree that we are—when we are unconscious?
This second approach is interesting since, as you think over examples, creativity always seems to fit the gap neatly!
Adam, you provided a blockquote without a source. Where's that quote from? Or was it instead meant to be emphasized text which you wrote?
To address what you wrote:
I think it's the other way round: creativity is needed for consciousness. The latter arises only from the former.
It seems to have to do with automation, among other things. Once you've automated riding your bike you're not conscious of all the minute movements you make, just the overall experience. Whereas when you first learn to ride a bike you're aware of the smallest movements because you need to correct lots of errors in them.
We also seem to be aware only of ideas which have spread sufficiently through our minds.
I speculate more about why we are conscious of some things and not others in the referenced post 'The Neo-Darwinian Theory of the Mind'.
Another thing you may find interesting is fleeting properties of computer programs, which I have been thinking about lately. What's promising about them is that they don't exist before runtime, meaning they need to be (and can be!) created first. They don't already exist just by virtue of the program existing, which seems to be common for properties of present-day programs. You can read more about them in my article 'What Makes Creative Computer Programs Different from Non-creative Ones?'.
Emphasized text would've been more appropriate (I was 'quoting' a hypothetical thought), but I just got excited using markdown.
And I realise I was unclear.
More clearly, I was responding to something like:
If you argue that:

(consciousness is needed for animals' algorithms) = false ⇒ animals are not conscious

why do you not argue that:

(consciousness is needed for humans' programs) = false ⇒ humans are not conscious

and contradict the fact that humans are conscious?
Of course a person may not make the first argument, but if one does – and I think I do... I don't know, is there a problem there? – then it seems like one needs to explain why (or convey that there must be some explanation why), not just:

but in fact why consciousness is needed for at least the execution of a specific version of a creative program humans are running – maybe you were thinking it is not needed for creativity per se, which is fair and interesting – to avoid making the second argument. I guess you agree with this, although maybe I'm missing something. Because if consciousness does not play some functional role for people, why did it persist over evolutionary time? If I could apply that same criterion of dismissing consciousness in animals for its lack of necessity to their programming to dismiss consciousness in humans, I suppose I would end up concluding that consciousness is not a matter of software.
To be clear, I don't argue that

'if consciousness does not play some functional role for people, why did it persist over evolutionary time?'
because this is based on the misconception that evolution is always adaptive and constitutes progress/fulfills some function. There are plenty of examples in the biosphere of 'adaptations' not fulfilling any apparent purpose or even being plain disadvantageous.
Having said that, in the particular case of creativity, I think consciousness arises as an emergent byproduct of creativity, and creativity is hugely advantageous. For one thing, any genetic mutation that reduces a gene's ability to do its 'job' (meaning most mutations) can be 'fixed' at runtime by creativity.
But I think you know that I'm not arguing that animals are conscious, so if by 'you' you mean a hypothetical 'someone': well, they'd be wrong to argue that, for the reasons I just said.
I'm not sure we know that either way. As a conjecture, I have an inkling that it is false.
Again, I view the chain of causation the other way round. Do you have a refutation of that view?
See above.
To be clear to others who read this: I'm not arguing from necessity. The quote continues:
Why/how does that follow?
In the interest of making progress I'm not going to respond to everything you said. Main points:
First:
Yes, I no longer agree with my previous comment, in which I accepted the first argument. There are several conceivable ways consciousness could have evolved without itself at any point aiding in the persistence of genes (which is ultimately what I had meant by 'playing a functional role').
Second, I wonder how these two comments combine:
From your post:
Also:
Do you think we should be able to "see consciousness" in a program for a person? For example, as an "emergent byproduct of creativity," do you think consciousness features explicitly when describing a person from a certain high-level programming language, which language compiles to a lower-level one featuring creativity?
Because a stronger reading of "emergent byproduct of creativity" is to interpret some kind of natural relationship where consciousness 'jumps on' to a creative program without featuring in an alternative description of it. In that case would you be able to see consciousness in the code for a person? It seems like you would need an additional 'linking theory' so you could determine what and when parts of the code must 'light up' with consciousness.
I think consciousness may not feature explicitly in code in the sense that you could 'read it off', but code that is conscious when run would be novel in an unexpected way. I don't expect it to look just like any old program people have written before.
When I wrote

'see any consciousness in this code??'
I meant to point out that that code looks just like any other: it's the same old mindless execution of pre-existing knowledge.
Regarding the linking theory to see which parts of the code 'light up' with consciousness – I really like that phrasing by the way – I expect once we understand consciousness we will know what to look for in code to tell, without running it, whether it (or parts of it) will be conscious when run. In other words, I'd guess a theory of consciousness would come with such a linking theory.