Dennis Hackethal’s Blog

My blog about philosophy, coding, and anything else that interests me.

History of post ‘Analyzing The Cambridge Declaration on Consciousness’

Versions are sorted from most recent to oldest, with the original version at the bottom. Changes are highlighted relative to the next (i.e., older) version underneath. Only changed lines and their surrounding lines are shown, except for the original version, which is shown in full.

Revision 2 · View this (the most recent) version (v3)

Use proper footnote formatting

@@ -1,7 +1,7 @@
# Analyzing *The Cambridge Declaration on Consciousness*
Let's analyze [The Cambridge Declaration on Consciousness](http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf), keeping the following things in mind:

- If an organism is *smart* (i.e., contains sophisticated knowledge), that does *not* mean it is *intelligent* (i.e., can *create* new knowledge). Why? Because the sophisticated knowledge may well have originated somewhere else and not in the organism (cf. Deutsch, *The Beginning of Infinity*, ch. 7). For example, it can be inherited genetically and then the organism just needs to execute it mindlessly. Therefore, no matter how sophisticated animal behavior may be, it is not evidence of intelligence/creativity. And if creativity is required to be conscious—as it seems to -be*—then+be[^1]—then sophisticated behavior isn't evidence of consciousness either.
- Intelligence and consciousness are *software*. And, like all software, they can be run on computers, which can be made of pretty much anything. It doesn't matter if that computer is made of metal and silicon or chewing gum and vacuum tubes. So neuroscience won't tell us anything about consciousness, for the same reason you can study metal and silicon all you want, it won't tell you anything about how, say, a word processor (or consciousness) works. (cf. [this](https://www.cbc.ca/radio/tapestry/the-new-human-1.4696724/oxford-physicist-predicts-ai-will-be-human-in-all-but-name-1.4696754) interview with David Deutsch)
- Fancy titles and complicated sentence structures shouldn't intimidate us into agreement.

@@ -95,4 +95,4 @@ If you read the whole text, you will find that it is hard to follow. You may eve

Then again, Hawking was there, and the hotel had a French name, so what they’re saying must be true.

-*+[^1]: EDIT: Following a suggestion, I'd like to expand on why creativity seems to be required for consciousness to arise. Creativity/intelligence is the ability to create new knowledge. Imagine an organism that is *not* creative: that means its knowledge—in the *objective* sense, not in any subjective sense—remains mostly unchanged during its lifetime. All the organism can do, therefore, is *execute* that knowledge, like an automaton. Automata are not conscious. Therefore, if an organism is creative, then it's not an automaton—and only then could it possibly be conscious. (That still leaves room for the possibility that creativity is only necessary and not sufficient for consciousness to arise—though I do guess that it's sufficient—but either way, automata are not conscious. They do things *mindlessly*.)

Revision 1 · View this version (v2)

Fix misquotes

@@ -1,98 +1,98 @@
# Analyzing *The Cambridge Declaration on Consciousness*
Let's analyze [The Cambridge Declaration on Consciousness](http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf), keeping the following things in mind:

- If an organism is *smart* (i.e., contains sophisticated knowledge), that does *not* mean it is *intelligent* (i.e., can *create* new knowledge). Why? Because the sophisticated knowledge may well have originated somewhere else and not in the organism (cf. Deutsch, *The Beginning of Infinity*, ch. 7). For example, it can be inherited genetically and then the organism just needs to execute it mindlessly. Therefore, no matter how sophisticated animal behavior may be, it is not evidence of intelligence/creativity. And if creativity is required to be conscious—as it seems to -be\*—then+be*—then sophisticated behavior isn't evidence of consciousness either.
- Intelligence and consciousness are *software*. And, like all software, they can be run on computers, which can be made of pretty much anything. It doesn't matter if that computer is made of metal and silicon or chewing gum and vacuum tubes. So neuroscience won't tell us anything about consciousness, for the same reason you can study metal and silicon all you want, it won't tell you anything about how, say, a word processor (or consciousness) works. (cf. [this](https://www.cbc.ca/radio/tapestry/the-new-human-1.4696724/oxford-physicist-predicts-ai-will-be-human-in-all-but-name-1.4696754) interview with David Deutsch)
- Fancy titles and complicated sentence structures shouldn't intimidate us into agreement.

The first (!) sentence reads:

> On this day of July 7, 2012, a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists gathered at The University of Cambridge to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals.

Writing "On this day of July 7, 2012" is already oddly formal/ceremonial. It's supposed to give off the impression that this document is very important.

> +[…] a prominent international group +[…]

How cosmopolitan! Who cares that they're prominent and international? What bearing does this have on the matter of consciousness?

> +[…] cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists gathered at The University of Cambridge +[…]

These are lots of impressive-sounding words that essentially say: we know what we're talking about, you don't. Since they’re all neuro-somethings from Cambridge, surely what they're saying must be true! But with the understanding that consciousness is software (or at least a phenomenon emerging from it), we can already tell that these people won't have anything useful to say unless they step away from the hardware and study software instead.

> +[…] to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals.

If they had wanted clarity, they could have just written: "to think about consciousness in all animals." What *is* clear is that they did not want clarity.

Skipping some. Then:

> Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness.

~Nobody is going to know what “homologous” means. [This source](https://www.vocabulary.com/dictionary/homologous) says it means “similar in function.” Why not just write that? In any case, there’s a problem with focusing on correlates: correlation is not causation. These researchers know this, but they ignore it because they know of no better way to study consciousness. I vaguely recall either Karl Popper or Konrad Lorenz quoting somebody else, whose name I forget and whom I will paraphrase (from poor memory): even if we found that conscious states correlated perfectly with certain neural patterns, all that would tell us is that [*psychophysical parallelism*](https://en.wikipedia.org/wiki/Psychophysical_parallelism) is indeed very parallel—but it would not tell us how consciousness works! In other words: we need *explanations*, not correlations.

> The neural substrates of emotions do not appear to be confined to cortical structures.

That may be so, but this is just a special case of the more general principle that computers can be made of pretty much anything, as long as they can process information. Also, “emotions” is a big word that can easily be misunderstood to imply subjective experiences—but studying hardware cannot tell you anything about subjective experiences because they’re abstract, not material. The rest of that paragraph is basically made of long, impressive-sounding sentences meant to support their point above.

> Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness. Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots. Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought. Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns, previously thought to require a mammalian neocortex. Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition.

Now, I don’t know what magpies are, but this is the mistake I mentioned in the first bullet point at the beginning of this post. They’re blurring the lines between smarts and intelligence. That isn’t just wordplay on my part—these really are distinct concepts, and one does not imply the other.

Why would a similarity in sleep patterns between birds and mammals say anything about either one of them being conscious? They don’t say—it’s just an implied assertion. Same goes for similarities to humans, in particular.

Considering self-recognition as evidence of consciousness is a widespread mistake. David Deutsch recently issued a neat challenge after what feels like the millionth researcher claimed that animal x is conscious because it seems to recognize itself in the mirror:

> Someone please write a smartphone app that recognises itself in the mirror.
>
> And when it does, yells triumphantly that it is self-aware. https://twitter.com/newscientist/status/1372904582276321286
>
> — David Deutsch (@DavidDeutschOxf), [March 19, 2021](https://twitter.com/DavidDeutschOxf/status/1372908980868104198?ref_src=twsrc%5Etfw)

Well, I met the challenge and wrote [such an app](https://h22jy.csb.app/). I can assure you that it isn’t conscious, even though it is quite capable of recognizing the device it runs on once you point it at a mirror.

Deutsch rightly points out that self-recognition has nothing to do with consciousness. The underlying mistake is the fudging of smarts and intelligence: self-recognition no doubt takes a sophisticated algorithm, but biological evolution may well have endowed animals (horses in this case) with such an algorithm. Horses then execute it mindlessly, just like my app, which I endowed with the algorithm. In the horse’s case, it was biological evolution that created the knowledge; in the app’s case, it was me. It wasn’t the horse and it wasn’t the app. But *people* [really do](/posts/recovering-from-blindness) create their own shape-recognition algorithms (and almost all their other knowledge), and that creative ability is what makes them conscious.

> In humans, the effect of certain hallucinogens appears to be associated with a disruption in cortical feedforward and feedback processing. Pharmacological interventions in non-human animals with compounds known to affect conscious behavior in humans can lead to similar perturbations in behavior in non-human animals.

Yeah, so?

> In humans, there is evidence to suggest that awareness is correlated with cortical activity […]

See my comment above on parallelism. The evidence they speak of is not, in fact, evidence.

The last paragraph -reads:+reads (formatting and footnote indicator removed):

> We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. -Nonhuman+Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

Let’s break it up:

> We declare the following: +[…]

is oddly ceremonial again (and rather pretentious, I should add).

> The absence of a neocortex does not appear to preclude an organism from experiencing affective states.

That is true—but nor does the absence of a brain entirely, if replaced with different hardware. That’s because of computational universality, i.e., the thing about being able to run consciousness on a computer made of metal and silicon or chewing gum and vacuum tubes. So yeah, you could run consciousness on a MacBook instead of on a brain, even though MacBooks do not have neocortices, and it will experience those hoity-toity “affective states.”

> Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.

Types of hardware (“substrates”) are not indicative of consciousness any more than having a MacBook is indicative of running a word processor on that MacBook. It may have a word processor installed or it may not—either way, it’s the same hardware. So why should it be any different if we substitute consciousness for the word processor? Also, machines have “intentional behaviors” too, but they have them completely mindlessly. Intentional behaviors are not evidence of consciousness. A Roomba, for example, does certain things on purpose, but you wouldn’t claim it’s conscious, would you?

> Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness.

Evidence cannot be weighed. This is bad science (as is their reductionist mistake of studying hardware instead of software). And again, the substrate—neurological or not—makes no difference as long as it can process information. So, in humans it may be “neurological substrates” that give rise to consciousness, sure. But that’s no more an argument than saying my MacBook runs a word processor and therefore yours runs one too.

A footnote says:

> The Cambridge Declaration on Consciousness was written by Philip Low and edited by Jaak Panksepp, Diana Reiss, David Edelman, Bruno Van Swinderen, Philip Low and Christof Koch. The Declaration was publicly proclaimed in Cambridge, UK, on July 7, 2012, at the Francis Crick Memorial Conference on Consciousness in Human and non-Human Animals, at Churchill College, University of Cambridge, by Low, Edelman and Koch. The Declaration was signed by the conference participants that very evening, in the presence of Stephen Hawking, in the Balfour Room at the Hotel du Vin in Cambridge, UK. The signing ceremony was memorialized by CBS 60 Minutes.

Who cares?

These are the main mistakes in the declaration:

- Fudging smarts and intelligence
- Neglecting computational universality and making reductionist mistakes as a result—i.e., studying hardware while trying to understand software (and maybe not even realizing that they’re trying to understand software)
- Failing to see that consciousness is an *epistemological* problem, not a neuroscientific/biological one
- The bad science of “weighing” evidence and looking for correlations rather than *guessing bold explanations* and then using evidence to rule out guessed explanations
- Sacrificing clarity for complicated, intimidating words and sentence structures

If you read the whole text, you will find that it is hard to follow. You may even feel like you’re not worthy, and that you could never understand what these bright minds think about. Don’t let them do that to you.

Then again, Hawking was there, and the hotel had a French name, so what they’re saying must be true.

-\*+* EDIT: Following a suggestion, I'd like to expand on why creativity seems to be required for consciousness to arise. Creativity/intelligence is the ability to create new knowledge. Imagine an organism that is *not* creative: that means its knowledge—in the *objective* sense, not in any subjective sense—remains mostly unchanged during its lifetime. All the organism can do, therefore, is *execute* that knowledge, like an automaton. Automata are not conscious. Therefore, if an organism is creative, then it's not an automaton—and only then could it possibly be conscious. (That still leaves room for the possibility that creativity is only necessary and not sufficient for consciousness to arise—though I do guess that it's sufficient—but either way, automata are not conscious. They do things *mindlessly*.)

Original · View this version (v1)

# Analyzing *The Cambridge Declaration on Consciousness*
Let's analyze [The Cambridge Declaration on Consciousness](http://fcmconference.org/img/CambridgeDeclarationOnConsciousness.pdf), keeping the following things in mind:

- If an organism is *smart* (i.e., contains sophisticated knowledge), that does *not* mean it is *intelligent* (i.e., can *create* new knowledge). Why? Because the sophisticated knowledge may well have originated somewhere else and not in the organism (cf. Deutsch, *The Beginning of Infinity*, ch. 7). For example, it can be inherited genetically and then the organism just needs to execute it mindlessly. Therefore, no matter how sophisticated animal behavior may be, it is not evidence of intelligence/creativity. And if creativity is required to be conscious—as it seems to be\*—then sophisticated behavior isn't evidence of consciousness either.
- Intelligence and consciousness are *software*. And, like all software, they can be run on computers, which can be made of pretty much anything. It doesn't matter if that computer is made of metal and silicon or chewing gum and vacuum tubes. So neuroscience won't tell us anything about consciousness, for the same reason you can study metal and silicon all you want, it won't tell you anything about how, say, a word processor (or consciousness) works. (cf. [this](https://www.cbc.ca/radio/tapestry/the-new-human-1.4696724/oxford-physicist-predicts-ai-will-be-human-in-all-but-name-1.4696754) interview with David Deutsch)
- Fancy titles and complicated sentence structures shouldn't intimidate us into agreement.

The first (!) sentence reads:

> On this day of July 7, 2012, a prominent international group of cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists gathered at The University of Cambridge to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals.

Writing "On this day of July 7, 2012" is already oddly formal/ceremonial. It's supposed to give off the impression that this document is very important.

> a prominent international group

How cosmopolitan! Who cares that they're prominent and international? What bearing does this have on the matter of consciousness?

> cognitive neuroscientists, neuropharmacologists, neurophysiologists, neuroanatomists and computational neuroscientists gathered at The University of Cambridge

These are lots of impressive-sounding words that essentially say: we know what we're talking about, you don't. Since they’re all neuro-somethings from Cambridge, surely what they're saying must be true! But with the understanding that consciousness is software (or at least a phenomenon emerging from it), we can already tell that these people won't have anything useful to say unless they step away from the hardware and study software instead.

> to reassess the neurobiological substrates of conscious experience and related behaviors in human and non-human animals.

If they had wanted clarity, they could have just written: "to think about consciousness in all animals." What *is* clear is that they did not want clarity.

Skipping some. Then:

> Studies of non-human animals have shown that homologous brain circuits correlated with conscious experience and perception can be selectively facilitated and disrupted to assess whether they are in fact necessary for those experiences. Moreover, in humans, new non-invasive techniques are readily available to survey the correlates of consciousness. 

~Nobody is going to know what “homologous” means. [This source](https://www.vocabulary.com/dictionary/homologous) says it means “similar in function.” Why not just write that? In any case, there’s a problem with focusing on correlates: correlation is not causation. These researchers know this, but they ignore it because they know of no better way to study consciousness. I vaguely recall either Karl Popper or Konrad Lorenz quoting somebody else, whose name I forget and whom I will paraphrase (from poor memory): even if we found that conscious states correlated perfectly with certain neural patterns, all that would tell us is that [*psychophysical parallelism*](https://en.wikipedia.org/wiki/Psychophysical_parallelism) is indeed very parallel—but it would not tell us how consciousness works! In other words: we need *explanations*, not correlations.

> The neural substrates of emotions do not appear to be confined to cortical structures.

That may be so, but this is just a special case of the more general principle that computers can be made of pretty much anything, as long as they can process information. Also, “emotions” is a big word that can easily be misunderstood to imply subjective experiences—but studying hardware cannot tell you anything about subjective experiences because they’re abstract, not material. The rest of that paragraph is basically made of long, impressive-sounding sentences meant to support their point above.

> Birds appear to offer, in their behavior, neurophysiology, and neuroanatomy a striking case of parallel evolution of consciousness. Evidence of near human-like levels of consciousness has been most dramatically observed in African grey parrots. Mammalian and avian emotional networks and cognitive microcircuitries appear to be far more homologous than previously thought. Moreover, certain species of birds have been found to exhibit neural sleep patterns similar to those of mammals, including REM sleep and, as was demonstrated in zebra finches, neurophysiological patterns, previously thought to require a mammalian neocortex. Magpies in particular have been shown to exhibit striking similarities to humans, great apes, dolphins, and elephants in studies of mirror self-recognition.

Now, I don’t know what magpies are, but this is the mistake I mentioned in the first bullet point at the beginning of this post. They’re blurring the lines between smarts and intelligence. That isn’t just wordplay on my part—these really are distinct concepts, and one does not imply the other.

Why would a similarity in sleep patterns between birds and mammals say anything about either one of them being conscious? They don’t say—it’s just an implied assertion. Same goes for similarities to humans, in particular.

Considering self-recognition as evidence of consciousness is a widespread mistake. David Deutsch recently issued a neat challenge after what feels like the millionth researcher claimed that animal x is conscious because it seems to recognize itself in the mirror:

> Someone please write a smartphone app that recognises itself in the mirror.
>
> And when it does, yells triumphantly that it is self-aware. https://twitter.com/newscientist/status/1372904582276321286
>
> — David Deutsch (@DavidDeutschOxf), [March 19, 2021](https://twitter.com/DavidDeutschOxf/status/1372908980868104198?ref_src=twsrc%5Etfw)

Well, I met the challenge and wrote [such an app](https://h22jy.csb.app/). I can assure you that it isn’t conscious, even though it is quite capable of recognizing the device it runs on once you point it at a mirror.

Deutsch rightly points out that self-recognition has nothing to do with consciousness. The underlying mistake is the fudging of smarts and intelligence: self-recognition no doubt takes a sophisticated algorithm, but biological evolution may well have endowed animals (horses in this case) with such an algorithm. Horses then execute it mindlessly, just like my app, which I endowed with the algorithm. In the horse’s case, it was biological evolution that created the knowledge; in the app’s case, it was me. It wasn’t the horse and it wasn’t the app. But *people* [really do](/posts/recovering-from-blindness) create their own shape-recognition algorithms (and almost all their other knowledge), and that creative ability is what makes them conscious.
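
To make that concrete, here is a minimal sketch of the kind of mindless routine such an app could use. This is not the actual app's code; `flashToken` and `cameraSees` are hypothetical stand-ins for ordinary screen and camera APIs:

```typescript
// Hypothetical sketch of mirror "self-recognition": flash a token on the
// screen, then check whether the camera sees that token reflected back.
// Every step is a fixed rule; nothing here creates any knowledge.

type Token = string;

// Stand-in: render a random token on screen (e.g., as a color sequence).
function flashToken(): Token {
  return Math.random().toString(36).slice(2);
}

// Stand-in: scan the camera feed for the token. Stubbed for the sketch.
function cameraSees(token: Token): boolean {
  return true;
}

function checkMirror(): void {
  const token = flashToken();
  if (cameraSees(token)) {
    // The app now "recognizes itself" -- and is exactly as unconscious
    // as it was before this line ran.
    console.log("I am self-aware!");
  }
}

checkMirror();
```

A handful of fixed rules is enough to meet the letter of the mirror test, which is the point of the challenge.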

> In humans, the effect of certain hallucinogens appears to be associated with a disruption in cortical feedforward and feedback processing. Pharmacological interventions in non-human animals with compounds known to affect conscious behavior in humans can lead to similar perturbations in behavior in non-human animals.

Yeah, so?

> In humans, there is evidence to suggest that awareness is correlated with cortical activity […]

See my comment above on parallelism. The evidence they speak of is not, in fact, evidence.

The last paragraph reads:

> We declare the following: “The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Nonhuman animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates.”

Let’s break it up:

> We declare the following:

is oddly ceremonial again (and rather pretentious, I should add).

> The absence of a neocortex does not appear to preclude an organism from experiencing affective states.

That is true—but nor does the absence of a brain entirely, if replaced with different hardware. That’s because of computational universality, i.e., the thing about being able to run consciousness on a computer made of metal and silicon or chewing gum and vacuum tubes. So yeah, you could run consciousness on a MacBook instead of on a brain, even though MacBooks do not have neocortices, and it will experience those hoity-toity “affective states.”

> Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors.

Types of hardware (“substrates”) are not indicative of consciousness any more than having a MacBook is indicative of running a word processor on that MacBook. It may have a word processor installed or it may not—either way, it’s the same hardware. So why should it be any different if we substitute consciousness for the word processor? Also, machines have “intentional behaviors” too, but they have them completely mindlessly. Intentional behaviors are not evidence of consciousness. A Roomba, for example, does certain things on purpose, but you wouldn’t claim it’s conscious, would you?
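
To illustrate how “intentional” behavior can be entirely mindless, here is a toy sketch of a Roomba-style control loop. It is hypothetical; `bumperPressed` and the motor functions are invented stand-ins, not any real robot's API:

```typescript
// Toy sketch: goal-directed yet mindless behavior. The robot "intends"
// to cover the floor and avoid obstacles, but it only ever executes
// fixed rules; no deliberation happens anywhere.

let heading = 0;

// Stand-in sensor: occasionally report a collision.
function bumperPressed(): boolean {
  return Math.random() < 0.1;
}

// Stand-in motor commands.
function turnRight(): void {
  heading = (heading + 90) % 360;
}

function driveForward(): void {
  // ...drive the wheels...
}

for (let step = 0; step < 1000; step++) {
  if (bumperPressed()) {
    turnRight(); // fixed response to an obstacle
  } else {
    driveForward();
  }
}
```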

> Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness.

Evidence cannot be weighed. This is bad science (as is their reductionist mistake of studying hardware instead of software). And again, the substrate—neurological or not—makes no difference as long as it can process information. So, in humans it may be “neurological substrates” that give rise to consciousness, sure. But that’s no more an argument than saying my MacBook runs a word processor and therefore yours runs one too.

A footnote says:

> The Cambridge Declaration on Consciousness was written by Philip Low and edited by Jaak Panksepp, Diana Reiss, David Edelman, Bruno Van Swinderen, Philip Low and Christof Koch. The Declaration was publicly proclaimed in Cambridge, UK, on July 7, 2012, at the Francis Crick Memorial Conference on Consciousness in Human and non-Human Animals, at Churchill College, University of Cambridge, by Low, Edelman and Koch. The Declaration was signed by the conference participants that very evening, in the presence of Stephen Hawking, in the Balfour Room at the Hotel du Vin in Cambridge, UK. The signing ceremony was memorialized by CBS 60 Minutes.

Who cares?

These are the main mistakes in the declaration:

- Fudging smarts and intelligence
- Neglecting computational universality and making reductionist mistakes as a result—i.e., studying hardware while trying to understand software (and maybe not even realizing that they’re trying to understand software)
- Failing to see that consciousness is an *epistemological* problem, not a neuroscientific/biological one
- The bad science of “weighing” evidence and looking for correlations rather than *guessing bold explanations* and then using evidence to rule out guessed explanations
- Sacrificing clarity for complicated, intimidating words and sentence structures

If you read the whole text, you will find that it is hard to follow. You may even feel like you’re not worthy, and that you could never understand what these bright minds think about. Don’t let them do that to you.

Then again, Hawking was there, and the hotel had a French name, so what they’re saying must be true.

\* EDIT: Following a suggestion, I'd like to expand on why creativity seems to be required for consciousness to arise. Creativity/intelligence is the ability to create new knowledge. Imagine an organism that is *not* creative: that means its knowledge—in the *objective* sense, not in any subjective sense—remains mostly unchanged during its lifetime. All the organism can do, therefore, is *execute* that knowledge, like an automaton. Automata are not conscious. Therefore, if an organism is creative, then it's not an automaton—and only then could it possibly be conscious. (That still leaves room for the possibility that creativity is only necessary and not sufficient for consciousness to arise—though I do guess that it's sufficient—but either way, automata are not conscious. They do things *mindlessly*.)
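
As a concrete illustration of the footnote, here is a minimal sketch of such an automaton in code (the states and inputs are invented for the example): all of its knowledge sits in a fixed transition table created by someone else (here, the programmer), and running it is pure mindless execution.

```typescript
// A finite automaton: its entire "knowledge" is this fixed transition
// table. The automaton never creates or changes that knowledge during
// its lifetime; it only executes it.

type State = "resting" | "foraging" | "fleeing";
type Input = "hungry" | "sated" | "predator" | "safe";

const transitions: Record<State, Partial<Record<Input, State>>> = {
  resting: { hungry: "foraging", predator: "fleeing" },
  foraging: { sated: "resting", predator: "fleeing" },
  fleeing: { safe: "resting" },
};

// Pure table lookup: mindless execution of inherited knowledge.
function step(state: State, input: Input): State {
  return transitions[state][input] ?? state;
}

let state: State = "resting";
for (const input of ["hungry", "predator", "safe"] as Input[]) {
  state = step(state, input);
  console.log(`${input} -> ${state}`);
}
```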